The double-edged promise of AI

Until very recently, using a computer to challenge a professional in the ancient Chinese board game of Go would have been fruitless; the human would have won hands down. Machines may have defeated the best human competitors in chess, draughts and backgammon, but Go’s complexity and subtlety were thought to set the game apart. In a match played in Seoul in 2016, however, a program known as AlphaGo beat South Korea’s Lee Sedol, widely considered to be the world’s finest player, by four games to one.

AlphaGo’s victory stunned many Go enthusiasts, but it also provided further proof of the power of an artificial-intelligence (AI) technique known as deep learning, which uses a brain-like processing architecture to recognise patterns in large data sets. In the case of AlphaGo, designed by Google’s London subsidiary DeepMind, the data sets in question encode board configurations, which are used to select the software’s winning moves. But data can come in a wide variety of forms, and in the last few years the technique has been used to automate many other tasks that conventional programs struggle with, including identifying images, recognising speech and even driving cars.

Made practicable by today’s powerful computer hardware and almost limitless online data, as well as better algorithms, deep learning has spurred renewed interest in AI. In part, it has stimulated fresh discussion about the possible threats posed by the technology; physicist Stephen Hawking warned in 2014 that AI could “spell the end of the human race”. But it has also led to huge cash injections from industry, with many high-tech companies investing to produce mobile devices and other consumer products with more intelligent features and human-like qualities. “When I started out in AI in the 1980s there was no World Wide Web, so it was a struggle to get hold of data, and most things weren’t in machine-readable form,” recalls Boi Faltings, head of the Artificial Intelligence Laboratory at the École Polytechnique Fédérale de Lausanne (EPFL). “But now computers are much faster, there are far more data, and things just work.”

Artificial neurons

AI involves studying how machines can be made to simulate aspects of intelligence. Some scientists see the main goal as trying to achieve a human-like intelligence, while others aim, as Faltings puts it, “to do more and better” than people. Officially born and christened during a scientific workshop in New Hampshire in 1956, AI spawned numerous approaches to creating machine intelligence. The one that underpins deep learning – neural networks – was first demonstrated in a sort of mechanical brain known as the Perceptron built by psychologist Frank Rosenblatt in the 1950s and 1960s.

Neural networks consist of simulated neurons known as units that are arranged in layers. Like real neurons, every unit is connected to several others in its neighbouring layers, and each connection is weighted. A unit will only “fire” – in other words, send a signal to the next layer – when the sum of the weighted signals from the previous layer exceeds some threshold. The idea is that certain patterns in the input data generate a specific output; in other words, those inputs are recognised as examples of a particular “object” – be it a cat, a chair or a person’s face.
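In code, a single unit amounts to little more than a weighted sum and a threshold. The short Python sketch below is purely illustrative: the signals, weights and threshold are arbitrary numbers rather than values from any real network.

```python
# A minimal sketch of one artificial "unit": it fires only when the
# weighted sum of its inputs exceeds a threshold. The signals, weights
# and threshold are arbitrary illustrative values.

def unit_fires(signals, weights, threshold):
    weighted_sum = sum(s * w for s, w in zip(signals, weights))
    return weighted_sum > threshold

signals = [0.9, 0.2, 0.7]     # outputs of units in the previous layer
weights = [0.5, -0.3, 0.8]    # one weight per incoming connection
print(unit_fires(signals, weights, threshold=0.6))   # True: 0.95 > 0.6
```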

Networks are trained by being fed multiple examples of each object and seeing how close they get to producing the right output. The difference between the right answer and the answer they actually produce is then used to tune the weights so that the next time around they get a bit closer. In this way networks “learn” to recognise different objects. When a network is then set loose on fresh, unlabelled data, it should be able to correctly identify the object or objects within.
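That tuning step can be sketched for a single unit using the classic perceptron update rule – nudging each weight in proportion to the error – rather than the more elaborate procedures used in modern deep networks. The data below are made up.

```python
# A toy training loop for a single unit: each weight is nudged in
# proportion to the error between the correct label and the unit's
# actual output (the classic perceptron rule). Data are invented.

def predict(inputs, weights, bias):
    return 1 if sum(x * w for x, w in zip(inputs, weights)) + bias > 0 else 0

# Hypothetical labelled examples: (features, correct answer)
examples = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([1.0, 1.0], 1), ([0.0, 0.0], 0)]

weights, bias, learning_rate = [0.0, 0.0], 0.0, 0.1
for _ in range(20):                    # repeated passes over the data
    for inputs, label in examples:
        error = label - predict(inputs, weights, bias)
        weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
        bias += learning_rate * error

print([predict(x, weights, bias) for x, _ in examples])   # matches the labels
```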

Despite early promise, researchers struggled for decades to build reliable, efficient neural networks. What they wanted to do was build networks with multiple layers, the idea being that each layer would provide a progressively more abstract description of the objects in the layer before: the first layer might quantify light and dark pixels within an image, the second could identify edges and basic shapes among those pixels, and the third might use that information to recognise whole objects. Unfortunately, the process of adjusting the weights involved sending information from the network’s output backwards through successive layers, and that information degraded as it passed from one layer to the next. In other words, the use of multiple layers – deep learning – wasn’t on the cards at that stage.
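Passing data through such a stack is, in outline, just the weighted-sum step repeated layer by layer. The sketch below uses random placeholder weights and performs no training – it only shows the shape of the idea.

```python
# A sketch of a multi-layer ("deep") forward pass: each layer transforms
# the previous layer's output with its own weights, so later layers work
# with progressively more abstract descriptions. The weights are random
# placeholders and no training takes place.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [64, 32, 16, 3]      # e.g. pixels -> edges -> shapes -> objects
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    for w in weights:
        x = np.maximum(0.0, x @ w)   # weighted sum followed by a simple threshold
    return x

print(forward(rng.standard_normal(64)))   # three "object scores" for one input
```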

This problem was addressed in the first years of this century by a number of scientists, including Geoffrey Hinton at the University of Toronto. The solution involved clearly separating layers of neurons during the learning process, thereby avoiding degradation of the weighting adjustments. This breakthrough, together with the availability of “big” data and cheap, fast processors, has contributed to many of the practical applications of AI that we now see.

Fooling the humans

One of the most discussed applications is self-driving cars. Tesla Motors applies deep learning by first observing countless hours of human driving, using sensors to record both a car’s changing environment – particularly the speed and position of nearby vehicles – and how the driver reacts. The environmental and driver data are then matched up for each step along the journey, with the former serving as input to the deep-learning model and the latter as the label. Training the model so that the car behaves as a human driver would means continually adjusting the weights until the expected and actual outputs from the model line up.
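Stripped of Tesla’s actual architecture and sensor formats, which the article doesn’t detail, that pairing of inputs and labels is ordinary supervised learning. The sketch below invents the data and substitutes a simple linear model for the deep network, purely to show the structure of the exercise.

```python
# A hedged sketch of the input/label pairing described above: sensor
# readings at each time step are the model's input, the human driver's
# action at that moment is the label. The data and features are invented,
# and a linear model stands in for the deep network.
import numpy as np

rng = np.random.default_rng(1)
n_steps = 1000

# Hypothetical sensor features: [own speed, gap to car ahead, lead-car speed]
sensors = rng.uniform([0.0, 5.0, 0.0], [35.0, 100.0, 35.0], size=(n_steps, 3))
# Hypothetical driver reaction: brake harder when the gap is small (plus noise)
driver_brake = np.clip(1.0 - sensors[:, 1] / 50.0, 0.0, 1.0) + rng.normal(0, 0.05, n_steps)

# Adjust the weights until the predicted action matches the recorded one
X = np.hstack([sensors, np.ones((n_steps, 1))])          # add a bias column
weights, *_ = np.linalg.lstsq(X, driver_brake, rcond=None)

new_reading = np.array([30.0, 12.0, 25.0, 1.0])          # a fresh situation
print("predicted braking:", float(new_reading @ weights))
```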

Beyond driving, deep learning has also been used to improve image recognition. In 2012, Google reported a 16% success rate after training a neural network with a billion connections to distinguish 22,000 different objects contained in millions of randomly selected YouTube videos. DeepMind, meanwhile, has shown its versatility by creating a deep-learning algorithm that can play 49 different arcade games – including the classic Space Invaders – by learning them from scratch and then going on to beat professional players.

When it comes to mastering human language, so-called chatbots are becoming an increasingly common feature in mobile devices and around the home. These use pre-defined scripts together with deep learning to answer queries and carry out simple conversations. Apple’s Siri and Amazon’s Echo speaker are two examples, both of which can report sports scores, recommend restaurants and relay calendar entries. Meanwhile, a chatbot called Eugene Goostman, developed by three Russian scientists, apparently passed the Turing test in 2014 when it convinced more than 30% of a panel of judges that it was human during five-minute typed conversations at the Royal Society in London. (It would appear, however, that Eugene could still do with a bit of practice: asked by American computer scientist Scott Aaronson how many legs a camel has, it replied: “Something between two and four. Maybe, three?”)

AI is also being used to improve other types of language processing. Another pioneer of deep learning, Jürgen Schmidhuber, director at the Dalle Molle Institute for Artificial Intelligence (IDSIA) in Switzerland, led a team that developed “long short-term memory”. Now used in Google’s machine translation and speech recognition software for smartphones, this technology uses what are known as recurrent neural networks, which contain feedback loops allowing data to be stored.
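The feedback loop is the key structural difference from the layered networks described earlier: part of what the network computes at one time step is fed back in at the next, giving it a rudimentary memory. The sketch below shows a bare-bones recurrent cell, not an actual LSTM, which adds gating machinery on top of this idea; the weights are random placeholders.

```python
# A bare-bones recurrent cell: the hidden state produced at one step is
# fed back in at the next, so earlier inputs can influence later outputs.
# A real LSTM adds "gates" controlling what is stored and forgotten;
# all weights here are random placeholders.
import numpy as np

rng = np.random.default_rng(2)
input_size, hidden_size = 4, 8
W_in = rng.standard_normal((input_size, hidden_size)) * 0.1
W_rec = rng.standard_normal((hidden_size, hidden_size)) * 0.1

def run_sequence(sequence):
    hidden = np.zeros(hidden_size)                      # the network's "memory"
    for x in sequence:
        hidden = np.tanh(x @ W_in + hidden @ W_rec)     # the feedback loop
    return hidden

frames = rng.standard_normal((10, input_size))          # e.g. ten audio frames
print(run_sequence(frames))                             # a summary of the whole sequence
```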

What’s in a brain?

Schmidhuber’s aim now is to build what is known as a general-purpose or real AI machine, the kind of device beloved of science fiction writers that has all the important attributes of human intelligence, including reasoning, goal-seeking, curiosity and creativity. His idea is to combine a deep-learning recurrent neural network with a second program, also a recurrent neural network, whose job is to design experiments that allow the first algorithm to learn as much as it can about the world. Schmidhuber’s “motto” since the 1970s, he says, has been to “build an AI smarter than myself such that I can retire.”

According to Schmidhuber, physics demands that true intelligence can be generated only by a brain-like recurrent neural network: a device that crams as many processors as possible into a given volume and which connects those processors with many short wires, to save energy, and a few long ones, to allow for longer-distance communication. And he reckons that such an all-singing, all-dancing device could be with us within the next 25 years. He bases that claim on a continuing trend of computer processors getting cheaper by a factor of 10 every five years and the fact that the human brain contains about 100,000 times as many neurons as today’s largest artificial neural network.

EPFL’s Faltings doesn’t buy that. He says that scientists are still a long way from understanding how the human brain works, particularly the temporal properties of brain phenomena, and points out that, unlike humans, AI software is unable to carry out multiple complex operations at the same time. He recognises that deep learning has “changed the game” for applications such as visual and speech recognition because there is an “almost infinite amount of data” available. But he believes that the technology will fail to make a significant impact in other areas where data are harder to come by. “I take my hat off to deep learning people because they have finally had success after many years of hard work,” he says. “But now they are maybe going over the top.”

While acknowledging that machines can do useful things by trying to replicate human behaviour, Faltings is interested in their carrying out tasks that are beyond people, such as complex planning and coordination. He trains software via explicit, logical reasoning, an approach he hopes will achieve success in several areas, including medicine. Here, he explains, probabilistic models can be used to work out the optimum treatment for a patient given certain symptoms.
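As a deliberately simplified illustration of that idea – the diagnoses, treatments and numbers below are all invented – choosing the optimum treatment can be framed as weighing each option’s expected benefit under the probabilities the model assigns to the possible diagnoses.

```python
# A deliberately simplified sketch of probabilistic treatment selection:
# the model assigns probabilities to possible diagnoses given the
# observed symptoms, and each treatment is scored by its expected
# benefit under those probabilities. All numbers are invented.

# P(diagnosis | observed symptoms) -- assumed output of a probabilistic model
diagnosis_probs = {"flu": 0.7, "bacterial infection": 0.3}

# Assumed benefit of each treatment for each diagnosis (0 = none, 1 = ideal)
benefit = {
    "rest and fluids": {"flu": 0.8, "bacterial infection": 0.2},
    "antibiotics":     {"flu": 0.1, "bacterial infection": 0.9},
}

def expected_benefit(treatment):
    return sum(p * benefit[treatment][d] for d, p in diagnosis_probs.items())

best = max(benefit, key=expected_benefit)
print(best, round(expected_benefit(best), 2))   # "rest and fluids", 0.62
```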

This is also the approach taken by Google with its self-driving cars. Each car uses a variety of sensors to monitor surrounding traffic and feeds the data to a probabilistic model created by computer programmers that predicts the next moves of each vehicle, cyclist and pedestrian in its vicinity. Faltings points out that Tesla’s contrasting deep-learning technology allowed it to bypass years of painstaking design work and therefore get its cars on the market much more quickly. But, he says, the downside of this approach was highlighted by a fatal accident in May that involved a Tesla Model S driving straight into an articulated lorry that was turning in front of it – a scenario that evidently none of the drivers used to train the software had encountered. “Because there is no underlying model there is no guarantee that the software makes the right interpolation from the data,” he says.
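A hand-built model of this kind can be very simple at heart. The toy example below – which is only an illustration, not Google’s actual system – predicts a nearby cyclist’s position a couple of seconds ahead from its current position and velocity, with the uncertainty growing the further ahead it looks.

```python
# A toy hand-built probabilistic motion model: a road user's next
# position is predicted from its current position and velocity, with
# uncertainty that grows the further ahead we look. Illustration only.
import numpy as np

def predict_position(position, velocity, seconds_ahead, noise_per_second=0.5):
    """Return the predicted mean position and its standard deviation (metres)."""
    mean = np.asarray(position) + np.asarray(velocity) * seconds_ahead
    std = noise_per_second * seconds_ahead          # uncertainty grows with time
    return mean, std

cyclist_pos, cyclist_vel = [12.0, -3.0], [0.0, 4.0]   # metres, metres per second
mean, std = predict_position(cyclist_pos, cyclist_vel, seconds_ahead=2.0)
print("expected position:", mean, "+/-", std, "m")
```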

Easy parking

According to a recent Stanford University study of future trends in AI, self-driving vehicles could bring significant benefits, including the elimination of traffic jams and parking problems. But according to Malte Helmert of the University of Basel, the publicity surrounding the Tesla incident highlights just how sensitive people are to dangers posed by AI, and therefore how little they will tolerate crashes caused by computers rather than people. “Qualitatively it is a very different thing if a human makes an error or if the car misbehaves, because you have this feeling of loss of control,” he says.

Ethical questions – such as what should be done, if anything, to protect the livelihoods of the many people who could lose their jobs to AI machines – are raised in the Stanford report and have also been discussed by a recently created group of researchers from five major US high-tech companies. Indeed, according to Helmert, scientists in general have become far more aware of AI’s potential impact on society as machine intelligence “begins to rival human intelligence in more and more areas of life.”

Some researchers, like Schmidhuber, are considering AI’s ultimate challenge: the “technological singularity”, or the point at which machine intelligence would surpass that of humans and thereafter rapidly accelerate away. Simon Hegelich, an expert on chatbots at the Technical University of Munich, agrees with EPFL’s Faltings that the singularity can occur only once we have a better understanding of the human brain. However, he believes that recent progress in AI, especially machine learning, suggests the necessary breakthrough could happen within the next 10 or 15 years. As to the possible consequences, he is decidedly optimistic. He argues that for an AI machine to be genuinely intelligent it would have to be empathetic, and therefore not a threat to humans.

For others, such discussion is superfluous. Faltings says expectations surrounding deep learning have been raised too high, leading, as he sees it, to misguided talk of “robots or computers taking over from humans”. He adds: “We in AI laugh at those things.” But he is also eager to put a positive spin on recent achievements. “Even if we don’t get to the singularity, there are still amazing things that can happen with AI. It’s good that people realise this.”

Human or machine?

In 1950 British mathematician Alan Turing proposed a test to distinguish humans from machines. A human interrogator uses a keyboard and screen to communicate with both another human and a machine. If, after the conversation, the interrogator believes that the machine could be human, the machine wins. Contemporary scientists point out, however, that the Turing test can’t really tell whether a machine has human-like intelligence: a machine can instead learn conversational tricks that merely make it appear human.

AI for farmers

Gamaya, a spin-off of the École Polytechnique Fédérale de Lausanne, uses agricultural drones equipped with miniature hyperspectral cameras to monitor crops – an example of AI’s potential in agriculture. Its software turns the spectral signatures of plants into useful information about crop conditions, helping farmers decide when to apply chemicals and fertilisers, and an algorithm predicts outcomes based on the analysed patterns. This significantly improves the efficiency of food production, says CEO Yosef Akhtman. Founded in 2015, the company operates mainly in Latin America, where it estimates the market for its successfully tested system to be worth €4.5 billion.

