The evolution of artificial intelligence
Many people think that computers were invented after the discovery of electricity. In reality, the first ones were mechanical, dated back to the second half of the 1600s, and could already carry out addition, subtraction, and multiplication. But it was in the thirties of the last century that the foundations of computers and information technology as we know them today were laid. Claude Shannon, then at MIT, showed the potential of using the binary system in telecommunications and started what we now call the digital age, made of ones and zeros. In the same period, Alan Turing (famous for having deciphered the codes created by the German Enigma machine during World War II) published an article about the Turing Machine, a theoretical machine capable of executing algorithms (procedures of logical instructions) and managing data. What Turing defined in that paper is still the basis of our computers today.
The machine that imitates man
People were already talking about artificial intelligence during the forties of the last century (even if the name itself was only coined in the fifties). Artificial neural networks were born, at least on paper. It had been demonstrated that any computable problem, that is, one solvable by an algorithm, could also be solved by a network of neurons. This network was made up of elements that could be “off” or “on”, and the state of each element was determined by the set of stimuli coming from the surrounding neurons (we will talk more about neural networks in the next post). In 1950, two Harvard students, Marvin Minsky and Dean Edmonds, built on the discoveries made in the previous decade to create the first rudimentary artificial neural network, which they called SNARC (Stochastic Neural Analog Reinforcement Calculator). In the following years, a group of ten researchers led by John McCarthy took up the challenge of building a machine that would imitate the learning and intelligence of the human mind in all respects, all to be completed in two months. Surely you are thinking that their ambition went a little beyond reality, and I don’t think I need to tell you how it ended. Unlike them, Allen Newell and Herbert Simon, thanks to a more realistic but still surprising goal for the time, had already managed to build a machine that was able to prove mathematical theorems. In addition to his many other contributions to the field, McCarthy coined the term artificial intelligence, whose first use in 1956 officially gave birth to the discipline of the same name.
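To give an idea of the kind of building block those early networks used, here is a minimal sketch in Python of an “on/off” threshold unit in the spirit of the elements described above; the weights and threshold values are purely illustrative and not taken from any specific historical design.

```python
# A minimal sketch of an "on/off" threshold neuron in the spirit of the
# early (McCulloch-Pitts-style) networks described above. The weights and
# threshold below are invented for illustration only.

def binary_neuron(inputs, weights, threshold):
    """Return 1 ("on") if the weighted stimuli reach the threshold, else 0 ("off")."""
    stimulus = sum(i * w for i, w in zip(inputs, weights))
    return 1 if stimulus >= threshold else 0

# Example: a neuron wired to behave like a logical AND of two inputs.
print(binary_neuron([1, 1], [1, 1], threshold=2))  # 1 (on)
print(binary_neuron([1, 0], [1, 1], threshold=2))  # 0 (off)
```

Wired together, units like this can reproduce logical operations, which is why a network of them can, in principle, carry out any algorithmic computation.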
Ambitions and difficulties
Imagine the enthusiasm that this newborn discipline created. The ability to build machines able to reason seemed to have catapulted science fiction into reality. The first attempts were very promising. It was possible to prove even very complex geometry theorems, and in the early sixties James Slagle of MIT created a program that could solve integrals. This is not an easy task, and I think we can all confirm it by remembering high school or university, when we faced integrals we had no clue how to solve. Unfortunately, all this euphoria soon died down. The researchers realised that it was enough to increase the complexity of a problem even slightly for the computational time to become unsustainable. For example, suppose we apply a resolution system (which could be a neural network or another type of algorithm) to find the value of a single variable and it reaches the result in a few hours of computation; as the number of variables grows, the combinations to explore multiply so quickly that the time needed can become longer than the age of the universe. How fast this happens depends both on the type of algorithm or system used to find the solution and on the computing power of the machine. Sometimes adding just a few variables is enough to make the problem unsolvable in a reasonable time. For this reason, towards the end of the sixties, interest in artificial intelligence faded and funding was progressively cut.
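To make that explosion concrete, here is a small back-of-the-envelope sketch in Python. It assumes a brute-force search over binary variables and a hypothetical one microsecond per combination checked; both assumptions are mine, purely for illustration.

```python
# A toy illustration of why adding variables blows up the search.
# Brute-force search over n binary variables must, in the worst case,
# examine 2**n combinations; the timing figures are hypothetical.

SECONDS_PER_CHECK = 1e-6     # assumed time to test one combination
AGE_OF_UNIVERSE = 4.3e17     # roughly, in seconds

for n in (20, 40, 80, 160):
    seconds = (2 ** n) * SECONDS_PER_CHECK
    verdict = "longer than the age of the universe" if seconds > AGE_OF_UNIVERSE else "feasible"
    print(f"{n:>3} variables -> {seconds:.2e} s ({verdict})")
```

With 20 variables the search takes about a second; with 80 it already exceeds the age of the universe, no matter how patient the researchers are.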
From the 80s to now
In the 80s the discipline was rekindled, and the spark came from the transition from the academic to the industrial world. In 1982, the first commercial intelligent software went into use: a system that helped Digital Equipment Corporation manage computer orders, able to recognize whether components were missing from an order. To do this, it had to understand and analyze the relationships between the various components and handle any errors or omissions. This “toy” saved the company 40 million dollars a year. It didn’t take long before most big companies introduced intelligent systems to manage many different processes.
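Just to give the flavour of what such a system does (this is not the actual commercial software, only a toy sketch with invented component names and rules), a rule-based check for missing components might look like this:

```python
# A toy sketch of a rule-based completeness check on an order:
# each component lists the other components it requires.
# Component names and rules are invented for illustration only.

REQUIRES = {
    "cpu_board": {"backplane", "power_supply"},
    "disk_drive": {"disk_controller"},
    "terminal": {"serial_interface"},
}

def missing_components(order):
    """Return the set of required components absent from the order."""
    missing = set()
    for item in order:
        missing |= REQUIRES.get(item, set()) - set(order)
    return missing

order = ["cpu_board", "backplane", "disk_drive"]
print(missing_components(order))  # {'power_supply', 'disk_controller'}
```

The real system encoded thousands of such rules, which is precisely what made it valuable: no human clerk could reliably keep all those relationships in mind.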
From 1986 there was also a return of neural networks, which had remained somewhat on the sidelines after the limitations encountered during the sixties. New approaches were applied and led to promising results. Furthermore, there was the famous chess challenge between the IBM computer Deep Blue and world champion Garry Kasparov: Deep Blue won its first game against him in 1996 and the whole match in 1997. Although this machine did not use neural networks, it was nonetheless huge publicity for the machine that imitates and surpasses man. A year later, intelligent systems landed at NASA, which used them to manage various aspects of space missions.
Today, intelligent software is making its way into every sector. Think of self-driving cars or sales forecasting systems. From weather forecasting to failure prediction in industry, and on to finance, everything is transforming and becoming smart. The IoT (Internet of Things) is connecting more and more objects to the network, which means more and more data to be interpreted and exploited to improve efficiency, as well as the ability to control everything remotely. Intelligent systems are a breakthrough in the management of what is called Big Data and allow us to partially replace humans in making predictions, as well as in optimizing complex systems.
Where will the future lead?
Today, artificial intelligence is able, for example, to recognize a person’s face or spoken language, but it is still far from many other capabilities of the human mind. This doesn’t make its possibilities less surprising or less effective, as we’ll see in future posts, but if the question is how close it is to a human mind, the answer is: not very. Many have speculated on where research in this field will arrive and when, but obviously there is no certainty. Looking at technological evolution over the centuries, we see an exponential trend. Homo sapiens took 150,000 years to arrive at agriculture with permanent settlements, only 7,000 to build the first empires, and science as we know it today was born just 500 years ago. As we saw at the beginning of the post, the IT era is about a hundred years old, and only in the last 40 years has it evolved at a dizzying pace. Looking at this trend, it would not seem so strange if, in a hundred years, our descendants were to witness machines capable of replicating all or almost all the skills of the human mind. And if we got to that point, maybe we could go even further, with machines far more intelligent than humans. On one hand this fascinates; on the other, it frightens. In such a scenario, would human beings still be at the top of the pyramid on our planet? We leave that question to science fiction, which has already given free rein to many fantasies with the most disparate scenarios.