What You Need To Know About Artificial Intelligence And Why (1)
Much has been said about the impact that the development of Artificial Intelligence (AI) will have on society, especially on the labor market, and perhaps also, in a more distant future, about the possibility that machines will at some point hold greater power than people themselves and somehow turn us into their “pets”.
It is therefore important to distinguish between what can be imagined and what could actually materialize in the coming decades, given the current state of advances in AI and Machine Learning.
“Artificial intelligence is changing the world, but there needs to be more context and understanding. AI refers to many different things at various stages of evolution. Have you built a computer like HAL 9000 or written a thousand IF statements?”
In an MIT Technology Review article published in November 2018, Karen Hao clarifies that the vast majority of AI advances and applications that make the news belong to a specific category of algorithms called Machine Learning.
This new generation of algorithms is more sophisticated than its predecessors because it makes it possible to carry out tasks (recognition, categorization, decision-making) that were previously inaccessible to computers.
Traditional algorithms consist of programming a series of logical or mathematical instructions, as simple or as complicated as necessary, to perform a specific task. Machine Learning, by contrast, allows a computer to perform tasks as advanced as facial recognition or the work done by virtual assistants (Siri, Alexa, etc.).
It is almost impossible to imagine how these tasks could be carried out using only logical instructions or mathematical formulas.
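The contrast can be sketched with a toy example. A hand-written rule must anticipate every case in advance, while a learned model derives its own decision criteria from labeled examples. The messages, keywords, and tiny perceptron below are invented purely for illustration; real systems train far larger models on far more data.

```python
# A hand-written rule: the programmer must anticipate every case.
def is_spam_rules(message):
    keywords = ("free", "winner", "prize")
    return any(word in message.lower() for word in keywords)

# A (toy) learned classifier: the rule emerges from labeled examples.
# Hypothetical training data; a real system would use thousands of messages.
training = [
    ("claim your free prize now", 1),
    ("you are a winner, act fast", 1),
    ("meeting moved to 3pm", 0),
    ("lunch tomorrow?", 0),
]

vocab = sorted({w for msg, _ in training for w in msg.lower().split()})

def featurize(message):
    words = message.lower().split()
    return [1.0 if v in words else 0.0 for v in vocab]

weights = [0.0] * len(vocab)
bias = 0.0

# Perceptron training: adjust the weights whenever a prediction is wrong.
for _ in range(10):
    for msg, label in training:
        x = featurize(msg)
        pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
        error = label - pred
        weights = [w + error * xi for w, xi in zip(weights, x)]
        bias += error

def is_spam_learned(message):
    x = featurize(message)
    return sum(w * xi for w, xi in zip(weights, x)) + bias > 0
```

The point of the sketch is that `is_spam_learned` was never told which words matter: it discovered them from the examples, which is exactly what hand-written IF statements cannot do at the scale of faces or speech.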
These spectacular results are possible due to the combination of three factors.
Kevin Kelly, in his book The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future (2017), mentions that, even though AI has been a subject of constant speculation since the creation of the first mainframes, concrete advances in this field seemed stagnant. This changed with the arrival of three technological advances in the first decade of the 21st century:
Parallel computing (GPUs)
After the development of GPU (Graphics Processing Unit) chips, originally designed to handle the graphics calculations of modern video games, in 2009 Andrew Ng and a team at Stanford began using these chips to process neural networks in parallel.
To take a concrete example, a neural network with 100 million parameters could be processed (that is, all of its combinations calculated) in a single day, where it had previously taken several weeks.
Large amounts of training data (Big Data)
Just as a human brain needs to be trained during its first years of life to learn to speak and to recognize its environment, chess programs need to study and practice before becoming effective.
This also applies to image recognition, speech recognition, language translation, and so on. Today it is possible to store enormous amounts of information, which serve as training material. These data allow algorithms to learn from their mistakes and their successes, optimizing their decision criteria so that they are right as often as possible.
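This trial-and-error adjustment can be sketched in a few lines. The sketch below fits a line to noisy data with stochastic gradient descent; the data, learning rate, and number of passes are all invented for illustration.

```python
import random

# Hypothetical training data: points near the line y = 2x + 1, plus noise.
random.seed(0)
data = [(i / 10, 2 * (i / 10) + 1 + random.uniform(-0.1, 0.1)) for i in range(20)]

w, b = 0.0, 0.0   # the model's "decision criteria", initially wrong
lr = 0.1          # learning rate: how strongly each error corrects them

# Each pass compares predictions with reality and nudges the parameters
# in the direction that reduces the error (learning from being wrong).
for _ in range(2000):
    for x, y in data:
        error = (w * x + b) - y
        w -= lr * error * x
        b -= lr * error

print(w, b)  # w ≈ 2, b ≈ 1
```

After enough passes over the data, the parameters settle close to the true values even though the program was never told them; more and noisier data simply means more examples to be wrong about, and therefore more to learn from.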
Better neural network algorithms
Neural network algorithms have existed since the 1950s; however, they were not practical because of the enormous number of combinations they have to handle.
Eventually, the use of “layers” was adopted to segment these calculations, which finally gave these algorithms practical utility. Taking facial recognition as an example, one layer might recognize an eye and pass this result to the layer above, so that in the end the whole image is identified as a face.
In 2006, Geoff Hinton found a way to mathematically optimize the results of each layer so that they could accumulate more quickly as they moved up the stack. This technique forms the basis of Deep Learning.
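The layer idea itself can be sketched directly. In the toy forward pass below, each layer transforms its input and hands the result to the layer above, as in the face-recognition example (simple features feeding into higher-level ones); the weights are invented and untrained, purely to show the data flow.

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer with a sigmoid activation."""
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

# Hypothetical, untrained weights; a real network learns these from data.
x = [0.5, -1.2, 0.3]                                             # raw input (e.g. pixels)
h1 = layer(x, [[0.1, 0.4, -0.2], [0.7, -0.3, 0.5]], [0.0, 0.1]) # low-level features
h2 = layer(h1, [[0.6, -0.8]], [0.2])                            # higher-level feature
print(h2)  # a single score between 0 and 1
```

Training a deep network means adjusting the weights of every layer at once, and Hinton's contribution was making that adjustment efficient enough to stack many layers.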
In the words of Andrew Ng, a pioneer in the use of GPU chips for neural network processing: “Artificial Intelligence is like building a rocket. You need a great engine and a lot of fuel. The engine is the algorithm (the Deep Learning neural network) and the fuel is the enormous amount of data available to feed and train these algorithms.”
Machines that use this type of technique have greater autonomy, reflected in their ability to make decisions that involve movement (in the case of robots), categorization (in the case of image processing), or other more specific functions such as language translation.
The fact that machines can perform this type of task still surprises us, so it is understandable that concepts such as Intelligence and Learning are used to describe these technologies.
To better understand the current applications of this technology, we can use the categories mentioned by Karen Hao in her article “What is AI? We drew you a flowchart to work it out”, which distinguish the use of Machine Learning from the use of sensors, mathematical formulas, preprogrammed routines, and other more traditional tools:
Image recognition (Computer Vision):
When a machine can identify (within a predetermined range) the content of an image. Examples include facial recognition (identifying the facial features of a specific person) or detecting breast cancer in X-rays.
Voice recognition (Speech Recognition):
A machine that can recognize the sounds of the human voice and generate automatic transcripts. These transcripts can then be used for other actions (translations, virtual personal assistants that obey commands, etc.).
Language recognition (Natural Language Processing):
When a machine is capable of interpreting words in order to carry out specialized actions such as SPAM detection, automatic tagging of emails in mailboxes, or chatbots that can answer basic questions on a web page.
Autonomous movement (Robotics):
Machines with the capacity to move that do not need to be remotely controlled, since they can identify the obstacles around them and determine a safe route. Self-driving cars belong to this category.
Other Machine Learning applications:
More generically, this category covers any machine that can analyze large amounts of information, find patterns within it, and use those patterns to make complex decisions, for example, producing a weather forecast.
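A minimal sketch of this idea is a k-nearest-neighbors vote over past observations: to decide about a new day, look up the most similar days on record and see what happened next. The weather records, features, and numbers below are invented for illustration; a real forecast model uses vastly more data and variables.

```python
# Hypothetical historical records: (pressure_hPa, humidity_%, rained_next_day)
history = [
    (1025, 30, False),
    (1018, 45, False),
    (1008, 80, True),
    (1002, 90, True),
    (1012, 70, True),
    (1022, 40, False),
]

def predict_rain(pressure, humidity, k=3):
    """Vote among the k most similar past days."""
    nearest = sorted(
        history,
        key=lambda rec: (rec[0] - pressure) ** 2 + (rec[1] - humidity) ** 2,
    )[:k]
    votes = sum(1 for _, _, rained in nearest if rained)
    return votes > k / 2

print(predict_rain(1005, 85))  # low pressure, high humidity: resembles rainy days
```

The "pattern" here is never written down as a rule; it is implicit in the stored data, which is what distinguishes this family of techniques from preprogrammed routines.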
For Benedict Evans, IT consultant and analyst, AI covers many very different ideas, concepts, and capabilities, each with different applications in the real world. Since the origin of the first computers, there has been a guiding principle that a machine should not ask for data that it can obtain by itself.
Although it was previously possible to program routines that gave a machine a degree of autonomy, Machine Learning opens up new possibilities, of which image, text, and speech recognition and autonomous movement are only the first examples.
The next article examines possible advances in the use of machine learning across industries and the impact of this achievement on the labor market and society at large.