Source: worldcrunch.com
In the Harry Potter series, merely uttering the name of the villain, Voldemort, spreads terror. In real life, Voldemort does not exist. But simple words can still be enough to unsettle minds, or even to provoke panic. Such is the case today with the term Artificial Intelligence, or AI for short.
The phrase covers a range of remarkably effective tools, but it also stirs such strong feelings of excitement and fear that people forget what it actually refers to, at the risk of causing errors, paralysis and frenzy. It is therefore essential that we stop talking about AI, assuming it is not already too late.
When researchers invented the tools of information technology, they also created the vocabulary we use to refer to them. According to the Robert dictionary, the word "computer" made its first appearance in print in 1955. The first reference to the "internet" came 40 years later, in 1995. But "Artificial Intelligence" combines two much older terms, both of which carry strong connotations.
Initially, it was a marketing move. In 1955, four American researchers sent out invitations for a research seminar, held the following year at Dartmouth College, to conduct a "study of Artificial Intelligence," on the assumption that "every aspect of learning or any other characteristic of intelligence can, in principle, be described so precisely that a machine can be made to simulate it." The goal was to attract researchers and funding, but the name stuck.
Recently, experts have put forward other suggestions. Joël de Rosnay, a specialist in future trends, proposes "auxiliary intelligence." The researcher Luc Julia, director of Samsung's Artificial Intelligence laboratory, prefers "augmented intelligence." And the consultant Pierre Blanc likes "algorithmic computing."
Blanc is right to want to replace the word "intelligence," which is where the main problem lies.
Intelligence has long been considered a distinctive trait of humanity. In the 17th century (again according to the Robert dictionary), the word was used to designate a "human being as a thinking being, capable of reflection." With artificial intelligence, a machine is supposed to acquire this human capacity. It could distinguish, discuss, and even decide, like HAL 9000, the famous computer from Stanley Kubrick's 2001: A Space Odyssey, released in 1968.
Machines could, therefore, supplant man not just in physical capability (as has been the case for centuries) but also intellectually. According to an Ipsos survey for BCG Gamma, 50% of French and German people fear the effects of AI on their jobs, as do 47% in the United States, 45% in Britain, and 38% in Spain.
A powerful tool
For the moment, AI remains a myth. The concept presented at that Dartmouth seminar has yet to materialize. Machines "know," of course, how to beat the world's most capable humans at StarCraft II, Jeopardy!, or the Chinese game of Go. But these are extremely narrow competencies, and they devour vastly more energy than the human brain needs. The most powerful machines in the world are like mathematical geniuses incapable of stopping someone in the street to ask for directions.
And often, hidden behind artificial intelligence, is human stupidity, as Microsoft demonstrated in 2016 with its chatbot Tay, which was disconnected from Twitter less than a day after going into service because of its horrific sexist and racist output. Or as Amazon demonstrated in 2018, with a fully automated recruitment system that systematically screened out female candidates.
So what is really behind what we conveniently call Artificial Intelligence? The truth is simple: it is a combination of the computer and the internet. The computer, with an information processing capacity that has grown for half a century at the exponential rate of Moore's Law (the density of transistors on a chip doubles roughly every two years). And the internet, with its colossal capacity to gather and transmit data. As Michel Volle, co-president of the Institute of Economic and Statistical Training, spells it out: "Artificial Intelligence = Statistics + Computing."
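To make Volle's equation concrete, here is a minimal, purely illustrative Python sketch (the figures and variable names are invented for the example, not taken from the article): what a product brochure might sell as "AI" is, underneath, ordinary statistics executed by a computer, in this case a straight line fitted by least squares and then used to make a prediction.

```python
import numpy as np

# Hypothetical data, invented for this illustration: daylight hours vs. electricity demand.
daylight_hours = np.array([8.0, 9.5, 11.0, 12.5, 14.0, 15.5])
demand_gwh = np.array([52.0, 49.5, 47.0, 45.5, 43.0, 41.0])

# "Statistics": fit a straight line to the data by ordinary least squares.
slope, intercept = np.polyfit(daylight_hours, demand_gwh, deg=1)

# "Computing": apply the fitted model to a case it has never seen.
prediction = slope * 13.0 + intercept
print(f"Predicted demand for 13 hours of daylight: {prediction:.1f} GWh")
```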
Short and sweet, this equation needs just one further point to be complete: this computing power and these mountains of data have made possible forms of automated learning ("machine learning" and then "deep learning"). This is how researchers have been able to make great strides over the past decade in visual and voice recognition, and they will surely make more spectacular progress in the years to come.
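As a hedged illustration of what "automated learning" adds, the toy loop below (an assumed example, not drawn from the article) adjusts a single parameter from data by trial and error rather than being programmed with the rule; scale the data and the number of parameters up by many orders of magnitude and you get the machine-learning and deep-learning systems described here.

```python
# Toy "machine learning", assumed for illustration: learn the multiplier in y = 3 * x
# from example pairs alone, instead of being told the rule explicitly.
data = [(1.0, 3.0), (2.0, 6.0), (4.0, 12.0)]  # (input, desired output) pairs

w = 0.0              # the single parameter the machine will adjust
learning_rate = 0.02

for _ in range(500):
    for x, y in data:
        error = w * x - y                 # how far off the current guess is
        w -= learning_rate * error * x    # step down the gradient of the squared error

print(f"Learned multiplier: {w:.3f}")     # converges close to 3.0
```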
And yet, what we call Artificial Intelligence is still nothing but a tool. A tool of fantastic power that will transform how businesses are organized, but a tool nonetheless. It is a "technological platform," explain the economists Daron Acemoglu and Pascual Restrepo, that "could be deployed not just to automate, but also to reorganize production to create new heights of human productivity." But here too, Artificial Intelligence will only do what human intelligence decides.