Late last year, I complained to Richard Socher, chief scientist at Salesforce and head of its AI projects, about the term “artificial intelligence.” I argued that we should use more accurate terms such as machine learning or smart machine systems, because “AI” creates unreasonably high expectations: the vast majority of applications are essentially highly specialized machine learning systems that do specific tasks, such as image analysis, very well but do nothing else.
Socher said the term rankled him, too, when he was a post-graduate, and that he preferred other descriptions such as statistical machine learning. He agrees that the “AI” systems we talk about today are very limited in scope and misleadingly named, but these days he thinks of AI as “Aspirational Intelligence.” He likes the technology’s potential even if the label doesn’t describe what it does today.
I like Socher’s designation of AI as Aspirational Intelligence, but I’d prefer not to further confuse the public, politicians, and even philosophers about what AI is today: nothing more than software in a box, a smart machine system with no human qualities and no understanding of what it does. It is a specialized machine that has nothing to do with the systems that these days are called Artificial General Intelligence (AGI).
Before ML systems co-opted it, the term AI was used to describe what AGI describes today: computer systems that try to mimic humans, their rational and logical thinking, and their understanding of language and cultural meanings, and to eventually become some sort of digital superhuman, one that is incredibly wise and always able to make the right decisions.
There has been a lot of progress in developing ML systems but very little progress on AGI. Yet advances in ML are routinely attributed to advances in AGI, and that leads to confusion and misunderstanding of both technologies.
Machine learning systems, unlike AGI, do not try to mimic human thinking. They use very different methods: they are trained on large amounts of specialist data and then apply that training to the task at hand. In many cases, ML systems make decisions without any explanation, and it is difficult to determine the value of their black-box outputs. But if those results are presented as artificial intelligence, they command far more respect from people than they likely deserve.
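To make the distinction concrete, here is a minimal sketch of the kind of narrow, black-box system I mean. It uses Python and scikit-learn; the dataset and model choices are illustrative assumptions on my part, not anything Socher or Salesforce describes. The program learns exactly one task, classifying small images of handwritten digits, from labeled examples; it cannot do anything else, and it offers no human-readable reason for any answer it gives.

# A minimal, illustrative sketch of a specialized "black box" ML system.
# It is trained on one kind of data (8x8 images of handwritten digits)
# and can do nothing else; it also gives no reason for its predictions.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

digits = load_digits()  # labeled specialist data: images plus digit labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)  # "training" is statistical fitting, not thinking

print("accuracy on held-out digits:", model.score(X_test, y_test))
print("prediction for one image:", model.predict(X_test[:1]))
# The model outputs a label with no explanation, and it is meaningless
# to ask it about anything other than 8x8 digit images.

Calling this program “artificial intelligence” tells you nothing about how it works; calling it statistical machine learning, as Socher once preferred, describes it exactly.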
For example, when ML systems are used in applications such as recommending prison sentences but are described as artificial intelligence systems, they gain a higher standing with the people using them. The label implies that the system is smarter than any judge. If the term machine learning were used instead, it would underline that these are fallible machines and encourage people to treat the results with some skepticism in such critical applications.
Even if we do develop advanced AGI systems in the future, we should continue to encourage skepticism, and we should lower our expectations for their ability to augment human decision making. It is difficult enough to find and apply human intelligence effectively; why would artificial intelligence be any easier to identify and apply? Dumb and dumber do not add up to a genius. You cannot aggregate IQ.
As things stand today, these mislabeled AI systems are discussed as if they were well on their way to jumping from highly specialized, non-human tasks to becoming full AGI systems that can mimic human thinking and logic. This has resulted in warnings from billionaires and philosophers that those future AI systems will likely kill us all, as if a sentient AI would conclude that genocide is rational and logical. Genocide certainly might appear to be a winning strategy if the AI system were trained on human behavior across recorded history, but that is not how such a system would be built.
There is no rational logic for genocide. Future AI systems would be designed to love humanity and programmed to protect people and avoid causing them harm. They would likely operate very much in the vein of the last stanza of Richard Brautigan’s 1967 poem All Watched Over By Machines Of Loving Grace: