Source – deccanchronicle.com
Artificial intelligence (AI) is developing quickly. Apple’s intelligent personal assistant, Siri, can listen to your voice and find the nearest restaurant; self-driving cars have become a reality; and IBM’s quiz show-winning AI system ‘Watson’ is now being deployed to improve cancer treatment. But while researchers and experts continue to harness AI’s “revolutionary” potential, the celebration could be premature. Microsoft’s chatbot on Twitter transformed into a Hitler-loving, incest-promoting robot in 2016; Wikipedia edit bots have repeatedly feuded over page edits; and two chatbots on the popular Chinese messaging application QQ were taken offline last week after they went off-script. Recently, Facebook also shut down one of its AI systems after its chatbots allegedly developed their own language, though the social media giant clarified that the system had not gone rogue and that the programme was closed because it was not delivering any benefit to the company.
As instances of AI machines going awry grow, experts and researchers in the field of artificial intelligence have cautioned that the technology is inherently unpredictable. “Artificial intelligence is, of course, going to be unpredictable. Any really complicated controller can behave in unexpected ways. We’ll always have to be careful about what aspects of our lives we put into the ‘hands’ of artificial intelligence. We’d want to vet these things really well before handing life-or-death tasks over to them — like driving, to give just one topical example,” said Michael Graziano, a neuroscientist and author of the book Consciousness and the Social Brain.
Two iconic entrepreneurs, Facebook CEO Mark Zuckerberg and inventor Elon Musk, are locked in a bitter tussle over the use of artificial intelligence. In 2014, addressing students at MIT, Musk likened developing AI to “summoning the demon”. “AI is a fundamental existential risk for human civilisation, and I don’t think people fully appreciate that,” he said. Calling for oversight in 2017, Musk stated, “We need to be proactive about regulation instead of reactive. Governments can’t afford to wait until a whole bunch of bad things happen.” Responding to Musk’s remarks, Zuckerberg, on July 23, called them “irresponsible”. “I think people who are naysayers and try to drum up these doomsday scenarios — I just don’t understand it. It’s really negative and in some ways I actually think it is pretty irresponsible,” said Zuckerberg.
Two days later the war of words got ugly, with Musk tweeting, “I’ve talked to Mark (Zuckerberg) about this. His understanding of the subject is limited.” Musk, however, isn’t the only one who takes a grim view of AI. Aaron M. Bornstein, a Princeton neuroscientist, believes that AI may entrench inequality and oppression. “More likely — and it is already happening — the ways humans use machine learning will worsen existing inequality and oppression by making it seem objective, and harder to overcome,” said Bornstein.
If AI machines are eventually tipped to take over important aspects of human life, can experts instil values and human-like motivation in them? Michael Graziano, who is researching how consciousness might be engineered in AI, believes that AI can be made conscious. “The mind is something migratable to artificial devices. The technology is moving in that direction rapidly. A really convincing version, like Data, the android from Star Trek, might be beyond our lifetime, but that sort of thing and more will inevitably come,” said Graziano.
Oxford philosopher Nick Bostrom holds a diametrically opposite view. He has argued, “We cannot blithely assume that a superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans: scientific curiosity, benevolent concern for others…”

AI’s limits are already visible closer to home. Facebook’s suicide-prevention AI system has failed to stop people from taking their lives in India: two cases of live-streamed suicide were reported in the country after Facebook deployed the system in January. “Using AI to identify people who are thinking about suicide, and then reaching out to them, may be very helpful. But even if it helps to some degree, for some people, it obviously won’t solve the whole problem, so you’ll always be able to point to some spectacular tragedies. Communication technology seems to enable certain kinds of behaviours. I don’t think giving emotions to AI would make any obvious difference to that effort, at least not right now. Human beings are good at emotions, and yet not very good at suicide prevention,” said Graziano.