Source: businessworld.in
When the economist David Ricardo wrote about the impact of machinery in 1821, he could not have imagined its pervasive influence on our lives today. Robots controlled by supercomputers, self-driving cars, game-playing programs that outsmart human champions, and neural networks and other complex algorithms that mimic aspects of the human brain have become ubiquitous. AI helps, for instance, to triangulate and analyze data from radio-collared endangered animals and their ingrained behavioural patterns: if a natural predator approaches, deer scatter, but if a man with a gun appears they all move off in the opposite direction.
As more and more data is captured and data processing becomes ever more pervasive, the implications of AI extend beyond jobs and workers to all of society and our very existence. Are we sure we are not creating a Frankenstein-like entity that will imperil its own creator, like HAL 9000 in Arthur C. Clarke's 2001: A Space Odyssey, the rogue computer that turned on its makers?
In the race to lead in AI, Canada has built institutions such as the Canadian Institute for Advanced Research and its Neural Computation and Adaptive Perception programme, largely by fostering an open approach to research. Top companies like Facebook and Samsung have opened AI development centres in Canada. China is surpassing even the US in AI research today, and it has some 700 million smartphone users. With so many people using digital payment systems, a vast volume of data is generated, which firms such as Baidu and Tencent analyze to build AI proficiency in fields such as messaging bots and facial recognition.
While India must ramp up its act on the AI front, we should be careful about AI applications in sensitive areas such as the justice delivery system: it is important that the data used for analysis does not carry latent biases. In the US, for instance, AI-driven criminal-justice tools were found to disproportionately target African Americans because they had been trained on historical data shaped by long-standing prejudices against them. In America, the technology giants of Silicon Valley have pledged to work together to ensure that any AI tools they develop are safe, and many of the leading AI researchers in the West signed a 2015 open letter calling for a ban on autonomous weapons. But that should not make us complacent.
Unlike conventional technology, the depth and reach of AI are hard to fathom. In areas such as health, autonomous vehicles and defence it can make the difference between life and death; the attack on the Aramco plant and the killing of the Iranian military commander in Iraq show the precision that AI can bring to military operations. But we must also factor in unpredictable possibilities with fatal implications. What happens if an AI system goes berserk and triggers a retaliatory missile strike on the basis of its own unmediated, and perhaps miscalculated, assessment of a pre-emptive attack by the enemy? India should take the lead in raising the subject of ethics, safety and transparent standards for the use of AI that cannot be manipulated. At the same time, it must work strategically to build its own AI competencies in a coherent and rational manner.