Source – livemint.com
Calls for ethical Artificial Intelligence are growing louder. AI has a great many applications and has been solving all manner of problems for humans. But such a powerful technology raises equally serious concerns about its possible misuse, and there have already been multiple cases of AI being used for malicious purposes.
Take deepfakes, for example: AI software that convincingly superimposes one person’s face onto another’s in video. The software has been abused to create fake pornographic content using the faces of celebrities such as Gal Gadot, Ariana Grande and Taylor Swift, among others.
Just recently, MIT researchers trained an AI on data from the “darkest corners of Reddit.” The result was Norman, billed as the world’s first psychopath AI.
Another recent example is Google’s Duplex, a natural-sounding AI assistant that makes phone calls on your behalf. Even though the technology is still in development, people have raised concerns over privacy and its possible misuse by marketers or political campaigns.
Perhaps this is why the likes of Elon Musk have long warned of an AI apocalypse. Musk has even gone so far as to say that AI could one day trigger World War 3.
So, where do we go from here? There are two options: abandon AI completely and forgo the benefits it can deliver, or explore solutions that could prevent a possible apocalypse. Although governments across the world are yet to formulate regulations for AI, technology firms are already working towards implementing an “ethical AI”.
Google is already setting the standard by pledging that it will not allow its AI technology to be used for weapons or combat. Google will not pursue AI applications in “technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints”, wrote CEO Sundar Pichai in a blog post.
“AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognise that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief,” he added.
But Google is not alone in this crusade. Technology firms such as Microsoft and Accenture are also making giant strides towards ethical AI.
Brad Smith, the President of Microsoft, who co-authored ‘The Future Computed: Artificial Intelligence and its role in society’ with Harry Shum, lays out six principles for AI: fairness; reliability and safety; privacy and security; inclusiveness; transparency; and accountability. According to Smith, a consensus on ethical AI alone is not enough.
“Because if we have a consensus on ethics and that is the only thing we do, then what we are going to find is that only ethical people will design AI systems ethically. That will not be good enough … We need to take these principles and put them into law,” he said at an event during a visit to Singapore in April this year. “Only by creating a future where AI law is as important a decade from now as, say, privacy law is today, will we ensure that we live in a world where people can have confidence that computers are making decisions in ethical ways.”
Accenture is taking a slightly different approach, scrutinising the data fed into AI systems at the earliest stages.
“Accenture’s ‘Teach and Test’ methodology ensures that AI systems produce the right decisions in two phases. The ‘Teach’ phase focuses on the choice of data, models and algorithms that are used to train machine learning. This phase experiments and statistically evaluates different models to select the best performing model to be deployed into production, while avoiding gender, ethnic and other biases, as well as ethical and compliance risks,” the company said in a note.
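Accenture’s note does not spell out how the ‘Teach’ phase works internally, but the underlying idea, evaluating several candidate models and screening each against a bias metric before deployment, can be sketched. The following is a minimal, hypothetical Python illustration on synthetic data; the candidate models, the demographic-parity gap metric and the 0.1 rejection threshold are all assumptions made for illustration, not Accenture’s actual tooling.

```python
# Hypothetical sketch of a "Teach"-style selection loop: compare candidate
# models on accuracy, screen each for a simple bias metric, and keep the
# best model that passes. An illustration only, not Accenture's tooling.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(0)

# Synthetic data: two features, a binary label, and a sensitive attribute.
X = rng.normal(size=(1000, 2))
group = rng.integers(0, 2, size=1000)          # e.g. group A vs group B
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

def parity_gap(model, X, g):
    """Absolute gap in positive-prediction rates between the two groups."""
    preds = model.predict(X)
    return abs(preds[g == 0].mean() - preds[g == 1].mean())

candidates = {
    "logistic_regression": LogisticRegression(),
    "decision_tree": DecisionTreeClassifier(max_depth=3),
}

best_name, best_score = None, -1.0
for name, model in candidates.items():
    score = cross_val_score(model, X_tr, y_tr, cv=5).mean()  # accuracy
    model.fit(X_tr, y_tr)
    gap = parity_gap(model, X_te, g_te)
    print(f"{name}: accuracy={score:.3f}, parity gap={gap:.3f}")
    # Reject models whose bias metric exceeds the chosen threshold.
    if gap < 0.1 and score > best_score:
        best_name, best_score = name, score

print("Selected model:", best_name)
```

A production system would audit far more than one metric and one attribute, but the shape of the loop (evaluate, audit for bias, then promote to production) is the point of the ‘Teach’ phase as described.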
“The adoption of AI is accelerating as businesses see its transformational value to power new innovations and growth,” said Bhaskar Ghosh, group chief executive, Accenture Technology Services. “As organisations embrace AI, it is critical to find better ways to train and sustain these systems – securely and with quality – to avoid adverse effects on business performance, brand reputation, compliance and humans.”
The general outlook is that the onus is on humans to build AI that meets ethical standards.
“As AI algorithms are created and trained by humans, there is a very high possibility of human bias being built into these algorithms. AI systems are superior to humans in speed and capability which, when used in malicious ways, can cause damage that is much higher in magnitude. Just as in humans, mistakes are unavoidable for AI algorithms too. The issue is the lack of accountability that would otherwise be a deterrent to negative actions,” said Arjun Pratap, CEO and founder of EdGE Networks, an HR technology solutions provider that uses data science and artificial intelligence.
Back in India, policymakers have started taking AI seriously. Niti Aayog recently released a discussion paper on the technology. Titled “National Strategy for Artificial Intelligence”, the paper touches on the question of ethics alongside the technology’s potential benefits.
“As AI-based solutions permeate the way we live and do business, questions on ethics, privacy and security will also emerge. Most discussions on ethical considerations of AI are a derivation of the FAT framework (Fairness, Accountability and Transparency). A consortium of Ethics Councils at each Centre of Research Excellence can be set up and it would be expected that all COREs adhere to standard practice while developing AI technology and products,” said the paper.
“Data is one of the primary drivers of AI solutions, and thus appropriate handling of data, ensuring privacy and security, is of prime importance. Challenges include data usage without consent, risk of identification of individuals through data, data selection bias and the resulting discrimination by AI models, and asymmetry in data aggregation,” it added. The paper suggests establishing data protection frameworks and sectoral regulatory frameworks, and promoting the adoption of international standards.
Experts suggest that AI can have a positive impact, provided the government is able to implement the right regulations.
“In India, businesses have only started harnessing AI in the recent past. It is definitely an area India should focus on as AI can create a huge impact in a positive and ethical way – provided the Indian government implements the necessary regulations and standards to ensure its responsible usage. A team that audits and certifies AI systems on factors such as trust, safety and accountability would certainly help in keeping malicious use of AI in check. It would foster safe, explainable and unbiased use of AI,” said Pratap of the steps needed to bring about ethical AI in India.