Source: thehindu.com
What is Artificial Intelligence (AI)? How do you build it? How can it go wrong, and what does our future look like with the increased use of AI?
These were just some of the questions discussed at the launch of the book, A Human’s Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control by Kartik Hosanagar, at the Bangalore International Centre on June 8. Kartik, an alumnus of BITS Pilani and currently a tenured professor at Wharton, was in conversation with Ravi Gururaj, founder-CEO of QikPod.com.
Turing test
Addressing a packed auditorium, Ravi kicked off the interaction by asking Kartik about a story mentioned in the book that took place all the way back in 1780.
Hungarian inventor Wolfgang von Kempelen claimed to have invented a chess-playing robot and invited Benjamin Franklin, then US ambassador to France, to play against it. However, as Kartik put it, to much laughter from the audience, it was “jugaad AI”, as there was a person inside the robot.
The true origins of AI trace back to Alan Turing, who in 1950 published a paper posing the question ‘Can machines think?’ and devised what came to be known as the Turing test.
Kartik said, “If you want computers to do intelligent things, the question is how? There are many ways to build AI. One, which is used quite a lot, is the expert system approach. You observe human experts doing intelligent things and you ask them how they did it. These systems work reasonably well but, at the end of the day, they cannot match human intelligence. The second approach, which is the dominant approach today, involves observing people in action, collecting a lot of data, and having the system extract patterns from that data. This is the statistical approach known as machine learning.”
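To make the two approaches concrete, here is a minimal sketch, not from the book or the talk, assuming Python and the scikit-learn library; the toy loan-approval data and the hand-written rule are invented purely for illustration.

    # Two ways to "build AI" for the same toy decision: hand-coded expert rules
    # versus a model that extracts the pattern from historical data.
    from sklearn.linear_model import LogisticRegression

    # Toy loan-approval records: [monthly_income, existing_debt] and past decisions.
    X = [[12, 1], [4, 3], [20, 5], [6, 6], [15, 0], [3, 2]]
    y = [1, 0, 1, 0, 1, 0]  # 1 = approved, 0 = rejected

    # Approach 1: expert system -- ask a human expert for their rule and code it up.
    def expert_rule(income, debt):
        return 1 if income > 2 * debt else 0

    # Approach 2: machine learning -- let the model infer the pattern from the data.
    model = LogisticRegression().fit(X, y)

    applicant = [10, 2]
    print("expert rule says:", expert_rule(*applicant))
    print("learned model says:", model.predict([applicant])[0])

The first approach encodes the expert’s reasoning directly; the second never sees that rule at all and recovers a similar decision boundary from the recorded decisions alone.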
Kartik added that modern AI systems already shape decisions in many everyday and high-stakes settings.
He said, “Facebook’s algorithm figures out what stories to show you and which not to bother showing, companies are working on AI-based medical diagnosis, recruitment is done based on algorithms screening resumes, and in courtrooms in the US, there are algorithms that predict the likelihood that the defendant will re-offend, and the judge passes the sentence based on that.”

Ravi then asked the big question: what happens if the algorithm is wrong? Kartik said that for every example he provided in the book, he also gave counterexamples of things going wrong.
Things going wrong
“In 2010, there was a ‘flash crash’ in the stock market. One trillion US dollars of market value was wiped out in 34 minutes. Analysis showed that the algorithms were feeding off each other, each selling based on what the others were doing. Another example is a study that found that Amazon’s resume-screening algorithm had a gender bias.”
As for where these biases come from, Kartik said: “It’s not like a programmer is programming these biases in and saying reject the application when it’s a woman.
Origin of biases
“As I mentioned, machine learning is all about learning from data. So, the question is what kind of data do you feed the system? So, Amazon would have said let us look at all the applicants, let us look at who got the actual job offer, let us look at who actually got promotions in the workplace and let’s now create an algorithm that mimics that. If there was gender bias, conscious or unconscious, then the algorithm picks that [up] and becomes biased. The data becomes the driver of the AI behaviour.”
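As a hedged illustration of that point, consider the toy sketch below (invented data and code, not Amazon’s system): a model trained on historical hiring decisions in which equally experienced women were rejected more often reproduces that skew, even though gender is never “programmed in” as a rule.

    # Illustrative only: invented hiring data where women with the same experience
    # were rejected more often in the past. The model learns to mimic that history.
    from sklearn.linear_model import LogisticRegression

    # Features: [years_of_experience, is_woman]; labels are past hiring decisions.
    X = [[5, 0], [5, 1], [7, 0], [7, 1], [3, 0], [3, 1], [8, 0], [8, 1]]
    y = [1, 0, 1, 0, 0, 0, 1, 1]

    model = LogisticRegression().fit(X, y)

    # Two otherwise identical candidates who differ only in the gender flag.
    man, woman = [6, 0], [6, 1]
    print("P(hire | man)   =", round(model.predict_proba([man])[0][1], 2))
    print("P(hire | woman) =", round(model.predict_proba([woman])[0][1], 2))
    # The gap comes entirely from the training data, not from any explicit rule.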
As for who controls the knob and polices this, as Ravi put it, Kartik said, “Right now, it’s the data scientist who is writing 20 to 30 lines of code. The question is, is the person also adding a bunch of tests? For example, tests for gender bias. It comes down to the data scientists and what tests they do. I don’t think the mindset of testing for bias, testing for security issues, is there in data science.”
He stated, “Companies that are deploying algorithms in socially important settings should have an audit process that goes through very formal checks. Even things such as data security, privacy and fairness. Tests like these are going to be important when looking to deploy algorithms in socially important settings.”
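The sketch below, again a simplified assumption rather than anything prescribed in the book, shows one kind of pre-deployment check such an audit might include: comparing the algorithm’s selection rates across gender groups against a chosen threshold (the figures and the four-fifths cut-off are illustrative).

    # A simple pre-deployment fairness check: compare selection rates by group.
    def selection_rate(decisions):
        return sum(decisions) / len(decisions)

    def gender_bias_check(decisions_men, decisions_women, min_ratio=0.8):
        """Flag the model if one group's selection rate falls below min_ratio
        (the common 'four-fifths' rule of thumb) of the other group's rate."""
        rate_m = selection_rate(decisions_men)
        rate_w = selection_rate(decisions_women)
        ratio = min(rate_m, rate_w) / max(rate_m, rate_w)
        return ratio >= min_ratio, ratio

    # Example: the model's shortlisting decisions (1 = shortlisted) per group.
    passed, ratio = gender_bias_check([1, 1, 0, 1, 1], [1, 0, 0, 1, 0])
    print("passes gender-bias check:", passed, "| rate ratio:", round(ratio, 2))

In practice an audit would cover more than one such metric, including the data security and privacy checks Kartik mentions, but the idea is the same: the tests run before the algorithm is deployed, not after something goes wrong.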
The floor was then thrown open to the audience. Questions ranged from ethical AI and when AI would be used in the farm sector to the risk to privacy, conscious AI, and fintech.
Infosys founder NR Narayana Murthy, who was present in the audience, said, “The ability to define purpose and goal is truly what distinguishes a human being from a machine…”
“I think you just made a brilliant point,” Kartik replied. “We keep asking, as AI can match what humans are doing, what is left for humans to do? Well-defined tasks are very easy for AI systems to do. For example, if we are talking about companies, creative tasks such as where should the company be headed? Those are very different kinds of problems. The AI system can’t say, ‘okay, the company should be headed here’. It can help inform that decision. That’s where AI will be relevant.
Role of AI
“In fact, a lot of people worry about AI and job loss and indeed, that will happen. But I believe AI will play a huge augmentation role. This question of purpose and goal will be informed by AI systems that help us identify the most relevant data we should look at and make us more efficient.”
He added, “Overall, I am an optimist about AI.” The interaction ended with Ravi inviting Kartik back every two years to give an update on AI.
Kartik responded, to much laughter from the audience, “Two years from now, I will have an AI system to answer your questions!”