Source: theguardian.com
In a modern company like Amazon, almost all human activity is directed by computer programs. These programs not only monitor workers’ actions but also help choose who should be employed. Yet it emerged last week that the company had scrapped an attempt to use artificial intelligence to select workers on the basis of their CVs, because the results consistently discriminated against women.
This is a welcome decision that illuminates two important facts about machine learning, the most widely used technique of AI at the moment. The technical or operational point is that these programs, no matter how fast they learn, can only learn from the data presented to them. If this data reflects historic patterns of discrimination, the results will perpetuate those patterns.
That’s what Amazon found: by training its AI on the CVs of applicants hired in the past, who were overwhelmingly men, it taught the program to discriminate against applications from women. Since the program had access to immense amounts of data about the applicants, it was able to infer their sex from factors such as whether they had attended an all-women college. And since it had neither conscience nor consciousness, the machine behaved as if being female were a sign of inferiority, just as the industry it learned from had done.
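To make that mechanism concrete, here is a minimal sketch in Python. It is entirely synthetic and hypothetical, not Amazon’s system or data: a classifier is trained on historic hiring decisions that were biased against women and, although it is never shown applicants’ sex, it learns to penalise a proxy feature such as attendance at a women’s college.

```python
# A minimal, hypothetical sketch (not Amazon's system): a model trained on
# biased historic hiring labels learns to penalise a proxy for being female,
# even though sex itself is never given to it as a feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)                    # genuine, sex-neutral ability
is_woman = rng.integers(0, 2, size=n)         # hidden attribute, NOT a feature
# A proxy visible on the CV: some women attended a women's college.
womens_college = (is_woman == 1) & (rng.random(n) < 0.3)

# Historic labels: hiring tracked skill but was biased against women.
hired = skill + rng.normal(scale=0.5, size=n) - 0.8 * is_woman > 0

X = np.column_stack([skill, womens_college])  # what the model actually sees
model = LogisticRegression().fit(X, hired)
print(model.coef_)  # the coefficient on womens_college comes out negative
```

The negative weight on the proxy is not programmed in anywhere; it is simply the pattern the model finds in the biased labels it was given.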
This is an instance of a wider problem that has appeared in more sinister contexts, such as decisions over which prisoners should get parole. It is also one that is extremely hard to surmount. When you ask computers to detect patterns in data, which is the short description of machine learning, the patterns they find are usually genuine ones, even if we have not noticed them before.
This kind of mesh of inference is implicit in the way language works, as Joanna Bryson points out; she is one of the authors of a ground-breaking study of how machine learning can expose the prejudices embedded in our use of language. We can’t get away from this: language encodes both the wisdom and the folly of all those who have used it before us. Patterns of language describe the way the world is, whether or not it ought to be that way. Distinguishing the “ought” from the “is” of usage therefore requires a sustained collective effort.
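The study in question (Caliskan, Bryson and Narayanan, 2017) measured such prejudices with word-embedding association tests. Below is a simplified sketch of that idea; the tiny vectors are invented purely so the arithmetic is visible, whereas real tests use pretrained embeddings such as GloVe.

```python
# A simplified word-embedding association test in the spirit of the study
# Bryson co-authored (Caliskan, Bryson & Narayanan, 2017). The vectors here
# are made up for illustration; real tests use pretrained embeddings.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word, male_words, female_words):
    # Positive: the word sits closer to the male terms than the female terms.
    return (np.mean([cosine(word, m) for m in male_words])
            - np.mean([cosine(word, f) for f in female_words]))

career = np.array([0.9, 0.1, 0.0])          # pretend embedding of "career"
family = np.array([0.1, 0.9, 0.0])          # pretend embedding of "family"
male_words = [np.array([0.8, 0.2, 0.1])]    # e.g. vectors for "he", "man"
female_words = [np.array([0.2, 0.8, 0.1])]  # e.g. vectors for "she", "woman"

print(association(career, male_words, female_words))  # positive
print(association(family, male_words, female_words))  # negative
```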
The technical aspects of the story are not the only salient ones. What matters for the future is the recognition that the responsible actor was Amazon itself, the company, not the AI it built and used. Discussions of AI too often proceed as if the technology will appear among us like the monolith in the film 2001: something alien and immensely powerful, but immediately recognisable and clearly distinguished from the hominids around it. It’s not happening like that at all.
AI is already all around us, and it is always a hybrid or symbiotic system, made up as much of the humans who tend the programs and feed them data as of the computers themselves. Companies such as Google or Amazon – and even traditional media and retailers – are now partly constituted by the operations of their computer systems. It is therefore essential that moral and legal responsibility be attached to the human parts of the system.
We hold Facebook or Google responsible for the results of their algorithms. The example of Amazon shows that this principle must be extended more widely. AI is among us already, and the companies, the people and the governments who use it must be held accountable for the consequences.