Experts invited to a House of Lords special inquiry committee meeting on artificial intelligence (AI) warned of the brittleness of the technology and the lack of understanding around it.
Following on from the Artificial Intelligence Committee's report, published in April 2018, the expert panel was invited to discuss how the opportunities and risks of AI had changed over the past year, particularly in light of the coronavirus pandemic. Among the topics discussed was the lowering of controls that protect public data and ethical standards.
Michael Wooldridge, professor of computer science at the University of Oxford, spoke about why healthcare would not benefit immediately from AI. “If it is to be done right, transferring AI techniques from labs to GPs and hospitals is a long process,” he said.
From a risk perspective, Wooldridge said there were “endless examples” of data abuse. “For AI to work, it needs data. This is a huge challenge. Society has not yet found its equilibrium in this new world of big data,” he said.
Wooldridge also voiced concern that people would rely too heavily on AI technology to make decisions. “The tech is really brittle. It is important we don’t become complacent and naively rely on AI instead of human judgement,” he warned.
When asked about ethical barriers, Wendy Hall, regius professor of computer science at the University of Southampton, said: “In the UK, we have a lot of people studying ethics. We need to develop some practical guidelines.”
Hall said AI ethics presents major issues for society. People also need to understand where to take control of their data and where sharing it is needed, she said.
“Morals and ethics are not the same thing,” she said. “We have to self-regulate. We have to get people to understand their own responsibilities.”
Hall urged companies to get involved in the AI ethics conversation. She suggested that regulators should try to develop simple frameworks and audit arrangements that can be applied easily. Hall predicted that actuaries, accountants and lawyers would take on this work, and that new careers were likely to emerge, helping companies audit algorithms for bias, fairness, accountability and ethics.
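What such an audit might check can be illustrated with a short sketch. The code below computes one commonly used fairness measure, the demographic parity difference, over a model's yes/no decisions; the function name, the toy data and the 0.1 review threshold are assumptions made for this illustration, not anything specified by the committee.

```python
# Minimal sketch of one check an algorithm audit might run:
# the demographic parity difference between two groups.
# Names, data and the 0.1 threshold are illustrative assumptions.

def demographic_parity_difference(predictions, groups):
    """Gap in favourable-outcome rates between groups "a" and "b".

    predictions: 0/1 model decisions (1 = favourable outcome)
    groups: group label ("a" or "b") for each prediction
    """
    rates = {}
    for label in ("a", "b"):
        outcomes = [p for p, g in zip(predictions, groups) if g == label]
        rates[label] = sum(outcomes) / len(outcomes)
    return abs(rates["a"] - rates["b"])

# Toy loan decisions for two demographic groups.
preds = [1, 1, 0, 1, 0, 0, 0, 1]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_difference(preds, grps)
print(f"Demographic parity difference: {gap:.2f}")  # 0.50
if gap > 0.1:  # illustrative threshold an auditor might apply
    print("Flag for human review: possible disparate impact.")
```

In practice an auditor would look at several such measures together, since satisfying one fairness criterion often means falling short on another.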
Daniel Susskind, a fellow in economics at Balliol College, urged the government to reinstate the data and privacy controls that were lowered to support coronavirus track and trace applications. “An important task in the months to come is to rein back that power we granted to tech companies and states around the world,” he said.
Discussing the need for ethical AI, Susskind said: “If we are honest, the finest computer scientists are not necessarily hired for the sensitivity of their moral reasoning. There is a burden on engineers to make these technologies as transparent as they can be to ensure users can scrutinise them.”
In the past, AI systems tended to be modelled on human decision-making, but systems now use deep learning. “Today’s systems are far more opaque and less transparent,” Susskind warned.
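To make that contrast concrete, the sketch below (an assumed illustration, not an example given at the hearing) compares a hand-written rule, whose reasoning can be read straight off the code, with a tiny neural network, where the same decision emerges from numeric weights that offer no such explanation.

```python
import math

# 1. A system modelled on human decision-making: the logic is the code.
def rule_based_approval(income, debt):
    """Approve if income comfortably exceeds debt - readable at a glance."""
    return income > 2 * debt

# 2. A deep-learning-style system: the logic lives in learned weights.
#    These weights are made up for illustration; a real network learns
#    millions of them from data.
WEIGHTS_HIDDEN = [[0.8, -1.2], [-0.5, 0.9]]
WEIGHTS_OUT = [1.1, -0.7]

def learned_approval(income, debt):
    """The same decision, but the 'why' is buried in the weights."""
    hidden = [math.tanh(w[0] * income + w[1] * debt) for w in WEIGHTS_HIDDEN]
    score = sum(w * h for w, h in zip(WEIGHTS_OUT, hidden))
    return score > 0

print(rule_based_approval(50_000, 10_000))  # True, and we can say exactly why
print(learned_approval(50_000, 10_000))     # True - but the reason is opaque
```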
The availability of data to improve AI algorithms was another topic discussed at the meeting. When asked whether using data for the public good conflicts with protecting privacy, Carly Kind, director of the Ada Lovelace Institute, described the choice as “a false dichotomy”.
While the pandemic has demonstrated the value of using data for the public good, she said, people still wanted guarantees around privacy. Although the General Data Protection Regulation (GDPR) stood up well during the pandemic, Kind pointed out that researchers had often struggled to access data held by companies, for example when trying to stem misinformation spreading on social media platforms.
She warned that much of the data about the public is held by a few very large US companies, not the public sector. Kind said Apple, Amazon, Facebook and Google were in a much better position than public sector organisations to advance AI because they had monopolistic access to that data.
“To create a more even playing field, we need to break up monopolistic control of platforms,” she said.