
Trustworthy artificial intelligence – is new EU regulation coming for AI?

Source: siliconrepublic.com

The new president of the European Commission, Ursula von der Leyen, committed to introducing a new European regulation for artificial intelligence (AI) in Europe during her first 100 days in office. While a fully fledged regulation is unlikely in that timeframe, we can expect to see a vision for a new regulatory framework for AI in Europe very soon, possibly this month.

What can we expect from such a regulation and what should AI developers and businesses be doing to prepare for it?

The EU is positioning itself as a leader in trustworthy, human-centric artificial intelligence. The European Commission set out its vision for AI, which supports “ethical, secure and cutting-edge AI made in Europe”.

Three pillars underpin the Commission’s vision: increasing public and private investments in AI to boost its uptake; preparing for socio-economic changes; and ensuring an appropriate ethical and legal framework to strengthen European values.

Trustworthy AI

To implement this vision, the Commission established the High-Level Expert Group on Artificial Intelligence (HLEG-AI), which published its Ethics Guidelines for Trustworthy AI in April 2019. These guidelines have received considerable international attention and are widely regarded as the most comprehensive framework for ethical and trustworthy AI in the world.

Trustworthiness is a prerequisite for people and societies to develop, deploy and use AI systems. If AI systems – and the human beings behind them – are not demonstrably worthy of trust, the uptake of AI will be hindered.

Why does this matter? AI offers vast social and economic benefits in health, education, transport and sustainable development, and trust is key to realising them. As with many other domains, such as aviation, nuclear power or food safety, it is not simply the components of the AI system but the system in its overall context that may or may not inspire trust.

Lawful, ethical, robust

In its work, the European Commission’s HLEG-AI, building upon the EU’s Charter of Fundamental Rights, defined ‘trustworthy AI’ applications along three axes: they must be lawful, ethical and robust. To make the concept more practical, the HLEG-AI translated these three components into a set of requirements that AI systems must satisfy in order to be considered trustworthy.

Trustworthy AI systems must: protect human agency and ensure human oversight of their operation and impact; be technically and environmentally robust and safe to use; respect individual privacy and be based on good governance; ensure they are non-discriminatory and fair; protect societal and environmental wellbeing; and be transparent and accountable.

The operationalisation of trustworthy AI in practice is defined through an assessment list that the HLEG-AI has developed and is currently refining based on broad public consultation across Europe, as well as deep-dive interviews with representative stakeholder organisations.

We can expect that the upcoming regulatory context for artificial intelligence in the EU will be closely aligned with the principles of trustworthy AI, although it is important to note that the HLEG-AI has no role in drafting such regulation.

Facial recognition

There is a growing sense that rather than introducing a generic AI regulation, there will be a more nuanced risk-based approach, possibly one that is application and technology-specific. Some of the high-risk domains that generate a significant amount of debate include healthcare, judicial decision-making, and mass citizen surveillance.

For example, facial recognition raises issues such as a person’s right to privacy, the gathering of personal data without consent and the potential for discrimination. We are likely to see restrictions on the use of facial recognition, possibly even a ban for some period of time in specific settings.

Some of the technology-specific issues relate to how we can ensure that, for example, data-driven AI systems are trained and rigorously evaluated in order to be confident that they are free of harmful bias.

Irish opportunity

Ireland has a great opportunity to become a leader in trustworthy AI. Ireland is the European home to many of the world’s leading companies in data, AI and technology. There is significant national strength in the commercial, academic and civil society spheres. Trustworthy AI will become a commercial imperative.

Even if a new European AI regulatory framework were not to materialise, consumers are becoming highly sensitive to personal data privacy, the impact of technology on the integrity of democracy, and the influence of personalisation and targeting on individual autonomy. Trustworthy AI will become the de facto standard for AI-based technologies to be accepted by consumers.

We can, and should, take a leadership role here. Ireland has an excellent reputation in artificial intelligence and we have the opportunity to make Ireland a beacon of best practice in trustworthy AI.

Businesses in Ireland can lead by grasping this opportunity and developing protocols, tools and services that support the auditability and transparency of the AI systems they build and deploy.

Ireland is a country that is respected and trusted throughout the world because of our reputation in areas such as safe food production, our environment, our tradition of international peace-keeping and diplomacy, as well as our achievements in arts and culture. We should establish a world-leading reputation for trustworthy AI. We have the ingredients, the expertise and the ecosystem to make it happen.
