
UK: Regulation Of Artificial Intelligence And Big Data In The UK

Source: mondaq.com

As the seat of the first Industrial Revolution, the UK has a long history of designing regulatory solutions to the challenges posed by technological change. However, regulation has often lagged behind – sometimes very far behind – new technology. Artificial Intelligence (AI) is proving no exception to this historical trend.

Is a specialist regulator needed?

In the first place, there is currently no consensus on whether the development of AI requires its own dedicated regulator or specific statutory regime. Gathering evidence for its May 2018 report “AI in the UK”, the Select Committee on AI of the House of Lords found that opinions were divided into three camps: “those who considered existing laws could do the job; those who thought that action was needed immediately; and those who proposed a more cautious and staged approach to regulation” [1].

The first of these categories – where it was argued that existing laws were sufficient – included strong interest groups such as TechUK (a major trade association) and the Law Society of England and Wales. The Committee did not explicitly endorse their view, but it did reject the second option of creating a new regulator, concluding that “AI-specific regulation, at this stage, would be inappropriate” [2].

The Committee therefore favoured no more than an incremental approach to new regulation. Nonetheless, the caveat “at this stage” is important. The conclusion that AI-specific regulation is inappropriate is not universally accepted, and could easily change over time as difficult cases of algorithmic decision-making become more widely reported.

Moreover, by the time the Committee reported, the Government had already announced the creation of a Centre for Data Ethics and Innovation (CDEI), whose remit includes an ongoing inquiry into these questions.

The role of the CDEI

The establishment of the CDEI formed part of the UK Industrial Strategy, set out in November 2017 [3]. It is therefore explicitly viewed as a key part of the environment that will make the UK an attractive place for AI developers.

The CDEI was created in large part as a response to two reports issued in the previous year by the Science and Technology Committee of the House of Commons. In the first of these reports, entitled “The Big Data Dilemma”, the Committee proposed a body with the remit to address “the growing legal and ethical challenges associated with balancing privacy, anonymisation, security and public benefit” [4].

In the second report, “Robotics and Artificial Intelligence”, the Committee recommended the creation of a Commission on AI that would focus on “examining the social, ethical and legal implications of recent and potential developments in AI … as well as advising the Government of any regulation required on limits to its progression” [5].

In practice, the role of the CDEI combines both of these functions. As set out in the Industrial Strategy, its overriding purpose is to “review the existing governance landscape and advise the government on how we can enable and ensure ethical, safe and innovative uses of data, including AI”.

It took a further year, until November 2018, before the CDEI was established and the Government published its formal terms of reference [6]. These include: (i) “reviewing the existing regulatory framework to identify gaps”; (ii) “identifying steps to ensure that the law, regulation and guidance keep pace with developments”; and (iii) “publishing recommendations to government on how it can support safe and ethical innovation in data and AI through policy and legislation”.

The current regulatory landscape

It is important to note that the CDEI is not a regulator, nor even a proto-regulator, for AI. It is an advisory body to the Government whose work will cover the question of whether further regulatory provision needs to be made in respect of AI, but which itself has no regulatory powers. While the suggestion is that the CDEI will in due course be established on a statutory basis, there is no proposal that this fundamental limitation on its role will change.

Moreover, its resources are limited and its remit extends far beyond questions relating to AI. At the time of writing, the CDEI has recently published its first annual work programme. Within this, the main work of direct relevance is an inquiry into algorithmic bias, which is not due to report to the Government until March 2020.

The current UK regulatory landscape in relation to AI can therefore be summarised broadly as follows.

First, there is no specific legal provision for the regulation of the development of AI or the use of AI applications; however, a range of existing regulatory regimes may overlap this territory and be used to some extent to regulate these activities.

Of these regimes, the most significant single case is the data protection regime overseen by the Information Commissioner’s Office (ICO). It is important both because it exhibits the greatest overlap of subject matter with algorithmic decision-making by AI, and because the ICO is one of the few regulators whose remit extends to other branches of Government, and therefore has the ability to regulate uses of AI in the public as well as the private sector. Its role and remit are considered more fully below.

However, the ICO is not unique in having some regulatory responsibility in this area. This is also true of the UK Equality and Human Rights Commission, the Competition and Markets Authority, the Office of Communications and a range of other sector regulators, whose remits – and existing arrays of regulatory tools – give them the power to intervene when the use of AI affects citizens or consumers within the territory covered by their statutory powers.

The question is whether those regulators will have the institutional capacity and expertise to use those powers in respect of AI, or will sufficiently prioritise doing so against the competing demands on their limited resources. The answer is that this is highly doubtful. In its May 2018 report on “Algorithms in Decision-Making”, the House of Commons Science and Technology Committee thought that this was an important area for exploration by the CDEI [8], although it does not feature as a key aspect of that body’s initial work programme.

Second, the UK can be expected to explore, over time, whether additional detailed regulatory arrangements need to be made for specific AI use-cases. Of these, currently the most important and advanced piece of work relates to the use of AI in autonomous vehicles (AVs). In March 2018, the Government referred the regulatory framework for AVs to the Law Commission for England and Wales, and the Scottish Law Commission – bodies whose role is to examine major areas of law reform.

These bodies have already carried out a preliminary consultation and are now in the detailed policy-consideration phase of their work. However, they are not due to report to the Government until March 2021 on their analysis and final recommendations. Moreover, like the CDEI, they are advisory and not law-making bodies. Although their report will carry significant weight, and even if its recommendations were to be immediately accepted by the Government (which is far from certain), it would be at least a further two or three years before legislation to implement them could begin to find its way onto the statute book.

Third, it is inevitable that there will continue to be significant scrutiny of the adequacy of the regulation of AI, both by the nascent CDEI and by a range of Parliamentary select committees with an interest in this area (as well as many interested parties in the private sector).

While none of these bodies has the power to legislate to fill regulatory gaps that emerge, they may be expected, over time, to identify issues that Government, or existing regulatory bodies, will then be under pressure to address.

Conclusion

A great deal has been written and said about the regulation of AI in the UK. However, the reality is that there is currently no overall coherent approach to the regulatory challenges posed by the rapid development of AI applications.

The current landscape involves pressing into service existing regulators to use their powers – none of which were designed to address the specific issues raised by AI – as the need arises, while at the same time creating new institutional capacity (in the form of the CDEI) to keep the area under review, and subjecting specific important use-cases (like AVs) to a more detailed process of policy consideration.

In the long run, a more coherent regulatory environment may develop out of this incremental approach. However, all things considered, it is hard to avoid the truth of the judgment expressed by Jacob Turner that, despite the many fine words expressed on the subject, with respect to the UK’s regulation of AI, “specific policy developments remain elusive”.
