In the era of artificial intelligence: safeguarding human rights

Source – opendemocracy.net

Humans and machines are destined to live in an ever-closer relationship. To make it a happy marriage, we have to better address the ethical and legal implications that data science carries.

Artificial intelligence, and in particular its subfields of machine learning and deep learning, may only be neutral in appearance, if at all. Underneath the surface, it can become extremely personal.

The benefits of grounding decisions in mathematical calculations can be enormous in many sectors of life. However, relying too heavily on AI, which inherently involves finding patterns beyond those calculations, can also turn against users, perpetrate injustices and restrict people’s rights.

In fact, AI can negatively affect a wide range of our human rights. The problem is compounded by the fact that decisions are taken on the basis of these systems without transparency, accountability or safeguards regarding how they are designed, how they work and how they may change over time.

Encroaching on the right to privacy and the right to equality

The tension between the advantages of AI technology and the risks for our human rights becomes most evident in the field of privacy. Privacy is a fundamental human right, essential in order to live in dignity and security. But in the digital environment, including when we use apps and social media platforms, large amounts of personal data are collected – with or without our knowledge – and can be used to profile us and produce predictions of our behaviour. We provide data on our health, political ideas and family life without knowing who is going to use that data, or for what purposes.
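
How such profiling works can be made concrete with a toy sketch. The following is purely hypothetical (all signals, numbers and labels are invented): a handful of innocuous-looking usage statistics suffice to train a model that guesses a sensitive attribute the user never disclosed.

```python
# Hypothetical illustration of profiling: predicting a sensitive,
# never-disclosed attribute from innocuous app-usage signals.
# All data below is invented; the point is the mechanism.
from sklearn.linear_model import LogisticRegression

# Each row: [late-night sessions/week, pharmacy-app opens/week,
#            fitness-app minutes/week]
usage = [
    [1, 0, 200], [2, 1, 150], [9, 6, 10], [8, 5, 30],
    [0, 0, 180], [7, 7, 20], [1, 1, 160], [10, 4, 5],
]
# Toy labels: 1 = user later disclosed a chronic health condition
condition = [0, 0, 1, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(usage, condition)

# A new user who never volunteered any health information:
print(model.predict_proba([[8, 6, 15]])[0])
# -> a high probability for the sensitive label, inferred from behaviour alone
```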

Machines function on the basis of what humans tell them. If a system is fed with human biases (conscious or unconscious), the result will inevitably be biased. The lack of diversity and inclusion in the design of AI systems is therefore a key concern: instead of making our decisions more objective, they could reinforce discrimination and prejudice by giving them an appearance of objectivity. There is increasing evidence that women, ethnic minorities, people with disabilities and LGBTI persons particularly suffer from discrimination by biased algorithms.
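
A toy example, with entirely fabricated hiring data, shows this “bias in, bias out” dynamic: when past decisions were skewed against one group, a model trained on them reproduces the skew while appearing objective.

```python
# "Bias in, bias out": a model trained on fabricated, biased hiring
# records learns to penalise group membership itself.
from sklearn.tree import DecisionTreeClassifier

# Each row: [years_of_experience, group] (group is 0 or 1)
candidates = [
    [5, 0], [6, 0], [7, 0], [8, 0],   # group 0
    [5, 1], [6, 1], [7, 1], [8, 1],   # group 1, identical experience
]
hired = [1, 1, 1, 1, 0, 0, 1, 1]      # biased historical decisions

model = DecisionTreeClassifier().fit(candidates, hired)

# Two candidates identical in every respect except group membership:
print(model.predict([[6, 0], [6, 1]]))  # -> [1 0]: the bias is reproduced
```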

Studies have shown, for example, that Google was more likely to display adverts for highly paid jobs to male job seekers than to female ones. Last May, a study by the EU Fundamental Rights Agency also highlighted how AI can amplify discrimination. When data-based decision-making reflects societal prejudices, it reproduces – and even reinforces – the biases of that society. This problem has often been raised by academia and NGOs, which recently adopted the Toronto Declaration calling for safeguards to prevent machine learning systems from contributing to discriminatory practices.

Decisions made without questioning the results of a flawed algorithm can have serious repercussions for human beings. For example, software used to inform decisions about healthcare and disability benefits has wrongfully excluded people who were entitled to them, with dire consequences for the individuals concerned.

Stifling freedom of expression and freedom of assembly

Another right at stake is freedom of expression. A recent Council of Europe publication on Algorithms and Human Rights noted for instance that Facebook and YouTube have adopted a filtering mechanism to detect violent extremist content. However, no information is available about the process or criteria adopted to establish which videos show “clearly illegal content”.

Although one cannot but salute the initiative to stop the dissemination of such material, the lack of transparency around content moderation raises concerns because it may be used to restrict legitimate free speech and to encroach on people’s ability to express themselves.

Similar concerns have been raised with regard to the automatic filtering, at the point of upload, of user-generated content that supposedly infringes intellectual property rights, an issue which came to the forefront with the EU’s proposed Directive on Copyright. In certain circumstances, the use of automated technologies for the dissemination of content can also have a significant impact on the rights to freedom of expression and privacy, when bots, troll armies, targeted spam or ads are used, in addition to algorithms determining the display of content.

The tension between technology and human rights also manifests itself in the field of facial recognition. While it can be a powerful tool for law enforcement officials seeking suspected terrorists, it can also turn into a weapon to control people. Today, it is all too easy for governments to watch people permanently and to restrict the rights to privacy, freedom of assembly, freedom of movement and press freedom.

What governments and the private sector should do

AI has the potential to help human beings maximise their time, freedom and happiness. At the same time, it can lead us towards a dystopian society. Finding the right balance between technological development and human rights protection is therefore an urgent matter – one on which the future of the society we want to live in depends.

To get it right, we need stronger co-operation between state actors (governments, parliaments, the judiciary, law enforcement agencies), private companies, academia, NGOs, international organisations and the public at large. The task is daunting, but not impossible.

A number of standards already exist and should serve as a starting point. For example, the case-law of the European Court of Human Rights sets clear boundaries for the respect for private life, liberty and security. It also underscores states’ obligations to provide an effective remedy to challenge intrusions into private life and to protect individuals from unlawful surveillance. In addition, the modernised Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data adopted this year addresses the challenges to privacy resulting from the use of new information and communication technologies.

States should also make sure that the private sector, which bears the responsibility for AI design, programming and implementation, upholds human rights standards. The Council of Europe Recommendation on the roles and responsibilities of internet intermediaries, the UN guiding principles on business and human rights, and the report on content regulation by the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, should all feed the efforts to develop AI technology which is able to improve our lives. There needs to be more transparency in the decision-making processes using algorithms, in order to understand the reasoning behind them, to ensure accountability and to be able to challenge these decisions in effective ways.
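
What such transparency can look like in practice is sketched below with a deliberately simple, hypothetical linear scoring model: every factor’s contribution to a decision is recorded alongside the outcome, so the person affected (or a reviewing body) can inspect and contest it.

```python
# Minimal sketch of a contestable automated decision: a linear score
# whose per-factor contributions are logged with the outcome.
# Weights, threshold and features are all hypothetical.
WEIGHTS = {"income": 0.4, "debt": -0.6, "years_employed": 0.3}
THRESHOLD = 1.0

def decide(applicant: dict) -> dict:
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    score = sum(contributions.values())
    return {
        "approved": score >= THRESHOLD,
        "score": round(score, 2),
        "contributions": contributions,  # the audit trail
    }

print(decide({"income": 3.0, "debt": 1.5, "years_employed": 2.0}))
# -> approved: False, score: 0.9, with each factor's weight visible,
#    so the refusal can be understood and challenged
```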

A third field of action should be to increase people’s “AI literacy”. States should invest more in public awareness and education initiatives to develop the competencies of all citizens, and in particular of the younger generations, to engage positively with AI technologies and better understand their implications for our lives. Finally, national human rights structures should be equipped to deal with new types of discrimination stemming from the use of AI.

Artificial intelligence can greatly enhance our ability to live the life we desire. But it can also destroy it. We therefore have to adopt strict regulations to prevent it from morphing into a modern Frankenstein’s monster.
