Source: law.com
China has a population of approximately 1.4 billion people, and the Chinese government is reportedly using a combination of artificial intelligence (AI) and facial recognition software to monitor their movements and online activities. Even more troubling, China is using the same technology to track and control a Muslim minority group, the Uyghurs. China has subverted the potential of artificial intelligence to impose a form of racist social control. Yet AI also offers new opportunities to enhance business productivity and enrich the personal lives of individuals.
Without broad agreement on the ethical implementation of AI, its still-untapped potential can be corrupted.
This column is the first of a two-part series on creating an ethical AI policy framework for the implementation of AI-supported applications. It is based on the groundbreaking work of dozens of expert IT lawyers who contributed to the book Responsible AI, published by the International Technology Law Association in 2019. We have previously considered the technological elements of AI, facial recognition and personal privacy issues in our recent columns published here, which may provide some useful background for those new to the subject of AI. See “Artificial Intelligence: The Fastest Moving Technology,” NYLJ (March 9, 2020); “Waking Up to Artificial Intelligence,” NYLJ (Feb. 10, 2020).
Ethical Purpose
Organizations that develop AI systems bear a great responsibility to understand how a system will be used and to ensure that its implementation will not harm society. AI system developers should require that the purpose of any software implementation be identified in reasonable detail, and they must ensure that the purposes of new AI systems are ethical and not intentionally harmful.
As national governments come to recognize the full potential of AI for both good and harm, regulatory statutes or rules will follow. Laws that regulate AI should promote ethical uses that do not cause harm, avoid unreasonable disruptions, and do not promote the distribution of false information.
AI is already being used in the workplace to automate, accelerate, or eliminate routine administrative tasks. Organizations that develop or deploy AI systems should consider the net effects of any implementation on their employees and their work. In some instances, workers will be displaced by automated systems. To build greater understanding and acceptance of AI systems among their employees, businesses should allow the affected workers to participate in the decision-making process.
AI systems and automation usually increase efficiency, and, as a result, some workers will be replaced by these systems. To preserve those gains in efficiency and productivity while easing the transition, governments should consider creating programs that help displaced workers learn new, useful skills. Similarly, governments should promote educational policies that prepare children with the skills they will need for the emerging new economy, including life-long learning.
The implementation of AI systems may have an adverse impact on the environment. When developing AI systems, organizations should assess the environmental impact of these new systems. Governments should enact statutes or rules that ensure complete and transparent investigations of any adverse or unanticipated environmental impacts of AI systems.
Unfortunately, AI systems have been recognized as creating strategic advantages in weapons systems. The use of lethal autonomous weapon systems (LAWS) should respect international principles of humanitarian law, including, for example, the Geneva Conventions of 1949. LAWS can be both accurate and deadly. As such, LAWS should always remain under human control and oversight in every situation where they are used in a conflict.
The recent, very public policy disputes over posts on Twitter and Facebook reveal how AI may be used to weaponize false or misleading information. Companies that develop or deploy AI systems to promote or filter information on Internet platforms, including social media, should take measures to minimize the spread of false or misleading information. It is recommended that these systems provide a means for users to flag potentially false or harmful content. Government agencies should provide clear guidelines identifying prohibited content in a way that respects the rights and equality of individuals.
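For readers who want a concrete picture, the short Python sketch below shows one way such a user-flagging mechanism might be structured. The class, the three-flag escalation threshold, and the review queue are our own illustrative assumptions, not a design drawn from the column or from any platform's actual system.

```python
from collections import defaultdict

# Illustrative sketch of a user-flagging mechanism of the kind recommended
# above. The class name, three-flag threshold, and review-queue design are
# assumptions for demonstration, not a published specification.

REVIEW_THRESHOLD = 3  # assumed: escalate after three distinct users flag

class FlagTracker:
    def __init__(self, threshold=REVIEW_THRESHOLD):
        self.threshold = threshold
        self.flaggers = defaultdict(set)   # content_id -> distinct user ids
        self.reasons = defaultdict(list)   # content_id -> reasons given
        self.review_queue = []             # content escalated to human review

    def flag(self, content_id, user_id, reason):
        """Record one user's flag; escalate once enough distinct users agree."""
        self.flaggers[content_id].add(user_id)
        self.reasons[content_id].append(reason)
        if (len(self.flaggers[content_id]) >= self.threshold
                and content_id not in self.review_queue):
            self.review_queue.append(content_id)

tracker = FlagTracker()
for user in ("u1", "u2", "u3"):
    tracker.flag("post-42", user, reason="misleading health claim")
print(tracker.review_queue)  # ['post-42'] -- queued for human review
```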
Transparency and Explainability
Transparency refers to the duty of every business and government entity to inform customers and citizens when they are interacting with AI systems. At a minimum, users should be provided with information about what the system does, how it performs its tasks, and the specifications and/or data used in training the system. The goal of transparency is to avoid creating an AI system that functions as an opaque “black box.”
Explainability refers to the duty of organizations using an AI decision-making process to provide accurate information, in human-understandable terms, about how the decisions or outcomes were reached. For example, if an AI system is used to process a mortgage loan application, the loan applicant should be able to find out the factors supporting the credit decision, including credit ratings, the quality and location of the house, and recent comparable sales in neighboring areas.
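For the technically inclined reader, the simplified Python sketch below illustrates the idea: a toy linear credit model whose outcome can be decomposed into the contribution of each named factor. The factor names, weights, and approval cutoff are invented for illustration; real underwriting models are substantially more complex.

```python
# Illustrative sketch only: a toy linear credit model whose decision can be
# explained factor by factor. The feature names, weights, and cutoff below
# are invented assumptions, not an actual lending model.

WEIGHTS = {
    "credit_rating":     0.50,  # features normalized to 0-1 scores
    "property_quality":  0.20,
    "property_location": 0.15,
    "comparable_sales":  0.15,
}
APPROVAL_CUTOFF = 0.65  # assumed decision threshold

def explain_decision(applicant):
    """Return the decision plus each factor's contribution to the score."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    decision = "approved" if score >= APPROVAL_CUTOFF else "denied"
    # Rank factors so the applicant can see what drove the outcome.
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    return decision, score, ranked

decision, score, ranked = explain_decision({
    "credit_rating": 0.9, "property_quality": 0.7,
    "property_location": 0.6, "comparable_sales": 0.8,
})
print(decision, round(score, 2))        # approved 0.8
for factor, contribution in ranked:
    print(f"  {factor}: {contribution:.2f}")
```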
Transparency tends to preserve public trust in AI systems and helps demonstrate that the decisions made by an AI system are fair and impartial.
Transparency and explainability become increasingly important as AI systems make important decisions involving sensitive personal or financial data. In designing an AI system, transparency should meet the reasonable expectations of the average user. For this reason, transparency and explainability should be built into the design of any AI system.
Fairness and Non-Discrimination
The design of AI systems is a human endeavor and necessarily incorporates the knowledge, life experiences, and prejudices of the designers. Companies that develop or deploy AI systems should make users aware that these systems reflect the goals and potential biases of the developers. As has been studied in other contexts, implicit bias is part of the human condition, and AI system developers may incorporate these values into the methods and goals of a new AI system. In addition, AI systems are often “trained” by reviewing large data sets. For example, an AI system assisting in loan decisions might have been trained on a data set indicating that a certain racial or ethnic minority has a higher-than-average loan default rate. Screening for such a bias is necessary for a fair system; a simplified screening check is sketched below.
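The Python sketch below illustrates one simple form such screening might take: comparing approval rates across groups and flagging a large gap. The sample records are fabricated, and the four-fifths (80%) threshold is a heuristic borrowed from U.S. employment-law guidance, used here only for illustration.

```python
# Illustrative sketch: screening decisions (or training data) for disparate
# impact across groups. The records are fabricated, and the four-fifths
# (80%) threshold is a borrowed rule of thumb, not a legal standard for
# lending; both are assumptions for demonstration.

decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": True},  {"group": "A", "approved": False},
    {"group": "B", "approved": True},  {"group": "B", "approved": False},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

def approval_rates(records):
    """Compute the fraction of approvals within each group."""
    totals, approvals = {}, {}
    for r in records:
        totals[r["group"]] = totals.get(r["group"], 0) + 1
        approvals[r["group"]] = approvals.get(r["group"], 0) + r["approved"]
    return {g: approvals[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
ratio = min(rates.values()) / max(rates.values())
print(rates)                                # {'A': 0.75, 'B': 0.25}
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:                             # four-fifths rule of thumb
    print("WARNING: possible disparate impact -- review before deployment")
```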
Decisions made by AI systems must be at least as fair and non-discriminatory as decisions made by humans. As such, fairness should be a priority in the design of an AI system’s algorithms and in the selection of its training data. Without attention to fairness, AI systems have the potential to perpetuate and amplify bias, and this could have a broad social impact. To minimize these issues, AI systems with a significant social impact should be independently reviewed and tested periodically.
Safety and Reliability
AI systems currently control a wide variety of automated equipment and will have a broader impact when autonomous vehicles are in common use. Whether in the factory or traveling on the highway, AI systems will pose a potential danger to individuals. As to the issue of safety, AI system developers must ensure that AI systems perform correctly, without harming users, resources, or the environment. It is essential to minimize unintended consequences and errors in the operation of any system.
These AI-controlled systems must also operate reliably. Reliability refers to consistency of performance, i.e., the probability of performing a function without failure and within the system’s parameters over an extended period of time. Organizations that develop or deploy AI systems in conjunction with a piece of equipment must clearly define the principles underlying its operation and the boundaries of its decision-making powers. When safety is a priority, the appropriate government agency should require testing of AI systems to ensure reliability. The systems should be trained on data sets that are as “error-free” as possible. When an AI system is involved in an incident with an unanticipated, adverse, or fatal outcome, it should be subject to a transparent investigation.
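As one concrete illustration of reliability as a probability, the short Python sketch below applies the standard exponential failure model from reliability engineering, R(t) = exp(-t/MTBF). The column itself does not prescribe any particular model, and the mean-time-between-failures figure is an assumed example value.

```python
import math

# Illustrative sketch: quantifying reliability as the probability of
# failure-free operation over time, using the common exponential failure
# model R(t) = exp(-t / MTBF). The MTBF value is an assumed example; the
# column does not endorse any specific model.

MTBF_HOURS = 10_000.0  # assumed mean time between failures

def reliability(hours, mtbf=MTBF_HOURS):
    """Probability the system runs `hours` without a failure."""
    return math.exp(-hours / mtbf)

for horizon in (100, 1_000, 5_000):
    print(f"P(no failure in {horizon:>5} h) = {reliability(horizon):.3f}")
# Prints roughly 0.990, 0.905, and 0.607 for the three horizons.
```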
The possibility of personal injury and the potential for liability raise a host of legal concerns. Legislators should consider whether the current legal framework, including product liability law, requires adjustments to meet the unique characteristics of AI systems.
For a more detailed review of the above issues the book Responsible AI can be purchased from the International Technology Law Association.