
Will the proposed EU AI rules become the GDPR for biometrics?

Source – https://www.biometricupdate.com/

After several high-profile cases, it’s understandable that governments would want to start regulating artificial intelligence (AI), and biometric technology in particular. The Clearview AI scandal showed that people are decidedly not OK with companies scraping the internet for personal images to train a facial recognition solution that was then sold to law enforcement agencies.

Additionally, a number of cases brought by civil rights groups have shown that when AI is used to decide whether to extend credit, render a verdict, or simply verify a person’s identity, minorities are often discriminated against.

At the end of April, the EU adopted a proposal for a regulation called the Artificial Intelligence Act (AIA), designed to regulate AI-based solutions. When these new rules fully take effect, the EU hopes to become a global trendsetter in AI regulation. The framework is similar to that of the General Data Protection Regulation (GDPR), which went live in 2018: the rules apply whenever the personal data of an EU citizen is processed anywhere in the world.

New legislation on the horizon

The good news for AI and biometrics companies is that GDPR allowed two years between its adoption and its enforcement, so the business world had time to prepare. In its current form, the AIA looks similar to GDPR in what it seeks to accomplish: giving end-users a way to control the collection and use of their personal data and digital likeness. In a word: transparency.

The AIA holds that the end-user should know, at all times, when they are being judged by AI-powered technology. Is that a chatbot or a live person helping them online? Is their likeness being collected for biometric identification?

Companies that already offer settings to disallow the collection of biometric data, or that integrate well with personal-data-management systems, will find they have an advantage under this new regulatory scrutiny. For biometrics companies in general, the final version of the rules will require the correct collection, filtering, and labeling of datasets.
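
In practice, that means a hard consent gate in front of any biometric capture, plus provenance labels on every stored sample so later audits can trace where data came from. The sketch below is illustrative only; ConsentRegistry and enroll_face_template are hypothetical names, not any particular vendor’s API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical in-memory consent registry; a real deployment would back
# this with the company's personal-data-management system.
@dataclass
class ConsentRecord:
    user_id: str
    biometrics_allowed: bool
    recorded_at: datetime

class ConsentRegistry:
    def __init__(self):
        self._records: dict[str, ConsentRecord] = {}

    def set_consent(self, user_id: str, allowed: bool) -> None:
        self._records[user_id] = ConsentRecord(
            user_id, allowed, datetime.now(timezone.utc)
        )

    def allows_biometrics(self, user_id: str) -> bool:
        # Default to "no consent" when no record exists.
        record = self._records.get(user_id)
        return record is not None and record.biometrics_allowed

def enroll_face_template(user_id: str, image_bytes: bytes,
                         registry: ConsentRegistry) -> dict:
    """Collect a biometric sample only when consent is on record,
    and label the stored record with provenance metadata."""
    if not registry.allows_biometrics(user_id):
        raise PermissionError(f"No biometric consent on record for {user_id}")
    return {
        "user_id": user_id,
        "sample": image_bytes,            # raw sample; template extraction omitted
        "consent_verified": True,
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "source": "first-party capture",  # provenance label for later audits
    }

registry = ConsentRegistry()
registry.set_consent("alice", allowed=True)
record = enroll_face_template("alice", b"<jpeg bytes>", registry)
print(record["consent_verified"], record["source"])
```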

A spate of independent U.S. regulations adds to the complexity

The fragmented nature of U.S. rules governing the collection of biometric data has already cost Facebook upward of half a billion dollars, with similar lawsuits against Google, Amazon, and Microsoft underway. The absence of clear rules at the federal level leaves it up to the states to decide what personal information AI companies are allowed to collect without their users’ consent. However, California’s CCPA, Illinois’ BIPA, the Massachusetts Data Privacy Act, the New York Privacy Act, and the Hawaii Consumer Privacy Protection Act all have the same aim.

For instance, New York’s strict privacy statute includes a private right of action for any violation of the law, applicable to all businesses. This means virtually anyone who believes a New York business has violated their rights under the statute can pursue legal recourse simply by going down to the civil courthouse and filing a lawsuit.

Regulation yields… growth?

From an industry standpoint, a common set of regulations governing the use of AI would greatly reduce friction when introducing biometrics-based solutions across large markets like the EU’s constituent countries and the fifty U.S. states. Under one framework, companies can focus on creating solutions that offer the maximum amount of privacy and transparency while solving the kinds of problems AI exists to solve in the first place.

What we can expect to see in the near future is a flourishing of companies that provide third-party certification of compliance with the new regulation, from dataset audits to algorithm bias measurements. Some of these services are already standardized via the U.S.-based National Institute of Standards and Technology (NIST), which, for instance, compares the accuracy and speed of facial recognition and fingerprint algorithms, among others. NIST has even conducted an extremely thorough comparison of all submitted algorithms’ bias against minority groups and their ability to recognize faces behind protective masks.

Universal regulation also singles out large-scale, facial-recognition-driven surveillance of open spaces as an especially high-risk application of the technology. Given its “big brother” nature, it is understandable that such an application will remain the domain of only a handful of companies, with the rest shying away from the controversy.

There is a growing number of benign applications of biometrics that improve users’ lives without exposing their personal data to misuse, and that is where the future lies. The COVID-19 pandemic showed that biometric applications allowed industries such as financial services and telecoms to keep conducting business that used to require in-person identity verification (opening a bank account, for example) even during the lockdowns. In fact, the technology has proven so convenient that even branch offices adopted digital onboarding in place of their former paper processes. This is where the strength of the technology lies: solving problems in a way that maximizes convenience.
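
As a rough illustration of what such a digital-onboarding flow reduces to: match a selfie against the photo on the submitted ID document and accept the applicant only above a similarity threshold. Everything here is a stub; face_similarity stands in for whatever biometric SDK a bank would actually license, and real deployments would add liveness detection on the selfie.

```python
# A minimal sketch of remote identity proofing during digital onboarding.
MATCH_THRESHOLD = 0.80  # illustrative; vendors calibrate thresholds to error rates

def face_similarity(id_photo: bytes, selfie: bytes) -> float:
    """Stub: a real matcher compares extracted face templates."""
    return 0.93  # pretend score for the example

def onboard_customer(id_photo: bytes, selfie: bytes) -> dict:
    score = face_similarity(id_photo, selfie)
    return {
        "face_match_score": score,
        "accepted": score >= MATCH_THRESHOLD,
    }

result = onboard_customer(b"<id photo>", b"<selfie>")
print(result)  # {'face_match_score': 0.93, 'accepted': True}
```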

Final thoughts

AI is only as intelligent as the data we feed it. Show a machine learning algorithm 100,000 pictures of a fish and it can eventually draw conclusions about fish, but a toddler can see one or two pictures of a fish and tell whether the next picture shows a fish or something else.

However, researchers aren’t always entirely sure how AI comes to the decisions it makes, other than that if you feed it biased information, you get biased results. This is why facial recognition has trouble correctly identifying people with darker skin: the photo datasets used to train facial recognition algorithms contain more images of people with lighter skin than of people with darker skin. As a result, there has been an industry push toward explainable AI (XAI), so you can see what decisions the machine went through to come to its verdict.
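
One concrete form such a bias audit takes is computing error rates per demographic group and comparing them, which is roughly what the NIST evaluations mentioned above report. Below is a minimal sketch; the group labels and results are invented purely for illustration.

```python
from collections import defaultdict

# Each entry: (demographic_group, genuine_pair_matched) -- invented data.
# A genuine pair is two images of the same person; False here is a
# false non-match, the error mode that skewed training data inflates.
verification_results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

def false_non_match_rates(results):
    totals, errors = defaultdict(int), defaultdict(int)
    for group, matched in results:
        totals[group] += 1
        if not matched:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

rates = false_non_match_rates(verification_results)
print(rates)  # {'group_a': 0.25, 'group_b': 0.5}
# A large gap between groups flags a bias worth tracing back to the
# training data before the algorithm ships.
```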

At Innovatrics, we found that our AI algorithm is able to identify faces behind face masks, even though it was never taught to do so. How the AI arrives at such decisions is virtually incomprehensible, because until now, transparency and explainability have not been major outcomes AI engineers considered when looking for results.

Looking toward the future, as new regulations governing the technology come into effect, explainability and comprehensibility will become the standard. Companies that value transparency and their customers’ privacy will come out ahead in this new era of machine learning.
