
Artificial Intelligence Systems Will Need to Have Certification, CISA Official Says

Source: nextgov.com

Vendors of artificial intelligence technology should not be shielded by intellectual property claims and will have to disclose elements of their designs and be able to explain how their offering works in order to establish accountability, according to a leading official from the Cybersecurity and Infrastructure Security Agency.

“I don’t know how you can have a black-box algorithm that’s proprietary and then be able to deploy it and be able to go off and explain what’s going on,” said Martin Stanley, a senior technical advisor who leads the development of CISA’s artificial intelligence strategy. “I think those things are going to have to be made available through some kind of scrutiny and certification around them so that those integrating them into other systems are going to be able to account for what’s happening.”

Stanley was among the speakers on a recent Nextgov and Defense One panel where government officials, including a member of the National Security Commission on Artificial Intelligence, shared some of the ways they are trying to balance reaping the benefits of artificial intelligence with the risks the technology poses.

Experts often discuss the rewards of programming machines to do tasks humans would otherwise have to labor on—for both offensive and defensive cybersecurity maneuvers—but the algorithms behind such systems and the data used to train them into taking such actions are also vulnerable to attack. And the question of accountability applies to users and developers of the technology.

Artificial intelligence systems are code that humans write, but they exercise and strengthen their abilities using the data that is fed to them. If that data is manipulated, or “poisoned,” the outcomes can be disastrous.
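
To make the poisoning idea concrete, here is a minimal, hypothetical sketch in Python: an attacker slips a few mislabeled points into the training set of a toy nearest-neighbor classifier so that one chosen malicious input is later treated as benign. The dataset, labels, and classifier are invented for illustration and are not drawn from the panel discussion.

```python
# Hypothetical sketch of targeted training-data poisoning against a toy
# 1-nearest-neighbor classifier. All data and labels here are invented.
import numpy as np

rng = np.random.default_rng(1)

# Clean training data: "malicious" samples cluster near (0, 0),
# "benign" samples cluster near (5, 5).
X_clean = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
y_clean = np.array(["malicious"] * 100 + ["benign"] * 100)

def predict_1nn(X_train, y_train, x):
    """Classify x with the label of its single nearest training point."""
    nearest = np.linalg.norm(X_train - x, axis=1).argmin()
    return y_train[nearest]

target = np.array([0.2, -0.1])  # an input squarely inside the malicious cluster
print("clean model says:   ", predict_1nn(X_clean, y_clean, target))   # "malicious"

# Poisoning: slip three points at and around the target into the training
# set, each deliberately mislabeled as "benign".
X_poison = np.vstack([target, target + [0.02, 0.0], target + [0.0, 0.02]])
X_train = np.vstack([X_clean, X_poison])
y_train = np.concatenate([y_clean, ["benign"] * 3])

print("poisoned model says:", predict_1nn(X_train, y_train, target))   # "benign"
```

A handful of quietly mislabeled records is enough here because the model trusts its training data completely, which is why the question of who can touch that data matters.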

The changes to the data can be ones that humans wouldn’t necessarily notice but that computers pick up on.

“We’ve seen … trivial alterations that can throw off some of those results, just by changing a few pixels in an image in a way that a person might not even be able to tell,” said Josephine Wolff, a Tufts University cybersecurity professor who was also on the panel. 
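
Wolff’s pixel example can likewise be sketched in a few lines of Python. The toy linear classifier, image, and perturbation size below are hypothetical, chosen only to show how nudging every pixel by about two percent of its range, a change no person would notice, can flip a model’s decision; the same gradient-sign idea underlies real evasion attacks such as FGSM against neural networks.

```python
# Hypothetical sketch of a "few pixel" evasion attack on a toy linear image
# classifier. The model, image, and epsilon are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": an 8x8 grayscale picture flattened to 64 values in [0, 1].
image = rng.random(64)

# Toy linear classifier; the bias is chosen so this image sits just on the
# "benign" side of the decision boundary.
weights = rng.normal(size=64)
bias = 0.5 - image @ weights          # original score = +0.5 -> "benign"

def predict(x):
    return "benign" if x @ weights + bias > 0 else "malicious"

print("original prediction:   ", predict(image))

# Gradient-sign step: nudge every pixel by at most 2% of its range in the
# direction that lowers the score. No single pixel changes noticeably.
epsilon = 0.02
adversarial = np.clip(image - epsilon * np.sign(weights), 0.0, 1.0)

print("largest pixel change:  ", float(np.abs(adversarial - image).max()))
print("adversarial prediction:", predict(adversarial))   # flips to "malicious"
```

The point is not the toy model but the asymmetry Wolff describes: a perturbation far below human perception can still push an input across a model’s decision boundary.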

And while it’s true that behind every AI algorithm is a human coder, the designs are becoming so complex that “you’re looking at automated decision-making where the people who have designed the system are not actually fully in control of what the decisions will be,” Wolff said.

This makes for a threat vector where vulnerabilities are harder to detect until it’s too late.

“With AI, there’s much more potential for vulnerabilities to stay covert than with other threat vectors,” Wolff said. “As models become increasingly complex it can take longer to realize that something is wrong before there’s a dramatic outcome.”

For this reason, Stanley said, an overarching factor CISA uses to determine which use cases AI gets applied to within the agency is the extent to which they offer high benefits and low regrets.

“We pick ones that are understandable and have low complexity,” he said.

Among other things, federal personnel need to be mindful of who has access to the training data.

“You can imagine you get an award done, and everyone knows how hard that is from the beginning, and then the first thing that the vendor says is ‘OK, send us all your data,’ how’s that going to work so we can train the algorithm?” he said. “Those are the kinds of concerns that we have to be able to address.” 

“We’re going to have to continuously demonstrate that we are using the data for the purpose that it was intended,” he said, adding, “There’s some basic science that speaks to how you interact with algorithms and what kind of access you can have to the training data. Those kinds of things really need to be understood by the people who are deploying them.”

A crucial but very difficult element to establish is liability. Wolff said that, ideally, liability would be connected to a potential certification program in which an entity audits artificial intelligence systems for factors like transparency and explainability.

That’s important, she said, for answering “the question of how can we incentivize companies developing these algorithms to feel really heavily the weight of getting them right and be sure to do their own due diligence knowing that there are serious penalties for failing to secure them effectively.”

But this is hard, even in the world of software development more broadly. 

“Making the connection is still very unresolved. We’re still in the very early stages of determining what would a certification process look like, who would be in charge of issuing it, what kind of legal protection or immunity might you get if you went through it,” she said. “Software developers and companies have been working for a very long time, especially in the U.S., under the assumption that they can’t be held legally liable for vulnerabilities in their code, and when we start talking about liability in the machine learning and AI context, we have to recognize that that’s part of what we’re grappling with, an industry that for a very long time has had very strong protections from any liability.”
