Source: enterprisersproject.com
As artificial intelligence (AI) and machine learning (ML) are increasingly deployed throughout organizations, they are being tasked with solving some of the biggest business challenges. One of the toughest: IT security.
In 2020, the average cost of a data breach was $3.86 million worldwide and $8.64 million in the United States, according to IBM Security. The number of endpoints we must secure keeps multiplying as our technology stacks grow more complex with microservices, IoT, and cloud services.
CIOs can use the power of AI to combat craftier malware and phishing attacks. We can also use it to augment our security teams, enabling them to handle an ever-growing volume of threats.
We also need to ask how we are securing our own AI as these algorithms become a more pivotal part of our IT systems. Let’s dig into the impact AI and machine learning can make in securing our organizations.
Identify evolving malware and phishing attacks
Malware and phishing attacks are growing more sophisticated. Malware authors are constantly producing new variations, ditching their old virus signatures to evade detection. It is the ultimate game of whack-a-mole as security professionals chase these ever-changing virus blueprints.
Machine learning can help. By consuming the historical catalog of known malware in the wild, it can pinpoint familiar behavioral patterns such as common file sizes, what is stored in those files, and string patterns tucked within the code. Once those fingerprints are identified, new viruses or variants of existing ones can be shut down in real time.
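To make that concrete, here is a minimal sketch of the kind of feature-based classifier such a system might rely on. The features (file size, byte entropy, suspicious string counts) and the tiny labeled dataset are illustrative assumptions, not a description of any particular product.

```python
# Illustrative sketch: classifying binaries by static "fingerprint" features.
# The feature set and the tiny labeled dataset are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [file_size_kb, byte_entropy, suspicious_string_count]
X = np.array([
    [120, 7.9, 14],   # packed, high-entropy sample
    [310, 7.6,  9],
    [ 95, 4.2,  0],   # ordinary benign binary
    [480, 5.1,  1],
])
y = np.array([1, 1, 0, 0])  # 1 = known malware family, 0 = benign

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X, y)

# Score a new, previously unseen binary against the learned fingerprints.
new_sample = np.array([[210, 7.8, 11]])
print(model.predict_proba(new_sample))  # probability it resembles known malware
```

A real deployment would train on millions of samples and far richer features, but the principle is the same: the model scores new files by how closely they match the behavioral patterns of malware it has already seen.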
With AI, phishing attacks are becoming akin to finely tuned marketing emails. Perpetrators can mine the web to find out not only your name and email address but also where you work, your interests, and the names of your trusted friends and co-workers. This could always be done manually, but AI enables hackers to build these customized profiles at scale.
In addition to tailoring email content to specific subjects and people, hackers can analyze email responses to see which wording triggers more click-throughs, letting their models continually learn how to craft the perfect phishing hook.
To combat this, we can set up AI to monitor our networks and learn the patterns of our employees' daily activity. Once that baseline has been established, the model can identify when a click on a phishing link is out of the norm and shut down the malicious activity before user credentials are compromised. It is a highly targeted safety wall, constructed around the user, that causes minimal disruption to the network and the business as a whole.
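As a rough illustration of that baselining approach, the sketch below fits an anomaly detector to hypothetical per-user activity features (login hour, links clicked, data downloaded). The feature choices, data, and response are assumptions for illustration, not a reference to any specific tool.

```python
# Illustrative sketch: learning a baseline of normal user activity and
# flagging out-of-pattern events. Features and data are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [login_hour, links_clicked, mb_downloaded] for one user-day
normal_activity = np.array([
    [9, 12, 150],
    [10, 8, 90],
    [8, 15, 200],
    [9, 10, 120],
    [11, 9, 110],
])

detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(normal_activity)

# A session far outside the learned baseline, e.g. a 3 a.m. login with
# heavy downloads after following an unfamiliar link.
suspicious = np.array([[3, 1, 2400]])
if detector.predict(suspicious)[0] == -1:
    print("Anomalous activity: quarantine session and require re-authentication")
```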
Joining the arms race
The AI community has always been a strong backer of open source, regularly sharing source code and data sets to help further the growth of this promising technology.
Unfortunately, you can’t put barbed wire around the code repositories to keep the bad guys out. When you pair these readily available tools with the compute power of the cloud, any hacker has the tools and infrastructure to construct AI-powered attacks to devastating effect.
While data on how many hacks are fueled by AI is limited, we do know this will be a mandatory skill in the hacker toolkit in the years ahead. With AI tools becoming more powerful every day and compute time getting cheaper, what hacker wouldn't want to put their attacks on steroids?
It truly is an arms race where organizations will be forced to deploy AI security solutions just to keep pace with rogue actors.
Protecting your AI from hackers
There is a flip side to this issue: According to Gartner, 37 percent of organizations have implemented artificial intelligence to some degree, a 270 percent increase over the past four years.
Artificial intelligence and machine learning are quickly becoming critical components of our IT infrastructure. That makes them a target. If hackers can access our AI, they can poison our data to infect our model. They can exploit bugs within our algorithm to produce unintended results. Whether it’s a drone flying a military mission or a workflow that gets products out to your customers, failure can be catastrophic.
Augment, don’t replace, security personnel
We constantly hear about how robots and artificial intelligence are poised to take our jobs. But more often than not, AI will complement our jobs, making us more effective in our role. Network security is no different.
AI security tools aren’t something you install and forget about. They are machine learning models that must be trained on millions of data points. If the model isn’t producing the desired response, you are more vulnerable than ever since you are operating under a false sense of security.
The work doesn’t stop once the model has been vetted. This new monitoring will likely trap considerably more anomalies than your previous solution. Security professionals will need to sort through these alerts to separate the potential threats from the noise. Without proper diligence, everything becomes noise.
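One way to picture that triage step: rank incoming alerts by the model's confidence and escalate only the highest-scoring ones to analysts. The alert fields and threshold below are hypothetical, sketched only to show the workflow.

```python
# Illustrative sketch: triaging model-generated alerts so analysts see the
# highest-risk items first. Alert fields and the threshold are hypothetical.
alerts = [
    {"id": "a1", "user": "jdoe",   "anomaly_score": 0.42},
    {"id": "a2", "user": "asmith", "anomaly_score": 0.91},
    {"id": "a3", "user": "jdoe",   "anomaly_score": 0.15},
]

REVIEW_THRESHOLD = 0.8  # tune against analyst capacity and false-positive tolerance

to_review = sorted(
    (a for a in alerts if a["anomaly_score"] >= REVIEW_THRESHOLD),
    key=lambda a: a["anomaly_score"],
    reverse=True,
)
low_priority = [a for a in alerts if a["anomaly_score"] < REVIEW_THRESHOLD]

print(f"{len(to_review)} alerts escalated, {len(low_priority)} held for batch review")
```

The threshold itself becomes a judgment call for the security team: set it too low and analysts drown in noise, too high and real threats slip through.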
Limitations of AI in security
AI and ML are not magic wands that you can wave to suddenly secure your organization. Security personnel must work closely with these models to train and hone them, and these professionals are neither cheap nor easy to find.
Another challenge is data and cost: We need to amass enough clean data to build a robust algorithm we can trust. Clean data doesn’t just happen – it must be analyzed and verified for accuracy.
The cost of storing massive amounts of data and purchasing the compute time needed to run hefty ML algorithms is significant, and an all-encompassing AI security solution may be too costly for some. According to the Harvard Business Review, 40 percent of executives reported that the technology and expertise required for AI initiatives are too expensive.
Traditional anti-virus and firewall solutions can’t keep pace with zero-day threats and the wave of malware variants. AI and ML provide a proactive solution. They can find behavioral patterns from the user community to stop threats before they start. AI can help security professionals digest mountains of data to pinpoint problems. They can help us keep pace with an AI-powered hacking community intent on doing us harm.
AI still has some maturing to do before it becomes the security solution for all businesses, but it’s progressing quickly. It’s difficult to imagine the future of IT security without AI and machine learning at the center of it.