Source – ciol.com
Throughout the history of computers and cyber-attacks, security has been an ongoing war between cybersecurity experts and hackers. Whenever security experts plugged holes and devised better ways to fortify their systems and information, the perpetrators came up with innovative ways to breach those systems. Exploiting one small security loophole is all it takes to defeat the whole system.
This is a vicious cycle that will continue well into the future. Fraudsters will continue to find new ways to send those annoying phishing emails, and hackers will continue to find ways to penetrate networks to steal your banking information.
Most of these vulnerabilities and cyber-attacks are well known. They follow certain patterns and sequences, but even a well-trained cybersecurity expert may not be able to defend against sophisticated attacks. This is where Artificial Intelligence comes into play.
A well-designed AI defense system can go through years of attack logs to analyze and learn different attack methods and strategies. It can then establish a baseline of normal user behavior and analyze future behavior to flag anomalies. It can also do this much faster than human experts can, saving much of the money and manual labor that security professionals pour into dealing with hundreds of cyber-attacks every day. In addition to detecting threats and attacks, AI systems can also be used to improve defense strategies and security policies. Hence, the value of AI in cybersecurity is beyond doubt.
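The baselining idea described above can be sketched in a few lines. This is only an illustrative toy: the metric (daily login counts), the z-score threshold, and the data are all invented for the example, and real systems model many more signals with far richer statistical or machine-learning methods.

```python
from statistics import mean, stdev

def build_baseline(history):
    """Summarize historical daily login counts as (mean, stdev)."""
    return mean(history), stdev(history)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = baseline
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Thirty days of one user's login counts (synthetic data).
history = [4, 5, 6, 5, 4, 5, 6, 4, 5, 5, 6, 4, 5, 5, 4, 6, 5, 4, 5, 6,
           5, 4, 5, 6, 5, 4, 5, 5, 6, 4]
baseline = build_baseline(history)

print(is_anomalous(5, baseline))   # a typical day
print(is_anomalous(60, baseline))  # a sudden spike worth investigating
```

The same pattern — learn what "normal" looks like, then score new behavior against it — is what lets such a system surface attacks it has never seen before.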
So, the better question to ask is “Can we afford to not integrate Artificial Intelligence into cyber security?”
The Fraud-as-a-Service (FaaS) cyber-crime market facilitates and encourages better ways to break into security systems. It is a competitive market where hackers showcase their attack software and methods of conducting cyber-attacks and put them up for sale. Naturally, this pushes perpetrators to build their own intelligent systems in the future, if they haven't already. Such malicious AI systems will constantly probe a secure system, learn from each probe, and find holes to exploit later. A malicious AI system needs to succeed only once to defeat a secure system.
With increasing computing power, massive caches of open-source and for-sale data, and efficient storage, it is becoming ever cheaper for anyone with the know-how to create an AI system. Let us not forget that nation-states and their spy agencies compound the threat by sponsoring sophisticated attacks themselves. Malicious AI systems are inevitable. Therefore, using AI in cybersecurity is not a luxury but a necessity.
If companies do not make AI part of their cybersecurity strategy, their traditional security methods will be overrun by malicious AI systems. One simply cannot afford to dismiss AI.
What measures can we take?
There is no bullet-proof approach, but there are a few ways to enhance security in an AI-integrated ecosystem. What matters is "resilience" to attacks. Resilience depends on having the necessary measures to prevent, detect, and respond to attacks. Security experts should share all relevant data on attacks and malicious code with others, which can then be fed into their own AI systems. The more data an AI system has, the more intelligent it becomes and the more resilient it is to attacks.
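As a toy illustration of putting shared attack data to use, the sketch below matches a set of shared known-bad IP addresses — a stand-in for real threat-intelligence feeds such as STIX/TAXII — against local firewall log lines. The addresses and log format here are invented for the example.

```python
# Shared threat data: indicators of compromise received from peers,
# modeled here as a plain set of known-bad source IP addresses.
known_bad_ips = {"203.0.113.7", "198.51.100.23"}

# Invented firewall-style log lines for the example.
log_lines = [
    "2023-01-05 10:12:01 ACCEPT src=192.0.2.10 dst=10.0.0.5",
    "2023-01-05 10:12:03 ACCEPT src=203.0.113.7 dst=10.0.0.5",
]

def flag_suspicious(lines, bad_ips):
    """Return log lines whose src= field appears in the shared IoC set."""
    hits = []
    for line in lines:
        for field in line.split():
            if field.startswith("src=") and field[4:] in bad_ips:
                hits.append(line)
    return hits

for hit in flag_suspicious(log_lines, known_bad_ips):
    print(hit)
```

One organization's detection becomes every participant's detection — which is precisely why sharing attack data multiplies resilience.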
Another way is for the International Organization for Standardization (ISO) to make integrating AI into defense systems a standard practice across the globe. This would enable a collective approach.
As the world moves towards an increasingly automated and autonomous ecosystem, it is important for companies to use AI to augment their security measures. These measures are essential to surviving in a world where AI algorithms are essentially battling each other for dominance.