People are undoubtedly your company’s most valuable asset. But ask cybersecurity experts whether they share that sentiment, and most will tell you that people are also your biggest liability.
Historically, no matter how much money an organization spends on cybersecurity, one problem has remained beyond technology’s reach: humans being human. Gartner expects worldwide spending on information security to reach $86.4 billion in 2017, growing to $93 billion in 2018, all in an effort to improve overall security and to build education programs that keep humans from undermining the best-laid security plans. But it’s still not enough: human error continues to reign as a top threat.
According to IBM’s Cyber Security Intelligence Index, a staggering 95% of all security incidents involve human error. It is a shocking statistic, and for the most part the errors come down to employees clicking on malicious links, losing their mobile devices or computers (or having them stolen), or network administrators making simple misconfigurations. We’ve seen a rash of that last problem recently, with more than a billion records exposed so far this year due to misconfigured servers. Organizations can count on the fact that mistakes will be made, and that cybercriminals will be standing by, ready to take advantage of them.
So how do organizations not only monitor for suspicious activity coming from the outside world, but also look at the behaviors of their employees to determine security risks? As the adage goes, “to err is human” — people are going to make mistakes. So we need to find ways to better understand humans, and anticipate errors or behaviors that are out of character — not only to better protect against security risks, but also to better serve internal stakeholders.
There’s an emerging discipline in security, user behavior analytics, that is showing promise in helping to address the threat from outside while also providing the insights needed to solve the people problem. It puts to use new technologies that combine big data and machine learning, allowing security teams to get to know their employees better and to quickly identify when something out of the norm may be happening.
To start, behavioral and contextual data points (the typical location of an employee’s IP address, the time of day they usually log into the network, the use of multiple machines or IP addresses, the files and information they typically access, and so on) can be compiled and monitored to establish a profile of common behaviors. For example, if an employee on the HR team suddenly tries to access engineering databases hundreds of times per minute, that activity can be quickly flagged to the security team to prevent an incident.
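To make the idea concrete, here is a minimal Python sketch of that kind of baselining. The user names, resource labels, and thresholds are hypothetical and not modeled on any particular product: the code simply keeps a history of hourly access counts per user and per resource, and flags counts far outside that user’s norm.

```python
from collections import defaultdict
from statistics import mean, stdev

class BehaviorBaseline:
    """Illustrative per-user baseline of hourly access counts per resource."""

    def __init__(self):
        # history[user][resource] -> list of observed hourly access counts
        self.history = defaultdict(lambda: defaultdict(list))

    def record_hour(self, user, resource, count):
        """Append one hour's access count to the user's history."""
        self.history[user][resource].append(count)

    def is_anomalous(self, user, resource, count, sigmas=3.0):
        """Flag counts far above the user's historical norm for that resource."""
        past = self.history[user][resource]
        if len(past) < 24:                 # too little history: flag anything nonzero
            return count > 0
        mu, sd = mean(past), stdev(past)
        return count > mu + sigmas * max(sd, 1.0)

baseline = BehaviorBaseline()
for hour_count in [2, 0, 1, 3, 0, 2] * 8:  # typical HR activity over a work week
    baseline.record_hour("hr_analyst", "hr_records", hour_count)
    baseline.record_hour("hr_analyst", "engineering_db", 0)

# A sudden burst of engineering-database queries from an HR account stands out
print(baseline.is_anomalous("hr_analyst", "engineering_db", 400))  # True
print(baseline.is_anomalous("hr_analyst", "hr_records", 2))        # False
```

A production system would use richer signals (location, device, time of day) and a proper anomaly-detection model rather than a simple standard-deviation cutoff, but the flagging logic follows the same pattern.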
The real value comes when companies apply these learnings to build a risk-based authentication system for granting staff access to data or systems. Essentially, this means customizing the level of access given to each employee based on a risk score derived from their past behavior, weighed against the sensitivity of the data or systems they are requesting access to. This type of risk-based authentication gives better visibility into error-prone users, or those who have opened avenues of opportunity for cybercriminals in the past, helping to solve the “human” problem of cybersecurity.
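Below is a minimal sketch of what such a risk-based gate might look like, assuming a precomputed per-user risk score (for example, derived from past anomalies and incidents) and an illustrative sensitivity tier for each resource; the tiers, cut-offs, and labels are hypothetical.

```python
# Hypothetical risk-based authentication: the required level of authentication
# scales with the user's behavioral risk score and the sensitivity of the
# resource they are asking to reach.
RESOURCE_SENSITIVITY = {        # illustrative tiers, not a standard
    "wiki": 1,
    "hr_records": 2,
    "engineering_db": 3,
    "production_secrets": 4,
}

def required_auth(user_risk: float, resource: str) -> str:
    """user_risk is in [0, 1], e.g. derived from past behavior and incidents."""
    sensitivity = RESOURCE_SENSITIVITY.get(resource, 4)  # unknown: treat as most sensitive
    score = user_risk * sensitivity
    if score < 0.5:
        return "password"                  # low risk, low sensitivity
    if score < 1.5:
        return "password + MFA"            # step-up authentication
    if score < 3.0:
        return "MFA + manager approval"
    return "deny and alert security team"

# A careful developer vs. a user who has clicked phishing links before
print(required_auth(0.1, "engineering_db"))       # password
print(required_auth(0.7, "engineering_db"))       # MFA + manager approval
print(required_auth(0.9, "production_secrets"))   # deny and alert security team
```

The point is not the specific numbers but that the access decision becomes a function of both who is asking and what they are asking for.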
To achieve this, we must first understand that not all users are on the same playing field. Individuals within a company vary widely in how savvy they are: some are extremely knowledgeable about technology and implement safeguards such as biometrics and multi-factor authentication. Others are less careful: they may recycle passwords (shudder) from common accounts, which can easily be leaked or breached, or download documents from a suspicious email address.
In addition, roles and needs vary throughout every company, from users with basic computing environments to those who may need up to five different machines to do their jobs. For these reasons, there is no “one size fits all” approach to navigating the human element of security, and organizations can no longer rely on traditional automated technologies with a “set it and forget it” mentality.
While it may go against traditional security instincts, which usually center on control and restrictions to fight human error, it’s best to let employees just be themselves, and to design your systems to cope with that. The combination of understanding employees’ everyday interactions with IT and having the power to analyze every possible scenario in real time can help security teams define more appropriate levels of authentication for the entire workforce, from tech-savvy developers to the Luddites in the boardroom. This provides the right balance of security, privacy, and user experience, while protecting organizations, and the people within them, from themselves.