Cognitive Bias Is the Threat Actor You May Never Detect

Cognitive bias among workers can undermine security work and lead to critical misinterpretations of data, warns Forcepoint X-Labs research scientist, Dr. Margaret Cunningham.

Implicit bias poses a real risk to the industry, prompting cybersecurity workers to misinterpret critical data and reach incorrect decisions based on it, a new study by the firm Forcepoint warns.

Security workers may make misinformed decisions about threats or reach inaccurate conclusions that could leave their organizations vulnerable to attack, or make it difficult to respond properly to cyber attacks and other incidents, according to the report, which warns organizations not to overlook bias when interpreting security data.

Bias: from the SOC to the C-Suite

The report (PDF), by research scientist Dr. Margaret Cunningham of Forcepoint’s X-Labs, examines the role of six common biases in cybersecurity decision-making and offers guidance on how to identify and avoid them using applied insight.

“Decision-making is central to cybersecurity: from regular end users and coworkers who are sharing our network, to people working in (security operations centers), to organizational leaders who deal with purchasing security solutions and hiring security personnel. It is critical to understand that everyone, from novices to experts, is subject to cognitive bias,” said Cunningham in an email interview.

Together, these biases can subtly shape corporate policies and procedures related to cybersecurity.

A Batch of Common Biases

For instance, Forcepoint found that older generations are typically characterized by information security professionals as “riskier users based on their supposed lack of familiarity with new technologies.” However, studies have found the opposite to be true: younger people are far more likely to engage in risky behavior like sharing their passwords to streaming services.

The presumption that older workers pose more of a risk than younger workers is an example of so-called “aggregate bias,” in which subjects make inferences about an individual based on a population trend. Biases like this misinform security professionals by directing their focus to individual users based on their supposed group membership. In turn, analysts concentrate on the wrong individuals as sources of security issues.

Availability bias can skew cybersecurity analysts’ decision-making toward hot topics in the news, crowding out other information the analysts know but encounter less frequently and leading them to make less well-rounded decisions.

People encounter “confirmation bias” most frequently during research. Neglecting the bigger picture, they make assumptions and then tailor their research to confirm them. When investigating issues, analysts often find themselves seeking confirmation of what they already believe to be the cause rather than searching for all possible causes.

The fundamental attribution error also plays a significant role in misleading security analysts, Forcepoint found. It manifests when information security analysts or software developers blame users for being inept instead of considering that their technology may be faulty or that internal factors contributed to a security lapse.

Cyber adversaries are well aware of these weaknesses, Cunningham said. “What security practitioners and advocates have come to realize is that adversaries have been thinking about human vulnerabilities, cognitive processes, and decision-making issues for a long time.”

Mitigating Bias

Coping with fundamental attribution errors and the self-serving bias requires personal insight and empathy, according to the report. Organizations should acknowledge the possibility of flaws and bias within their own ranks and create a structured plan for decision-making at both the security operations and executive levels, according to Cunningham.

“There is power in awareness, and there is power in numbers,” said Cunningham. “As we, as a group, begin to establish a shared mental model about how and why people make decisions — and what our limitations are — we can improve our decision-making capabilities across all levels of our organizations.”