Black Hat: Talent Scarce, Firms Look to Automation and AI

A photo of a presentation at the 2015 Black Hat Briefings in Las Vegas, where applying artificial intelligence and data analytics to security problems was a big topic of conversation. (Photo courtesy of UBM.)

In-brief: With security talent scarce, experts at the Black Hat Briefings say that security automation fueled by machine learning and data analytics will play a growing role in security operations.

I spent much of last week attending the Black Hat Briefings in Las Vegas and had the chance to speak with many of the top security firms and start-ups who are vying for enterprise and government accounts. In all those conversations, one salient trend emerged: the security talent drought is extreme and nobody expects it to end anytime soon.

The solution? Many of the security experts I spoke with said that greater automation of rote security tasks and analysis is the answer. Machine learning and artificial intelligence (AI) are the new keys to the kingdom, as security firms look to leverage massive troves of data for clues about emerging and ongoing threats.

Matt Wolff, the Chief Data Scientist at Cylance, is at the forefront of that effort. A seven-year veteran of the NSA, Wolff's expertise is in artificial intelligence and machine learning – especially as it applies to problems such as insider threats, software vulnerability exploitation and network defense.

At Cylance, those skills are used to help develop new endpoint protection software that can automate malware identification – essentially detecting even novel malware instantly. Wolff told me that he sees a lot more interest and experimentation in the security field in applying data science and data analysis to security problems.

“Our technology uses machine learning to analyze more than a billion files and, basically, teach itself,” he said. That’s a big improvement over how things have been done for much of the last 20 years, during which anti-malware firms have relied on teams of malware analysts to try to make sense of the millions of new malicious software variants that appear each month. Sure, those analysts have long relied on technology and automation to help shrink the pile of malware that must be reviewed manually.

Today, however, that approach is woefully outdated. Malcolm Harkins, the Global Chief Information Security Officer at Cylance, notes that organizations may have a window of just a couple of hours – or less – between the moment employee credentials are compromised and the moment malicious actors use those credentials to log back in to the network, establish a beachhead, and move laterally to identify and steal valuable data and information assets.

But folks like Wolff say that deeper analysis, empowered by machine learning, can reveal patterns in malicious files and network activity that are invisible to even the best malware reverse engineers, stopping exploits before they happen.

For example, Wolff notes that Cylance’s technology has been able to use statistical analysis to find patterns in the assembly instructions of a malicious program that help indicate – out of the box – what kind of malicious program it is (for example: a keystroke logger). That kind of deep pattern recognition wouldn’t be possible for a human analyst.
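The article doesn’t describe Cylance’s model, but the general idea – statistical patterns in assembly instructions hinting at a program’s purpose – can be sketched with a toy nearest-centroid classifier over opcode bigrams. Everything here is illustrative: the instruction traces, family names and feature choice are invented for the example, not drawn from any real product.

```python
from collections import Counter

def opcode_bigrams(instructions):
    """Count adjacent opcode pairs in a disassembled instruction trace."""
    ops = [ins.split()[0] for ins in instructions]
    return Counter(zip(ops, ops[1:]))

def centroid(samples):
    """Average the bigram counts of several known samples into one family profile."""
    total = Counter()
    for s in samples:
        total.update(opcode_bigrams(s))
    return {k: v / len(samples) for k, v in total.items()}

def similarity(profile, sample):
    """Unnormalized dot product between a family profile and a sample."""
    counts = opcode_bigrams(sample)
    return sum(profile.get(k, 0) * v for k, v in counts.items())

def classify(profiles, sample):
    """Label a sample with the family profile it most resembles."""
    return max(profiles, key=lambda name: similarity(profiles[name], sample))

# Hypothetical training traces: keyloggers poll keystrokes and write them out;
# droppers fetch and launch a payload.
keylogger_traces = [
    ["call GetAsyncKeyState", "test eax eax", "jz loop", "push eax", "call WriteFile"],
    ["call GetAsyncKeyState", "test eax eax", "jnz log", "jmp loop"],
]
dropper_traces = [
    ["call URLDownloadToFileA", "test eax eax", "jnz fail", "call CreateProcessA"],
    ["push url", "call URLDownloadToFileA", "call WinExec"],
]

profiles = {
    "keylogger": centroid(keylogger_traces),
    "dropper": centroid(dropper_traces),
}

unknown = ["call GetAsyncKeyState", "test eax eax", "jz loop", "call WriteFile"]
print(classify(profiles, unknown))  # → keylogger
```

A production system would train on millions of labeled binaries and far richer features, but the principle is the same: statistical regularities in instruction sequences separate malware families without a human ever reading the disassembly.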

Nate Fick, the CEO of Endgame, said the objective isn’t to replace humans at the console, but to reserve valuable operators for the work that is best suited to humans: understanding context, synthesizing disparate information, divining intent and managing complex, multi-disciplinary activities like incident response.

Monzy Merza, the Chief Security Evangelist at the firm Splunk, likens it to recognizing that a man in a fur coat is out of place on a hot, sunny beach. Elementary school children can identify that scene as out of place, but even sophisticated artificial intelligence algorithms might struggle to do so.

“Humans are just good at contextualization,” said Merza. But with security talent at a premium, many enterprises are unable to even find the staff to spot the fur coated men lurking on their networks. And that has created a market opportunity for firms that do have that talent, and can figure out how to spread it across many different accounts.

Endgame is one such firm. The seven-year-old company cut its teeth working with the U.S. military and Department of Defense. That gave the firm a bird’s-eye view of the most sophisticated threats and attacks out there.

But it was also a kind of rarified atmosphere, Fick notes. He recalls instances in which project managers within the military would assure him that they could throw thousands of people at a particular security problem. Fick recognized that few government agencies – let alone private sector firms – can marshal those kinds of resources, and that it would fall to firms like Endgame to bridge the gap.

Today, Endgame does that with a mixture of software automation and artificial intelligence that can inspect network and endpoint activity in real time to spot potential compromises. The company uses data analysis to crunch information on endpoint activity, network visibility and known adversaries and tools.

The output of that analysis is passed to high-level security analysts with experience hunting APT actors operating from within government networks. That expertise in understanding the intersection of threat actors, tools, techniques and behaviors is invaluable to organizations that need to get their arms around a compromise or potential compromise and begin to turn the tide.

“Attacks are really just a kind of transaction,” said Merza of Splunk. “They might start with reconnaissance and move on, ultimately, to exploitation and exfiltration.” Using tools like data analysis, machine learning and automation to help identify those transactions early on can empower human operators to intervene before a compromise happens. Platforms like Splunk are increasingly being used to do just that: correlating threat information from a variety of security endpoints with human intelligence in ways that help human analysts identify and stop compromises early on.

Fick, of Endgame, acknowledges that ferreting out “advanced persistent” and state-sponsored actors is difficult, but it’s not impossible. “At the end of the day, these are human adversaries,” Fick said. “It’s not ‘the weather,’ we can understand their actions and gain unique insights into what they’re doing.”

One Comment

  1. It’s interesting — they’re talking about there being scarce talent, but most of what they’re discussing doesn’t require talent so much as technicians. I’m more inclined to believe that the reason they’re so enthusiastic about automation and (this actually worries me, if taken too far) AI is because, at the end of the day, doing things this way greatly enhances their bottom line by reducing their budgets considerably. From a financial standpoint it’s probably a better move, especially now that malware analysis is becoming more and more automated. I do wonder how this will impact people starting out in the field — will it greatly reduce entry-level positions, and if so how will that impact their ability to find more advanced talent for things that cannot be automated? I’m not sure such things can be taught in an academic setting (especially advanced incident response), so this will be something to keep an eye on.