Opinion: AI and Machine Learning will power both Cyber Offense and Defense in 2020

Artificial intelligence and machine learning hold great promise for both defenders and attackers, making it one of the most important security trends to follow in 2020, says Gerald Beuchelt, the CISO of LogMeIn.*

No matter how many brilliant security professionals and analysts you have in an organization, humans just can’t keep up with the data processing, analysis and other tasks required to prevent attacks. That’s why, when considering trends in cyber security for 2020, artificial intelligence (AI) and its subset machine learning should not be ignored. Here are some of the machine learning and artificial intelligence trends to pay attention to in 2020.

Identifying attacks and taking action

According to a recent Capgemini report, 51% of organizations make high use of AI to detect cybersecurity threats, a higher share than those using AI to respond to events (47%) or to predict them (34%). This makes sense as a first step for businesses investing in AI: they start by using it to establish baselines of normal and abnormal activity.

Gerald Beuchelt is the Chief Information Security Officer at LogMeIn.

Organizations can already use AI to monitor and analyze huge amounts of data, establishing what normal behavior looks like and what should be flagged. This also removes a huge burden from security teams, who would otherwise have to review this ever-growing pool of data manually.

Risk-adaptive authentication can be your first line of defense, preventing hackers from getting into your system. It uses artificial intelligence to adapt authentication in proportion to the risk of the login: requiring stronger authentication for higher-risk transactions, while low-risk or “normal” transactions authenticate in the standard way. For example, the system learns a user’s normal behavior over time – so if a user logs in from an unfamiliar address or at an odd time, the system considers that high-risk and can perform “step-up” authentication, requiring additional steps to authenticate or denying access outright. Admins can also be alerted when these abnormal events occur.
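The step-up logic described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual implementation; the signals, thresholds, and per-user history here are all invented for the example.

```python
# Minimal sketch of risk-adaptive ("step-up") authentication.
# All names, signals, and thresholds are illustrative.

from dataclasses import dataclass

@dataclass
class LoginAttempt:
    user: str
    ip_prefix: str   # e.g. first two octets of the source address
    hour: int        # local hour of day, 0-23

# Per-user picture of "normal", learned over time (hypothetical data).
KNOWN_PREFIXES = {"alice": {"203.0", "198.51"}}
USUAL_HOURS = {"alice": range(7, 20)}  # 07:00-19:59

def risk_score(attempt: LoginAttempt) -> int:
    """Score a login: each unfamiliar signal adds risk."""
    score = 0
    if attempt.ip_prefix not in KNOWN_PREFIXES.get(attempt.user, set()):
        score += 2  # unfamiliar network
    if attempt.hour not in USUAL_HOURS.get(attempt.user, range(24)):
        score += 1  # odd time of day
    return score

def required_auth(attempt: LoginAttempt) -> str:
    score = risk_score(attempt)
    if score >= 3:
        return "deny"       # every signal abnormal: block and alert admins
    if score >= 1:
        return "step-up"    # require a second factor
    return "password"       # normal, low-risk login path

# A familiar login authenticates normally; an unusual one is escalated.
print(required_auth(LoginAttempt("alice", "203.0", 10)))  # password
print(required_auth(LoginAttempt("alice", "192.0", 3)))   # deny
```

In a real product the score would come from a model trained on many more signals (device fingerprint, geolocation velocity, and so on), but the shape of the decision is the same: risk in, authentication requirement out.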

Podcast Episode 117: Insurance Industry Confronts Silent Cyber Risk, Converged Threats

If a hacker gets into your system, User and Entity Behavior Analytics (UEBA) systems are the next line of defense. They use AI to perform in-depth analysis of user and device behavior, creating a normal baseline and flagging anomalies. For example, let’s say a hacker gets into your system using a stolen employee credential. The UEBA system knows the typical behavior of that employee, so if the hacker begins performing abnormal actions like downloading huge amounts of data or accessing unusual applications, this will be flagged and access can be automatically revoked.
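The baseline-and-flag idea behind UEBA can be reduced to a small statistical sketch: learn what a user normally does, then flag sessions far outside that range. The data and the three-standard-deviation threshold below are made up for illustration; real UEBA products model many behaviors at once.

```python
# Illustrative sketch of the UEBA idea: learn a per-user baseline of
# daily download volume, then flag activity far outside it.

import statistics

def build_baseline(history_mb: list) -> tuple:
    """Mean and standard deviation of a user's past daily downloads."""
    return statistics.mean(history_mb), statistics.stdev(history_mb)

def is_anomalous(today_mb: float, baseline: tuple,
                 z_threshold: float = 3.0) -> bool:
    """Flag activity more than z_threshold standard deviations above normal."""
    mean, stdev = baseline
    if stdev == 0:
        return today_mb > mean
    return (today_mb - mean) / stdev > z_threshold

# An employee who normally pulls ~100 MB/day suddenly downloads 5 GB.
baseline = build_baseline([90, 110, 105, 95, 100])
print(is_anomalous(102, baseline))   # False: within the normal range
print(is_anomalous(5000, baseline))  # True: revoke access, alert the team
```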

Both risk-adaptive authentication and UEBA systems can monitor and analyze huge amounts of data very quickly, so they can catch potential threats in real time. Key to these technologies is a sound strategy for managing the associated identities: knowing the digital identity of the user or device helps correlate behavior across multiple sessions. The goal of advanced analytics is to ensure that a digital identity is actually being used by the “real human” entitled to use it.

Bad Data & Malicious AI

However, cyber criminals are also trying to hide their abnormal behavior from these AI systems. Their first tactic is to overwhelm the system with inputs, making it harder for the system to distinguish normal data trends and patterns from malicious activity.

We have already seen a couple of examples of malicious AI usage in biometric authentication. In 2019, a company was scammed into making a large wire transfer by an attacker who spoofed the CEO’s voice on a call with a financial analyst. Similarly, passports have been issued with photos that overlay the faces of two separate individuals, tricking face-recognition software at border entry points into letting both individuals pass.

Predicting future attacks

This is an area for real development in the coming years. Once AI systems understand when and how attacks have occurred in the past, the next step is predicting future attacks: for example, examining the trends leading up to an attack and identifying high-risk scenarios before they unfold.

Using temporal analytics, along with analysis and integration of structured and unstructured data, companies can build complex social media and other open source intelligence models to predict future attacks. This is becoming common practice among some of the largest organizations, such as Fortune 100 companies and those in banking, fintech and government: the businesses that can afford these cutting-edge technologies.

But can’t the bad guys use AI too?

Yes, cyber criminals are also using AI to improve the accuracy and efficiency of their attacks on businesses and end users.

Just as organizations can use AI to process large datasets, so can cyber criminals. And they are doing this to figure out your company’s vulnerabilities. One of the biggest vulnerabilities for many organizations is their employees. Employees already fall for phishing attacks, and hackers can use AI to identify the best targets (those with the most privileged access to your systems, those most likely to be tricked based on their online behavior, and so on).

They also use AI and machine learning to improve their social engineering techniques. AI can gather information on a target and generate custom malicious websites, emails and links that are most likely to be clicked. Attackers could even send fake emails that mimic the target’s writing style to try to con their coworkers. Using AI makes this a much less manual process, so cyber criminals can cast a wider net with less effort.

You can see how an arms race between cyber criminals and organizations is developing – with each needing to adopt the latest technology to keep up.

What should your organization do to protect itself?

Whether or not businesses can invest in AI right now, there are steps they can take to protect themselves. First, see if any of your organization’s vendor solutions are using AI or machine learning in ways you can take advantage of. This is an easier way to start than developing your own proprietary AI algorithms.

Second, educate employees on what to look out for and how phishing and social engineering attacks are getting more sophisticated. Make sure they know how and where to report suspicious emails, phone calls, websites, etc.

Third, protect your access points. Preventing hackers from getting into your environment is your first line of defense. Ensure you have access solutions in place that follow the principle of least privilege, so that only the right employees have access to the apps their roles require.
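Least privilege is easiest to enforce when access is deny-by-default: each role is granted exactly the apps it needs, and everything else is refused. The sketch below illustrates that pattern; the role, user, and app names are invented for the example.

```python
# Minimal sketch of a deny-by-default, least-privilege access check.
# Role, user, and app names are hypothetical.

ROLE_APPS = {
    "finance": {"ledger", "payroll"},
    "engineering": {"ci", "source-control"},
}

USER_ROLES = {"dana": "finance"}

def can_access(user: str, app: str) -> bool:
    """Allow only apps explicitly granted to the user's role; deny otherwise."""
    role = USER_ROLES.get(user)
    return app in ROLE_APPS.get(role, set())

print(can_access("dana", "payroll"))         # True: required for her role
print(can_access("dana", "source-control"))  # False: outside her role
```

Because unknown users and ungranted apps both fall through to an empty set, the default answer is always "no", which is the property least privilege depends on.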

Overall, the best thing you can do is stay up-to-date with the threat landscape and how AI can help. 

(*) Disclosure: This contributed article is sponsored by LastPass, a LogMeIn brand. For more information on how Security Ledger works with its sponsors and sponsored content on Security Ledger, check out our About Security Ledger page on sponsorships and sponsor relations.