Podcast: Play in new window | Download (Duration: 24:49 — 28.4MB) | Embed
In this Spotlight Podcast, we speak with David Brumley, the Chief Executive Officer at the security firm ForAllSecure* and a professor of Computer Science at Carnegie Mellon University. Brumley is a noted expert on the application of machine learning and automation to cyber security problems. In this podcast, we talk about the growing demand for security automation tools and how the chronic cyber security talent shortage in North America and elsewhere is driving investment in automation. You can also check out a transcript of our conversation below.
Every so often, a technology comes along that seems to perfectly capture the zeitgeist: representing all that is both promising and troubling about the future.
In the 1960s, that technology was plastic, a pillar of a massively expanding consumer culture in the United States that put “convenience” above all else. That’s the joke behind the now-famous “advice” given to Dustin Hoffman’s Benjamin Braddock in the 1967 movie “The Graduate” by the older Mr. McGuire: “I’ve got just one word for you, Benjamin…’plastics.'”
McGuire was on to something: the use of plastic did indeed mushroom in the decades that followed. Advances in the use of polymers revolutionized everything from food packaging to electronics, telecommunication and medicine. That’s undoubtedly been a benefit to billions of people on the planet. It has also made some smaller number of those people fantastically rich. But there is a downside to plastics and the throw-away culture they engendered, as we now know. Plastic trash now clogs our rivers and streams, and microplastics seep into our water and food and, borne on the winds, make their way to the earth’s most remote places.
Restage that same L.A. pool party in 2019, and young Benjamin might be advised to look into “AI” – artificial intelligence. Like plastics in the 1960s, AI and machine learning are already big and getting bigger. Machine learning algorithms are already being used in transportation to ease road congestion, in healthcare to spot medical errors and improve patient care, and in retail to improve the customer shopping experience. The technology is poised to change just about everything else …at least eventually. By 2030, AI could deliver additional economic output of $13 trillion per year to the global economy, according to McKinsey Global Institute research.
One industry where there is plenty of speculation about the potential applications and benefits of machine learning and artificial intelligence is information security, where high demand and an acute shortage of talent have executives, entrepreneurs and industry analysts arguing that the adoption of machine learning and AI is unavoidable, especially if companies hope to stay on top of multiplying and fast-evolving cyber threats without breaking the bank.
But how exactly will artificial intelligence help bridge the information security skills gap? And even with the help of machine learning algorithms, what kinds of security work are still best left to humans?
For our latest Security Ledger Spotlight podcast, we sat down with someone who is uniquely positioned to answer those questions.
David Brumley is the Chief Executive Officer at the security firm ForAllSecure and a professor of Computer Science at Carnegie Mellon University. He’s on the cutting edge of security automation: in 2016, he and a team of students from CMU were victorious in DARPA’s Cyber Grand Challenge with Mayhem, an assisted-intelligence application security testing solution.
In this interview, David and I talk about the potential and pitfalls of using machine learning and artificial intelligence in cyber security. We also talk about what’s driving the adoption of AI and machine learning technologies in the information security field. Namely: a chronic cyber security talent shortage globally and especially in North America, the EU and other advanced economies.
As both an entrepreneur and a teacher, Brumley has a unique perspective on the problem. He sees the future of AI and machine learning as intimately bound up with the difficulty of fielding cyber security talent.
“Computer security is not a known field to the high school student…even though it’s highly paid, tons of jobs, great career paths. We need to fix that problem,” Brumley told me. Capture the flag contests and cyber challenges like the one that launched his company are a great way to get young people interested in cyber security as a career. However, filling the talent pipeline is a long-term solution, and one we’re not even moving toward very quickly.
In the meantime, the answer is automation, powered by machine learning technology, which Brumley says companies like Google, Facebook and others are leveraging heavily to improve the security of their platforms.
“When it comes to what can you do today? It’s about taking those automated processes and saying ‘how can we incorporate those?'” The hard part for companies is being open to the change that adopting machine learning technology entails. “You can’t say ‘I don’t want to change anything, but I want security at the scale of Google.’ You’re never going to win that game,” Brumley told me.
Paul Roberts: This is the Spotlight Edition of the Security Ledger Podcast, and I’m Paul Roberts, Editor in Chief at the Security Ledger.
[Audio Clip: “Plastics” scene from The Graduate.]
Paul Roberts: Plastics may have been a hot tip in 1967 when the movie The Graduate came out, but in 2019, young Benjamin might be advised to look into AI, or artificial intelligence. By 2030, it’s estimated artificial intelligence could deliver additional global economic output of some $13 trillion annually, according to research by the McKinsey Global Institute. The benefits of artificial intelligence are already upon us, and nowhere is that more evident than in the cybersecurity space, where high demand for services and an acute shortage of talent have executives, entrepreneurs and industry analysts predicting that artificial intelligence and machine learning technology will be critical to allowing companies to stay on top of fast-evolving cyber threats without breaking the bank. How exactly will artificial intelligence help bridge the InfoSec skills gap, and what kinds of security work are still best left to humans? Our guest this week has a unique perspective to offer on those questions.
Paul Roberts: David Brumley is the Chief Executive Officer at the firm ForAllSecure and a professor of computer science at Carnegie Mellon University. In 2016, professor Brumley and a team of students from CMU were victorious in DARPA’s first-ever Cyber Grand Challenge, which pitted automated cyber defense technologies against one another. They won with Mayhem, an assisted-intelligence application security testing solution. In this interview, David and I talk about the potential that artificial intelligence, machine learning and automation hold in the information security space, what’s possible today and what may be possible in the future. We also talk about the pitfalls of using artificial intelligence in cybersecurity and about the best way to tackle the U.S.’s chronic cybersecurity talent shortage.
David Brumley: My name is David Brumley. I’m CEO of the company ForAllSecure. Yeah. ForAllSecure does automated analysis to find unknown defects in applications. We look for exploitable vulnerabilities. So when we started this company, our mission and the set of products we’re bringing to market are to automatically check the world’s software for exploitable vulnerabilities. The two key important words for us are, we want things that are automatic and to look at exploitable vulnerabilities.
Paul Roberts: As we have often said or observed on Security Ledger Podcast, every company these days is becoming or has already become a software company. What kinds of companies does ForAllSecure work with? Are these traditional software publishers or are you working with some of the many companies that maybe are developing physical devices that run software?
David Brumley: Yeah. We’ve had kind of an interesting path. We came out of a DARPA research project called the cyber grand challenge. So when we first came to market, our big customers were the U.S. government and, kind of interestingly, we were testing already compiled software. So we were testing on the test-and-evaluation side, where someone has already written the software and then, later on, it needs to be checked for vulnerabilities. Since then, we’ve expanded into the commercial sector and we’re working in aerospace along with high tech companies who really want to do a better job finding these vulnerabilities before attackers do.
Paul Roberts: The cyber grand challenge, tell our audiences just a little bit about the origins and what that challenge is about.
David Brumley: The cyber grand challenge was pretty cool. In 2014, the Defense Advanced Research Projects Agency, DARPA, the people who really funded the original internet, said, “Can we make cyber fully autonomous?” And what they meant by that is, “Given applications that you didn’t write, can you write a system that automatically finds and proves vulnerabilities, and is able to self-heal?” One of the unique things about how they did this is they judged it in a full-spectrum hacking contest. It wasn’t about saying, “Okay, you know, I think I found a bug and maybe this is a patch.” It was about showing that you could actually beat adversaries and do these two steps faster than anyone else.
Paul Roberts: What are some of the types of tasks that maybe today, human beings are being asked to do that would be better off shunted to an automated system, a machine learning system for example, to perform?
David Brumley: One of the things that we’ve found is that humans are really bad at finding many types of common security vulnerabilities, especially in high-performance languages like C, Go, or Rust or something that’s going to get compiled down to an executable. The machines are really good about systematically testing those. One of the cool things you can do is just add more CPU power to do better testing. It’s much cheaper than, for example, hiring an FTE. What we’ve seen in practice, for example, is Google has used automated fuzzing to find I think, 12,000 new bugs in Chrome completely automatically. Of course, these are things that humans had missed previously.
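Brumley’s point about machine-driven testing is easy to make concrete. The toy parser below and its missing length check are hypothetical, purely for illustration, but they show the shape of the technique: a seeded random fuzzer throws junk inputs at a target, tirelessly, and records every one that crashes it.

```python
import random

def parse_record(data: bytes) -> int:
    # Toy parser with a deliberate bug: it assumes every input has at
    # least a 4-byte header and indexes into it without a length check.
    return data[3]

def fuzz(target, iterations=10_000, seed=1234):
    """Throw random byte strings at `target`, recording inputs that crash it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, type(exc).__name__))
    return crashes

crashes = fuzz(parse_record)
print(f"{len(crashes)} crashing inputs found")
```

Every crash the fuzzer logs here is an input shorter than four bytes, exactly the “side case” a human tester rarely thinks to type. Production fuzzers like AFL or Google’s ClusterFuzz add coverage feedback and input mutation on top of this basic loop.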
Paul Roberts: These are bugs that already made it through whatever QA process Chrome has, which presumably is pretty substantial.
David Brumley: Oh yeah. The Chrome team, I think there’s 38 people just on the security team, so these are smart, well funded people, but it’s just hard to do that sort of in-depth testing. Computers never get tired. They can be the proverbial monkey on the keyboard, typing 24/7, trying to find those problems, proving them.
Paul Roberts: When we talk about finding vulnerabilities in software, what types of activities are we really talking about? Give us an example of how either a human or a machine learning system might go about locating a vulnerability in uncompiled code, or compiled code, I guess, for that matter?
David Brumley: Those are really historically two different processes. If you had compiled code, like if you’re just given software, you bought it, maybe it’s part of your SOHO wireless router, really it would take significant reverse engineering expertise to even begin going down the path of finding exploitable vulnerabilities. That’s a place where computers can really do a lot more than humans because they can reason about the code as it actually is going to execute. Once code is compiled, that’s a language for computers to execute. So it makes sense that computers are the best at analyzing it.
David Brumley: When you look at when people have source code, people are like, “Well, why do we need computers there?” And the reason is, often people make mistakes in how they think about a program. For example, they may think, “Hey, the user’s going to give me an input and it’s only going to be as long as maybe a DNS record,” but they never actually check that, and computers can find those side cases. They can help cover the human blind spots, I guess is one way to put it.
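The unchecked assumption Brumley describes can be sketched. The limits below are the real DNS wire-format bounds (255 bytes for a full name, 63 per label, per RFC 1035); the validator itself is a hypothetical illustration of the check a developer assumes but never writes, and the overlong input is exactly the edge case an automated tester tries first.

```python
MAX_NAME_LEN = 255   # DNS limit for a full domain name (RFC 1035)
MAX_LABEL_LEN = 63   # DNS limit for a single label (RFC 1035)

def validate_dns_name(name: str) -> None:
    # The check the developer assumed ("the input is only going to be as
    # long as a DNS record") but never actually wrote down.
    if len(name) > MAX_NAME_LEN:
        raise ValueError(f"name exceeds {MAX_NAME_LEN} bytes")
    for label in name.split("."):
        if not 0 < len(label) <= MAX_LABEL_LEN:
            raise ValueError(f"label {label!r} violates length limits")

validate_dns_name("example.com")              # passes
try:
    validate_dns_name("a" * 64 + ".com")      # 64-byte label: the side case
except ValueError as e:
    print("caught:", e)
```

In memory-unsafe languages like C, the same missing check turns into a buffer overflow rather than a clean exception, which is why automated testing of compiled code pays off so quickly.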
Paul Roberts: Assumptions I guess, that developers make that users are going to use their software as designed rather than look for mistakes or vulnerabilities in it. That seems to be a major obstacle even today.
David Brumley: It’s a huge obstacle. I want to talk a second about the software supply chain problem. So we’ve seen a pretty recurring problem. You’ll write an application and maybe it parses XML or JSON, so you’ll go find Open Source that parses XML or JSON and you’ll build your application on top of that. Even if you audit your own software, you’re inheriting all these software vulnerabilities from the supply chain. You need to start looking at techniques that help you check that supply chain because at the end of the day, the user, the hacker, they don’t care whether it was your software or the Open Source software, or something from a third party. They just see it as your app and it’s a huge blind spot. Developers often forget to check that.
Paul Roberts: Yeah. There are companies out there that do assessments of Open Source, right? Black Duck and Synopsys and companies like that. Is it adequate to use their services or is there more that needs to be done?
David Brumley: Well, I think it’s still a growing field. If you look at this idea of, I think we call it software component analysis, SCA, it’s a rather new field, and the general idea for SCA tools is, “Hey, I know this piece of software is vulnerable, it’s in Open Source.” And they’ll try to identify whether or not it’s part of your build. If you’re technical, they’ll run a strings match and say, “Hey, the library version that we know is vulnerable is present on your system.” I think that’s a good start. Really what that’s doing is saying, of all the known vulnerabilities out there in Open Source, we can check whether or not you’re using those known vulnerable components. But we have to go beyond that, because Open Source isn’t deeply checked. This idea that many eyes find all vulnerabilities was a great theory, but it hasn’t proven itself out in practice.
David Brumley: That’s where techniques like fuzzing come in. They don’t just assume the Open Source community knows of all the bugs in a component, and so it’s sufficient just to check for known bugs. If we use an XML library, just using the current version of that library isn’t enough. There’s still probably a whole host of problems that are still latent in it, or it could just be that how you’re using it in your application is unintended. Perfectly fine for you, but it creates new security vulnerabilities you need to check. I say it’s a blind spot because developers often think, “It wasn’t developed here so we don’t have to worry about it,” or “We’ll just run SCA.” That’s really just saying, “Well, I’m sure someone else has checked.”
Paul Roberts: We talked about, I know, before also, the mental assumption I think that people make that, especially if it’s a widely used component, then you’re kind of extra safe because so many people use it. “Somewhere along the line there, somebody must have audited this code. It’s not me, but I’m assuming, 100,000 downloads, somebody audited it.” But in fact, I think we see that that’s not the case, that everybody’s kind of pointing fingers at everybody else. Often, these widely used components, as you said, Open Source components, libraries and so on, might have even glaring vulnerabilities that just nobody’s caught.
David Brumley: Yeah. If you think about it in terms of incentive, those Open Source developers are often unpaid, they’re doing it as a side project.
Paul Roberts: Yeah.
David Brumley: Maybe in the best case, like Apache, there’s a foundation, but they’re not staffed to go and do rigorous security audits. So a component could be popular and solving an important problem, but that doesn’t mean that you should assume for your use case that it’s secure enough. It’s really on you, if you’re shipping software, to make sure everything you do is secure. I said I ran a hacking team, and one of the things you do if you’re going to go look for a new vulnerability is you actually just look through the Open Source components and try to figure out what hasn’t been audited.
David Brumley: If you look at the Tesla hack, it was interesting. The way they hacked it was they found a vulnerability in Chrome. Now of course, Tesla doesn’t write Chrome, but that’s just an example of they’re using Open Source, they said it was validated enough, but it certainly wasn’t enough for automotive use.
Paul Roberts: As you said, sometimes behind these widely used components there might be only one or two individuals, who obviously are more than happy to have help and maybe not looking too closely at what that person who’s helping them is doing.
David Brumley: It’s amazing, right? There’s something that’s just kind of out there, anyone can contribute and then you’re going to trust it. We even see this in proprietary software. We did an audit of wireless routers that you can buy from Amazon and a huge number of them contained essentially back doors. We can call them field access if you want, but I mean come on.
Paul Roberts: Yeah.
David Brumley: If you’re buying a router from Amazon, why should the company who made it be able to log into your router?
Paul Roberts: Distinction without a difference, as I say, yeah.
David Brumley: We literally found a device that was used in safety critical systems that had a program called BK door, back door, on the system.
Paul Roberts: Just in case you were confused by the unusual name of the function, what it was for. BK door, there we go.
David Brumley: Oh, no. Yeah. That’s for field maintenance and I’m pretty sure none of the users expected that.
Paul Roberts: One of the challenges is that, just as the risk problem bites, the stakes get higher and more people start paying attention to application security, there’s also tremendous pressure to rapidly iterate software programs and applications, and that privileges speed and getting code out there. Is it possible to reconcile that with the types of things that you’re talking about, and if so, how?
David Brumley: Some people use us, and the way our product Mayhem was designed was to check compiled software, so this allows the end user to check the security of the software they use, which I just felt was a fundamental primitive we didn’t have. Things like SCA and static analysis, those are for the developer to check and that’s great, but the end user should be able to check, too. Nonetheless, that’s at the end of the process, and what we’re seeing with the rise of DevSecOps is, I really think it’s a transformational technology, or transformational idiom. We’re saying it’s not enough to check security at the end. It has to be integrated into your dev cycle.
David Brumley: Just like any new process, there’s things you can do to make your life easier. If you just take the same old way you’ve done stuff and say, “Okay, we’re going to add security and shift left,” that’s not enough. You have to say, “What are the new processes, and where can we add it to our pipeline to find these things as early as possible?” We’re starting to see a shift towards that. Of course, it’s not happening, I think, as quickly as any of us would like, but there are companies doing it.
Paul Roberts: We talked about autonomous security. What in your mind is the proper balance, I guess, between automation and human analysts? Where’s the handoff? What are things that, at least at this point in time, you’re much better off having humans look at, and what are things where you might save money and be better off having the computers and machine learning algorithms take care of them?
David Brumley: That’s a good question. First, I think in a lot of practices, the human is the weak point, especially when you look at how software is deployed. It can be days, weeks, months, years before software that has a fix gets deployed, and we have to reduce that time. So that’s a place for autonomy. If it passes your regression tests, you should be able to field it. That’s the dream your organization should be shooting for. So I think that’s one of the properties: make sure that you can automate to the point that if your automation says it’s a pass, you can actually field it.
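The bar Brumley sets, automation trustworthy enough that a green run means you can field the build, amounts to a gate like the following sketch. The check commands shown in the comment are placeholders; the only logic is that every check must pass before anything ships.

```python
import subprocess

def gate(checks: list[list[str]]) -> bool:
    # Run each check command in order; field the build only if every one
    # passes. In a real pipeline the list would hold the regression suite
    # and an automated security pass, e.g.
    #   [["pytest", "-q"], ["./fuzz_smoke.sh"]]
    # (both placeholder names here).
    for cmd in checks:
        if subprocess.run(cmd).returncode != 0:
            print(f"gate failed at: {' '.join(cmd)}")
            return False
    print("gate passed: safe to field")
    return True
```

The design point is the one Brumley makes: the gate is only useful if the organization actually trusts it, i.e. a pass triggers deployment rather than another round of manual review.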
David Brumley: I think the role of the human is to architect the system to make it easy to check. So I’ll use Google Chrome as an example; we didn’t write Google Chrome, so it’s a great kind of third party example. Google has spent a lot of time building sandboxes into their web browser. These are like little safety pits for the thing playing MP3s and the thing that’s playing videos and the thing that’s doing audio processing. The reason they do that is not just for security; it also makes the software easier to test. To answer your question, the human had to set up the architecture. They said, “Okay, this is a chunk. It’s going to be one component and it’s going to be testable. This is another chunk. It’s testable. I can put them together and that’s testable.” And you need the human to design those systems so that the computer can take over from there.
Paul Roberts: You talked about the tremendous resources that companies like Google, or Facebook, or Microsoft, Apple are able to throw at big security problems. Obviously, most companies don’t have those resources. One question would be, are we already seeing sort of a security poverty line, where security becomes something for big wealthy companies, but for everybody else, all the other software publishers out there, it’s sort of beyond their reach because, as you said, of the scarcity of talent, the cost of that talent and, obviously, the demands of the marketplace? I guess, is there a way to get around that, to square that triangle?
David Brumley: Oh, it’s interesting that you framed it as a security poverty line. I hadn’t heard that, but that’s exactly what we’re experiencing. So the long-term fix, right? Is we need to get more people interested in computer security as a field. There’s just too few people coming out, which makes those coming out such highly sought after people that it’s really hard to compete with a Google job offer if you’re a smaller company and you’re not willing to pay 300K a year and all those benefits. So I think we’re seeing that poverty line. I think the common wisdom is always, “You need to work smarter.” The way that you do that is you can look at, “What are those companies doing that’s automated? How do I switch my processes so that I can take advantage of those even though I’m not big?”
David Brumley: So Google has, I think, 24,000 CPU cores that are trying to do fuzz testing for Google Chrome. Now, most people don’t have 24,000 cores, but there are services and products like ours, and there are Open Source utilities like AFL, where you can set it up yourself, and even just putting 10 CPUs on automatically checking your target, you’re going to find a lot of problems that you didn’t know about before. I don’t want to distract us from the larger problem we have to fix: computer security, I’m just going to be direct, is not a known field to the high school student. They don’t come to university by and large saying, “I want to be a computer security expert,” even though it’s highly paid, tons of jobs, great career paths. We need to fix that problem. I don’t want to belittle that at all.
David Brumley: Things like hacking contests, engaging with Open Source, engaging with high school students are the ways to do that, but when it comes to, “What can you do today?” It’s taking those automated processes and saying, “How can we incorporate those?” The hard part for companies is you have to be willing to change. You can’t just say, “I don’t want to change anything, but I want security tomorrow at the same scale as Google.” You’re never going to win that game.
Paul Roberts: It’s really interesting. You’re an educator obviously, so I know you see this firsthand. As I look at it, it’s not even that high school kids aren’t thinking about cyber security; I don’t see many kids thinking about software development. I know that those majors are popular in colleges, but if you were somebody who was interested in something like software development, you really had to seek it out, basically outside of the K-12 system.
David Brumley: Yeah. I think that you have to start at the high school level, and there’s a couple of barriers. One of the things we do at CMU is we run a high school hacking contest called picoCTF; it’s open to everyone. It’s actually going to run in October this year, but last year I think we had like 80,000 U.S. high school students play. I found a couple of things.
Paul Roberts: That’s great.
David Brumley: One thing is, a lot of computer security in the marketplace is about fear, uncertainty or doubt. It’s about danger, and you can break into things, and it confuses criminal with hacker. Hacker should be something we aspire to. You shouldn’t equate it with criminal, but I think when we’ve run these contests, what we found is that actually puts a lot of students off as well. It’s like, “My interest isn’t how to break into things.” And when you start rephrasing things and say, “Computer security is about building trust in the things that we use every day so that people can trust them. You’re actually helping people.” And good software development is the same way. You can actually reach a larger audience.
David Brumley: I think some of the things the U.S. needs to do is actually be a little bit more serious about it. They put a lot of talk into it, but not a lot of action. I remember we run this as a volunteer effort, this high school hacking contest, very little funding for it, pretty large participation. We started getting letters from various states saying, “Hey, you have to sign all these agreements with us because our students are using it.” Every state had a different process. While I admire the overall idea that they care deeply about their student privacy and what they’re looking at online, it makes it really hard to build something that touches many lives. If you look at places like Russia, it’s just a gladiator sport there, right? Like whoever wins the big gladiator contest is the best. In the U.S., we’re much more about human choice and about freedom.
Paul Roberts: Yeah.
David Brumley: We need to spend more time encouraging it and giving people opportunities because it’s not ever going to be mandated, nor should it be mandated. We actually have to put our money where our mouth is on this.
Paul Roberts: Let’s have a TV show, or a Netflix show, about somebody who does cybersecurity or does software application development who isn’t wearing a hoodie and isn’t, you know, a misanthrope.
David Brumley: Absolutely. You know what? This TV show exists in China, where there’s the movie star with the great looks, the girl, but she’s a hacker playing capture the flag contests. Why don’t we have that here?
Paul Roberts: We have Mr. Robot, which is an amazing show, an amazing show, but you could be forgiven if you didn’t see it and say, “I’m not sure that’s the community for me.”
David Brumley: Yeah. We should be raising these people up like a good hacker like the people [inaudible 00:21:05] own are helping us make things we care about, like our cars, better.
Paul Roberts: Exactly. Exactly. So kind of emphasizing pro social rather than antisocial tendencies because obviously, there are many more pro social than antisocial people out there.
David Brumley: Yeah, I totally agree. So we’re big advocates. Computer security is about fostering trust and increasing trust and actually helping people. It’s about helping the person who’s not a computer science or engineering major, who wants to write their English paper or wants to do research on dinosaurs, making sure they can trust their devices, the airplanes they fly in. I think if we start rephrasing it that way, we’ll attract a bigger audience. I also think computer security tools really need to start focusing more on that message. We, in industry, need to do our part of not just going on about the FUD, the fear, uncertainty, doubt.
Paul Roberts: For folks who are out there listening to Security Ledger Podcast, maybe they’re working in technology, maybe they’re working for a company that is making some software driven thing, they’re probably worried as heck about their software supply chain risk. Where do they start? How do they even start to get their arms around this very big problem?
David Brumley: Well, I think there’s different stages for everyone. So I’ll give a couple of pieces of advice.
Paul Roberts: Denial is the first stage, right?
David Brumley: Denial is the first stage. The second is wishing, “Why didn’t someone else solve that? I hope that there’s like… I can just go buy that black box and it works.”
Paul Roberts: The third stage is outsourcing.
David Brumley: Third stage is outsourcing. That doesn’t work so well. I’m a big believer that you’ve got to look at pairing tools with processes. So at ForAllSecure, we have tools that help companies automate the same sort of things Google and Microsoft do. We think they’re more technically advanced. We won the cyber grand challenge; DARPA deemed us best. Our website is forallsecure.com. I think if you’re in business, that’s a great way to get started: just talk to us. Get a different perspective. I think if what you’re trying to also do is grow the community, and I think some of your listeners are, encourage people to play in these hacking contests like picoCTF.com. You can learn a lot. We have a large number of U.S. high school students play, like I said. Create your own. We, by no means, think that we should be the authoritative source on that. I think there’s two answers there.
David Brumley: One is, start looking at products, and don’t just say, “Okay, I’m going to go look at what everyone else is buying.” Start thinking about, “Well, what do the best people do and how do I mimic that?” Because it’s not as hard as you think to get those sorts of tools, and we offer them. Then the second is, start participating in the community and growing it. I think that we’re really at a transition point in the U.S. as far as software development, and so I really look forward to hearing from people what they think their problems are so that we can better address them. I’ve talked about some here, but I think it’s good to have that dialogue. So anyone, feel free to reach out to me. It’s just D.Brumley@forallsecure.com.
Paul Roberts: David Brumley of ForAllSecure. Thanks so much for coming on and speaking to us on Security Ledger Podcast.
David Brumley: Oh, thanks for having me. Have a great day.
Paul Roberts: David Brumley is the Chief Executive Officer and co-founder at ForAllSecure. You’ve been listening to a spotlight edition of the Security Ledger Podcast, sponsored by ForAllSecure. ForAllSecure was founded with the mission to make the world’s critical software safe. The company’s patented technology is the product of over a decade of research into solving the difficult challenge of making software safer. ForAllSecure has partnered with Fortune 1000 companies in aerospace, automotive, and high tech, as well as the U.S. Department of Defense, which has integrated ForAllSecure’s Mayhem technology into software development cycles for continuous security. Check them out at forallsecure.com.
(*) Disclosure: This podcast was sponsored by ForAllSecure. For more information on how Security Ledger works with its sponsors and sponsored content on Security Ledger, check out our About Security Ledger page on sponsorships and sponsor relations.
As always, you can check our full conversation in our latest Security Ledger podcast at Blubrry. You can also listen to it on iTunes and check us out on SoundCloud, Stitcher, Radio Public and more. Also: if you enjoy this podcast, consider signing up to receive it in your email. Just point your web browser to securityledger.com/subscribe to get notified whenever a new podcast is posted.