
Episode 180: Gary McGraw on Machine Learning Security Risks

In this episode of the podcast (#180), Gary McGraw of the Berryville Institute of Machine Learning joins us to talk about the top security threats facing machine learning systems. [Transcript]


As long as humans have contemplated the idea of computers, they have contemplated the idea of computers that are capable of thinking and reasoning. And as long as they've contemplated the notion of a thinking machine, they've wondered how to contend with the consequences of computers' faulty reasoning.


Stories about machines acting logically, but based on faulty or incorrect assumptions, are the fuel for science fiction tales ranging from Arthur C. Clarke's 2001: A Space Odyssey to Philip K. Dick's Minority Report to 1980s cult classics like the movies WarGames and The Terminator.

Gary McGraw is the co-founder of the Berryville Institute of Machine Learning.

So far, these warnings have been the stuff of fiction. But advances in computing power and accessibility in recent years have put rocket boosters on the applications and abilities of machine learning technology, which now influences everything from multi-billion dollar trades on Wall Street, to medical diagnoses, to what movie Netflix recommends you watch next.

As machine learning and automation fuel business disruption, however, what about the security of machine learning systems? Might decisions be manipulated and corrupted by malicious actors intent on sowing disruption or lining their own pocket? And when machine decisions go awry, how will the humans impacted by those decisions know?

Adversarial examples such as altered street signs can fool machine learning algorithms into misreading what they see. (Photo courtesy of Cornell University.)


Our guest this week, Gary McGraw, set out to answer some of those questions. Gary is the founder of the Berryville Institute of Machine Learning, a think tank that has taken on the task of analyzing machine learning systems from a cyber security perspective. The group has just published its first report: An Architectural Risk Analysis of Machine Learning Systems, which includes a top 10 list of machine learning security risks, as well as some security principles to guide the development of machine learning technology.

In this conversation, Gary and I talk about why he started BIML and some of the biggest security risks to machine learning systems.  

Transcription

00:00:00 – 00:05:02

Paul: Hello, this is the Security Ledger Podcast. I'm Paul Roberts, Editor in Chief at the Security Ledger. In this week's episode of the podcast, number 180:

[Sound Clip from 2001: A Space Odyssey – HAL talks to astronauts]

As long as humans have contemplated the idea of computers, they've contemplated the idea of computers that are capable of thinking and reasoning, and wondered how to contend with the consequences of computers' faulty reasoning. Stories about machines reaching the wrong decisions logically are the fuel for science fiction tales ranging from 2001: A Space Odyssey by Arthur C. Clarke to Minority Report by Philip K. Dick.

So far these warnings have been the stuff of fiction. But advances in computing power and accessibility in recent years have accelerated the use of machine learning technology and expanded its applications and capabilities. Today machine learning influences everything from multi-billion dollar trades on Wall Street to medical diagnoses to recommendations from Netflix about what movie you should watch next.

As machine learning and automation fuel business disruption, what about the security of machine learning systems? Might decisions be manipulated and corrupted by malicious actors intent on sowing disruption or simply lining their own pockets? When machine decisions go awry, how will the humans impacted by those decisions be able to tell? Our guest this week, Gary McGraw, set out to answer some of those questions. Gary is the founder of the Berryville Institute of Machine Learning, a think tank that has taken on the task of analyzing machine learning systems from a cybersecurity perspective.

The group just published its first report: An Architectural Risk Analysis of Machine Learning Systems. In this conversation, Gary and I talk about why he started the Berryville Institute and about some of the biggest security risks to machine learning systems.

Gary: I’m Gary McGraw. I’m the co-founder of the Berryville Institute of Machine Learning

Paul: Gary, welcome, and I think it's welcome back to the Security Ledger Podcast, because I think we've had you on before. Today we're talking about your latest endeavor, which is the Berryville Institute of Machine Learning. First of all, tell us how this got started.

Gary: Well, it's sort of a sad story. I will admit that I tried to retire, and I was bad at retirement. Last January, when I retired from doing software security work professionally for Synopsys, I decided to take a look at machine learning, which I worked in twenty-five years ago, to see what kind of progress had been made. Because you see all this incredible coverage: machines can play Go now. They can read everything now. They can translate all your speech and, you know, understand what you're saying. I wondered how much of that was hype and how much was real, and what progress had actually been made in twenty-five years. I was talking to a guy who's now in the Berryville Institute with me at a Technical Advisory Board meeting for Intrepid. That guy's been working in machine learning directly for the last five years or so. So we decided to put together a research group, just look into the field, read some scientific papers and see what progress has been made.

What we found out is not surprising: computers are way, way faster and data sets are way bigger. But as we were reading and learning, we realized it was really just that the computers have gotten better; the algorithms are pretty much the same. What we also found is that nobody's really paying attention to security. What little there is was about attacks. For example, there's the famous story of putting tape on a stop sign and making the machine believe that it's a Speed Limit 45 sign, which would be hugely problematic if it were a Tesla doing that, for example. Or a machine learning algorithm that's supposed to distinguish between wolves and dogs, and it does a great job, but it turns out that it's not actually distinguishing between wolves and dogs. It's just a snow detector.

00:05:02 – 00:10:06

So if there's snow in the picture it says "wolf." So these sorts of things get a lot of coverage in the press as security problems, especially with the nomenclature "adversarial input." And that's good, but it reminded me a lot of software security in the early days, when we were breaking this piece of software and that piece of software and there wasn't any coverage of what we should do about it. So we decided we would do a risk analysis. And that's what we did. It took us a year.

Paul: So tell us about how you went about that.

Gary: Yeah, in a very science-y way. We started reading papers and we just followed our way through the references. At the same time we created what's a really nice resource now: an annotated bibliography, which you can find on the Berryville Institute of Machine Learning website at berryvilleiml.com. In that annotated bibliography, you can see the papers that we read. We're still actively doing that. In fact, Thursday we covered four new papers that we read, and we discussed those at great length to try to determine, you know, which direction the field is going. Who's making progress? Who's doing trivial work and who's doing really profound, earth-shattering work, you know?

It's been a complete blast to put on my scientist hat again, because I'm a trained scientist, and think about how we get to the edge of this from a scientific research perspective and then contribute. So when it came to contribution, we decided that what was really most needed was a risk analysis of machine learning systems writ large. We came up with a generic model of a machine learning system that has nine components, and then we thought about those nine components very deeply, considering what risks might be associated with each of those components individually. We identified a whole bunch of risks that way, and then we started thinking about the system as a whole and the interactions between components, and identified even more risks. What we ended up with was seventy-eight risks that we talk about explicitly in this document, and then we made a top ten list, because you have to put stuff like this out in the world. And when we did, people were just going, "Holy cow. This is amazing! Nobody's done this before!" It was pretty exciting to take the kind of work I've been doing for twenty-five years and apply it to machine learning.

Paul: So it might be useful to just step back for a second and kind of define terms here. When we're talking about machine learning systems, how would you define that? What are we talking about?

Gary: So you're talking about basically an algorithm that learns to associate input with output. You might think about something that classifies pictures: the machine is trained up to look at pictures and say whether or not there's a ball in the picture. So you show it, you know, tens of thousands or maybe hundreds of thousands of pictures, and each time there's a ball you say "ball," and it learns to say "ball" when it sees a ball. And when you show it a picture without a ball, it says "no-ball." So it learns to classify, through statistical association, whether or not there's a ball. It's not learning by being coded with a set of rules in the old way of building a computer program. Instead, it's a neural network that has weights and thresholds, and you feed back whether it gets it right or wrong through the network and adjust the weights and thresholds in such a way that the network will end up doing the task that you want to train it to do.

That's a very, very simple example, but by and large that's how these things work: you're building in associations. Machine learning security is not about using machine learning to do security. Machine learning security is about the security of machine learning. And it's kinda like building security in, versus, like, sprinkling magic crypto fairy dust everywhere.

Paul: I love magical crypto fairy dust.
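Gary's ball classifier boils down to a feedback loop over weights and a threshold. Purely as an illustration, here is a minimal sketch of that loop in Python; the two features, the labels and the learning rate are made-up stand-ins, not anything from BIML's report.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: two made-up features per picture, label 1 = "ball", 0 = "no-ball".
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # hypothetical ground truth

# A single "neuron": two weights plus a threshold (bias).
w = np.zeros(2)
b = 0.0

def predict(X):
    # Statistical association, not hand-coded rules: a sigmoid over a weighted sum.
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

# Feed back right/wrong and nudge the weights and threshold a little each pass.
for _ in range(500):
    p = predict(X)
    error = p - y                          # how wrong the current weights are
    w -= 0.1 * (X.T @ error) / len(y)
    b -= 0.1 * error.mean()

print("training accuracy:", ((predict(X) > 0.5) == y).mean())
```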

Paul: So you have these seventy-eight different types of machine learning risk, but before that there's a sort of taxonomy of known attacks on machine learning that you put together, like input manipulation, data manipulation, model manipulation.

Paul: So as you guys surveyed what's out there, what is the state of play right now in terms of attacking machine learning? For those different types of attacks, are there any that are actually prevalent among those you identified?

Gary: Definitely. I mean the number one risk and the number one attack are pretty much the same, and that’s this idea of adversarial examples.

Because of the way that these things do their statistical association, you can often make a mask that is imperceptible to a human and put that over a picture and then have it categorized incorrectly. So for example you might have something that’s supposed to identify tanks and you figure out a way to put some noise into the input so that it thinks that all tanks are cats and it says “Cat! Cat!

00:10:06 – 00:15:04

Everything's fine, it's just a bunch of cats coming over the border!" You know? No problem. So that kind of attack has gotten a lot of coverage for many reasons. One is, it's a kind of input that people understand because it's visual. Some people have worked on text-based adversarial input. But basically what you're doing when you're manipulating adversarial input is you're fooling the machine learning system by providing malicious input with really tiny perturbations that a human can't see, but the machine is like, "Well, that's totally a cat," you know? And so there's a disproportionately large amount of coverage for that stuff, but it's very much real. I mean, it's got a lot of sex appeal. You can put pictures in your articles, whatever. So almost all of the oxygen in the room in machine learning security is taken up by risk number one, adversarial examples, which is not necessarily terrible. But there are seventy-seven other risks. We should think about and talk about them.
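To make the "tiny perturbation" idea concrete, here is a minimal sketch in the spirit of gradient-sign attacks. The "tank versus cat" classifier below is a made-up linear model, not any real system, and every weight and pixel value is a hypothetical stand-in.

```python
import numpy as np

rng = np.random.default_rng(7)

# A hypothetical, already-trained linear classifier over 28x28 pixel values:
# a positive score means "tank," a negative score means "cat."
w = rng.normal(scale=0.001, size=784)
b = 0.0

def score(x):
    return float((x - 127.5) @ w + b)

def label(x):
    return "tank" if score(x) > 0 else "cat"

x = rng.uniform(0, 255, size=784)            # a stand-in "image"
print("clean image:     ", label(x), round(score(x), 3))

# Gradient-sign trick: nudge every pixel by the same tiny step in whichever
# direction pushes the score across the decision boundary.
step = 1.1 * abs(score(x)) / np.abs(w).sum()
x_adv = x - np.sign(score(x)) * step * np.sign(w)

print("per-pixel change: %.2f out of 0-255" % step)
print("perturbed image: ", label(x_adv), round(score(x_adv), 3))
# A real attack would also clip x_adv back into the valid pixel range.
```

The per-pixel change stays tiny, yet the label flips, which is the pattern Gary describes.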

Paul: What are some of those risks? Are there particular ones that you think are particularly salient or worrying?

Gary: Yeah well let me just march through the top ten and then you can pick two or three out of the top ten and we can talk more about them.

Paul: Perfect. You’re giving me choice. I like that.

Gary: Yeah number two is — this is how you feed kids lunch, by the way. “Do you want a peanut butter sandwich or..?”

Paul: Mmmm…Yummy!

Gary: (Laughs) So number two is data poisoning. Number three is online system manipulation. Number four is transfer learning. Number five is data confidentiality. Number six is data trustworthiness. Number seven is reproducibility. Number eight is overfitting. Number nine is encoding integrity. Number ten is output integrity.

And before I let you pick, I will just say this one thing. This is kind of a meta point, and it's super important. We spend a lot of time in security thinking about the technical systems that we're building and the risks that are inherent in those systems. But we don't spend that much time thinking about the fact that in machine learning, most of the system is the data that we used to train the system. So when you're thinking about risks in machine learning, you have to focus a lot of attention on data issues. Where do the data come from? How do we know they're real? What if somebody screwed around with the data? Where do we store the data? Hey, which data did you use to train that thing up? All of those things lead to new kinds of risks that nobody's really paid much attention to.

Paul: So the one that jumped out at me, because it's one that I can grasp and it seems like it would probably be a big issue, would be data integrity: feeding machine learning algorithms bad data. So first of all, how is that different from adversarial examples? Aren't adversarial examples just a type of bad data, or is there a difference?

Gary: Yeah, that's a really good question. Very insightful. The reason that there's a difference is because when it comes to data poisoning, we're thinking about the data that are used to train the system in the first place. So let me give you an example. There's a machine learning system that's supposed to help people decide whether or not to hire someone, so it watches videos of candidates answering questions, then says, "Yeah, hire that person," or "No, don't hire that person." It makes the "decision" by having been exposed to a bunch of old hiring decisions from before. So you feed in all your data about what people said in interviews and whether or not you hired them and whether they worked out, and the machine decides whether or not somebody should be given a job offer.

Now the problem that is evident in the data poisoning risk is this: if the data that you’re using from your history are racist or xenophobic or sexist and you train the system to basically do what we’ve been doing for the last decade, you’re gonna end up with a machine learning system that’s racist, sexist and xenophobic.

That's a perfect example of accidental data poisoning. It turns out that the data we were using were crappy, because our corporation had racist, xenophobic or sexist tendencies that we were not aware of until we trained a machine to be like us. And then we were like, "Uh oh. Look at that. That's bad. Look what drops out of this data."

And so we have to be super cognizant of that. A really hilarious example, I mean it's funny but also awful, is Microsoft. They put out a Twitter bot called Tay. And Tay had a Twitter account, and you'd tweet whatever you wanted to it and then it would have a little conversation with you. And Tay very, very quickly became a total asshole. It got so bad so fast that Microsoft was like, "Oh, we better turn Tay off! We gotta fire Tay. You can't say that to people, even on Twitter!" So that's an example of what happens when you just have data coming from the public that can be manipulated by an attacker.

00:15:04 – 00:20:05

Now let's get serious about this. Imagine that you're using data from a public source to train a machine to do something important, like identify hotspots for virus spread on a map. If those data are public, they can be tampered with by an adversary. Then what will happen is the machine will do the association, but it will be doing that association on the wrong sort of data. It'll just do the wrong thing, because you trained on poisoned data. That's a much more serious and important example of that category of risk.
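As a toy illustration of that risk, and not of the hiring or virus-map systems discussed above, here is a sketch in which an attacker who can tamper with a public training set relabels one region of it before training; the features, labels and poisoning rule are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def train(X, y, steps=2000, lr=0.1):
    """Plain logistic regression trained by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

def accuracy(w, b, X, y):
    return ((X @ w + b > 0) == y).mean()

# Made-up "hotspot" data: the true rule is a simple function of two features.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(float)
X_test = rng.normal(size=(1000, 2))
y_test = (X_test[:, 0] - X_test[:, 1] > 0).astype(float)

# Poisoning: an adversary who can edit the public source relabels every
# training row with a large first feature as "not a hotspot" (label 0).
y_poisoned = y.copy()
y_poisoned[X[:, 0] > 0.5] = 0.0

w_clean, b_clean = train(X, y)
w_pois, b_pois = train(X, y_poisoned)

# The training code never noticed anything; only the learned behavior differs.
region = X_test[:, 0] > 0.5
print("clean model, overall accuracy:   ", accuracy(w_clean, b_clean, X_test, y_test))
print("poisoned model, overall accuracy:", accuracy(w_pois, b_pois, X_test, y_test))
print("poisoned model, targeted region: ", accuracy(w_pois, b_pois, X_test[region], y_test[region]))
```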

Paul: You mentioned transfer learning attacks. That sounds really interesting, and it has to do with, kind of, building machine learning on top of other machine learning, I guess, is my reading of it?

Gary: Yeah, that's exactly right. So it turns out that training these things up is computationally intensive. If you're training up a network with, say, ten layers and you're teaching it to categorize pictures, it takes a lot of cycles to do all that training. It takes millions of cycles, millions of training examples, and lots of computation to get machines trained up. And so what people have done is say, "Well, I'm going to train up a machine on a basic task, and then later I'm going to take that very same resulting machine, the trained-up one, and I'm going to use that again. I'm going to refine it to do a more subtle task, but I'm going to start with the already basically trained-up model." So you transfer the brain, so to speak, from one machine learning system into your new one before you refine it with more training. That's called transfer learning.

So a transfer learning attack is screwing around with that system. You say, well, you know, if you're gonna transfer something, then I'm gonna make that something terrible, like a Trojan that has sneaky behavior in it that I'm not going to tell you about. So when you take my system and you start with that, you're also starting with, say, a little possible malicious behavior that you're not aware of. You basically eat the whole doughnut, including the poison pill that's in there. That's a Trojan version of a transfer learning attack.

The other thing is, as an engineer you start with a machine learning system and you start with the wrong one. You're like, "I'm gonna just start with this one!" And then it does all sorts of surprising, crazy things and makes all sorts of mistakes no one would ever make. And then the third one is more subtle. It turns out that if you train up a machine learning system on a bunch of data, it actually represents those data inside of itself in a very distributed way that humans can't parse, but the data are in there. So if you have a model that, say, you've trained up to do some sort of medical task, and a lot of PII was used to train that thing up, those data are still in there somewhere. The question is whether you can get them out. And if you transfer, if you start with that model with the PII sort of encoded in it somewhere, then that's going to be encoded in the new target too. And that is very, very bad, because all of a sudden we're leaking private information all over without meaning to. That's another one.
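For readers who have not seen the pattern, here is a minimal sketch of transfer learning itself rather than of any particular attack: a "pretrained" layer is inherited as-is and only a small new head is trained on top. The inherited weights below are random stand-ins, which is exactly the point; the new system never inspects what it inherits.

```python
import numpy as np

rng = np.random.default_rng(5)

# Pretend these weights came from someone else's expensively trained model.
# Whatever behavior or memorized data they encode comes along for the ride.
W_base = rng.normal(size=(20, 64)) / np.sqrt(20)   # frozen "feature extractor"

def features(X):
    return np.maximum(X @ W_base, 0.0)             # ReLU features from the base

# The new, more specific task: train only a small head on the inherited
# features, which is far cheaper than training everything from scratch.
X = rng.normal(size=(500, 20))
y = (X[:, 0] > 0).astype(float)                    # hypothetical target task

H = features(X)
w_head, b_head = np.zeros(H.shape[1]), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(H @ w_head + b_head)))
    w_head -= 0.1 * H.T @ (p - y) / len(y)
    b_head -= 0.1 * (p - y).mean()

print("fine-tuned head, training accuracy:", ((H @ w_head + b_head > 0) == y).mean())
# W_base itself was never inspected or retrained. A Trojaned or PII-laden
# base model would be inherited just as silently.
```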

Paul: You guys, and by "you" I mean Berryville, called out one area that I think is going to be very problematic. This is the notion of reproducibility. As we lean more on machine learning algorithms to do important work, whether it's reading X-rays or adjudicating the fate of an accused criminal sitting in a courtroom: how does the algorithm reach that decision, and is that decision reproducible?

Gary: In the way that in science we expect experiments to be reproducible, and that they're accurate? Yeah, I'm sorry to say this, but machine learning involves a whole lot of kludging and a whole lot of "Well, we sorta got it to work and it works." And you read the literature, even the scientific literature that's been peer reviewed, and they're like, "Well, we set all the hyperparameters empirically." What that really means is, "We ran six and one worked, so we used the one that worked! And here's the numbers!" Or they're like, "We set this to four." And you're like, "Why? What does four mean?"

Like, what the hell are you talking about? It turns out that there's a lot of incredibly sloppy work that's being used now. The results are good because the machines do what they're supposed to do, and so everybody's excited. They're like, "Yeah, it does the thing, except we're not sure why. But we're not gonna talk about that part!" And, you know, if somebody else comes along and they're like, "Hey, we're gonna make our thing do that too. Hey, how'd you do that?" If you read the papers, there's often not enough information to figure that out, or it's being held in a proprietary way. That's bad, because even in the normal case these algorithms are inscrutable. We don't know how they come to the decisions they come to. We know it's based on statistical association between the data sets that we provided, but we don't know what the representation is; we don't know where the edges and the boundaries are.

00:20:05 – 00:25:01

We don't know how people could possibly make it misbehave through adversarial input. All of those things are pretty murky, and that is all what we kind of stick under the reproducibility thing. Now, if you produce a machine that's doing important stuff and you're not sure why it does it, and then one day it does the wrong thing, you can't just say in court, "Oh, well, you know, it's a machine. The machine did it, and we're not sure why." It's like, well, who owns that machine? Who trained that machine up? Coming soon to a courtroom near you.
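A small, mundane piece of the reproducibility fix is simply recording every choice next to the result, so that "we set this to four" at least becomes checkable and rerunnable. Here is a minimal sketch; the hyperparameter names and the toy model are made up for illustration.

```python
import json
import numpy as np

# Everything that shaped the model goes into one record, including the random
# seed, so someone else can rerun the experiment and get the same numbers.
config = {
    "seed": 20200213,
    "learning_rate": 0.1,
    "training_steps": 500,
    "dataset": "toy_gaussian_v1",   # hypothetical dataset tag
}

rng = np.random.default_rng(config["seed"])
X = rng.normal(size=(400, 2))
y = (X[:, 0] > 0).astype(float)

w, b = np.zeros(2), 0.0
for _ in range(config["training_steps"]):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= config["learning_rate"] * X.T @ (p - y) / len(y)
    b -= config["learning_rate"] * (p - y).mean()

result = {"config": config, "train_accuracy": float(((X @ w + b > 0) == y).mean())}
print(json.dumps(result, indent=2))   # same config in, same numbers out
```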

Paul: You know, we are entering an era where we're relying on machines to make many more of these decisions, from healthcare decisions to decisions potentially affecting somebody's freedom and civil liberties. You know, the sort of "automated judge" type applications. And yet, as you're saying, we often can't fully explain, in terms humans can understand, how a particular decision was reached by a machine learning algorithm.

Gary: Well, this is right, and it is a huge problem. But guess what? It's also a problem for people. If you say to your five-year-old, "Hey, how come you did that?" they're like, "Oh, well, you know, I don't know."

But somehow it's worse when it's a machine. So: we are making a lot of progress. We're gonna use this technology. We need to do what we can to manage those risks. We can't just throw all this stuff away, but we do have to go in with our eyes open. So what we did at BIML is meant to be used by people who are either taking existing machine learning technology and putting it into their own systems or designing new machine learning algorithms and systems themselves: engineers, architects, technologists. People who are thinking about using this stuff need to read our work and think about building security in for machine learning.

Paul: What does that mean practically?

Gary: That means being cognizant of the risks, knowing where your data are coming from and how you're storing them, and understanding all of the seventy-eight things where we were like, "This could go wrong." What would happen to your system if it did go wrong? So, you know, just thinking about it ahead of time. It's not really that hard. In fact, sometimes being aware of the risks is half the battle, because then you can design your way around them. And, you know, we're optimistic that our work is gonna make a big difference in the way people are approaching some of this stuff. They'll think, "Wow, those security people sure are crazy. They think that? Oh man, wow, we gotta work on that," you see.

Paul: Is there anyone that you found out there who is doing this right? Who is, you know, doing innovative work around machine learning and also, as part of that, wrestling with some of these issues?

Gary: There are plenty of groups, and, you know, there are some good academics. The guys at Google Brain, Noah and his colleagues, are doing great work there. There are some people at Microsoft that are doing very good work in this area. They're thinking about threat modeling for machine learning. So this is a thing that is coming into its own and kind of blossoming all at once. The time is right. It's time for the Kuhnian paradigm shift to occur, and it's occurring.

The good news is there are lots of people working on it now. The better news is there are not enough people working on it now. So if you feel like you want to get interested in this stuff and get involved, there's tons of work to be done. Tons. For example, what we want to do next at BIML is think about those seventy-eight risks and start thinking about particular mitigations or controls that you might put in to manage those risks appropriately. So we can say, "Oh, if you're worried about this risk, have you tried doing it this way?" And, you know, we're just starting to think about that aspect of the work. We thought, "We will identify the risks, and then we'll go from there." So, you know, job one is done, or maybe step zero, and there are probably ten thousand steps to go.

Paul: I wanted to ask you about the relationship between this and BSIMM, the Building Security In Maturity Model, which is what you developed around application security and software security. Are there similarities or connections? Do we end up with something like a BSIMM for machine learning?

Gary: Maybe. The number one similarity is: it's my work, and we put it out under Creative Commons. The other similarity is really philosophy. Philosophically speaking, ever since the Java security days back in the mid 1990s, I've been deeply interested in how we get in front of security problems. It's like, why are we always running around with our hair on fire going, "Incident response: cleanup on aisle seven!" "Oh my God! We lost all of the records!" Like, you know, "Oh no! It happened again!" Instead of running around squirting water in random places,

00:25:01 – 00:29:25

we're like, "How do we build sprinkler systems and stuff so we stop burning down San Francisco like in 1906?" Like, you know, back at the turn of the century, one hundred years ago, cities would just burn down. And then people were like, "This sucks. We gotta have these cities stop burning down." And I think that's the state that we're in now in high tech, philosophically. That's what "building security in" is about. Let's think about these things ahead of time and plan for problems. Let's not freak out. Instead, let's design for security and make these systems harder to attack.

Every time we adopt a new technology, there are gonna be lots of interesting security ramifications. So I've been working on machine learning, but you know what? There's a whole other technology that's just as interesting coming down the pike, and that's quantum computing. What are the risks that are gonna come along with quantum computing? What impact is that gonna have on our current cryptographic solutions? And what happens when we can compute really strangely complicated things more quickly than ever, in terms of fairness and, I don't even know, social responsibility? All of these things are just ripe for looking into. So I hope that out there somewhere there's a BIML for quantum computing that's making the same progress.

All of these things are right at the edge, you know? Life is exciting. Humans are doing some amazing stuff. But really, humans are not taking enough time to think, "Gosh, what's the downside of this? What goes wrong if I adopt this too quickly?" And that's what security people have to be: the people who are like, "Hey. Wait. Hold on. Hang on."

Paul: It strikes me that if you were to recruit people to tackle these problems, you're looking both for people with deep computer science and math backgrounds, right, but also, it would seem, for people who are well versed in things like ethics and philosophy. I know you're something of a renaissance man, a musician and a scientist. What was your major?

Gary: I was a philosophy major undergrad and then I did a degree in Computer Science and Cognitive Science later.

Paul: Right. So you’re one of these sort of Renaissance people but it would seem to me that you’re probably looking for a mix of talents to wrestle with this problem?

Gary: Well let me tell you this super secret about my career. So it turns out that if you get into a brand new field nobody knows what the hell they’re talking about. So if you’re just remotely intelligent you can make a huge impact.

Computer security was in such a state of disarray in the mid 1990s that a few guys who were marginally smart came along and were like, "Wow. This is really screwed up. Let's fix it." We came from different fields. If you think about Dan Geer, he was a biostatistician, you know? We came from everywhere, and so it was kind of a security renaissance in some sense, because everybody came with wildly different backgrounds and we all got pushed together in the same crucible, which is really cool. But every time there's an edge of technology,

there's an opportunity for that sort of thing. So that's what I said about quantum computing, or about CRISPR stuff, which is about, you know, manipulating molecules. All of these things are coming down the pike, and it's going to take people with wildly different backgrounds to get involved. So if you think, "Gosh, I can't do anything in machine learning because I'm not a math guy," well, that's not really true. There's a lot of good work to be done in the ethics of this, or where data sets should come from. Or how do we even keep information private if machines learn all this stuff and all the information's mushed inside of the machine? Is it still private? What does that mean?

I think it's fun and exciting. These are huge problems. They really matter to the future of us as a species. And, you know, it's fun to do.

Paul: Gary McGraw founder of the Berryville Institute of Machine Learning and failed retiree, thanks so much for coming on and speaking again on the Security Ledger Podcast.

Gary: My pleasure, Paul. Nice to chat with you.

Paul: Gary McGraw is a globally recognized author and authority on software security. He's the founder of the Berryville Institute of Machine Learning. You can find them online at berryvilleiml.com.