In-brief: Markets for information on software vulnerabilities are good for security. But they can also raise moral and ethical quandaries, especially in an age of cyber-physical risks, argues Cisco’s Marc Blackmer.
Virtually anyone, anywhere can create Internet of Things (IoT) devices and applications, put them online, and network them to just about anything else. Computing and storage have never been cheaper or more accessible. Coding skills are proliferating and we’re seeing countries, such as Finland, make coding a core component of the school system’s curriculum. It’s an exciting time to be a part of the IoT movement. I love it!
This growth in connected devices also means a commensurate proliferation of vulnerable devices and apps online. It’s going to happen, and it’s going to drive the demand for vulnerability researchers – of both the white-hat and black-hat varieties.
In an earlier blog post, I focused on the disclosure process and how researchers and vendors can work together to be more secure. Today I’d like to look at the vulnerability market itself. It continues to be controversial even within the cybersecurity community. Beyond that, I believe the market for vulnerabilities, including “0-day” (previously unknown and unpatched) vulnerabilities, is bound for exponential IoT-driven growth.
How did we get here?
Vulnerability research has matured right along with the cybersecurity field, turning from an (unpaid) hobby into an industry in its own right. It shouldn’t be a surprise that vulnerability researchers and other entrepreneurial hackers figured out that they could get paid decently for doing what they enjoy and do best. You can’t feed a family (or buy an expensive sports car) on the occasional corporate shout-out, can you? Today, there are hundreds if not thousands of talented security people around the globe making good money and – occasionally – getting rich by finding software vulnerabilities in other people’s code.
Who’s buying and why?
That might sound wrong to you, but nobody gets paid for a service unless there is demand for it.
Who’s buying vulnerability information, and why? Today, hundreds of legitimate companies across the globe offer bounty programs that provide monetary rewards and other incentives (t-shirts, swag) to individuals who find holes in their software. They range from Google, Yahoo, PayPal and Microsoft (yes, Microsoft) to automakers GM and (recently) Fiat Chrysler.
Researchers who discover common, low-level vulnerabilities in products might earn a few hundred or a thousand dollars for their effort. However, remotely exploitable security flaws in common platforms like Microsoft’s Windows, Google’s Chrome Browser or Apple’s iOS can fetch the discoverer tens of thousands of dollars through private or public bounty programs. On the underground market, however, such holes can fetch hundreds of thousands of dollars or more.
And there’s the rub: the market for software vulnerabilities is varied, encompassing both public marketplaces (like company-sponsored bounty programs) and shadowy black and gray markets for the most sought-after and potent software holes. In just one example, in 2015 the private vulnerability and exploit broker Zerodium offered $1 million for a working exploit of the latest version of Apple’s iOS.
In these marketplaces, the most common purchasers of vulnerabilities are nation-states and their military, intelligence, and national law enforcement apparatus. Ostensibly, these groups acquire vulnerabilities in the ongoing race to gain and maintain capabilities superior to those of their adversaries. Previously unknown and unpatched security holes provide a method for compromising a popular platform and, in theory, planting monitoring tools or extracting valuable information.
For buyers, it can be difficult to know for sure that an adversary has not independently made the same discovery. Nor is there any way to guarantee that the purveyors of vulnerabilities haven’t also sold the same information to an adversary. After all, it wasn’t too long ago that one such company claimed to sell only to “non-repressive” regimes. Those claims rang false after the company was breached and its customer list posted online. Arms dealers have long been among the most mysterious and shadowy groups doing business globally. In the arena of cyber “arms,” the same holds true.
But surely, everyone has a right to make an income from a very specialized skill/expertise, right? Of course. I believe that when those with skill can openly and fairly compete it makes the economy stronger, benefits customers, and attracts top talent. Beyond that: why shouldn’t individuals with a valuable skill be able to profit from it? The market for vulnerabilities creates demand for more vulnerability researchers. In the long run, that’s good for security, not bad.
However, when it comes to vulnerability research, things are not clear-cut. We always need to weigh the ethical value of the research to the public against the economic benefits of a free market. Imagine, for a moment, that a researcher discovers a vulnerability in a safety control system commonly deployed in nuclear power plants. If exploited, it could disrupt operations and conceivably cause a catastrophe with loss of human life. With such knowledge, does the researcher have a moral obligation to disclose the vulnerability to the system manufacturer? To report it to his or her government? Or is it OK for the researcher to just sell the information to the highest bidder(s), take the money, and move on?
Food for Thought
In answering those questions, it might be good to consider the Stuxnet malware. The accepted analysis is that it was created by nation-states to inflict damage on an adversary in order to avoid a military strike. It can certainly be argued that the malware served its purpose; it caused the desired damage without starting a military conflict. If the vulnerabilities exploited by the malware had been made public and patched, there might have been no choice but to engage in a military attack.
Now consider that the malware that was launched can be reverse-engineered, re-tooled, and fired back at its creators. It can now be studied and improved upon by other nation-states, malware developers, and conceivably terrorist and criminal organizations. Did the malware prevent wider violence, danger, and destruction? Or has the development of this type of malware raised the stakes in the long term? Like the atomic bomb, the genie unleashed by Stuxnet cannot be put back in the bottle.
The IoT is bringing us to another crossroads of technology and society. Should we accept a state of collective insecurity so that a weapon can be used against our adversaries, or does neutralizing these weapons make us all more secure?
So, I support and encourage hacking – for good. Hackers and independent security researchers are among the most interesting people I’ve ever met and I seek out their company at conferences. They view the world differently and I constantly learn from them. In my life outside of work, I organize an event that teaches youth about security concepts and encourage the kids who participate in the 1NTERRUPT cybersecurity program to become hackers. The skills that define “hackers” are in demand and they can lead to a rewarding career.
But the truth is that these skills can be used for good as well as for evil. “Hacking,” including the kinds of vulnerability research and exploit creation we’ve been talking about, is a good example of the “dual-use” skills that create ethical quandaries that the technology industry – and, indeed, our society – is still working out.