VENOM Vulnerability Renews Shared Code Worries

The recently disclosed VENOM vulnerability raises important questions about our reliance on shared (and vulnerable) code.

In-brief: The recently disclosed VENOM vulnerability dispels the myth that virtual machines are immune to cyber attacks, and raises important questions about our reliance on shared code.

The disclosure of a serious and exploitable software vulnerability dubbed “VENOM” by the security firm CrowdStrike on Wednesday raised concerns about the security of software running within virtual machines – an increasingly common method of deploying software applications.

The bigger lesson may be about the risks inherent in our growing reliance on cloud-based software and the shared code and platforms that power them, says noted security researcher Dan Kaminsky, the Chief Security Officer at the firm White Ops.

“This bug is incredibly generic,” said Kaminsky. “It’s everywhere and it’s on by default.”

VENOM is what’s described as a “virtual machine escape vulnerability” that resides in code common to a number of widely used, open-source hypervisors, including Xen, KVM, VirtualBox and QEMU. According to a blog post by Dmitri Alperovitch of CrowdStrike, the vulnerability (CVE-2015-3456) was first introduced in the QEMU hypervisor more than 10 years ago, in 2004.

According to a vulnerability note, the Floppy Disk Controller (FDC) in QEMU allows local guest users to cause a denial of service or, in certain cases, to execute arbitrary code using a set of specified and unspecified commands. Alperovitch notes that CrowdStrike had to do considerable work coordinating disclosure among all the vendors affected by the flaw, as a result of the sharing and re-use of QEMU code in Xen, KVM and other hypervisors.

“It allows the guest operating system running under the hypervisor to break out of the hypervisor and get access to the host operating system,” wrote Wolfgang Kandek of the firm Qualys. “This is one of the worst classes of vulnerabilities in virtualization, since from there the attacker can infect other guest operating systems, or try to get into other host systems in typical lateral growth fashion.”

And the bug is impossible to fix at the “guest level” that most shared hosting users have. “The problem has to be fixed at the host level, which is typically controlled by a service provider, external or internal,” Kandek wrote.

Beyond the specifics of VENOM are important lessons for firms that are staking their future on renting powerful computing resources in the cloud – if not for the entire technology community, says Kaminsky.

“VM escapes are real. This isn’t the first VM (virtual machine) bug and it won’t be the last,” Kaminsky told Security Ledger. “What it tells us is that there are some pieces of code that are important enough to survive, but not so important that anyone wants to mess with them.”

Kaminsky said the code that is the source of the vulnerability dates back to the 1990s, when floppy disk drives were standard equipment with all new personal computers. While floppy drives have long been deprecated in favor of writeable CD and DVD drives and, more recently, USB storage devices and the cloud, the virtual floppy drive code remained.

“It’s not really the floppy disk so much as the motherboard” that is being emulated, Kaminsky said. The reflex to keep the code even after floppy drives were no longer the removable media of choice reflects the reluctance of developers to fix what isn’t broken, Kaminsky said. “You couldn’t know as a developer that (the floppy code) shouldn’t be there,” he said.

The implication, of course, is that other VENOMs are lying undiscovered – or maybe just undisclosed – in shared and reused code like that which made it into the core QEMU hypervisor.

“This experience highlights the continuing need for a better and more clearly defined process for identifying dependencies and coordinating vulnerability disclosure between open-source projects and vendors that integrate that technology,” wrote Alperovitch of CrowdStrike.

Kaminsky said the move to a greater reliance on hosted infrastructure from firms like Amazon, Rackspace and others is inexorable. “It’s a game. But it’s a necessary game,” Kaminsky told Security Ledger. “It’s just so much easier to deploy systems to the cloud…Cloud makes IT scale better,” he said, noting game maker Zynga’s high-profile move from Amazon’s cloud to its own private cloud in 2011, built at a cost of $100 million, and now back to Amazon.

But the agility that cloud offers comes at a cost, as VENOM illustrates. “Before it was your data center. It was your servers,” Kaminsky observes. “Now someone can spend $15 (for a virtual machine), escalate their privileges and get onto your server. No matter how you cut it, there is still risk somewhere.”

What is needed is more scrutiny of the code that undergirds that shared infrastructure.

“Not all software is critical infrastructure,” Kaminsky said. “But it’s not true that no software is critical infrastructure. We’re not tracking the million most important lines of code.”

As with the recent Heartbleed vulnerability in OpenSSL, the technology industry needs to scrutinize code and look for other examples like the virtual floppy disk controller that made it into so many hypervisors. In recent years, firms such as Google – which rely heavily on open source software – have pledged financial resources and incentives to help fund audits and security reviews of commonly used open source software projects, including BIND, DHCP, OpenSSL and more.

“We need to start thinking more about how agglomerated this infrastructure is,” Kaminsky said. “We’re building a global society on this infrastructure. So we don’t want to have it end up that every few years we’re saying ‘Wow. Look at this trivial, old school buffer overflow from the 1990s.’”
