Hackers Remotely Steer Tesla Model S Using Autopilot System

Security researchers managed to take remote control of the Autopilot feature of a Tesla Model S using a wireless gaming keypad, highlighting potential security issues with the Advanced Driver Assistance Systems (ADAS) that are meant to enhance driver safety in next-generation automobiles.

Researchers at Tencent Keen Security Lab have successfully compromised the Autopilot system of a Tesla vehicle and taken control of the car, according to a newly published paper that extensively details their research.

The researchers had previously demonstrated their findings at the Black Hat USA 2018 security conference and posted a video online showing the hacks. The new report describes three ways they were able to take control of the vehicle’s Autopilot by exploiting several flaws in the Tesla’s electronic control unit (ECU).

Researchers noted three key achievements in their hack of the Tesla Autopilot ECU, software version 18.6.1. They interfered with Autopilot to turn on the Tesla’s windshield wipers by exploiting a vision-recognition flaw in the auto-wipers system; caused the Tesla to veer into the oncoming lane by placing interference stickers on the road, which confused the lane-recognition system; and remotely controlled the car’s steering even when the driver had not activated Autopilot.

“This proves that with some physical environment decorations, we can interfere or to some extent control the vehicle without connecting to the vehicle physically or remotely,” researchers concluded in their paper. “We hope that the potential product defects exposed by these tests can be paid attention to by the manufacturers, and improve the stability and reliability of their consumer-facing automotive products.”

Perils of progress

Indeed, the researchers said they notified Tesla after they succeeded in compromising the Autopilot system, and Tesla “immediately fixed” the attack chain, according to Tencent.

Researchers at Tencent Keen Security Lab compromised the Autopilot Advanced Driver Assistance System of a Tesla Model S. (Source: Tesla)

No matter: the research still shows the persistent danger of hackers using the increased connectivity and intelligence of contemporary vehicles as a new attack surface, a reality first brought vividly to light by the 2015 hack of a Jeep Cherokee published in Wired.

“An average modern automobile contains hundreds of sensors and numerous on-board computers that are each potentially vulnerable to physical, software, and/or logic attacks,” Jerry Gamblin, principal security engineer at San Francisco-based threat-intelligence company Kenna Security, told Security Ledger. “This creates an amazingly large attack surface which automobile manufacturers need to secure, and a target rich environment for ‘bad actors’ to potentially exploit.”

Since the Jeep hack, cars have become even more sophisticated, with Advanced Driver Assistance System (ADAS) technologies like Tesla Autopilot being developed rapidly across the automotive industry.

These systems are meant to augment a driver’s abilities and provide smart safety features such as collision avoidance. However, their increased sophistication also makes them potentially destructive if compromised, which has called the security and safety of ADAS technologies into question.

Privilege equals control

Keen Security Lab researchers said they used the ROOT privilege to pull off what is perhaps the most terrifying aspect of their hack: a takeover of the Tesla’s steering system “in a contact-less way,” as the researchers wrote. That privilege allowed them to send control commands from the Autopilot system to the steering system while the car was driving.
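Tencent has not published Tesla’s actual command interface, but conceptually this class of attack amounts to privileged code injecting frames onto a vehicle’s internal control network, where receiving ECUs have no built-in way to authenticate the sender. The sketch below illustrates that general idea with the generic python-can library; the channel, arbitration ID, and payload are all invented placeholders, not Tesla’s.

```python
# Purely hypothetical sketch: injecting a frame onto a vehicle's internal
# CAN bus with the python-can library. The arbitration ID and payload are
# invented placeholders, NOT Tesla's actual steering commands.
import can

# A Linux virtual CAN interface (vcan0) allows safe experimentation.
bus = can.interface.Bus(channel="vcan0", interface="socketcan")

# Code running with root privileges on an ECU can emit frames like this;
# other ECUs on the bus have no built-in way to authenticate the sender.
msg = can.Message(
    arbitration_id=0x123,           # placeholder control-message ID
    data=[0x01, 0x10, 0x00, 0x00],  # placeholder payload
    is_extended_id=False,
)
bus.send(msg)
bus.shutdown()
```

The design weakness the sketch highlights is not any single bug but the bus itself: once an attacker gains sufficient privilege on one node, every message it sends looks as legitimate as the real thing.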

The researchers’ ability to influence the auto-wiper and lane-recognition features relied on an improved optimization algorithm to generate what they called “adversarial examples”: inputs crafted specifically to fool each feature.

The auto-wiper and lane-recognition features make decisions based purely on camera data, the researchers found. That makes it relatively easy to trip them up by making them “see” environmental conditions that don’t actually exist.

Researchers achieved this by feeding crafted images to the system’s neural network in the auto-wiper scenario and by modifying a road’s lane markings in the lane-recognition scenario. In both cases, the systems responded to what they thought they were “seeing” rather than to the actual conditions.

Adversarial models are an area of intense study. As more systems come to rely on machine learning and AI, researchers are looking at ways to influence their operation by feeding them “bad data.”
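Keen Lab used its own improved optimization algorithm; as a minimal sketch of the general technique only, the snippet below applies the classic fast gradient sign method (FGSM) to a tiny stand-in classifier. The model, labels, and inputs here are placeholders invented for illustration, not anything from the Tesla research.

```python
# Minimal FGSM sketch: perturb an input image just enough to push a
# classifier's prediction away from the true label.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-in classifier ("rain" vs. "no rain"), a placeholder for a
# far more complex production vision network.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),
).eval()

def fgsm(image: torch.Tensor, label: torch.Tensor, eps: float = 0.03) -> torch.Tensor:
    """Return a copy of `image` perturbed to increase the model's loss on `label`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, clamped to valid pixels.
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 3, 64, 64)   # placeholder camera frame
y = torch.tensor([0])          # placeholder ground-truth class
x_adv = fgsm(x, y)
print(model(x).argmax(1), model(x_adv).argmax(1))  # prediction may flip
```

The same principle scales up: any system that acts directly on raw camera input can, in principle, be nudged by small digital perturbations or, as in Keen Lab’s road test, by physical markings placed in the environment.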

Tesla’s response

In its blog post, Tencent Keen researchers published Tesla’s response to the hack, which was, unsurprisingly, vehemently defensive. The company dismissed the auto-wiper and lane-recognition compromises as situations that would not happen in the “real world” and thus shouldn’t be of real concern to drivers.

Tesla’s response also stressed that drivers can choose not to use the auto-wiper feature, and that they can “override Autopilot at any time by using the steering wheel or brakes, and should be prepared to do so at all times,” especially if it doesn’t appear to be working properly.

As for the researchers’ use of the ROOT privilege to take control of the car’s steering, Tesla noted that it fixed the primary vulnerability described in the report with a security update in 2017, followed by another “comprehensive” security update last year. Both updates, the company said, were available before Tencent Keen Security Lab reported its research.

“In the many years that we have had cars on the road, we have never seen a single customer ever affected by any of the research in this report,” according to Tesla.

Protestations aside, security experts still aren’t convinced that ADAS systems like Tesla’s Autopilot won’t wreak havoc and destruction if they fall into the wrong hands, something car manufacturers should be mindful of as they move forward with system designs, Gamblin told us.

“The majority of attention needs to be focused on securing the systems that could cause serious injuries to customers and other passengers if compromised,” he advised. “Manufacturers must balance this investment and also correctly address any issues that arise out of secondary system attacks and which impact their customers’ overall experience, but ultimately do not endanger passenger safety.”