Autonomous Vehicle Cybersecurity

Self-driving cars are one of the most heavily anticipated innovations of the 21st century, but the potential cybersecurity risks cannot be ignored.

Self-driving cars are one of the most heavily anticipated innovations of the 21st century. Companies like Tesla are investing heavily in autonomous vehicle technology, and it is slowly being introduced into everyday society. As of 2019, 37 different states have enacted legislation regarding autonomous vehicles, placing them in our immediate future.

Before we jump into the cybersecurity aspect of vehicular autonomy, let's take a closer look at the different levels of vehicle autonomy. SAE International defines six levels of driving automation, from Level 0 (no automation) to Level 5 (full automation), summarized in the graphic below.

Some estimates suggest that up to 90% of car crashes could be eliminated by the use of autonomous vehicles. They are also the safest option for people who cannot drive safely due to illness, old age, or physical disability. However, this innovation brings two main risks that worry both security professionals and everyday consumers: first, that an attacker could take control of an automated car remotely, and second, that an attacker could steal a driver's information by hacking into the car.

Taking Control of an Automated Car

This is the risk with the greatest potential for damage: someone remotely controlling another person's vehicle. It was depicted memorably in the popular movie The Fate of the Furious (Fast & Furious 8), in which the main antagonist, a skilled computer hacker, remotely hijacks the cars of a major city:

Autonomous Cars Hacked Scene - The Fate of the Furious.

While this scene is a dramatization, it is based on a very real threat. In a research study on a 2014 Jeep Cherokee, Charlie Miller and Chris Valasek found that an internet connection was all that was needed to remotely take control of the vehicle and manipulate its driving functions, e.g. stopping it on the highway. Not only does this pose a safety risk to passengers, but if an attacker combined this access with tools like Google Maps and GPS, they could remotely steal people's cars and move them to a location of their choice.
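To make the mechanism a bit more concrete, the sketch below is an illustration (not the Miller and Valasek exploit itself) of how an attacker who has already gained code execution on a vehicle's head unit might inject frames onto the internal CAN bus using the python-can library. The arbitration ID and payload are hypothetical placeholders.

```python
# Illustrative sketch only: assumes code execution on a head unit that is
# bridged to the vehicle's CAN bus, and uses the python-can library.
# The arbitration ID and payload are hypothetical placeholders, not real
# identifiers from any production vehicle.
import can

def inject_frame(channel: str = "can0") -> None:
    # Open a socketcan interface exposed by the compromised head unit.
    bus = can.interface.Bus(channel=channel, bustype="socketcan")

    # Craft a frame addressed to a (hypothetical) body-control module.
    msg = can.Message(
        arbitration_id=0x1A5,           # hypothetical ECU identifier
        data=[0x00, 0xFF, 0x00, 0x01],  # hypothetical command payload
        is_extended_id=False,
    )

    try:
        bus.send(msg)
        print("Frame sent on", channel)
    except can.CanError as err:
        print("Transmission failed:", err)
    finally:
        bus.shutdown()

if __name__ == "__main__":
    inject_frame()
```

The point of the sketch is that classic CAN has no sender authentication: any component that can write to the bus can issue commands that other control units will act on, which is why gaining remote access to one internet-connected module can translate into control of driving functions.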

Stealing a User's Information Through Their Car

This risk is much less about physical safety and much more about user privacy. As with any computer system that stores your information, if someone hacks into your car, any data held in its embedded systems can be extracted. With a car, however, that data can be more personal than what sits on a typical computer. For example, your real-time location and your destination can be exposed if you use Google Maps or a similar app, and the car may also hold biometric data such as fingerprints, eye scans, or facial scans. It is yet another avenue for attackers to target people and extract information.
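As a rough illustration of what "extracting data from embedded systems" can look like, the sketch below reads recent trips out of a local database of the kind infotainment systems commonly keep. The file path and table schema are entirely hypothetical, chosen only to show the shape of the problem, not taken from any real vehicle.

```python
# Illustrative sketch only: the database path and schema are hypothetical.
# It shows what an attacker with filesystem access to an infotainment unit
# could read if trip history were stored unencrypted.
import sqlite3

DB_PATH = "/var/infotainment/navigation.db"  # hypothetical location

def dump_recent_trips(db_path: str = DB_PATH, limit: int = 10) -> list[tuple]:
    conn = sqlite3.connect(db_path)
    try:
        cursor = conn.execute(
            "SELECT start_time, latitude, longitude, destination "
            "FROM trip_history ORDER BY start_time DESC LIMIT ?",
            (limit,),
        )
        return cursor.fetchall()
    finally:
        conn.close()

if __name__ == "__main__":
    for trip in dump_recent_trips():
        print(trip)
```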

In the case of a car accident, it is also unclear who should be at fault, because the owner or passenger is usually not the one in control. Is the manufacturer liable, the pedestrian, the passenger, or no one at all, for instance when an external factor such as an obstructed traffic sign prevents the car's sensors from detecting it? There are also moral and ethical questions about whether the car's navigation system should prioritize the safety of pedestrians, the car's occupants, or other vehicles in traffic in a given situation.

What is being done about this?

Countries like the US and UK have already passed legislation to ensure car manufacturers meet a minimum standard for their vehicles. The policies outlined include privacy by design, transparency (controlling how personal data is used), consent, handling of location data, security of personal data, and more.
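As one hedged example of what "security of personal data" might look like in practice, the sketch below encrypts location records before they are written to the vehicle's storage, using the third-party cryptography package. This is a generic illustration rather than a requirement from any specific regulation or the approach of any particular manufacturer, and real key management would rely on hardware-backed storage rather than a key generated in code.

```python
# Illustrative sketch only: encrypting location data at rest with Fernet
# (authenticated symmetric encryption from the `cryptography` package).
# Key handling is deliberately simplified; a real vehicle would keep the
# key in a secure element or HSM, not in application code.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # placeholder for a hardware-protected key
cipher = Fernet(key)

def store_location(lat: float, lon: float) -> bytes:
    """Serialize and encrypt a location fix before it touches disk."""
    record = json.dumps({"lat": lat, "lon": lon}).encode("utf-8")
    return cipher.encrypt(record)

def read_location(token: bytes) -> dict:
    """Decrypt and deserialize a stored location fix."""
    return json.loads(cipher.decrypt(token))

if __name__ == "__main__":
    token = store_location(43.6532, -79.3832)
    print("on disk:", token[:32], b"...")
    print("decrypted:", read_location(token))
```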

Many companies are doing extensive research and testing before putting any cars on the market to ensure they are safe for the public. For example, Waymo's self-driving cars have been tested over more than 10 million miles on public roads in the US. There are also groups like the Automotive Information Sharing and Analysis Center (Auto-ISAC) that proactively share information about security issues related to autonomous vehicles. While there are definitely things to be concerned about, these risks are being examined extensively by government and private entities working to mitigate them. If you liked this article and would like to read more, I write regularly at securitymadesimple.

The awesome image used in this article is called Santos and was created by The High Road.