The need for a GPS-signal jamming simulator became more apparent to developers of self-driving vehicles after a study found that hackers can use a $225 device to spoof GPS signals and steer autonomous cars to the wrong destination.
The hardware is only slightly wider than a normal pen. Hackers need only a $35 Raspberry Pi, a $10 mobile charger, a $3 antenna, and a $175 HackRF One SDR. This new threat indicates that even developers of self-driving technology need proper security awareness for safety reasons.
How it Works
Hackers would first have to install the small device in an automated car; they could then spoof GPS signals using an algorithm that forces the system to follow a "ghost route." The device's relative affordability can have costly consequences, as it could give hackers partial or even complete control of a vehicle.
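The study itself does not publish its spoofing algorithm, but the core idea of a "ghost route" can be illustrated with a toy model: the attacker shifts the position the receiver reports, so the navigation system computes turn-by-turn directions for a location the car is not actually at. All names and coordinate values below are hypothetical.

```python
# Illustrative sketch only: a toy model of "ghost route" GPS spoofing.
# The spoofed fix is offset from the true position, so the navigation
# system believes the car is somewhere it is not.

def spoof_fix(true_lat, true_lon, offset_lat, offset_lon):
    """Return the falsified position the car's receiver would report."""
    return true_lat + offset_lat, true_lon + offset_lon

# The car is actually at one intersection...
true_position = (40.7580, -73.9855)

# ...but the spoofed fix places it a few blocks away, so the navigation
# system issues directions matching the wrong ("ghost") location.
reported = spoof_fix(*true_position, offset_lat=0.0020, offset_lon=-0.0015)
print(reported)  # approximately (40.7600, -73.9870)
```

In a real attack the offsets would be applied gradually and consistently with the road network, so the fabricated route still looks plausible to the vehicle; the sketch above only shows the basic substitution.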
The worst-case scenario involves directing the automated vehicle into oncoming traffic, particularly in urban areas. Drivers of self-driving cars may not even notice that they are traveling an unfamiliar route, especially if they are used to taking the same route every day.
Car manufacturers and developers should therefore focus more on how automated technology can protect itself from GPS spoofing. Ideally, a self-driving car's weaknesses should be probed during the testing phase.
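One defensive measure developers could test is a simple plausibility check on incoming fixes: reject a new GPS position if it implies the vehicle moved faster than it physically can. This is a minimal sketch of one such check, not a method from the study; the threshold and function names are illustrative assumptions.

```python
# A minimal sketch of a GPS-fix plausibility check: flag a new fix as
# suspicious if the implied speed since the previous fix is physically
# impossible for the vehicle. Thresholds here are illustrative.

import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two fixes, in meters."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def fix_is_plausible(prev_fix, new_fix, dt_s, max_speed_mps=60.0):
    """Accept the new fix only if the implied speed is achievable."""
    dist = haversine_m(*prev_fix, *new_fix)
    return dist / dt_s <= max_speed_mps

# A roughly 1 km jump in one second is flagged as possible spoofing...
print(fix_is_plausible((40.7580, -73.9855), (40.7670, -73.9855), dt_s=1.0))  # False
# ...while the same displacement over a minute is accepted.
print(fix_is_plausible((40.7580, -73.9855), (40.7670, -73.9855), dt_s=60.0))  # True
```

In practice such a check would be one layer among several, cross-referencing GPS against odometry and inertial sensors, since a careful spoofer can drift the position slowly enough to stay under any single speed threshold.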
Artificial intelligence also requires broad familiarity with different road conditions. Many public-private partnerships in the U.S. have deployed driverless cars on public roads to assess their viability. Real-world testing, alongside machine learning, will be important in determining whether self-driving cars are ready to co-exist with human drivers on the road.
Simulation equipment and real-time testing are good options for exposing the vulnerabilities of automated cars. As cyber-attacks become more complex, developers need to be more proactive in addressing the weaknesses of modern vehicles.