Researchers have found that a slight modification to a street sign can lead a driverless car to misinterpret its meaning, with potentially dangerous results. Google researchers, for example, found that altering just 4 percent of an image could fool AI into identifying it as a different object 97 percent of the time. On Tuesday, the nonprofit research company OpenAI brought this kind of image-manipulation test to driverless cars.
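The article doesn't describe OpenAI's or Google's actual technique, but the underlying idea behind these attacks, often called the fast-gradient-sign method, can be sketched on a toy model. The snippet below is a minimal illustration, assuming a made-up linear "image" classifier rather than any real driverless-car system: it nudges every input value by a tiny, uniform amount in the direction that most efficiently moves the classifier's score, flipping the prediction while barely changing the input.

```python
import numpy as np

# Toy linear "image" classifier: score = w . x + b, predict class 1 if score > 0.
# The weights and the 8x8 "image" are random stand-ins, purely illustrative.
rng = np.random.default_rng(0)
w = rng.normal(size=64)   # flattened 8x8 "image" weights
b = 0.0

x = rng.normal(size=64)   # the original "image"
score = w @ x + b
pred = int(score > 0)

# Fast-gradient-sign-style perturbation: for a linear model, the gradient of
# the score with respect to the input is just w, so shifting each pixel by
# eps in the direction sign(w) changes the score as much as possible for a
# given per-pixel budget. Pick eps just large enough to cross the boundary.
direction = -1 if pred == 1 else 1            # push score toward the other class
eps = 1.1 * abs(score) / np.abs(w).sum()      # small per-pixel nudge

x_adv = x + eps * direction * np.sign(w)
adv_pred = int(w @ x_adv + b > 0)             # prediction flips; x_adv barely differs from x
```

Real attacks on deep networks work the same way, except the gradient comes from backpropagation through the model rather than being the weight vector itself, which is why the perturbation can stay imperceptibly small while still flipping the label.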
Automakers may also have much simpler problems to fix before they can tackle adversarial examples. It’s entirely possible that a black marker and some poster board would be just as effective as a maliciously crafted machine-learning attack: a Carnegie Mellon professor has documented how his Tesla mistook a highway junction sign for a 105-mile-per-hour speed-limit sign.