Scientists at MIT's LabSix, an artificial intelligence research group, tricked Google's image-recognition AI, called InceptionV3, into thinking that a baseball was an espresso, a 3D-printed turtle was a firearm, and a cat was guacamole.
The experiment might seem outlandish at first, but the results demonstrate why relying on machines to identify objects in the real world could be problematic. For example, the cameras on self-driving cars use similar technology to identify pedestrians while in motion and in all sorts of weather conditions. If an image of a stop sign were blurred (or altered), an AI program controlling a vehicle could theoretically misidentify it, leading to terrible outcomes.
The results of the study, which were published online today, show that AI programs are susceptible to misidentifying slightly distorted objects in the real world, whether the distortion is intentional or not.
AI scientists call these manipulated objects or images, such as a turtle with a textured surface that mimics the surface of a rifle, "adversarial examples."
"Our work demonstrates that adversarial examples are a significantly larger problem in real world systems than previously thought," the scientists wrote in the published research.
The example of the 3D-printed turtle below proves their point. In the first experiment, the team presents a typical turtle to Google's AI program, and it correctly classifies it as a turtle. Then, the researchers modify the texture on the shell in minute ways — almost imperceptible to the human eye — which makes the machine identify the turtle as a rifle.
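To make that idea concrete, here is a minimal sketch of a gradient-based perturbation in the style of the fast gradient sign method. It is not LabSix's actual attack; PyTorch, torchvision's pretrained InceptionV3, and the epsilon value are assumptions chosen purely for illustration.

```python
# Minimal sketch of a gradient-based adversarial perturbation (FGSM-style).
# Not LabSix's exact method; it only illustrates how a tiny, nearly
# imperceptible change to an image can flip a classifier's label.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.inception_v3(weights="IMAGENET1K_V1").eval()

def fgsm_perturb(image, true_label, epsilon=0.004):
    """Return an adversarially perturbed copy of `image`.

    image: tensor of shape (1, 3, 299, 299) with values in [0, 1]
    true_label: tensor holding the correct ImageNet class index
    epsilon: per-pixel perturbation size (kept tiny so the change is invisible)
    """
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss,
    # which pushes the prediction away from the correct class.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```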
The striking observation in LabSix's study is that the manipulated or "perturbed" turtle was misclassified at most angles, even when they flipped the turtle over.
To create this nuanced design trickery, the MIT researchers used their own program, specifically designed to generate "adversarial" images. It simulates real-world conditions an AI program would likely encounter, such as blurred or rotating objects, much like the input an AI might get from the cameras on a fast-moving self-driving car.
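The key trick is to optimize the perturbation over many simulated viewing conditions at once, so it keeps working when the object is rotated or blurred. The sketch below approximates that idea with random 2D rotations and blur; the helper names, step counts, and transformation choices are assumptions, and LabSix's real pipeline renders textured 3D models rather than transforming flat images.

```python
# Hedged sketch of optimizing a perturbation over random transformations
# so it still fools the model from different angles and under blur.
import torch
import torch.nn.functional as F
import torchvision.transforms as T
import torchvision.models as models

model = models.inception_v3(weights="IMAGENET1K_V1").eval()

# Random "views" standing in for camera angle changes and motion blur.
random_view = T.Compose([
    T.RandomRotation(degrees=30),
    T.GaussianBlur(kernel_size=5),
])

def robust_adversarial(image, target_label, steps=100, lr=0.01, epsilon=0.03):
    """Nudge `image` toward `target_label` on average over random views."""
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Score the same perturbed image under several random transformations.
        views = torch.cat([random_view(image + delta) for _ in range(8)])
        loss = F.cross_entropy(model(views),
                               target_label.repeat(views.size(0)))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Keep the perturbation small so it stays hard to notice.
        delta.data.clamp_(-epsilon, epsilon)
    return (image + delta).clamp(0, 1).detach()
```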
With the seemingly incessant progression of AI technologies and their application in our lives (cars, image generation, self-taught programs), it's important that some researchers are attempting to fool our advanced AI programs; doing so exposes their weaknesses.
After all, you wouldn't want a camera on your autonomous vehicle to mistake a stop sign for a person — or a cat for guacamole.
Topics: Artificial Intelligence, Google