A team of researchers from Penn Engineering discovered critical vulnerabilities in robots powered by artificial intelligence (AI), showing that they could force these machines to do harmful actions that are usually blocked by safety controls.
As explained in their October 17 publication, the researchers created a program called RoboPAIR, which successfully bypassed safety features in three different robots: NVIDIA's self-driving Dolphin LLM, Clearpath Robotics' wheeled robot Jackal, and Unitree's four-legged robot Go2.
They easily managed to make the robots carry out dangerous actions, like simulating bomb detonations, ignoring traffic signs, and blocking emergency exits.
Did you know?
Want to get smarter & wealthier with crypto?
Subscribe - We publish new crypto explainer videos every week!
How to Buy Crypto SAFELY With a Credit Card (Animated)
The researchers discovered that minimal adjustments to commands could lead the devices to carry out dangerous tasks. Instead of directly asking the robots to perform harmful actions, they used vague instructions, which led to the same outcomes.
Alexander Robey pointed out that the vulnerability likely affects all robots using LLMs, not just the three tested. He believes identifying threats is essential for building effective safeguards, a strategy that worked for chatbots and should now apply to robots.
These revelations demonstrate the need for stronger security measures in AI-powered robots to prevent real-world harm.
In other news, a memecoin called Goatseus Maximus (GOAT) recently skyrocketed because an AI endorsed it on social media.