Hacking for a Safer World
Summer Scholars research protects Iggy the robot through “ethical hacking.”

As a Summer Scholar at Saint Joseph’s University, computer science student Dan Shapovalov, BS ’27, spent his days in a complex game of chess against Iggy, the University’s humanoid robot.
As the use of AI-assisted technology grows, so does the risk of cyberattacks. Shapovalov worked primarily to enhance cybersecurity measures on the robot through a process known as “ethical hacking” — detecting and addressing vulnerabilities in the system that could be exploited by other parties.
The main goal of the summer’s exercises was unauthenticated privilege escalation, or gaining full access to the robot. Shapovalov worked to identify weak spots in the robot’s software that could provide this entryway.
“During this process, we found two main vulnerabilities to focus on,” Shapovalov says. “The first is a remote execution vulnerability in Choregraphe, software that helps program the robot and create different interactions. The second is a vulnerability related to a built-in tool for updating the robot’s operating system.”
Marcello Balduccini, PhD, department chair and associate professor of decision and system sciences, explains that the process is vital in protecting the machine from outside threats.
“We suspected it was possible to become an administrator of the robot without having any legal access to it, and when we finally succeeded, we developed techniques for fully protecting the robot from those types of attacks,” he says.
According to Balduccini, the team’s efforts are, to their knowledge, the first to achieve unauthenticated privilege escalation on the Pepper robot model, making Iggy the only fully protected robot of the more than 20,000 in existence.
This protection is especially important to the team as they continue their long-term partnership with Bancroft, a nonprofit serving individuals with autism and intellectual and developmental disabilities, as well as those in need of neurological rehabilitation.
For more than two years, Bancroft staff have used the robot in interactions with residents.
“The robot uses AI for selecting and running activities that have been developed together with Bancroft’s staff and psychologists and that specifically target certain skills,” Balduccini says.
These activities range from matching games that improve memory to simulated trips to McDonald’s that build life skills. Residents interact via buttons on the robot’s tablet, encouraging movement and object identification along the way.
“In our case, the robot is a living, breathing component of a larger ecosystem made up of databases collecting information about the patients — likes, dislikes, what worked during a session and what didn’t work — so next time, it can use that information to tailor the session better,” Balduccini says.
Because the robot works in such close proximity to residents and relies on AI, vulnerabilities like those addressed by Shapovalov’s work pose a danger to patients’ well-being if left unaddressed.
“If someone were to take over the robot like we were able to do, they could physically hurt people, hit them, or rig it to make it explode, or they could simply steal their information and their pictures,” Shapovalov adds.
The team is currently working on publishing a paper with their findings and solutions. Meanwhile, the next phase of the Bancroft partnership includes introducing Iggy to school-aged children with intellectual and developmental disabilities.
“They’re so excited to see the robot, and we see a lot of possibilities, but it’s a whole new challenge for us,” Balduccini says.