It is difficult to explain, analyze, and comprehend how the human mind works. Doing the same for machine behavior is a very different matter.
As artificial intelligence (AI) models are used more frequently in high-stakes situations, such as approving or denying loans, assisting doctors with medical diagnoses, assisting drivers on the road, or even taking full control of a vehicle, humans still lack a comprehensive understanding of these models’ capabilities and behaviors.
Bayes-TrEx can be used to understand how AI models behave in unfamiliar circumstances.
Most current evaluations concentrate on a single question: just how accurate is this model? Focusing on accuracy alone can hide hazardous errors. What if the model makes mistakes with very high confidence? What would the model do if it encountered something it had never seen before, such as a self-driving car spotting a new kind of traffic sign?
A group of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed a new tool called Bayes-TrEx that enables users and developers to gain insight into their AI models, in pursuit of better human-AI interaction. In particular, it identifies real-world examples that lead a model to a particular behavior. The approach relies on “Bayesian posterior inference,” a widely used mathematical framework for reasoning about model uncertainty.
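To give a feel for that framing, here is a minimal, hypothetical sketch (not the authors’ implementation): a toy two-class classifier stands in for the model under study, a Gaussian prior stands in for the data distribution, and a random-walk Metropolis sampler draws inputs whose predicted confidence sits near a chosen target level. Targeting a confidence of 0.5 finds maximally ambiguous examples; other targets could be used to surface other behaviors of interest.

```python
import numpy as np

# Hypothetical stand-in for a trained classifier: returns class probabilities
# for an input x. In Bayes-TrEx this would be the real model under study;
# here it is a toy two-class logistic model.
def toy_classifier(x):
    logit = 3.0 * x[0] - 2.0 * x[1]
    p1 = 1.0 / (1.0 + np.exp(-logit))
    return np.array([1.0 - p1, p1])

def log_prior(x):
    # Prior over "realistic" inputs; in practice this would reflect the data
    # distribution (e.g., via a generative model). Here: a standard Gaussian.
    return -0.5 * np.sum(x ** 2)

def log_likelihood(x, target_conf=0.5, width=0.02):
    # Likelihood that concentrates on inputs whose top-class confidence is
    # near `target_conf` (0.5 = maximally ambiguous examples).
    conf = toy_classifier(x).max()
    return -0.5 * ((conf - target_conf) / width) ** 2

def sample_posterior(n_steps=5000, step=0.3, seed=0):
    # Random-walk Metropolis sampler over input space: draws inputs from
    # p(x | behavior) proportional to p(behavior | x) * p(x).
    rng = np.random.default_rng(seed)
    x = rng.normal(size=2)
    logp = log_prior(x) + log_likelihood(x)
    samples = []
    for _ in range(n_steps):
        proposal = x + step * rng.normal(size=2)
        logp_new = log_prior(proposal) + log_likelihood(proposal)
        if np.log(rng.uniform()) < logp_new - logp:
            x, logp = proposal, logp_new
        samples.append(x.copy())
    return np.array(samples)

if __name__ == "__main__":
    samples = sample_posterior()
    confs = np.array([toy_classifier(s).max() for s in samples[2500:]])
    print("mean confidence of sampled examples:", confs.mean())
```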
By applying Bayes-TrEx to several image-based datasets, the researchers uncovered important insights that had previously been missed by conventional evaluations focused solely on prediction accuracy.
“Such analyses are important to verify that the model is indeed functioning correctly in all cases. An especially alarming situation is when the model is making mistakes, but with very high confidence. Due to high user trust over the high reported confidence, these mistakes might fly under the radar for a long time and only get discovered after causing extensive damage,” explained MIT CSAIL PhD student Yilun Zhou, co-lead researcher on Bayes-TrEx.
For example, after a medical diagnosis system has finished training on a collection of X-ray images, a doctor can use Bayes-TrEx to find images that the model misclassifies with very high confidence, to ensure it does not miss any particular variant of a disease.
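The simplest version of such a check, a post-hoc scan of a labeled test set for confident mistakes, is sketched below; Bayes-TrEx goes further by searching the data distribution for such examples rather than relying only on what happens to be in the test set. The function and toy data here are illustrative, not part of the released tool.

```python
import numpy as np

def high_confidence_errors(probs, labels, threshold=0.95):
    """Return indices of test examples the model gets wrong while reporting
    confidence above `threshold`.

    probs:  (N, C) array of predicted class probabilities
    labels: (N,) array of true class indices
    """
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    mask = (preds != labels) & (conf >= threshold)
    return np.flatnonzero(mask), conf[mask]

if __name__ == "__main__":
    # Toy predictions for four examples over three classes.
    probs = np.array([[0.97, 0.02, 0.01],   # confident and correct
                      [0.98, 0.01, 0.01],   # confident but wrong
                      [0.40, 0.35, 0.25],   # uncertain
                      [0.10, 0.05, 0.85]])  # confident and correct
    labels = np.array([0, 2, 1, 2])
    idx, conf = high_confidence_errors(probs, labels)
    print("high-confidence mistakes at indices:", idx, "with confidences:", conf)
```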
Bayes-TrEx can also be used to better understand how models behave in unfamiliar circumstances. Consider autonomous driving systems, which frequently rely on camera imagery to recognize features like traffic lights and bike lanes. The camera can quickly and accurately identify these common cases, but less familiar situations can pose roadblocks, both literal and figurative.
A zippy Segway, for instance, might be mistaken for something as large as a car or as small as a bump in the road, and either misjudgment could result in a dangerous swerve or a catastrophic crash. With the aid of Bayes-TrEx, developers can anticipate such unexpected scenarios ahead of time and reduce the chance of tragedy.
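As a rough illustration of the kind of question being asked, one might feed a model a batch of inputs from a class it was never trained on and summarize how confidently it maps them onto known classes. The “model outputs” below are random placeholders purely to keep the sketch self-contained.

```python
import numpy as np

def summarize_novel_class_behavior(probs, class_names, threshold=0.9):
    """Summarize how a classifier reacts to inputs from a class it was never
    trained on. `probs` is an (N, C) array of predicted probabilities over
    the *known* classes for N novel-class inputs."""
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    print(f"fraction predicted with confidence >= {threshold}: "
          f"{(conf >= threshold).mean():.2f}")
    for c, name in enumerate(class_names):
        frac = (preds == c).mean()
        if frac > 0:
            print(f"  mapped to '{name}': {frac:.2f} of novel inputs")

if __name__ == "__main__":
    # Stand-in for real model outputs on novel-class images: random logits
    # over three known classes, used only to make the example runnable.
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(100, 3)) * 3.0
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    summarize_novel_class_behavior(probs, ["traffic light", "bike lane", "car"])
```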
Beyond static images, the researchers are also tackling a less static domain: robots. Their follow-up tool, “RoCUS,” draws inspiration from Bayes-TrEx and adapts the approach to evaluate behaviors unique to robots.
Although still at an early stage, experiments with RoCUS have already surfaced findings that would be easy to overlook if the evaluation were concerned only with task completion. For instance, because of the way its training data was gathered, a deep-learning-based 2D navigation robot preferred to maneuver closely around obstacles; such a preference could be dangerous if the robot’s obstacle sensors are not entirely precise. For a robot arm reaching a target on a table, asymmetry in the arm’s kinematic structure had larger ramifications for its ability to reach objects on the left versus the right.
“We want to make human-AI interaction safer by giving humans more insight into their AI collaborators. Humans should be able to understand how these agents make decisions, to predict how they will act in the world, and — most critically — to anticipate and circumvent failures,” stated the co-lead author, MIT CSAIL PhD student Serena Booth.
Along with MIT Professor Julie Shah and CSAIL PhD student Ankit Shah, Booth and Zhou are coauthors of the Bayes-TrEx paper, which they presented virtually at the AAAI Conference on Artificial Intelligence. MIT CSAIL postdoc Nadia Figueroa Fernandez worked on the RoCUS tool alongside Booth, Zhou, and Shah. The Bayes-TrEx code is open source and available on GitHub.