The AI programs behind autonomous cars, robots, and other automated machines are routinely trained in simulated environments before they debut in the real world. But circumstances the AI never encounters in virtual reality can become blind spots when it makes decisions in real life. For example, a delivery robot trained in a virtual landscape with no emergency vehicles may not know to stop before entering a pedestrian crossing when it hears sirens.
To build machines with the necessary caution, computer scientist Ramya Ramakrishnan and colleagues have developed a training scheme in which a human demonstrator helps an AI identify gaps in its simulated education. "This enables the AI to act safely in the real world," says Ramakrishnan, whose work will be presented on January 31 at the AAAI conference on artificial intelligence. Engineers can also use information about an AI's blind spots to design better simulations in the future.
During this training period, the AI learns which environmental factors influence human actions but go unrecognized in its simulation. When people do something the AI does not expect, such as hesitating to enter a crosswalk despite having the right of way, the AI looks for previously unknown elements in the environment, such as sirens. When the AI detects one of these features, it concludes that the human is following a safety protocol it never learned in the virtual world, and that it should defer to human judgment in situations of that kind.
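The core idea in that paragraph can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not the researchers' actual system: the policy, state features (`light`, `siren`), and function names are all assumptions made for the example. The sketch flags any state where a human demonstrator's action diverges from what the AI's policy would do, treating it as a candidate blind spot.

```python
# Hypothetical sketch: flag states where a human demonstrator acts
# differently from the AI's learned policy. Names and the toy policy
# are illustrative assumptions, not the authors' code.

def detect_blind_spots(ai_policy, demonstrations):
    """Return states where the human's action diverged from the AI's
    expected action; such states are candidate blind spots."""
    blind_spots = []
    for state, human_action in demonstrations:
        if ai_policy(state) != human_action:
            blind_spots.append(state)
    return blind_spots

# Toy policy: the AI's simulated training never included sirens,
# so its decision ignores that feature entirely.
def ai_policy(state):
    return "go" if state["light"] == "green" else "stop"

demos = [
    ({"light": "green", "siren": False}, "go"),    # matches the AI's expectation
    ({"light": "green", "siren": True},  "stop"),  # human defers to the siren
]

print(detect_blind_spots(ai_policy, demos))
```

Here the second demonstration is flagged, because the human stopped at a green light; the AI can then associate the unfamiliar `siren` feature with situations in which it should defer to human judgment.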
Ramakrishnan and colleagues tested this setup by first training AI programs in simplistic simulations and then letting them learn blind spots from human demonstrators in more realistic, but still safe, virtual worlds. The researchers still need to test the system in the real world.