But Missy Cummings, a professor at Duke University and a former fighter pilot who studies automated systems, says the speed at which decisions must be made in fast-moving jets means any AI system will have to be largely autonomous.
She doubts advanced AI is really necessary for air combat, where planes could be guided by a simpler set of hand-coded rules. She is also wary of the Pentagon’s rush to embrace AI, saying mistakes could erode trust in the technology. “The more bad AI the DOD uses, the less pilots, or anyone associated with these systems, will trust them,” she says.
AI-controlled fighter jets could eventually complete parts of a mission, such as surveying an area, on their own. For now, EpiSci’s algorithms are learning to follow the same protocols as human pilots and to fly like another squadron member. Gentile has performed simulated test flights in which the AI takes full responsibility for avoiding mid-air collisions.
The military adoption of AI is only accelerating. The Pentagon believes AI will prove essential for future warfare and is testing the technology in everything from logistics and mission planning to reconnaissance and combat.
AI has started to make its way into some aircraft. In December, the Air Force used an AI program to control the radar system aboard a U-2 spy plane. While not as difficult as flying a fighter jet, the task carries a life-and-death responsibility: missing a ground missile system could expose the plane to attack.
The algorithm, inspired by ones developed by the Alphabet subsidiary DeepMind, learned through thousands of simulated missions how to direct the radar to identify enemy missile systems on the ground, a task that would be critical for defense in a real mission.
Will Roper, who resigned as assistant secretary of the Air Force in January, says the demonstration was in part meant to show that it is possible to rapidly deploy new code on older military hardware. “We didn’t give the pilot the override buttons, because we wanted to say, ‘We have to prepare to operate this way, where the AI is really in control of the mission,’” he says.
But Roper says it will be important to make sure these systems function properly and are not themselves vulnerable. “I’m afraid we are too dependent on AI,” he says.
The DOD may already have trust issues around using AI. A report last month from Georgetown University’s Center for Security and Emerging Technology found that few military contracts involving AI mentioned building trustworthy systems.
Margarita Konaev, a researcher at the center, says the Pentagon seems aware of the problem, but the issue is complicated because different people tend to trust AI in different ways.
Part of the challenge comes from how modern AI algorithms work. With reinforcement learning, an AI program does not follow explicit programming; it learns by trial and error, and can sometimes pick up unexpected behaviors.
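The trial-and-error idea can be shown with a toy example. The sketch below is generic tabular Q-learning on a tiny one-dimensional corridor, purely for illustration; it bears no resemblance to the flight-control systems described in this article. Note that the program is never told which action is correct: it discovers the move-right policy only from reward signals.

```python
import random

# Toy Q-learning sketch: an agent learns to walk to the right end of a
# 5-cell corridor. No rule says "go right" -- behavior emerges from reward.

N_STATES = 5          # positions 0..4; reaching position 4 ends an episode
ACTIONS = [-1, +1]    # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(500):  # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        if random.random() < 0.1:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: move the estimate toward reward + discounted future value
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += 0.5 * (r + 0.9 * best_next - q[(s, a)])
        s = s2

# The greedy policy after training steps right from every non-terminal state.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)  # [1, 1, 1, 1]
```

In a corridor this small the learned behavior is easy to audit; in a high-dimensional dogfighting simulation, the same learning rule can converge on strategies nobody anticipated, which is exactly the trust problem described above.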
Bo Ryu, CEO of EpiSci, says his company’s algorithms are designed in line with the military’s plan for using AI, with a human operator responsible for deploying lethal force and able to take control at any time. The company is also developing a software platform called Swarm Sense that lets teams of civilian drones collaboratively map or inspect an area.
He says the EpiSci system is not based only on reinforcement learning; it also has handwritten rules built in. “Neural networks certainly have many advantages and gains, without a doubt,” says Ryu. “But I think the essence of our research, the value, is knowing where you should and shouldn’t put them.”
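One common way to combine the two approaches is to let explicit rules veto whatever a learned policy suggests. The sketch below is a minimal, hypothetical illustration of that division of labor; the function names and the distance threshold are invented for this example and are not EpiSci’s actual code.

```python
from typing import Optional

def learned_policy(distance_to_threat: float) -> str:
    # Stand-in for a trained neural-network policy; here it always pursues.
    return "pursue"

def collision_rule(distance_to_other_aircraft: float) -> Optional[str]:
    # Explicit, auditable hand-written rule: break off to avoid a
    # mid-air collision. Threshold is illustrative only.
    if distance_to_other_aircraft < 150.0:  # metres
        return "evade"
    return None

def decide(distance_to_threat: float, distance_to_other_aircraft: float) -> str:
    # Hand-coded rules take precedence over the learned component.
    override = collision_rule(distance_to_other_aircraft)
    return override if override is not None else learned_policy(distance_to_threat)

print(decide(1000.0, 100.0))  # evade: the rule overrides the network
print(decide(1000.0, 500.0))  # pursue: the learned policy runs normally
```

The appeal of this layering is that the safety-critical behavior stays in code humans can read and verify, while the learned component handles situations too complex to hand-code.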