Washington — Two Air Force fighter jets recently squared off in a dogfight in California. One was flown by a pilot. The other wasn’t.

That second jet was piloted by artificial intelligence, with the Air Force’s highest-ranking civilian riding along in the front seat. It was the ultimate display of how far the Air Force has come in developing a technology with its roots in the 1950s. But it’s only a hint of the technology yet to come.

The United States is competing to stay ahead of China on AI and its use in weapon systems. The focus on AI has generated public concern that future wars will be fought by machines that select and strike targets without direct human intervention. Officials say this will never happen, at least not on the U.S. side. But there are questions about what a potential adversary would allow, and the military sees no alternative but to get U.S. capabilities fielded fast.

“Whether you want to call it a race or not, it certainly is,” said Adm. Christopher Grady, vice chairman of the Joint Chiefs of Staff. “Both of us have recognized that this will be a very critical element of the future battlefield. China’s working on it as hard as we are.”

A look at the history of military development of AI, what technologies are on the horizon and how they will be kept under control:

From machine learning to autonomy

AI’s military roots are a hybrid of machine learning and autonomy. Machine learning occurs when a computer analyzes data and rule sets to reach conclusions. Autonomy occurs when those conclusions are applied to act without further human input.

This took an early form in the 1960s and 1970s with the development of the Navy’s Aegis missile defense system. Aegis was trained through a series of human-programmed if/then rule sets to be able to detect and intercept incoming missiles autonomously, and more rapidly than a human could. But the Aegis system was not designed to learn from its decisions and its reactions were limited to the rule set it had.

“If a system uses ‘if/then’ it is probably not machine learning, which is a field of AI that involves creating systems that learn from data,” said Air Force Lt. Col. Christopher Berardi, who is assigned to the Massachusetts Institute of Technology to assist with the Air Force’s AI development.
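The distinction can be made concrete with a toy sketch. The code below is purely illustrative, with invented numbers and function names, and is not drawn from any real weapon system: the first function is a hand-written if/then rule of the kind Aegis relied on, while the second derives its threshold from example data, which is machine learning in miniature.

```python
# Toy contrast between a hand-coded if/then rule and a rule learned from data.
# All numbers and names are invented for illustration.

def handcoded_intercept(range_km: float) -> bool:
    """Aegis-style logic: a human wrote this threshold; it never changes."""
    return range_km < 50.0  # if the track is inside 50 km, then engage

def learn_threshold(labeled_tracks: list) -> float:
    """Machine learning in miniature: derive the threshold from examples
    instead of hard-coding it -- here, the midpoint between the farthest
    track that was engaged and the nearest one that was ignored."""
    engaged = [r for r, e in labeled_tracks if e]
    ignored = [r for r, e in labeled_tracks if not e]
    return (max(engaged) + min(ignored)) / 2

# Hypothetical past engagements: (range in km, was it engaged?)
data = [(10.0, True), (30.0, True), (45.0, True), (70.0, False), (90.0, False)]
learned = learn_threshold(data)

def learned_intercept(range_km: float) -> bool:
    return range_km < learned  # the rule itself came from the data
```

The hand-coded rule can only ever do what its author anticipated; the learned rule shifts if the data shifts, which is the property Berardi is pointing to.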

AI took a major step forward in 2012 when the combination of big data and advanced computing power enabled computers to begin analyzing the information and writing the rule sets themselves. It is what AI experts have called AI’s “big bang.”

A computer that writes its own rules from data is exhibiting artificial intelligence. Systems can then be programmed to act autonomously on the conclusions those machine-written rules reach, which is a form of AI-enabled autonomy.


Pilot speak and an AI alternative to GPS

Air Force Secretary Frank Kendall got a taste of that advanced warfighting this month when he flew on Vista, the first F-16 fighter jet to be controlled by AI, in a dogfighting exercise over California’s Edwards Air Force Base.

While that jet is the most visible sign of the AI work underway, there are hundreds of ongoing AI projects across the Pentagon.

At MIT, service members worked to clean thousands of hours of recorded pilot conversations, creating a data set from the flood of messages exchanged between crews and air operations centers during flights. The goal was for the AI to learn the difference between critical messages, like a runway being closed, and mundane cockpit chatter, so it could elevate the critical ones and ensure controllers see them faster.
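The triage idea can be sketched in a few lines. This is a minimal, assumed illustration, not the MIT system: a real version would learn its scoring from the transcribed data set, whereas the keyword weights here are invented by hand.

```python
# Hypothetical message-triage sketch: score radio messages so operationally
# critical ones surface first. The keywords and weights are assumptions.

CRITICAL_TERMS = {"runway": 3, "closed": 2, "emergency": 5, "divert": 4}

def criticality(message: str) -> int:
    """Sum keyword weights found in the message; higher = show it sooner."""
    return sum(CRITICAL_TERMS.get(w, 0) for w in message.lower().split())

def triage(messages: list) -> list:
    """Order messages so controllers see the highest-scoring ones first."""
    return sorted(messages, key=criticality, reverse=True)

msgs = [
    "winds calm at altitude",   # mundane chatter
    "runway two seven closed",  # critical
    "request coffee on landing",
]
ordered = triage(msgs)
```

A learned classifier would replace the fixed keyword table, but the interface, messages in and a priority ordering out, would look much the same.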

In another significant project, the military is working on an AI alternative to GPS satellite-dependent navigation.

In a future war, high-value GPS satellites would likely be hit or jammed. The loss of GPS could blind U.S. communication, navigation and banking systems and make the U.S. military’s fleet of aircraft and warships less able to coordinate a response.

So last year the Air Force flew an AI program — loaded onto a laptop that was strapped to the floor of a C-17 military cargo plane — to work on an alternative solution using the Earth’s magnetic fields.

It has long been known that aircraft can navigate by following the Earth’s magnetic fields, but so far that hasn’t been practical because each aircraft generates so much of its own electromagnetic noise that there has been no good way to filter for just the Earth’s emissions.
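The filtering problem can be sketched as follows. This is a deliberately simplified, invented example, not the Air Force program: it assumes the magnetometer reading is the Earth's field plus noise proportional to a single known onboard signal (a strobe light), fits that proportion from logged data with a one-variable least-squares estimate, and subtracts it. The real system learns far messier, many-source relationships.

```python
# Hedged sketch: separate the Earth's magnetic field from aircraft noise,
# assuming reading = earth_field + gain * strobe. Values are invented.

def fit_noise_gain(readings, strobe):
    """Least-squares estimate of how strongly the strobe signal
    leaks into the magnetometer reading."""
    mean_r = sum(readings) / len(readings)
    mean_s = sum(strobe) / len(strobe)
    cov = sum((r - mean_r) * (s - mean_s) for r, s in zip(readings, strobe))
    var = sum((s - mean_s) ** 2 for s in strobe)
    return cov / var

earth = [50.0, 50.1, 49.9, 50.0, 50.2, 49.8]   # true field (microtesla)
strobe = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]        # onboard strobe on/off
readings = [e + 2.5 * s for e, s in zip(earth, strobe)]  # contaminated signal

gain = fit_noise_gain(readings, strobe)
cleaned = [r - gain * s for r, s in zip(readings, strobe)]
```

Once the noise gain is learned from flight data, subtracting the predicted noise leaves a usable estimate of the Earth's field, which is the core of what Floyd's team demonstrated at much larger scale.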

“Magnetometers are very sensitive,” said Col. Garry Floyd, director for the Department of Air Force-MIT Artificial Intelligence Accelerator program. “If you turn on the strobe lights on a C-17 we would see it.”

The AI learned through the flights and reams of data which signals to ignore and which to follow and the results “were very, very impressive,” Floyd said. “We’re talking tactical airdrop quality.”

“We think we may have added an arrow to the quiver in the things we can do, should we end up operating in a GPS-denied environment. Which we will,” Floyd said.

The AI so far has been tested only on the C-17. Other aircraft will also be tested, and if it works it could give the military another way to operate if GPS goes down.

Safety rails
Vista, the AI-controlled F-16, has considerable safety rails as the Air Force trains it. There are mechanical limits that keep the still-learning AI from executing maneuvers that would put the plane in danger. There is a safety pilot, too, who can take over control from the AI with the push of a button.

The algorithm cannot learn during a flight, so each time up it has only the data and rule sets it has created from previous flights. When a new flight is over, the algorithm is transferred back onto a simulator where it is fed new data gathered in-flight to learn from, create new rule sets and improve its performance.
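That fly-then-train cycle can be schematized as below. The class and function names are invented for illustration; the point is only the structure the article describes: the in-flight policy is frozen, and every update happens on the ground in the simulator from the logged flight data.

```python
# Schematic of the loop: fly with frozen rules, log what happened,
# then update the rule set offline. Names are hypothetical.

class FrozenPolicy:
    """In-flight agent: applies its current rules but cannot change them."""
    def __init__(self, rules):
        self.rules = dict(rules)  # copy: no in-flight mutation

    def act(self, situation):
        return self.rules.get(situation, "maintain course")

def fly(policy, situations):
    """Fly a sortie; log (situation, action) pairs for later training."""
    return [(s, policy.act(s)) for s in situations]

def simulator_update(rules, flight_log, feedback):
    """On the ground: fold corrective feedback into a new rule set."""
    new_rules = dict(rules)
    for situation, _action in flight_log:
        if situation in feedback:
            new_rules[situation] = feedback[situation]
    return new_rules

rules_v1 = {"bandit at six": "break turn"}
log = fly(FrozenPolicy(rules_v1), ["bandit at six", "bandit high"])
rules_v2 = simulator_update(rules_v1, log, {"bandit high": "climb and extend"})
```

Keeping the update step out of the aircraft is itself a safety rail: nothing the AI encounters mid-flight can change its behavior until humans have reviewed the data on the ground.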

But the AI is learning fast. Thanks to the supercomputing speed with which it analyzes data, and to flying its new rule sets in the simulator, the AI has found increasingly efficient ways to fly and maneuver, and it has already beaten some human pilots in dogfighting exercises.

But safety is still a critical concern, and officials said the most important way to take safety into account is to control what data is reinserted into the simulator for the AI to learn from.