This past September, a panel at the AFCEA/INSA Intelligence and National Security Summit delved into the race for artificial intelligence (AI). According to the Signal article, “AI Looms Large in Race for Global Superiority,” panelists discussed the various approaches to AI research and applications among the U.S., Russia and China.1

According to Margarita Konaev, research fellow at the Center for Security and Emerging Technology, “Russia trails China and the U.S. in all metrics of AI. However, its military is leading the country’s efforts to catch up in key areas, most of which involve military applications.” Konaev listed three of Russia’s investment focuses: military robotics and unmanned systems, electronic warfare (EW) capabilities and information warfare. Noting the country’s rapid pace of experimentation, she said, “They’re quick to test, and they learn limitations under operational conditions.”

China’s AI strategy was covered by Elsa Kania, an adjunct senior fellow for technology and national security at the Center for a New American Security. “China has made no secret of its goal to lead the world in AI, and the military, in particular, has seen greater progress than expected. This includes a long list of applications such as suicide drones, autonomous weapon systems, EW, cyber operations, wargaming, data analytics and situational awareness,” she said.1

By comparison, the U.S. devotes more time and energy than its rivals to evaluating how AI will impact the EW domain, including issues like ethics. According to Col. P.J. Maykish, USAF, who serves as the director of analysis for the National Security Commission on Artificial Intelligence, “Ethics is a major consideration for U.S. AI development. It comes down to three issues: civil liberties, human rights and privacy.” China and Russia do not share those concerns to the same extent. As a result, Maykish recommends a “coalition of nations focused on common values.” He also warns that the U.S. should not disregard rising AI and machine learning (ML) developments from other nations, which could be further along than others realize.1

It is obvious that AI and ML are quickly gaining traction in EW. They provide clear advantages in the electromagnetic (EM) spectrum, which is increasingly congested and contested. As threats and countermeasures evolve, ever more complex battles are being fought for spectrum dominance. Systems that respond to threats or countermeasures without human intervention, continuously learning from those responses, are more likely to prevail.

UNDERSTANDING THE LINGO

As discussion of and progress in AI and ML accelerate, confusion grows around seemingly interchangeable terms: cognitive and adaptive versus AI and ML. Responsive threats existed before AI and ML were implemented, and they were often labeled cognitive or adaptive. Although people use these terms interchangeably, many levels of adaptability exist, and most fall well short of cognitive EW. Using ML, a cognitive EW system can enter an environment with no knowledge of the adversary’s capabilities and rapidly make sense of the scenario. By doing something that makes the adversary’s system react, it can evaluate the response and develop an effective countermeasure suited to that system.
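
To make that probe-and-evaluate loop concrete, here is a minimal sketch of how a cognitive jammer might select and refine techniques from feedback, written as a simple epsilon-greedy learner. The technique names and the observe_effect() feedback function are assumptions for illustration, not any fielded system’s design.

```python
import random

# Hypothetical sketch of a cognitive jammer's probe-evaluate-adapt loop.
# The candidate techniques and observe_effect() are placeholders; a real
# system would score effectiveness from receiver measurements.

TECHNIQUES = ["noise_barrage", "range_gate_pull_off", "velocity_gate_pull_off", "cover_pulse"]

class CognitiveJammer:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.trials = {t: 0 for t in TECHNIQUES}
        self.reward = {t: 0.0 for t in TECHNIQUES}

    def choose(self):
        # Mostly exploit the best-scoring technique, occasionally explore.
        if random.random() < self.epsilon or not any(self.trials.values()):
            return random.choice(TECHNIQUES)
        return max(TECHNIQUES, key=lambda t: self.reward[t] / max(self.trials[t], 1))

    def update(self, technique, effectiveness):
        # effectiveness in [0, 1], e.g. how badly the victim radar's track degraded
        self.trials[technique] += 1
        self.reward[technique] += effectiveness

def observe_effect(technique):
    # Placeholder for real feedback (track error, lock loss, mode change).
    return random.random()

jammer = CognitiveJammer()
for engagement in range(100):
    t = jammer.choose()          # probe: transmit a technique
    score = observe_effect(t)    # evaluate: how did the adversary react?
    jammer.update(t, score)      # adapt: refine future choices
```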

In contrast, adaptive solutions cannot rapidly grasp and respond to a new scenario in an original manner. For instance, an adaptive radar can sense the environment and alter its transmission characteristics accordingly, providing a new waveform for each transmission or adjusting its pulse processing. This flexibility allows it to enhance its target resolution, for example. In many adversary systems, a simple software change is enough to alter the waveform, which adds to the unpredictability of how waveforms appear and behave and makes it harder for military forces to isolate adaptive radar pulses from other signals, friend or foe.
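
For contrast with the cognitive loop above, the following sketch shows the kind of fixed, rule-based adaptation described here: the radar senses the environment and selects from a pre-programmed waveform table, but never invents anything new. The thresholds and table entries are invented for the example.

```python
# Illustrative rule-based waveform adaptation: the radar reacts to the sensed
# environment, but only by selecting from a fixed, pre-programmed table.
# Thresholds and parameters are made up for the example.

WAVEFORM_TABLE = [
    {"name": "wideband_lfm", "bandwidth_mhz": 50, "prf_hz": 2000},
    {"name": "narrow_lfm",   "bandwidth_mhz": 10, "prf_hz": 4000},
    {"name": "freq_hopped",  "bandwidth_mhz": 40, "prf_hz": 3000},
]

def select_waveform(interference_power_dbm, doppler_ambiguity):
    """Pick a waveform from the fixed table based on the sensed environment."""
    if interference_power_dbm > -60:
        return WAVEFORM_TABLE[2]      # heavy interference: hop frequencies
    if doppler_ambiguity:
        return WAVEFORM_TABLE[1]      # raise PRF to resolve Doppler
    return WAVEFORM_TABLE[0]          # default: best range resolution

print(select_waveform(interference_power_dbm=-55, doppler_ambiguity=False))
```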

AI and ML are often used interchangeably, creating confusion around how each one impacts EW systems. Making the distinction, Dan Pleasant, a Keysight solutions architect, says, “Many people use the terms differently, but to me it’s AI - although more properly called cognitive - if it learns and is autonomous. That means it does not necessarily do the same thing every time for an identical stimulus, because it has learned from prior experience. It’s ML if it has a neural net built into it that’s been trained. However, the use of ML does not necessarily mean that it’s cognitive. The neural network may be static and unchangeable. If it’s static, it’s not AI, it’s not cognitive.”
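
Pleasant’s distinction can be restated in code: an ML component may be statically trained and frozen, while a cognitive one keeps updating itself from experience, so the same stimulus may not always produce the same response. The sketch below is schematic, assuming toy linear “models” rather than any particular product’s neural network.

```python
# Schematic contrast between a static ML component and a cognitive one.
# Both models are stand-ins; the point is who updates the weights, and when.

class StaticEmitterClassifier:
    """ML, but not cognitive: weights are trained offline and frozen."""
    def __init__(self, trained_weights):
        self.weights = trained_weights  # never changes in the field

    def classify(self, features):
        score = sum(w * f for w, f in zip(self.weights, features))
        return "threat" if score > 0 else "benign"

class CognitiveEmitterClassifier:
    """Cognitive: keeps learning from each engagement after deployment."""
    def __init__(self, weights, learning_rate=0.01):
        self.weights = list(weights)
        self.lr = learning_rate

    def classify(self, features):
        score = sum(w * f for w, f in zip(self.weights, features))
        return "threat" if score > 0 else "benign"

    def learn(self, features, was_threat):
        # Online update: the same stimulus may yield a different answer later.
        target = 1.0 if was_threat else -1.0
        score = sum(w * f for w, f in zip(self.weights, features))
        error = target - score
        self.weights = [w + self.lr * error * f for w, f in zip(self.weights, features)]

static = StaticEmitterClassifier([0.8, -0.2])
cognitive = CognitiveEmitterClassifier([0.8, -0.2])
features = [1.0, 0.5]
print(static.classify(features), cognitive.classify(features))
cognitive.learn(features, was_threat=False)   # only the cognitive one changes
```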

In EM spectrum operations, the goal is to respond immediately. Modern EW and radar systems are software controlled and reconfigurable on the fly; they can change what they are doing at a moment’s notice and choose from a multitude of modes. With AI and ML, machines can perform smarter tasks using capabilities like signal recognition. They continuously learn from data, from every conflict, determining ways to be more effective so they prevail against future countermeasures. This evolution occurs without the need for human interaction; the computer decides how to alter behavior. When tested or engaged, these threat systems learn from the experience and modify their future behavior, with the computer deciding the next steps. Because that behavior is not predetermined, even the people responsible for the system cannot foretell exactly what it will do.

As threat systems advance with ML technology, they will adapt and alter their behavior or course of action at an increasingly rapid rate. If a radar is trying to track a jet, for example, the adversary’s countermeasures may stop it from succeeding. Using ML, that radar would repeatedly try new approaches until it succeeds. Today’s machines can learn from continuously aggregating data at a speed and scale no human EW expert can match.
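
A hypothetical illustration of that trial-and-error behavior: when countermeasures degrade the track, the loop below cycles through counter-countermeasure strategies until track quality recovers. The strategy names and the track-quality feedback are placeholders, not a real radar’s logic.

```python
import itertools
import random

# Hypothetical trial-and-error loop: when countermeasures degrade the track,
# the radar cycles through counter-countermeasure (ECCM) strategies until a
# track quality metric recovers. Strategy names and feedback are placeholders.

ECCM_STRATEGIES = ["frequency_agility", "prf_jitter", "sidelobe_blanking", "burn_through"]

def track_quality(strategy):
    # Stand-in for a real metric, e.g. the inverse of track error variance.
    return random.uniform(0.0, 1.0)

def reacquire_track(threshold=0.7, max_attempts=20):
    for attempt, strategy in enumerate(itertools.cycle(ECCM_STRATEGIES)):
        if attempt >= max_attempts:
            return None            # could not defeat the countermeasure
        if track_quality(strategy) >= threshold:
            return strategy        # this approach restored the track

print(reacquire_track())
```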

SHIFTING TIMELINE

ML and AI are changing both how EW systems are developed and how they function. Pleasant points to the example of how a jammer technique is typically created for a new radar.

“You first have to make a recording of the radar signals - very hard in its own right, because people try not to let you record them. Once you get a recording, you send it to a team of people who look at the radar signal and try to understand what the radar is doing. Then they figure out how you would disrupt it with a jammer and implement the new technique in software or FPGA hardware. Finally, they must do extensive testing and get the solution deployed in the field, where people can use it. The whole process can easily take a year or more,” he says.

Because modern radars are software driven, what a radar does can change completely in an instant. Militaries can therefore no longer afford the year or so it previously took to respond to a new radar. Pleasant states, “If you’re going to be interacting with a system that can change on the fly, you have to change on the fly, too. It must be instantaneous, and people cannot be involved. People are much too slow. If you’re trying to jam a radar, the jammer needs to figure out how to jam that radar. Plus, the radar - if it figures out it’s being jammed - needs to know immediately what to do about it and flip to some other mode, change frequency, etc.”
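
A minimal sketch of that kind of instantaneous, unattended reaction on the radar side might look like the following; the jamming detector (a noise-floor jump) and the mode list are illustrative assumptions.

```python
# Sketch of an instantaneous, unattended response to detected jamming.
# The detection heuristic and mode list are illustrative only.

MODES = ["search_s_band", "track_x_band", "lpi_spread_spectrum"]

def jamming_detected(noise_floor_dbm, baseline_dbm=-100, margin_db=15):
    """Crude detector: flag jamming if the noise floor jumps well above baseline."""
    return noise_floor_dbm > baseline_dbm + margin_db

def next_mode(current_mode):
    """Flip to another mode immediately; no operator in the loop."""
    idx = MODES.index(current_mode)
    return MODES[(idx + 1) % len(MODES)]

mode = "track_x_band"
if jamming_detected(noise_floor_dbm=-80):
    mode = next_mode(mode)
print(mode)
```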

PRIORITIZING COMMUNICATIONS

Figure 1 Notional interface depicting the vast collection of data from EW sensing and monitoring equipment. Source: Shutterstock.

One common way to gain an advantage in EM spectrum operations is to disable communications. An Army EW soldier, for example, must sift through the vast amounts of data streamed from battlefield sensors (see Figure 1), deciding what constitutes enemy activity, such as jamming or interference, and what can be ignored. To reduce this cognitive burden and support multi-domain operations, the Army Rapid Capabilities and Critical Technologies Office (RCCTO) is leveraging AI to prioritize incoming data rapidly and precisely.

According to the article “Artificial Intelligence Improves Soldiers’ Electronic Warfare User Interface,” “The RCCTO is partnering with soldiers from the 1st SBCT at Fort Wainwright, Alaska, who are using the new technology against operational scenarios. Their feedback will help improve effectiveness of the capability as the Army integrates it into EW systems. The new expert learning AI prototype uses AI that is trained to reduce or eliminate common low-level tasks performed by EW soldiers. It also simplifies the user interface of the battle management system. The tool saves time by decluttering the user interface and enhancing soldiers’ ability to zero in on whether the emitter is from a ‘red’ or enemy source, is a ‘blue’ or friendly force signal, or just ‘gray’ noise.”2
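
A toy sketch of that red/blue/gray triage might look like the code below, which categorizes detected emitters against friendly and threat libraries and sorts them so the most urgent appear first. The data fields, scoring rules and thresholds are invented for illustration and are not the RCCTO prototype’s actual logic.

```python
# Illustrative red/blue/gray triage of detected emitters, sorted so the most
# urgent items surface first. Fields and scoring are invented for the example.

from dataclasses import dataclass

@dataclass
class Emitter:
    emitter_id: str
    freq_mhz: float
    matches_friendly_db: bool   # found in the blue-force emitter database?
    matches_threat_db: bool     # found in the threat library?
    power_dbm: float

def categorize(e: Emitter) -> str:
    if e.matches_threat_db:
        return "red"
    if e.matches_friendly_db:
        return "blue"
    return "gray"

def priority(e: Emitter) -> float:
    # Strong, known-hostile emitters float to the top of the operator's display.
    base = {"red": 100.0, "gray": 10.0, "blue": 1.0}[categorize(e)]
    return base + e.power_dbm / 10.0

detections = [
    Emitter("E1", 9300.0, False, True, -40.0),
    Emitter("E2", 1575.42, True, False, -70.0),
    Emitter("E3", 2450.0, False, False, -55.0),
]
for e in sorted(detections, key=priority, reverse=True):
    print(categorize(e), e.emitter_id)
```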

The Army RCCTO partnered with the Army’s Project Manager Electronic Warfare & Cyber (PM EW&C) and the Combat Capabilities Development Command’s C5ISR Center to develop the AI tool. An operational pilot is planned with a select forward-deployed unit later this year. Within the last year, the Army revealed that it has delivered new EW prototype systems to satisfy an operational needs statement from U.S. Army Europe. According to the Army, soldiers with select units are using the equipment to implement electronic protection for their own formations, to detect and understand enemy activity in the EM spectrum and to disrupt adversaries through electronic attack effects. These systems are interim solutions, designed as a bridge until the programs of record, including the Electronic Warfare Planning and Management Tool, can be fielded.

BOOSTING INTELLIGENCE GATHERING

Beyond radar and communications, military units grapple with intelligence gathering and improving situational awareness. The Pentagon recently divulged a plan to use AI tools to gain a battlefield advantage. Under the Joint Artificial Intelligence Center’s (JAIC) joint war-fighting initiative, C4ISRNET noted, the center is developing algorithms to provide the armed services and combatant commands with AI tools. The goal is to accelerate decision-making.

In the article “How does the Pentagon’s AI center plan to give the military a battlefield advantage?” C4ISRNET said, “the center is ‘specifically focused’ on working to harness AI to link together systems involved in the intelligence gathering phase to the operations and effects piece of all-domain operations.”3 The goal is to connect the varied platforms, creating an end-to-end system that enables actionable visibility of intelligence.

The C4ISRNET article quotes Department of Defense Chief Information Officer Dana Deasy: “The JAIC is also working on an operations cognitive assistant tool to support commanders and ‘drive’ faster and more efficient decision-making through AI-enabled predictive analytics.” Some obstacles may hinder this effort. For example, success will demand changes in user behavior, as the various service branches will need to collect and share data at increased levels and prioritize data differently. Adequate data storage and development platforms are also critical to AI development, putting emphasis on the need for cloud technology.

KEEPING PACE WITH EW EVOLUTION

Figure 2 Soldiers will increasingly monitor EW system responses, rather than making decisions. Source: Shutterstock.

Whether for communications, radar or another system, implementing ML and AI raises interesting questions about knowing how the system will perform (see Figure 2). Typically, test systems apply a stimulus to the system under test, and the ideal testing process generates known results, making it easy to verify performance. If a system is cognitive, however, and learns and changes from time to time, there is no way to know in advance exactly what it will do. So, how can you tell? Did it pass? Did it fail? Did it do the right thing? For radars and jammers, for example, the test can send the jammer signal all the way through the radar processing chain and examine the outcome to determine whether the jammer fooled the radar. Ultimately, test systems and other performance and support systems must be able to learn and adapt ahead of the EW systems they evaluate, in order to gauge their responses and performance.
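
One way to frame such an outcome-based test in code: instead of comparing against a fixed expected output, run the jammer against a simulated radar processing chain and score the engagement by its effect on the track. Everything below, including the simulator interface, the metric and the pass threshold, is assumed for the sketch.

```python
# Outcome-based test sketch for a cognitive jammer: no golden output, just a
# measurable effect on a simulated radar's track. Interfaces are hypothetical.

import random

def simulated_radar_track_error(jammer_waveform):
    """Stand-in for a radar-processing simulation returning track error in meters."""
    return random.uniform(0.0, 500.0)

def test_jammer_effectiveness(jammer, n_trials=50, required_error_m=100.0, pass_rate=0.8):
    """Pass if the jammer degrades the simulated track often enough."""
    successes = 0
    for _ in range(n_trials):
        waveform = jammer.generate_waveform()        # may differ trial to trial
        error = simulated_radar_track_error(waveform)
        if error >= required_error_m:
            successes += 1
    return successes / n_trials >= pass_rate

class ToyJammer:
    def generate_waveform(self):
        return "noise_barrage"   # placeholder; a cognitive jammer would adapt

print(test_jammer_effectiveness(ToyJammer()))
```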

EW operations have always resembled a chess game in this way. A new system, such as a radar, is developed. Countermeasures are created in response, leapfrogging beyond the radar. Then, new radar technology is designed. Traditionally, the pace of this development was slow; now, machine algorithms will dictate the pace. With the implementation of ML and AI, systems are closer to learning and responding instantaneously. Ultimately, the goal is autonomous systems that can learn and make their own decisions, giving forces the intelligence and learning and response capabilities needed to prevail in EM spectrum operations. Having more complete knowledge of the environment - whether it is friendly, neutral or adversarial - provides the path to superiority in the contested spectrum environment.

References

  1. R. K. Ackerman, “AI Looms Large in Race for Global Superiority,” Signal, September 18, 2020, www.afcea.org/content/ai-looms-large-race-global-superiority.
  2. N. Jones-Bonbrest, “Artificial Intelligence Improves Soldiers’ Electronic Warfare User Interface,” U.S. Army, October 8, 2019, www.army.mil/article/218705/artificial_intelligence_improves_soldiers_electronic_warfare_user_interface.
  3. A. Eversden, “How Does the Pentagon’s AI Center Plan to Give the Military a Battlefield Advantage?” C4ISRNET, September 10, 2020, www.c4isrnet.com/artificial-intelligence/2020/09/10/how-does-the-pentagons-ai-center-plan-to-give-the-military-a-battlefield-advantage/.