Militär Aktuell recently took part in the Paris AI Action Summit (February 10-11), a high-profile event organized by the French Ministry of Foreign Affairs. On the fringes of the summit, at the renowned École Supérieure de Guerre, we had the opportunity to talk to Sofia Romansky, strategic analyst at The Hague Centre for Strategic Studies and project coordinator of the Global Commission on Responsible AI in the Military Domain.

Sofia Romansky conducted a comprehensive investigation into the military use of information during the first seven months of the Russian invasion of Ukraine in 2022. She also wrote a study on protecting European AI innovations from being channeled into China’s military development.


Ms Romansky, at the Action Summit there was often talk of AI tools that can replace human actors in certain applications. How exactly is that to be understood?
Currently, most of the AI models we work with or talk about are geared towards a specific task or a clearly defined area of application.

So tasks such as target tracking, identification or decision support?
Exactly. AI tools are providing increasingly precise and relevant recommendations – that is their purpose.

But now it is being said that supervision – that is, management – could also be taken over by AI tools?
Yes, because these systems can perform several tasks simultaneously and draw on specialized, single-purpose AI systems. We then speak of so-called agents.

So they coordinate what happens on several levels?
Yes, they make decisions about how the subordinate systems are managed. The result is a kind of pyramid-shaped structure.

But without humans at the helm? Isn’t the agents’ knowledge based on machine learning and on what human operators tell them?
It’s a combination. You can involve people who check the results of the agents, or you can let them act autonomously – under the supervision of higher AI instances. Both are possible. This development is now coming to the fore.


And your role is to regulate this or at least give governments guidance on what they need to look out for?
We set standards and principles to define the framework for responsible AI – not in the form of hard regulations, but as guidelines. The question is: what is responsible AI? It must reliably fulfill certain tasks. Only when the associated standards are met can states and users deploy it safely.

And whoever does not adhere to these parameters is operating in a gray area?
Exactly. For example, there is a view that certain AI systems are inherently illegal due to their unpredictability. However, a certain degree of predictability is required in the military sector in particular. If a system is too unpredictable, its use could violate international humanitarian law. This is one of the central questions we are dealing with.