Regulating AI in Defence: Can Delayed Decisions Be Catastrophic?

Telia’s chief legal consultant, law professor Paulius Pakutinskas, sees one big problem with this technological transformation: the use of AI in defense is still not regulated on a global scale.

“Probably none of us would like to see an autonomous drone or robot soldier choose a target on its own, take aim and destroy not only military objects but also civilians, while a callous AI algorithm makes the ethical decisions in planning military operations. Unfortunately, the world’s ongoing delay in approving common rules and standards for AI in defense is rapidly pushing us toward such nightmares. To avoid this, even the states on the side of the aggressors must be brought to the common table, and ways must be found to reach a global consensus on the governance of defense AI systems,” P. Pakutinskas believes.

Photo: Dr. Paulius Pakutinskas (company photo)

If you delay making decisions, you will have to deal with the consequences

Although the EU made a breakthrough in AI regulation this year by adopting the AI Act, the world’s first binding legislation on the use of AI, it covers only civilian life. It does not apply to national security and defense, and there is still no consensus in the international arena on how to close this gap. The result is a potentially dangerous situation in which countries are free to develop and use military AI technologies without international rules.

“For example, although the international action plan on the use of AI in defense adopted at a summit in South Korea involved 60 countries, it is still viewed ambiguously, as not even a third of the world’s countries have joined it. Among those outside it is Russia, which was not even invited because of its aggressive actions. That leaves its hands free to keep developing potentially devastating military AI technologies without any legal restraints, posing an even greater threat to global security. Such a paradoxical situation calls into question whether international agreements can function effectively without the involvement of all the main actors, including aggressive regimes,” Telia’s representative says.

As the Wild West era continues in the military AI industry, there is a threat that these technologies will be used not only against military targets but also against civilians. With political confrontations intensifying and hostilities continuing around the world, this is especially relevant: AI equipment is improving rapidly, increasingly replacing humans and making weapons ever more effective. But does that actually reduce civilian casualties in war?

As a result, the world urgently needs global regulatory mechanisms to prevent these technologies from causing irreparable harm on the battlefield. Individual countries and organizations do have their own documents and strategies setting out how to use AI properly and responsibly, stipulating that only a human may make the final decision. Our ally the USA has such documents, and so does NATO, the alliance to which we belong. But do these documents and ethical principles matter at all to Russia, Belarus, North Korea or another hostile state?

The debate over lethal autonomous weapons is not new. In 2018, United Nations Secretary-General António Guterres said that lethal autonomous weapons systems are “politically unacceptable and morally repugnant” and called for them to be banned under international law. Proponents of such weapons, however, argue that they would reduce the overall death toll: fewer soldiers would be killed, and there would be fewer war crimes and acts of sexual or other personal abuse committed by individual soldiers.

Ethics and law cannot be ignored in the development of military AI

The challenge of AI in defense is ethical and legal as much as technical. Many military AI solutions are currently considered extremely reliable because they make decisions based on data rather than emotions or attitudes, they are accurate, and they react extremely fast; speed and accuracy are big advantages in war. What is rarely considered is that the data or algorithms behind AI may be inaccurate or biased, and in some cases the entire training of AI algorithms may have serious ethical flaws.

“Every AI mistake on the front line can cost lives or destroy property. Hence the need to establish norms for AI decision-making and accountability. It is important to realize that whenever an accident happens there will be a dilemma: who should take responsibility, a person or a machine? We need to decide and define this in international law as soon as possible and find effective implementation mechanisms,” the law professor is convinced.

Another ethical dilemma arises from the dual-use nature of AI solutions. Most of these technologies can serve both civilian and military purposes, which complicates regulation, as it is often hard to tell where a military application begins and ends. Smart drones, for example, can be adapted for both military missions and civilian logistics, making transparency about their use very important.

Private companies also play an important role

Private technology giants can also become a kind of wild card in military operations. Companies like Palantir have been providing AI solutions for military projects for some time, and OpenAI, the creators of the famous ChatGPT, hired a retired US Army general just this summer. This not only changes the nature of military conflicts, but also raises questions about the ethical use of these technologies.

In addition, OpenAI revised its usage policy in 2024, removing its earlier blanket ban on “military and warfare” applications. Such a drastic shift by an AI leader marks a significant change across the technology sector, which is increasingly involved in the military field and may thereby give private entities too much power to influence events on the battlefield and in the political arena.

According to Mr. Pakutinskas, private companies can be the engine of innovation, but without proper supervision we risk creating an uncontrolled technological environment. The activities of private companies in the military industry pose additional risks, as their interests often do not coincide with the aspirations of state institutions or international organizations. Therefore, it is necessary to include leading private companies in the field of AI in agreements on the responsible and ethical use of AI.

Regulation is not the same as prohibition

Despite all the challenges, the potential of AI technology in defense remains huge. Current AI systems already help in strategic planning, logistics optimization, data analysis and battlefield decision-making. However, their future depends on a robust regulatory framework. Without clear rules and standards, AI can become an unpredictable and even dangerous tool.

“When it comes to military AI, it is very important to distinguish between regulation and prohibition. We have to be realistic: AI is already widely used in the military and defense. Regulation should set clear rules for how AI is used in defense. That does not mean the technology should be restricted or banned; rather, we need to create a system in which AI is used responsibly. Lithuania could expand its portfolio of defense AI products by drawing on best practices and ethical standards,” explains Telia’s chief legal advisor.

The delay in creating such a system of uniform international rules is already holding back the progress of military AI itself. Integrating AI into defense systems requires large investments, so countries that take a more responsible view of the issue face a dilemma: is it worth pouring large sums into AI solutions if their use may later be limited or even prohibited by international agreements? This could tilt the balance of defense capabilities toward less responsible states.


**Interview with Dr. Paulius Pakutinskas on AI in Defense**

**Editor:** Thank you for joining us today, Dr. Pakutinskas. Your insights on the ethical implications of using AI in military contexts are quite timely. Can you elaborate on the pressing issues surrounding the lack of global regulation for AI in defense?

**Dr. Pakutinskas:** Absolutely. One of the most concerning aspects is that while many countries are investing heavily in military AI, there are no international regulations governing its use. This is akin to entering a battlefield without rules. The current landscape could lead to scenarios where autonomous systems decide who lives and dies, including targeting civilians. The absence of a global consensus raises significant ethical and security concerns.

**Editor:** You mentioned the frightening possibility of autonomous weapons systems making life-and-death decisions. What do you think is required to prevent such developments?

**Dr. Pakutinskas:** We need to bring all major stakeholders, including those nations that are typically seen as aggressors, to the negotiating table. Only through collaboration can we hope to establish comprehensive guidelines that prioritize humanitarian values. Unfortunately, existing international dialogues, such as the one held recently in South Korea, have seen limited participation, which complicates the situation further.

**Editor:** The EU has made strides in regulating AI, but only for civilian applications. What are the implications of this selective regulation?

**Dr. Pakutinskas:** While the EU’s legislation marks progress, excluding national security from these discussions creates a significant gap. Nations can freely develop harmful AI technologies without shared ethical frameworks. This could lead to dangerous precedents where countries, motivated by their own interests, ignore global peace and safety protocols.

**Editor:** You’ve pointed out that private tech companies play a prominent role in military AI applications. Can you elaborate on how this complicates the regulation process?

**Dr. Pakutinskas:** Private companies, like OpenAI and Palantir, are increasingly involved in military projects, which places immense power in the hands of these entities. Their goals may not align with the interests of governments or humanitarian principles. Without clear regulations in place, we risk creating an echo chamber where profit drives technological advancements rather than ethical considerations.

**Editor:** You call for the establishment of clear rules and standards. What would you suggest as the first steps in creating a regulatory framework for military AI?

**Dr. Pakutinskas:** The first step would be to initiate an open dialogue among nations, particularly involving those with advanced military capabilities. It’s crucial to include perspectives from leading tech companies as well. We need to define accountability: who is responsible for AI decisions in military contexts? Additionally, establishing transparent guidelines on dual-use technologies is crucial to prevent the misuse of AI for harmful purposes.

**Editor:** Thank you, Dr. Pakutinskas, for your valuable insights. It’s clear that as the technology evolves, so too must our approaches to its ethical and legal implications. We appreciate your time today.

**Dr. Pakutinskas:** Thank you for having me. It’s an urgent issue, and ongoing discussions are essential to shaping a safer future.
