Issues with the Usage of Lethal Autonomous Weapon Systems in the Military Domain

by Ruchika Dash


A study on why there is a need to fill the responsibility and accountability gap.

The use of Lethal Autonomous Weapon Systems (LAWS) in armed conflicts is on the rise. Many states are investing millions of dollars in their research and development, and a few have already developed and even used LAWS. Amidst all this, there are legal and practical issues attached to their use. For example, LAWS depend on artificial intelligence and other software and require minimal to no human interference during their activities on the battlefield. This may work as an advantage for the military parties, but if a LAWS fails to comply with the law of armed conflict, who should be held liable for its activities?

From a legal point of view, responsibility can be segregated into two categories: state responsibility and individual criminal responsibility. Which of the two applies depends largely on the scenario and on how exactly the LAWS is being used. For example, if autonomous weapons are used by a state's military and armed forces, the acts committed are attributable to the state itself. If, on the other hand, an autonomous weapon is used by an individual to commit an illegal act such as a war crime, a crime against humanity or genocide, it may be a case of individual criminal responsibility. Likewise, where the LAWS can be associated with a human agent and the required mens rea is established, individual criminal responsibility may also arise.

In my view, this issue plays a critical part in the use of LAWS. The Martens Clause, a longstanding and binding rule of international humanitarian law (IHL), specifically demands the application of “the principle of humanity” in armed conflict. Taking humans out of the loop also risks taking humanity out of the loop.[i] There is a clear need for autonomous weapons to be assessed before being put into use, a requirement under Article 36 of Additional Protocol I to the Geneva Conventions,[ii] and one echoed by the Martens Clause.[iii] These rules and provisions give states scope to conduct an internal examination of any weapon, autonomous or otherwise, before putting it to use.

There are a number of issues of concern with LAWS, whose use may very well violate international humanitarian law as well as international criminal law.

The primary concern is the absence of human judgement during the use of these weapons. Even if the programmers have properly tested these systems and programmed them to comply with the law of armed conflict while performing their tasks on the battlefield, there may still be scenarios in which it is very difficult for the systems to distinguish between what is lawful and what is not. The way LAWS operate brings high risks to those affected by armed conflict, combatants and civilians alike, and raises challenges in applying international law, especially the rules on the conduct of hostilities and the protection of civilians. It also raises fundamental ethical questions, such as how one can substitute human decisions about life and death with software, sensors, and artificial intelligence.[iv]

For example, imagine a non-international armed conflict in which the opposing non-state armed group has failed to distinguish[v] itself from the civilian population. Its members wear neither insignia nor uniforms that would help a LAWS distinguish between members of the armed group with a continuous combat function, or civilians directly participating in the hostilities, and the protected civilian population in the area. In such a scenario, how can we expect an autonomous weapon to make the final judgement on its own before targeting what may be a protected person or civilian?

Another issue is how a LAWS would determine when an enemy combatant has surrendered or is injured and is therefore classified as hors de combat. The law[vi] strictly prohibits attacking persons hors de combat, that is, those who have surrendered, have been injured during the conflict, or are in the power of the enemy party. It will certainly be challenging for a LAWS relying on artificial intelligence or machine learning to work out when a combatant is injured or when a combatant or civilian is surrendering to the enemy party. Even if we assume the machine has been equipped with technology that helps it recognize certain indications of surrender or injury, people may convey these in new ways and with new signs each time. This makes it next to impossible for LAWS to adhere to international law, especially the law of armed conflict. In the end, it is only a machine with advanced technology, lacking the human judgement needed to make any final call regarding life and death.

Given this lack of human judgement, it would also be a challenge for autonomous weapons to distinguish between a military target and a civilian object.[vii] How does a LAWS assess the necessity[viii] of an attack? The decision to attack an otherwise civilian object, after assessing that it is a military necessity, involves human judgement to make the final call.

The proportionality[ix] of an attack and the precautions[x] it requires are further concerns to keep in mind before deciding to use lethal autonomous weapons.

These are a few examples of the issues that prima facie arise from the use of LAWS and that need to be addressed. They are loopholes in our current legal system that must be fixed, considering the growing use of these weapons.

CONCLUSION

Having discussed what exactly lethal autonomous weapon systems are, their types, the state-of-the-art inventions, the cases and incidents in which they have been used so far, and the states involved in their research, development, and use, it is a good time to discuss the aftermath of their use. It is clear that most lethal autonomous weapon systems rely on artificial intelligence, algorithms, and software to be autonomous, and in most cases they require no human interference or human judgement to make the final call before engaging a target. The real question now is how we deal with the responsibility and accountability involved in using lethal autonomous weapon systems.

To answer the above, the author suggests that, firstly, we should determine whether establishing responsibility requires showing that a human agent is associated with the lethal autonomous weapon system, and, depending on the answer, decide how responsibility should be attributed, whether as state responsibility or individual criminal responsibility. Secondly, based on that answer, we should decide whether states and individuals should be allowed to use a fully autonomous lethal weapon system at all, and to what extent a weapon should be autonomous. Thirdly, it should be examined whether it is legal for LAWS to have ‘kill and forget’ capabilities, considering that the ability to trace back the actions of a LAWS is highly important in the process of attributing responsibility.

Such an examination would address a much-needed aspect of the use of lethal autonomous weapon systems. It is the need of the hour, as the world has already recorded killings carried out by lethal autonomous weapon systems without any human interference or human judgement.

 

[i]  Sophie Bobillier, ‘Autonomous Weapon Systems’, How Does Law Protect in War?, available at <https://casebook.icrc.org/case-study/autonomous-weapon-systems>.

[ii]  Article 36, Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), 1977, available at <https://ihl-databases.icrc.org/en/ihl-treaties/api-1977/article-36>.

[iii]  Bobillier (n 1).

[iv]  ICRC, ‘Position Paper on Autonomous Weapons’ (12 May 2021), available at <https://www.icrc.org/en/document/icrc-position-autonomous-weapon-systems> accessed 19 December 2022.

[v]  Rule 1 (Principle of Distinction) of Customary Rules of International Humanitarian Law, ICRC Database, available at <https://ihl-databases.icrc.org/en/customary-ihl/v1/rule1>.

[vi]  Rule 47 of Customary Rules of International Humanitarian Law, ICRC Database, available at <https://ihl-databases.icrc.org/en/customary-ihl/v1/rule47>; Common Article 3, Geneva Conventions of 1949; Articles 40 and 41, Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), 1977.

[vii]  Rules 49 and 50 of Customary Rules of International Humanitarian Law, ICRC Database, available at <https://ihl-databases.icrc.org/en/customary-ihl/v1>.

[viii]  Rule 50 of Customary Rules of International Humanitarian Law, ICRC Database, available at <https://ihl-databases.icrc.org/en/customary-ihl/v1/rule50>.

[ix]  Rule 14 of Customary Rules of International Humanitarian Law, ICRC Database, available at <https://ihl-databases.icrc.org/en/customary-ihl/v1/rule14>.

[x]  Rules 15–24 of Customary Rules of International Humanitarian Law, ICRC Database, available at <https://ihl-databases.icrc.org/en/customary-ihl/v1>.