
CHAPTER 4 

AI AND WAR     


It is a commonplace that the history of civilization is largely the history of weapons.  

George Orwell, in his 1945 essay “You and the Atomic Bomb”   


The rush to create computational systems for virtual warfare reveals a fatal flaw that’s been with Homo sapiens as long as civilization itself: hubris, that persistent and terrible human tendency to embrace blind ambition and arrogant self-confidence. 

Roberto J. Gonzalez   


The introduction of nonhuman logic to military systems and processes will transform strategy. 

Kissinger, Schmidt and Huttenlocher   


The new twist is the gray or hybrid or Phase 4 type of warfare which yields a wicked war, unmanageable, a blob of mercury, again relying on social media but complicated to further levels by cyberwarfare on the ground and drones overhead. The result is not the sci-fi future; it is the present, with more to come.

Zartman and Vukovic


4.1 Introduction 

4.2 War: Conflict Challenges 

4.3 War: Conflict Opportunities 

4.4 War: Conflict Competencies 


4.1 Introduction  

It is not hard to imagine how or why humanity has been intrigued and enchanted throughout the ages by the distance from risk, injury and death offered by its latest weapons. A longer sword, a catapult, a trebuchet, a rifle or a V-bomb created a bit of distance between yourself and the enemy, between yourself and death, and improved your chances of a positive conflict outcome. In the age of AI warfare, we have extended that distance to the point where we need not even be on the battlefield, or on the same continent as the conflict.


We can use machines as proxies: highly skilled, deadly and increasingly autonomous replacements for flesh and blood and bone. Artificial intelligence has already begun changing war in crucial respects. Here too, the future has arrived, and we are still arguing about regulation and the proper use of these weapons while they lie ready for use.


There is a quest to automate warfare, for a variety of reasons, good and bad. While cutting-edge weaponry and technology have always been of paramount importance in battle, AI now sweeps most of the existing strategies and conventional wisdom off the planning table. The rivers, mountains and snowfall seasons so crucial to past battlefields may now matter less than good signal reception.


While regulatory efforts and debates centre on how the development and use of various classes of these automated systems may be distinguished and defined, advanced AI-assisted weapon systems are already extensively in use. The US, China and Russia all deploy various classes of unmanned aerial vehicles (UAVs) in their military activities, with the Russia-Ukraine war providing ample instances of such use. Russian use of integrated air defence systems had attracted attention in military circles even before that war. Since 2019, China, the US and Russia have each committed themselves, formally and in practice, to expanding the development and implementation of AI in their militaries, in each instance under an overarching AI military strategy.


Artificial intelligence is a shiny new technology that could never have been kept from military and security application. Its abilities and potential are simply too real and too enticing to be kept from the battlefield, and any effort to ban it from the future of war and security is as naïve as it is fruitless. AI anticipates and predicts events crucial to warfare at an unprecedented level, at speeds with which we humans cannot compete. The new capabilities flowing from these technologies, together with their potential and their shortcomings, dictate what the new strategies, timeframes and sequences should be.


Your Sun Tzu and Von Clausewitz will only take you so far, nowadays. A new “Art of War” waits to be written. I have included a section on warfare-related AI conflict developments not because I believe that many of us will ever spend time on a battlefield, but because the conflict questions arising from AI battlefield developments sharply focus some of the technological, ethical and practical questions arising from AI that are also relevant in the other areas that we will be studying.


An understanding of AI on the battlefield will in itself give you a good grounding in the more economic and more personal AI conflicts to be faced. The lines between conventional wars, politics and state or private security have in any event started to blur even further than before. Similarly, the lines between military and civilian use and collaboration have become porous like never before. Google Earth can be used to plan your next holiday, or by security forces to attack a specific target. The data generated and collated by social media can be used to plan a military attack on an area. We will investigate this development, and its consequences, in greater depth later on, but for now the following Amazon video, discussing the use of cloud technology and AI for the “warfighter”, makes the point eloquently, and, as an added modern bonus, it is set to dramatic music.

Amazon Web Services for the Warfighter - Bing video 


One publicly known example of the blurring of traditional state military and civilian participation is where the Pentagon, after initially awarding its JEDI (Joint Enterprise Defense Infrastructure) cloud computing contract to Microsoft, cancelled it as a result of sustained legal and other challenges to the arrangement, and in late 2022 reinstated the essence of the program (now called the JWCC, Joint Warfighting Cloud Capability), sharing its nearly $10 billion among the four recipient tech giants Microsoft, Amazon, Oracle and Google.


Here we see civilian technology platforms participating in and making possible the future of war, the same companies that regularly have access to our personal and professional data. Among other interesting observations, we see here the weakening of the boundaries of military and civilian collaboration in very public debates and conflicts. While such collaboration has always existed, and was very visible in the two world wars, the exchange of and access to information, especially civilian information, in those processes was always limited, if present at all.


This is no longer the case in the AI revolution, as we can see. It is also in the arena of AI wars that we see the inevitability of these developments, and the futility of trying to remain outside of them, a discussion we had earlier in broader and more general terms.


Can a country responsibly remain in a conventional war preparation state of mind when its enemies are becoming AI-enabled? And once we have cleared that obstacle, how does even the smallest, tech-deficient state escape some rudimentary form of keeping-up-with-the-neighbours AI arms race? What immensely complex future geopolitical and even commercial conflicts must follow from these new dynamics? Again, some may seek to deflect these complex conflict questions by arguing that these consequences and questions may never arise, that they are remote and continents away, that we can simply opt out of these developments, that we should not act as if this is all inevitable. I agree that the inevitability argument can be used by various purveyors of AI to shut down debate and to promote their own interests, but that argument really only goes so far.


Conceding any inch of this battlefield will have very real consequences in the security calculations of governments, some of which could very well be argued to be constitutional and legislative obligations resting on those governments. To ascribe inevitability as a strategy to those driving the datafication of war does not remove the fact that some of these conflict dynamics are in fact just that: inevitable. This is a vast and complex field, and of necessity we will only look at those developments that are of practical use to us in our quest for our own conflict competency. We will also distinguish between those AI conflicts relevant to the battlefield and those we will find in a more local arena, such as law enforcement, state security, personal surveillance and so on. These arenas are increasingly part of the original idea of a battlefield, with everything from equipment and training to strategies being partly or completely militarized.


4.2 WAR: CONFLICT CHALLENGES 

Advanced technology can aggravate and escalate conflict. Wars and rumours of war, combined with the public parading of weaponry, have traditionally caused a sense of safety in the strong, and of envy or anxiety in the weak or the insecure. I try at times to take comfort in the idea that AI is not a weapon in itself, that in essence it is a range of recommendation- and decision-making machines and algorithms, and that we can remain in control of that process. This comfort is needed especially when we consider the effects of AI in our current and future military conflicts, and the unique destructive power locked into some of these machines.


But part of that comfort evaporates when we recall that in this process we are also changing how we make our decisions, especially on the battlefield (Payne, loc 59). As we have discussed earlier, the cold comfort of the human-in-the-loop being in some form of control is, in my view, often just a half-truth that we tell ourselves.


The role that the human plays in the machine interface is limited in several ways, and nowhere more so than on the high-priced, high-stakes modern battlefield. Let’s examine just a few of the challenges to the assurance of human control over these machines of destruction. This can be, depending on the specifics of the weaponry involved, a slightly different argument than the collaboration argument. Firstly, and in fairness to our species, what role can humans play in the data-intense, pattern-searching maelstrom that is modern AI-driven warfare? If those systems, calculating billions of data items in a few seconds, conclude that a specific course of action is to be preferred and implemented, what do we expect the human to do? Pause for verification? Consider the facts carefully? Convene a committee? If we are saying that humans can make strategic decisions better than AI, and just need longer to do so, on what basis are we making that statement? What is the evidence? And if we are not better at that process, why are we even involved? For legal or political reasons? In warfare the stakes can of course be much different, and much higher, than in mere workplace or commercial debates.


Secondly, how long before completely autonomous weapon systems outperform human-in-the-loop systems in important battles, and what effect will this have on battlefield decision making? Is it really prudent to leave the big decisions to the human if he is the slow kid in the class? Should a politician give the job of defending a capital city, under attack from a swarm of armed drones, to the autonomous system or the one where a human has override authority? What would your choice be if you are a resident in that city? Simply put, then, do we really improve questions of strategy, security and safety on the battlefield by placing humans at various points in that system, with various powers of command and some form of veto? 
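The time pressure behind these questions can be made concrete with a rough back-of-the-envelope sketch. Every figure below is an illustrative assumption (the drone speed, the detection range, the decision times), not a real system specification; the point is only how much of a short engagement window human verification can consume.

```python
# A minimal, illustrative sketch of the decision-window problem.
# Every figure here is an assumption for illustration, not a real specification.

drone_speed_mps = 70.0       # assumed attack-drone speed (roughly 250 km/h)
detection_range_m = 5000.0   # assumed distance at which the swarm is first detected

engagement_window_s = detection_range_m / drone_speed_mps  # ~71 seconds

decision_times = {
    "autonomous system": 0.5,    # assumed machine classify-and-respond time
    "human-in-the-loop": 30.0,   # assumed human verify-and-authorize time
}

for label, t in decision_times.items():
    remaining = engagement_window_s - t
    print(f"{label:>18}: {remaining:5.1f} s left to act after the decision")
```

Under these assumed numbers the human review consumes almost half of the window; shrink the detection range or speed up the attacker and the window closes entirely. Nothing in this sketch settles whether the human should be there, but it shows why the question is posed in seconds rather than in principles.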


It sounds like such a comforting compromise solution, but is it really a solution at all, and will it remain one for long, especially in battlefield situations? Would we be better off, more secure, if we let the AI generals run the entire show? And in the heat of battle, where survival, national pride and fortunes, lives, honour and careers are all at stake, what would the battlefield decision makers do, regardless of questions of ethics, politics and textbook niceties?


There are, as we will see, other serious and wide-ranging concerns about the use of AI and AI-assisted weapons, especially once we understand what these weapons are capable of, and that battlefields are no longer neatly delineated spaces Over There. These concerns do not arise from taking Skynet too seriously, but from realistic, calm and informed assessments based not on speculation but on already available technology. We can rest assured that we are not being over-cautious in having these concerns, as there are increasing calls for the banning or limitation of the development or use of autonomous weapon systems in particular, including calls from politicians, scientists and others with sufficient knowledge and experience of the capabilities of such systems.


These concerns have led to significant coalitions and activist campaigns trying to limit these developments and applications, such as Human Rights Watch and the Stop Killer Robots campaign (Stop Killer Robots - Less Autonomy, More Humanity).


These weapons are in use, unregulated, applied in war as experiments, against military and civilian targets without much distinction. Since its start in 2022, the Russia-Ukraine war has seen various levels of AI-assisted weapons being used. Ukraine, for example, made use of AI-assisted semi-autonomous drones, both in attack and in (drone) defence, and Russia claims to have fully autonomous AI weapons, although details and proof of this are rather sketchy. Eventual analysis of this war will show to what extent, if any, AI-assisted weapons may have led to a reduction in hostilities, loss of life or civilian suffering and disruption, some of the parameters often advanced in favour of AI warfare.


So far, however, everything that we know from this war seems to be simply more of what previous wars have shown humanity, with no real benefit other than strategic and destructive potential. As always, the increased destructive potential of a weapon does not translate into a reduction of suffering for civilian populations.


In 2017, over a hundred AI experts and celebrities, including Elon Musk and Google DeepMind co-founder Mustafa Suleyman, signed an open letter calling for a ban on lethal autonomous weapons in particular. The letter and related information can be accessed here: Elon Musk leads 116 experts calling for a ban on autonomous weapons | World Economic Forum (weforum.org)


In the years since then the development, and even deployment, of these systems has continued apace, with very little success to show on the regulatory front. The risks of automated weapon systems have been with us for many years. The examples of unintended consequences and failures of safety measures are rare but extremely instructive.


The use of the Aegis combat system by the US on its air defence destroyers placed fully autonomous lethal weaponry at the disposal of military commanders. The ability to identify, track and destroy enemy aircraft in split seconds became a reality. So too did the limits of the weapon system when, in 1988, the USS Vincennes, using that system, mistakenly identified and shot an Iranian civilian airliner out of the sky, killing all aboard. Such failures, and the publicly available information on most of these machines and weapons, justify and generate much of this concern. If you are interested in the details, search for, for example, the Israeli Harpy drone, the Atlas robot from Boston Dynamics, robot dogs, and the other, seemingly daily produced and improved, examples of these warbots.


Current and future battlegrounds serve as testing grounds for much of AI experimentation and development. Oversight is minimal, if it exists at all, and some of the bigger countries are able and willing to throw vast amounts of money at this field and its promises of mastery and domination.


Prof Kenneth Payne, a clear thinker about the use of AI in war, often uses the term “warbot” to describe the AI (and AI-assisted) machines that we are starting to see and hear about. I agree with Payne that the sheer speed of development and the seductive battlefield abilities of these warbots will, in practice if not always in theory, simply outstrip the political will or ability to effectively regulate these machines and technologies.


A new way of thinking about these risks, threats and capabilities must be found, while retaining the best of the conflict strategies of the past. One of the main reasons I believe that a study of these war-arena conflicts assists us in our more private and professional conflict aims is the observation that these battlefield machines are bound, in some form or another, to seep into our civilian lives. Military applications are, as history often shows, soon followed by civilian applications, with or without modification.


Civilian surveillance, internal security, law enforcement applications and so on will be natural fits for weaponry tested on military battlegrounds. Add the fact that these systems are often designed and implemented by the same people that run our search engines and our diaries, and the lines begin to blur. This simple point serves as a reminder not to see these categories as neatly isolated silos that we can accept or reject at will. AI developments on the battlefield may very well have an impact on our civilian lives, our economies and our civil rights as we perceive them at present. A meaningful distinction between an unmanned military drone gathering information from the skies above the enemy in North Africa and that same drone gathering data from a crowd of protesters in Boston may sometimes be hard to maintain.


The targeting of civilian populations has now taken on new meaning, with new attacking or deployment possibilities, and new ethical questions arising. AI on the battleground also raises the standard big AI questions, even if in a slightly different context. How do we regulate what type of machines are created and used? When are they to be used? 


Are they to be completely autonomous, or do we keep a human hand on the decision-making process? Are there types of wars where some ethical constraints are to be in place, and others where these limitations are dropped? Are civilian targets fair game to AI weapons? Can we relax, or dispense with, our own regulation if the other side fails to adhere to the rules as we see them? How do we monitor or audit regulation if these weapons can be a few lines of code on a laptop? Who sells these technologies, and to whom? Can the data of civilians, gathered through the use of social media and other civilian endeavours, be used for military purposes, and if so, to what extent? Does regulation have any meaningful application in war, and how did we do in regulating nuclear weapons in the past?


In managing these new energies and potential conflicts, existing conflict concepts will need to be reconsidered and reapplied in the AI age. We can use conflict escalation as a particularly vivid example for the AI wars that lie ahead. Briefly put, a conflict escalates because of a variety of factors: a real or perceived lack of progress, an assessment of options, external factors and agendas, frustration, or the pursuit of strategic advantage. Conflict escalation can be destructive or, if skilfully applied, a very effective tool in resolving impasse or conflict rigidity. In war as influenced or assisted by AI, however, we revert to the fears of the Cold War.


What if they strike first? What if we do not have enough time to retaliate? The arguments around mutually assured destruction no longer hold the same amount of water, as an AI-assisted cyberattack, as one example, can strike within seconds, destroying the ability to respond.


There is no time to launch your own nuclear warheads, as crazy (and effective) as that argument was at the best of times. What are the new conflict dynamics in the first-mover calculus? This conflict escalation may be completely inadvertent, caused by false intelligence or an error in judgment, by the speed at which AI-enabled weapons can operate, by the heuristics of some of these weapons, which may very well in time trade accuracy or completeness of information for speed, or by a justified fear of the utter destruction that can be unleashed in a matter of seconds.


The strategic importance and ingrained wisdom of the so-called first-mover advantage can, in the AI age, become a brand new conflict strategy, and severely impact conventional peace talks, armistice possibilities and conflict outcome assessments. Some of the battlefield strategies and lessons honed during the Cold War and its proxy battles would be quite out of place in the AI era.
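The way speed reshapes this calculus can be sketched as a toy expected-payoff model. Everything in the sketch below is an illustrative assumption (the payoff values, the retaliation probabilities); it is not a model of any real arsenal, only a way of seeing the direction of the incentive:

```python
# Toy deterrence sketch: expected payoff of striking first versus waiting,
# as the probability that the opponent can still retaliate (p_retaliate) falls.
# All payoff values are arbitrary illustrative assumptions.

def expected_payoff_strike_first(p_retaliate: float) -> float:
    win = 10.0            # assumed payoff if the first strike disarms the opponent
    mutual_ruin = -100.0  # assumed payoff if the opponent retaliates anyway
    return (1 - p_retaliate) * win + p_retaliate * mutual_ruin

payoff_wait = 0.0  # assumed payoff of the status quo, where no one strikes

for p in (0.9, 0.5, 0.1, 0.01):
    strike = expected_payoff_strike_first(p)
    choice = "strike first" if strike > payoff_wait else "wait"
    print(f"p(retaliation) = {p:4.2f}: E[strike] = {strike:7.1f} -> {choice}")
```

With a high probability of retaliation, the classic condition of mutually assured destruction, waiting dominates and deterrence holds. As AI-enabled speed and precision push that probability towards zero, the expected value of striking first turns positive, and the equilibrium the Cold War relied on quietly dissolves.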


We have, in my view, in any event already entered a new Cold War, where some brand new conflict strategies will need to be designed, with the only real difference being that the chessboard now has a few new pieces to contend with. 


Of course, Cold War comparisons are limited. In the Cold War a country’s nuclear capabilities were never actually used, but simply employed as a deterrent; AI weapons do not share those dynamics. The Cold War had a type of equilibrium at the heart of the conflict, an environment where mutually assured destruction was a crazy but effective argument.


Paradoxically, the involved nations’ capabilities also carried with them their own vulnerability, which in turn led to a type of protection, an enforced willingness to work towards de-escalation or some other form of resolution short of war itself. AI-assisted military conflicts are not subject to the same limitations and concerns of mutually assured destruction that we found in Cold War days.


While the spectre of a nuclear war remains, it is now no longer the most complex, or even the most dangerous, threat. As Prof Nobumasa Akiyama points out,

“Nuclear deterrence is an integral aspect of the current security architecture and the question has arisen whether adoption of AI will enhance the stability of this architecture or weaken it.” 

(Von Braun loc 7250). 


Here again the field is populated by adherents of every position, from AI adding to stability and security, to it being the next technology that will destroy us, and all points in between.


As with so many of the areas that we are investigating, here too, in considering AI and the threat of nuclear war, we end up looking at ourselves in Hamlet’s glass. How rational is human behaviour in any event? Looking at the divide between what we sometimes romanticize as human rationality and how we actually conduct ourselves, I have to wonder whether AI would not often be able to make decisions closer to our idealized view of rationality than we are able to reach. What are some of the things that we learn about ourselves in the behavioural sciences?


I believe that AI brings very little new risk to the nuclear scales. If anything, it amplifies existing threats rather than creating new ones. And here, in the nuclear deterrence debate, we also face that same difficult question: do we stick to human rationality and human decision-making powers, such as they may be, or do we at some stage along AI’s development hand over some or all of these destructive capabilities to machines?


Much of this will of course depend on the results of human-AI collaboration in the next decade or two, and on the details of the developments that AI can reach. We can nevertheless, while we wait, pencil in the question: if we accept at some future stage that AI can make more rational decisions about the nuclear calculus than we can, should we cede control of such abilities to AI?


Nuclear deterrence arguments will remain relevant, both in describing the ongoing nuclear realities of our global conflicts and, hopefully, in forming an experiential basis for application, if to a limited extent, to the AI arms race. As an example we could use the well-established nuclear-era debate around first-strike capabilities and responses.


Conflict escalation in these new AI war scenarios would really counsel that the first strike will be the last strike, and that a conflict escalated is a conflict won. If a massive, destructive attack can be assured through these technologies (with or without the strategic benefit of unaccountability), the negotiating table and wartime mediation may become less attractive to some. 


Conflict escalation, in conventional wars, is often unintended, driven by over-compensation, overly aggressive responses and paranoia, generally a misreading of the opponent’s forces, intentions and timing. All of those dynamics are now deepened, and made more complex, by the nature and mechanics of AI weapon systems. The AI battleground will not make these decisions easier.


Conventional data and intelligence may be completely useless. Simply put, in strategic terms: how do you now read your opponent’s abilities, intentions and timing? Now add the extreme difficulties that may lie in the accurate detection and identification of the actual perpetrator of a real or threatened strike, and conflict escalation becomes a brand new battlefield art. 


The finger on the proverbial red button is a lot more nervous nowadays. The battleground also emphasizes both the challenge, and possibly the strength of the solution, of co-operation between humans and machines like few other areas of human conflict do.


Warbots and LAWS (lethal autonomous weapon systems) gather and act on data at speeds far beyond human capabilities, with no human intervention needed. Here again we have to ask: do we, even with a human in that conflict loop, simply accept the data given to us and let the AI system act (with a pre-emptive strike, for example), or do we allow the human to check and verify that data, and possibly negate the crucial edge that our AI system gave us?


How, for example, would the Bay of Pigs conflict have played out if we had been completely or even partially dependent on machines making these calls? And what do we really hope to gain by ensuring a human component in the war machinery, even if we give that human a decision-overriding capability? Does it ensure greater accuracy or a higher moral compass, does it bring common sense or experience into the process, or does it really just ensure some murky form of accountability? How practical and relevant will the current distinction between AI-assisted weapons and AI weapons (where AI makes autonomous decisions) remain in the long run?


Scepticism about much of the restraint that we are seeing in the development and military application of AI would be warranted, in my view. For one, just as with nuclear weapons, these weapons involve more than just offensive or defensive arguments. To what extent is the apparently responsible distinction between AI weapons and AI-enabled weapons simply a foot-in-the-door strategy, a way to gradually bring the full extent and capabilities of the available technology into use? And what about the concept of emergence in AI systems, where sudden and unexpected results and new abilities may arise in a system, results that were not explicitly planned for or foreseen during the original planning stages? The implications of such emergence-related consequences are enormous, ranging from peace and stability considerations to personnel safety, attack attribution, regulation and legal liability.


Battlefields have traditionally been seen by some military and political leaders, throughout the ages, as acceptable experimental laboratories for testing and developing new and untested, potentially unstable weapons and systems. Given the destructive power of some of these AI systems, those days may need to come to an end. But why would they? The lucrative arms industry stands to make more money than ever before, even as more conventional weapons become obsolete.


While AI-assisted weapon systems seem to be enthusiastically supported and cheered on by the industry in general, with much money and time being spent on these developments, this could become a very interesting cost analysis once the early, developmental dollars have left the stage. As costly aircraft carriers, tanks and heavy artillery start leaving war zones, replaced by drone swarms and young people with joysticks, the arms industry may suffer economic loss. How will it react, and how will the decision-making process be swayed and manipulated?


Here lies great potential for far-reaching future conflict causes and drivers. Hopefully these types of battlefield scenarios will be of limited occurrence. One has to wonder to what degree these debates, important as they are, will really carry weight in a military environment. What ethical considerations would outweigh a real-time, strategic battlefield consideration? If you have access to advanced information, access to advanced conflict solutions, and a split second to act, do you not have an ethical obligation to act on that information?


While much effort and energy is being expended in trying to reach understandable or explainable AI, this may be little more than a semantic nicety in the heat of battle. An opaque decision-making process, with an urgent and generally reliable conflict solution suggested, may be all a military commander has to, or wants to, rely on. And what about data security and integrity on the battlefield? In recent years it became possible to geo-locate an enemy unit by triangulating a cell phone left on with social media content, and a Special Forces unit could be traced to its location in East Africa through its training recordings on Strava, a popular exercise app (Payne, p106).


Even in war, our data is being harvested and used. Is it reasonable to expect, for example, that military forces will not make use of accessible data about civilian populations that may be of great strategic value to them? Civilian data would of course not be off-limits, and everything from population densities to actual social media, conversation or email monitoring are some of the easier examples of the increased vulnerability to which such populations would be exposed.


4.3 WAR: CONFLICT OPPORTUNITIES 

If we accept, as many do, that war will be with us throughout history, is it not an improvement if we can fight each other using machines, here and there serviced and directed by humans? Can necessary wars (and I accept that there may be such scenarios in future) be conducted more humanely, costing a fraction in loss of human life, time and money? 


Geographical concerns can become less of a factor, which in turn may remove smaller countries from the wars of the mighty, and those small countries can arguably level the battlefields if the success of one’s battles no longer depends on the number of young people one can send into war. The double-edged sword of easier access to these weapons also means that smaller countries can now obtain weapons for self-defence and internal security that they may not have had access to in the past.


This could also, in theory at least, lead to greater internal stability if a government, or even a localized form of government, could protect its interests, and those of its citizens, more efficiently. Needless to say, those same benefits may now also be accessed by rebel groups, proxy enemies and purely criminal organizations. Wars may also become far more limited in time and scope, as technological, cyber or machine strikes at key strategic points may very well end them, as opposed to protracted battles having to wear down the armies of the enemy.


Trench wars, grunt work, and moving vast numbers of troops from point A to point B may all become unnecessary anachronisms. Can we hope that an increase in AI military capabilities will lead to an eventual reduction in the world’s nuclear capabilities and threat level? This in itself would be a major advantage flowing from AI. Or is that, at least for the next few decades, an unfounded hope, with nuclear destruction always remaining as a Plan B, or even as the weapon of choice for countries or groups that fall outside the beneficial gains of the AI arms race?


One of the opportunities afforded us by the visceral experience of AI’s destructive efficiency and the horrors of war seems to be an intensification of attempts to regulate these developments, or at least to provide some form of framework that could manage these battlefield risks.


For example, in January 2023 the House of Lords “AI in Weapon Systems Committee” was formed to explore the ethics of the development and deployment of LAWS, and it has already started gathering evidence. A month later, the US government issued a declaration on the responsible military use of AI. It is in focusing our attention on the drafting and intentions of such ambitious projects that the practical difficulties in giving effect to those intentions again become apparent. At least, for now, the intentions seem good, and they at the very least establish a sense of urgency and momentum that may be useful in the long run.


One would, at the same time, have to caution against such efforts merely creating a sense of busyness and comfort, scaffolding that falls apart under real-world pressure. There is some consensus on the limitations of AI in warfare, and here humans can continue to play an important role in everything from job creation to regulation and oversight, and in a variety of practical, commercial and ethical considerations.


These AI limitations can briefly be summarized as follows: 

(1) Perception (i.e. making sense of the real world);

(2) Decision making, reasoning and context; 

(3) Action selection, self-correction and ethical self-assessment;

(4) Teaming, trust and transparency in interaction.

(Von Braun loc 6105-6162)


In understanding these limitations we can form more practical and efficient responses to these developments. In not flinching from the assessments we need to make, in arriving at an honest picture of what we are dealing with, we can craft better, more ethical and more durable solutions, and even opportunities, for mankind.


4.4 WAR: CONFLICT COMPETENCIES  

Looking at the latest developments in AI warfare, I do not believe that any of the big three countries will adopt an AI-only military force in the foreseeable future, if at all.


This will in all likelihood be a frontier where human-machine collaboration will be tested and refined, hopefully with civilian benefits from that experience. Much research and experimentation as to the limits of use and possibility of these collaborations is ongoing.


Kissinger et al suggest a list of six primary tasks that responsible leadership should display in the control and management of their AI arsenals in particular and these conflicts in general: 

(i) leaders of rival and adversarial nations should be prepared to speak to each other regularly, such as occurred during the Cold War; 

(ii) the unsolved riddles of nuclear strategy must be acknowledged; 

(iii) these leaders should define their policies and doctrines, and where prudent establish points of correspondence between their respective approaches to use, threat and so on;

(iv) these leaders should commit to the internal monitoring and review of their own systems such as early warning and command-and-control programs; 

(v) these leaders, especially of major and advanced technological nations, should create robust and accepted methods of maximizing decision time during times of heightened tensions and confrontation. Here the authors specifically advocate that humans should make the decisions whether advanced weapons are to be deployed; 

(vi) the major AI powers should consider how to limit the continued proliferation of AI weapons and systems, and the balancing use of diplomacy and threats of force. 

(Kissinger 147) 


Sound as these guidelines are, what can we expect of this “responsible leadership” in times of peace, or in times of war? As we asked earlier: does “responsible leadership” in the new calculations necessitated by the AI wars not put those leaders at a disadvantage in times of war? 


Conventional soldiering skills, such as being fit, young and healthy, will play a diminished role in selected categories of future wars. The soldier of the future, if he or she is human at all, may very well be someone proficient in cyberattacks, programming and coding, drone piloting and other related skills. Hopefully we need not prepare ourselves for AI-generated war conflict competencies, but we can learn from developments there.


We can see the destructive forces capable of being unleashed, we can try to act more responsibly in our activism, in our political choices, in who we entrust with the leadership of our multinational corporations and our governments. 


We can relearn, and develop, the lessons learned during the Cold War: the mutual destruction calculus, the value of privacy and how rights are eroded surreptitiously, how difficult it is to rebuild important human achievements once they are lost. We can demand true transparency, and we can try to ensure systems and processes that give us some meaningful level of knowledge of, and input into, the wars and more regional conflicts that affect us. What is there that we can do about these conflict realities, other than using them as a highly visible learning experience, from which conflict lessons can be transposed onto our commercial and personal battlegrounds?


What about our role in regulation? Can we play a role in whether AI technology on the battlefield is regulated and controlled? I have to agree with Kenneth Payne that “they are simply too useful” to be limited in any real manner (Payne, loc 132).


Monitoring, lobbying and other forms of citizen activism can surely remain one of our viable responses. Political pressure, the use of the democratic process, a free press and targeted activism could still be of some use. Activism can also be a fruitful conflict strategy against unacceptable military, state or corporation overreach and abuse. Modern media capabilities can be of service in any such projects that we may regard as worthwhile. 


The online project against bribery that was run in India is a good example of what is possible, with some principles that could be transposed to military situations – see I Paid A Bribe | Report Now  


The Pentagon’s Project Maven is another good example of what can be achieved by small groups of dedicated activists and citizen pressure groups. In brief, this project was a collaboration between the Pentagon and Google in developing image recognition systems that run off CCTV footage and police or other security agencies’ drone footage. Initial discomfort and protests over collaborating with the military on these systems, largely driven by the civilian employees of the organizations involved, spilled over: Google eventually pledged not to renew its contract with the Pentagon, and in 2020, during the Black Lives Matter protests, other involved organizations such as IBM, Amazon and Microsoft pledged not to work on facial recognition programs, at least until rigorous legislative and regulatory safety nets were in place.


At the time, Amnesty International called for an outright ban on the use of the technology (see e.g. Payne p141). But these conflicts promise not always to be that simple. These protests were aided, in addition to moral and ethical concerns, by democratic principles and a free society. How would these scenarios have played out in an authoritarian regime? Even in the US, when Google stepped away from the ethical problems in this scenario, tech giant Palantir stepped up and was not so constrained.


And we can, for now at least, bring some of our human conflict skills to this arena as well. Human conflict capabilities, at this stage of AI’s development, may have an edge in strategic thinking, especially on the battlefield (Payne 75). 


AI, with its superior pattern recognition and data handling capabilities, has an easy edge in tactics, but as far as the bigger, sometimes more abstract thinking involved in strategic thought is concerned, AI may have to bend the knee to human capabilities, at least for a while. 


One of the often unnoticed characteristics of human conflict of eras past, in mythology and reality, is its sheer, exuberant creativity. From the Trojan horse adventure to Von Clausewitz’s maps, from Roman attack formations to Shaka’s bull horn impi formations, we often deal with creative solutions to these human conflicts. Has AI now diminished or even done away with that creativity in war in particular and human conflict in general? 


Without in the slightest glorifying war, and focusing on necessary battles, we should certainly see human creativity as (at least at this stage) a distinguishing factor, a real-world conflict competency that could earn us a chair at the war-room tables of the future. Here, again for now at least, AI cannot really compete with us. How creative military strategy in modern times needs to be, in any event, is of course another interesting question. And what about our role as personal agents of change, of reform, of oversight in all of these developments?


We should each assess these personal roles that we can play in affecting the future that we are walking into. Through our votes, our lobbying, activism of various kinds, social media education and many other strategies, we can indeed play a role in this process. One of the most important roles and conflict competencies that I can see for us in these relatively early days would be that of allowing and encouraging healthy, productive, honest debates about these concerns. We all have a stage and audience, large or small, where we can encourage and foster constructive debate, and as we have seen earlier, at this stage our questions may be as important as our answers. 


I agree with Roberto J. Gonzalez that 

“Before data-driven war becomes normalized, it’s important that those able to understand the potential effects of the new militarized technologies provide policymakers and the general public with broad contextual views of the issues at hand.” (Gonzalez 293).   


We who are concerned about these conflicts and how they will shape our futures are those “able to understand”, and we could play an important role in shaping the conflicts of the future.   


REFERENCES AND SUGGESTED READING (CHAPTER 4)    

1. I, Warbot, by Kenneth Payne 

2. War Virtually, by Roberto J. Gonzalez 

3. The Age of AI, by Henry Kissinger, Eric Schmidt and Daniel Huttenlocher    


