36 min read
02 Mar
02Mar

CHAPTER 5 

AI AND GEOPOLITICS     


Although political opposition has thus far prevented the full-scale implementation of such a system, we humans are well on our way to building the required infrastructure for the ultimate dictatorship.  

Max Tegmark    


When the true power of artificial intelligence is brought to bear, the real divide won’t be between countries like the United States and China. Instead, the most dangerous fault lines will emerge within each country, and they will possess the power to tear them apart from the inside.  

Kai-Fu Lee   


5.1 Introduction 

5.2 The AI arms race 

5.3 Conflicts in cyberspace 

5.4 Cyberattacks and the attribution problem 

5.5 The influence of artificial intelligence on political worldviews 


Introduction  

Geopolitics, as the relations between states and other increasingly important role-players, will both bear the brunt of the disruptive edge of the AI revolution as carry much of the responsibilities in shaping these forces, maximizing its potential, minimizing the harms inherent therein, and somehow synergizing and balancing the clearly incompatible interests that run through these conflicts, present and future. As we have indicated in Chapters 1 and 2, our definitions of both AI and conflict extend to the various forms of advanced technologies that should, from a conflict education and preparation perspective, be studied together, and this includes, as far as a comprehensive study of geopolitical conflicts are concerned, the area of cyber conflicts.


 Cyberspace has, in fact, generally been acknowledged in conflict studies as the fifth conflict or military domain (together with land, sea, air and space). Artificial intelligence will change the way geopolitics is conducted, and change the very foundations from which these processes and decisions are generated from. Andrew C. Dwyer observes that “From drones, cybersecurity, to robotics and beyond, international conflict is intricately intertwined with computation, articulating norms that increasingly consist of articulation rather than social negotiation.” (Cristiano 19). In that one, very accurate observation lies so many conflict realities that we will need to face in future geopolitical and diplomatic engagements. AI has already drastically changed statecraft and diplomat manuals. From access to and the gathering of information to new conflict causes and creative solutions, the geopolitical arena is a dynamic new field of conflict resolution focus. From the UN to the EU’s increased recognition of AI as a force to be reckoned with, and both those institutions’ increased development and implementation of conflict management tools such as mediation for use in these geopolitical arenas, we notice how politicians, diplomats and regional leaders are coming to terms with these new realities. 


These are changes that affect more than the simple parameters of geopolitical conflicts, they have an enormous impact on the way that these parameters are set in the first place. New possibilities abound, with access to information that would never have been possible, or never would have been regarded as relevant, increasingly paying a very real role in shaping geopolitical activities.


 GEOPOLITICS: CONFLICT CHALLENGES 

The AI revolution then is changing not just the content of geopolitical activity, but the way in which it is conducted. Relatively stable relationships, strengths and weakness and the way to reason about these are set to be turned on their heads. Governments no longer need a proverbial man in Havana, the local informant’s role has been diminished, and comprehensive information about a country’s socio-economical strengths and weakness are easily available. While we have already seen a collapse of the traditional national lines in conflicts, with commercial, religious and criminal actors all playing increasingly effective and significant roles in regional wars, through proxy wars and individual influences, even this blurring of the lines have seen further developments as a result of the AI arms race. 


These proxy wars and hybridized conflicts will continue, now that the means for information gathering, spreading of disinformation and other forms of propaganda have been created and expanded, with the surrounding communities bearing the brunt of such conflicts (Turner 233 and 252). The very lines between war and peace have become blurred, as many of these cyclical regional conflicts show. Spiralling, unresolved conflicts are held in place by local and other actors to manipulate local conditions and hostilities for their own ends, all enabled by AI-driven process such as the ones we will consider later on.  AI, cynically applied, offers a wide spectrum of abuses and ethically dubious applications to governments or other bodies that may be tempted to make use of that. 


Especially (at least) in these early years, oversight may be tenuous or non-existent, competition in those specific arenas may be cut-throat and ethical boundaries may be blurred. The political, corporate and even personal benefits from AI manipulation or abuse are manifold. Maybe, as authors Buchanan and Ibrie speculate in their The New Fire, the world is more zero sum than we may want it to be. Geopolitical polarization, nationalism, populism and other shades of the hardening of worldviews are all so easily created and driven by those knowledgeable in identity conflicts and with access to AI tools that can reach the masses and every individual forming part of that crowd. 


Any study of politics of course evolves around societies and groups as participants and drivers of those processes. Our political leaders will need to learn some new conflict principles and strategies if they wish to remain relevant in the age of AI. Politicians will be expected to shape, make sense of and to an extent protect their constituencies from the real and perceived harms of AI, and they may often become the visible face of these competing forces and results. How will AI shape various societies? How will increased automation affect a society where the individual is used to being the centre of attention, value and focus? Over the next few years, how will societies react to increased unemployment and the devaluation of their jobs, especially if these harms are blamed on politicians and other geopolitical actors? How will certain reactionary or anti-development societies deal with the presumed progress that AI brings with it? How will AI affect a society or community that believes that females should not vote or be educated, or that AI should not be allowed in that society at all? 


All of these explosive conflicts are latent and waiting their own conflict ripeness. I predict that not only our political conflicts, but our very concept and understanding of politics as a mechanism of social engineering, as well as the boundaries of certain ethical questions we long regarded as settled, will change drastically and permanently in the next few years. Let’s have a look at why this may be so, starting with a term we have used earlier, and one which we can focus on in greater depth: the AI arms race. 


5.2 The AI arms race 

Tass, the Russian news agency, reported on the 1st September 2017 that Vladimir Putin stressed, noticeably in a public statement, that “… whoever takes the lead in artificial intelligence will rule the world”. This statement is often popularly seen as the informal start of the AI arms race, a point from where we can start tracking the really important AI developments, at least in military and security terms, between the US, China and Russia. 


Convenient and highly visible though that event may have been, we also see earlier signs that point in that exact direction, such as the report by the New York Times, in 2016 already, that “Almost unnoticed outside defense circles, the Pentagon has put artificial intelligence at the center of its strategy to maintain the United States’ position as the world’s dominant military power.” (New York Times, 26 October 2016). In 2017 senator Ted Cruz had already warned of the “grave implications for national security” if leadership in the AI race was conceded. In the US, the Defense Advanced Research Projects Agency (DARPA) had been overseeing and coordinating projects involving these technologies for decades. Russia, China and the US all make no secret of their open, unvarnished AI ambitions, not just to compete but to rule, to dominate the AI environment.   


And it is by no way an exaggeration to refer to this as an “arms race”. Popular level media, such as Wired magazine talks of this, and an “AI Cold war” that “threatens us all” ( The AI Cold War With China That Threatens Us All | WIRED ), and Pentagon officials have commented on the fact that China’s dreams of being the world leader in AI by 2030 may affect the US’ ability to maintain its military advantage. As Roberto J. Gonzalez so perceptively points out, “Once rival superpowers are convinced that they’re on parallel tracks, the possibilities are frightening.” (Gonzalez 102). Just as the Cold War arms race(s) affected so much more than purely military goals, so too does this AI arms race easily and completely transcend purely military and security considerations. This is an arms race about world domination, about securing your place in the de facto new world order, from where other projects can be launched. 


It is an “arms race” where “arms” should be seen as any method, technique or weapon that aids the parties in the full spectrum of conflicts that we are investigating. If we understand, even at a rudimentary level, how AI works and what is necessary to drive these forces, we see the multilevel complex conflicts that must, nearly by definition, arise from such activities. The arms race involves not just countries or other geopolitical configurations of interest groups, but also commercial competitors (such as we have earlier on seen with the JEDI program litigation). Just like in the Cold War era, the primary AI arms race between the bigger role-players directly and indirectly affects all other countries, whether they wish to participate or not. 


Even as the small kid on the block, getting your alliances, your infrastructure or your geopolitical conflict strategies wrong will have immensely prejudicial consequences, a lesson that some politicians may learn too late, or be unable to skillfully manage in time. In its simplest form, this sets up a battle for resources. While raw computing power is becoming both more accessible and less necessary in AI mastery, there is still a range of requirements that are necessary to achieve, and then maintain, that mastery. This includes everything from certain rare materials (used in the manufacturing of computer chips and certain robotic components, as examples), computer skills and talent necessary for programming, deep learning projects and other highly technical, rare skills, and then of course, the “new oil”, data. Although here as well, the urgent and sustained need for data drives many of these conflicts, with countries, multinationals and corporations vying for various components of that data, the race is about more than just this data. 


We already see a race for the hardware and the knowledge involved in and required for success in this endeavour, but there will be other, less transparent goals in this race as well. In democracies and generally free societies the arms race will require some very nimble political footwork to convince us how all of this is really in our best interests, and this, from a conflict mapping perspective, should correctly be viewed as an essential part of the arms race. But it is also of course no longer enough to just collect reams of data, much of the AI potential centres around the accurate analysis of that data. Accurate data analysis leads to accurate predictive abilities. And this where you and I come in, dear reader. 


Our contribution to the AI revolution may in time be not much more than our data. Information about our preferences, our likes, our fears, our abilities, our weaknesses, our health, our spending patterns, all become the fuel that feeds the arms race on many of its levels.  With the exception of military and security focused AI, that is the essence of the AI arms race. The thinking behind this is that with information at this macro and micro level will come predictability, and with predictability comes success and profits. Or so the story goes. If geopolitical leaders  can know what people earn, what they want in a luxury motor vehicle, what their age, gender and political preferences are, what colour they prefer, what they are prepared to put up with, this information can be gathered, graded and used in commercial, security and political ways unknown to decision makers up to now. 


From these modern day banalities can be crafted national and political campaigns and strategies focused and targeted right down to the individual, delivered eventually through the powerful tools that AI now provide to leaders on several levels. These are some of the goals and prizes up for grabs in the AI arms race. For these and other crucial reasons, global geopolitics has, as no surprise to some of us, found a new and improved energy to restart the arms race, the new clash of empires, the Cold War 2.0. 


While some of the old conflict classics will of course remain, there are interesting, and terrifying, new weapons and strategies used by the big nations, the ones holding on to empire and the ones building empire. If we accept the proposition (as I suggest we should) that the leader in the AI arms race will dominate geopolitics and economics (not to mention the AI battlefield), then we can focus our study on just where AI features in these geopolitical conflicts, both overt and covert. 


The two main contenders are clearly the US and China. Russia plays a very important role, and of course we have a long list of smaller countries that do not have pretensions of actually leading or winning the AI arms race, but to simply secure full benefit for its people from these opportunities, across the spectrum of human activities, as a result of which they are nevertheless involved in the arms race. Given the unique components and potential of AI in itself, this, as we have seen earlier, is of course not really an arms race referring to weapons in the conventional sense, but to the various decisive, and possibly permanent, advantages that a country or group can secure via AI. The stakes are therefore significantly more complex and far-reaching than even the heady heights of the previous arms race(s). This arms race also need not be conducted in an openly hostile manner, as we can see from diplomatic efforts to say the right and soothing things about cooperation (eg in discussions between Xi and Biden in 2023), and between Xi and Bill Gates also in that same year (see eg Exclusive: Xi Jinping tells Bill Gates he welcomes U.S. AI tech in China | Reuters ). 


The AI arms race can be, and is, conducted as much with a smile as a traditional scowl. The influential Chinese entrepreneur and investor, Kai Fu Lee is quite open and enthusiastic about China’s chances of being victorious in this AI arms race. He regards China as a “bone fide AI superpower”, and then goes on to tell us why he says so. If we accept how he tells the story, he is quite probably completely right in his assessment. And it is not just Lee who is so confident about Chinese prowess. The former Google CEO, Eric Schmidt, very clearly warned attendees at a 2017 conference on AI and global security against complacency regarding Chinese AI capabilities. And what does recent statistics tell us about these debates? In 2019 already, more than 65 countries, including more than 34% of NATO allies and 48% of NATO partners were already using Chinese AI-enabled smart city, smart policing and facial recognition technologies. 


Lee is certainly a voice worth listening to. He tells a balanced tale of China’s capabilities, its incredible data harvesting potential, its focused investment in AI industries and technology, its sheer number of people, its developmental aims (for instance in the development of its “AI cities” and other technological frontiers), a story of potential and a friendly, confident throwing down of a gauntlet, tempered with a much more sober look at the widespread disruptions and profound socio-psychological effects that all of this will have on people. He is an eloquent example (and there are a few) of people who can debate the AI arms race beyond national clichés and to the benefit of us all. Lee is particularly adept in sketching the large volumes of data necessary for the effective running of neural networks (at least in the early stages), and how a combination of China’s vast numbers of people, its connectivity levels and its views on political and human rights may assist it in achieving the edge in these important clashes. 


Throughout his work there is an undercurrent of an existing but also growing, inevitable future conflict between China and the US, largely based on this particular “arms race”. Even though not all is as hoped for by Chinese developers of these emerging technologies (China’s biotech industry, for example, is really not showing much progress in taking off as expected), it is very early in this part of the adventure to make definitive statements. But again, conflict management assists us in granting us a wider, more practical perspective on the details of this race. We need to, and should, generally monitor and be aware of the arms race, but from a conflict perspective the very fact that it exists is already sufficient to raise the red flags and to inspire us to a better understanding of this seemingly remote part of our worlds. 


Was the Cold War, at any stage during its lifespan, the less important, the less dangerous depending on who was argued to win or have the edge in it? Can we even meaningfully talk of a winner in such a global conflict? The very fact that this competition, for these high stakes, exist, in itself is a complex conflict completely removed from the eventual result. If we are right in our assessment of the importance of AI in future empire building and world domination, even of an economically advantageous and / or benign nature, then just about everything else in the conflict cupboard will be subsumed under the AI banner. Nationalism, fascism, autocracy, surveillance, misinformation, even physical expansion will all be easily accommodated in the AI tent. “All geopolitical excesses and abuses can be better with AI”, could become a modern slogan, whispered if not placed on political manifestos. This AI arms race will establish and drive the various conflicts flowing from it. 


In fact, as we have seen with the Cold War and several other global conflicts, the very fact that a country or group may realize that it is losing such a conflict may in itself cause very dangerous insecurities and defensive conflict patterns. Cyclical conflict patterns of the making and breaking of global alliances, promises made and broken, all in the execution of this AI arms race will become the norm, with fairly predictable geopolitical conflict outcomes. A crucial conflict dynamic that is sitting there in plain sight at this stage, and which I believe will play an enormous role in deciding and shaping these AI skirmishes, and no doubt have hugely important consequences for all or most of us in the West, would be the meaningful differences in how the two societies, China and the US, approach a variety of rights and laws, and what their governments can do and not do in these all-important conflicts. 


Here, worldviews, political systems and their boundaries will be far more than philosophical niceties to debate at the dinner table. As we will consider, political constraints, even though phrased in laudable terms and being perfectly reasonable and arguably morally superior, may have distinct disadvantages in the AI arms race and its downstream consequences. Very simply put at this stage, I am not really convinced that the Western cliché that a free and democratic society will allow greater and faster technological growth and development to take place than a more autocratic one. Those 1950s comparisons of American vs Russian living rooms, and related comparisons, are of limited modern use. China’s AI expansion efforts are heavily supported and funded by their government, and creativity and innovation is both encouraged and valued. Lee proudly talks of China’s “gladiator entrepreneurs”, and we can see their work all around us. 


Here again, a few Cold War comforts will need redrawing. Add to this a commercial environment in China where we are dealing with a largely collectivist society, where our Western idea of copyright simply does not exist, where marked levels of autocracy are both expected and tolerated, where very different work ethics and legislation apply, and the playing fields seem heavily weighted in favour of an eventual Chinese domination. The success that they have shown in the establishment and implementation of TenCent, WeGo, TikTok, to name but a very small set of examples of technological and commercial successes, challenges some of our comfortable democratic and free society assumptions. 


As a presumably self-evident advantage in the AI arms race, a free society may not have the edge it is thought to have.  Regulation, consultation, legislation, safety measures, political activism, litigation, public outcries, dissent, election pressures and much more are all obstacles that a dictator or autocratic government may not need to deal with. This of course does not now mean that we need to change our evaluation of those free societies and its other benefits, but it is most certainly a crucial factor to heed and make provision for in any political or commercial conflict strategy of the future. An absence of commercial guard rails and legal protections that are regarded as all but axiomatic in the West, may very well be strengths in other countries when we do that comparison and measure eventual arms race results. 


For example, Qihoo 360, China’s then leading web security software company, at its commencement, simply copied Internet Explorer’s logo (but in green) and started trading. The 2008 – 2010 open warfare between Qihoo, Tencent, Renren, Groupon and Kaixin001 would be particularly instructive on the rather boundary-less commercial conflicts which Chinese companies have within China, much less when it comes to any outsiders. Their competitive conduct, to put it politely, would in somewhere like the US, be met with an endless barrage of social ostracism, antitrust investigations, litigation and political or regulatory oversight. Within this environment we see the excellent expansion and development shown by companies involved directly or indirectly in AI development (and hence the AI arms race) such as Baidu, Alibaba, Tencent and WeChat, just to remove any remaining delusions of automatic purely capitalist or democratic advantages. China has access to data in both quality and quantity. China’s number of internet users exceed that of the US and Europe combined. 


There is also a subtle but important difference (so far) in what data is being harvested, with the US tending to gather information on our online behaviour (searches, videos watched, photos uploaded) while China focuses more on actual behaviour, such as transport patterns, meals ordered, actual purchases and so on. This provides its deep learning programs with a very high quality of information of what is happening in the “real world” (Lee 55). AI expertise and government support round off this assessment of Chinese potential in the AI arms race, and it is hard to see how Lee (and others) can be far wrong in their optimistic assessment of imminent Chinese domination. The current Chinese lagging in generative AI systems will also not be an enduring shortcoming, I would think. 


The US-China AI arms race is causing concern in expert circles and among individuals that know the risks that such conflict escalations can bring about. Henry Kissinger, who few could match in diplomatic experience and knowledge of the effects of global conflict, in 2021 warned against this race escalating to an all-out AI high-tech conflict. He warned against any Chinese hegemony (he seems less concerned by a US hegemony) and called for the peaceful co-existence of these two nations. For those skilled in diplomatic reading-between-the-lines the very existence of such a statement from someone such as Kissinger should come as a chilling reminder of what is at stake here. 


Why should the latest iteration of global domination Olympics concern us at all? As we will deal with in the book elsewhere (and Lee concedes this), we are on the edge of a new technological colonization. We are but a few years away of smaller and slower nations becoming data fields to be mined, and when these countries fail to keep up with the AI developments and expansions, as inevitably they must, the power plays and concessions that will need to be made in order for them to receive the scraps off the AI table will impact each and every country. The growing gap between the US, China, possibly Russia and then the rest of the world will in all likelihood expand alarmingly in the next ten to twenty years, with direct and indirect conflict consequences across the entire spectrum of human endeavour. The perceived benefits of so-called cheap labour, once viewed as a strategic balancer of fortunes and bargaining positions, will diminish and disappear very soon, again leading to power and other conflict dynamic disparities. 


Entire large lower classes of economic labour will shortly simply disappear, when even these bottom of the chart minimum wage earners can be replaced with even cheaper AI labour. Once we understand these dynamics, the ways in which AI will shape a new AI geopolitical world order become quite clear. If we are right in seeing how a free and democratic society can have some very real drawbacks in the new world, some of our more domestic rights and hard-earned freedoms could very well be lost, or eroded at best. For politicians, as for military and national security commanders, the temptations and the power flowing from AI capabilities may simply prove too much, and the temptations of keeping up with the enemy, or the neighbours, or the inevitable socioeconomic trade-offs that the new AI calculus will demand of us, may prove to be of great harm to citizens and individuals in the not-too-distant future. 


We can already see some of the shape of things to come in actual, current applications of AI in everyday social life, where the Chinese experience is very illustrative. Driverless trucking routes, the ubiquitous use of facial recognition systems on public transportation and in public places, and the building of vast grids of information into so-called “city brains” are all either already in use or in advanced stages of planning and implementation (Lee 82). Here Alibaba has, for now, taken the lead on these “city brains”, which should become more and more normative in developed and suitably developing countries, where massive AI-enabled networks improve and enable security and service delivery by using data from CCTV footage and data, social media and location-based apps. These are all examples of geopolitical developments influencing our social and economic realities, which then in turn flow back into further geopolitical realities. I anticipate some very interesting and far-reaching geopolitical debates and conflicts in the West in the next few years resulting from this interplay between proven administrative and social control (call it service delivery, if you will) on the one hand, and the expected constitutional and privacy concerns on the other. 


We may very well see, as an important if unintended result of this AI arms race, a redrawing and reconceptualization of our human and constitutional rights, a free society and democracy itself. We may, in time, be convinced that a certain loss of those rights of privacy and a free society may be worth losing in relation to possible gains along the spectrum of benefits that AI may bring. An important part of the AI arms race is the Chip Wars, with a complex and ever-developing race for the AI chips (computer chips, semiconductors, transistors) necessary for development and application, or more practically stated, for effective participation in this race. Generative AI is one of the more important areas of this competition, with Nvidia’s GPUs for now setting the standard in many respects, including in becoming the 5th most valuable company in the US by mid-2023, but with Microsoft, Dutch tech company ASML and others competing with these dynamics in trying to build their own chips for use in the training of large language models. Intel and AMD are all strong contenders to compete meaningfully with Nvidia, These types of chips are crucial to the development of the entire AI project, from autonomous cars to facial recognition, and in themselves must create and drive new complex conflicts.   (see for example Chipmaker reported Q1 earnings for its fiscal 2024 that drove shares up more than 25% in after-hours trading. (forbes.com) for the popularity of Nvidia shares). 


The scale of these AI arms race skirmishes tell the story of the importance and scope of these developments, and why they should form an intrinsic part of our conflict studies and strategies.  One particularly illustrative example is Nvidia’s involvement in Israel’s new AI supercomputer, costing “hundreds of millions of dollars”, a project that should be partially operational by the end of 2023. The system is called Israel-1 and will be one of the world’s fastest AI supercomputers, delivering up to eight exaflops of AI computing (one exaflop can perform 1 quintillion calculations per second). Nations involved in this arms race, whether directly or indirectly, already have to keep a very close watch on its competitors, with the mercurial and unpredictable nature of tech development adding a few brand new conflict drivers to the existing mix. The role of these components in the range of conflicts that arise from the dependency on these chips mean that several complex smaller but related contests must be added to the calculations and strategies in play, such as access to the chemicals involved in such manufacturing processes. EU wants copyright rules for generative AI - TechCentral And these dynamics make an AI arms race escalation inevitable, and with that comes inevitable conflict across the full spectrum of human endeavour. Yoshua Bengio, one of the AI pioneers, believes that this AI arms race has become dangerous and harmful to our human interests on many levels, including the meaning of our concept of the truth itself. 


We are, even as individuals, already participating in that AI arms race. Have a look at the sheer enthusiasm with which Google and Microsoft, as merely two popular examples, vie for our personal office AI tools. AI is already in our own hands, a tool that we can use from our home offices, enhancing our performance, our days and our expectations of what is to come, and a driver of what should be delivered by these tech giants. We are a part of the conflict cause, in many respects, not just consumers or someone somewhere on the beneficiary / victim spectrum. 


Europe 

AI will shape geopolitics in Europe in very particular ways. These influences and developments will mostly happen in an environment where a variety of existing conflicts must be borne in mind, and which conflicts will in themselves shape the direction that AI development will take. If we needed a reminder that these AI conflicts do not occur in a vacuum, Europe is a good illustration. As it is, the European Union and its member states are already investing in and implementing a range of emerging and disruptive technologies. As should be expected, the US-China AI arms race has far-reaching conflict consequences for Europe. Expert in disruptive technologies and international security, Dr. Simona R. Soare sketches the current European conflict canvas as follows: “Brussels fears the geopolitical and geoeconomic consequences of lagging behind the US and China, France fears “irreversible dependencies” and Germany fears the loss of European geo-economic competitiveness if Europeans do not strengthen their technological sovereignty and strategic autonomy. The US wants Europe to invest more in defence and safeguard against malign Chinese investments in key technologies and critical infrastructure. 


However, Washington remains sceptical of European strategic autonomy for geopolitical reasons, particularly if it leads to a European “third way” or European “equidistance” between the US and China. Beijing is encouraging European strategic autonomy notably from the US, and Russia is sceptically weighing the alternative futures of the European project. In short, international actors do not see sufficient strategic intent and technological and hard power behind the EU’s narrative on strategic autonomy. As demonstrated by the 5G debate, the US and China perceive Europe as the grounds for geopolitical confrontation - an outcome Brussels and major European capitals badly want to avoid.” (Cristiano 77) A whole mare’s nest of conflict causes and triggers, current and future, eloquently set out in one paragraph. European geopolitics have always been extremely complex, with an interdependency that, in nearly every conflict, exacerbates the existing considerations, but with the AI arms race these nuances and incompatible interests seem to be even more intractable in many instances.   


Whether AI, in direct military application or indirect security usage, now contributes to greater European autonomy and conflict stability, or an increase in regional insecurity and threat levels, remains to be seen. The 2022 Russia-Ukraine war has understandably highlighted many of these existing regional and relationship insecurities and concerns, and the next few years should provide a rich vein of conflict study material and practical, useful experience for other geopolitical arenas as these conflicts play out. 


5.3 Conflicts in cyberspace 

Just as assets and interests, of a commercial or security nature, have become seemingly less tangible and are now stored in banks of data, so too have attacks on these interest become more digital, far more sophisticated, and in the process so much more difficult to contend with. With this increase in technological ability and reach, we also see a diversification of conflicts across the geopolitical range. We will consider a representative sample of these cyber-conflicts, and extend our understanding of these modern forms of conflict accordingly. We should note here that in assessing cyber security and conflict debates we are not always necessarily dealing with AI systems (only), but with advanced technologies that fit in our wider understanding of conflict creating advanced technologies. 


We have already assessed the practical difficulties inherent in regulation as a conflict strategy. Cyber conflicts are beset by those, and additional intricacies, often caused by the nature of the technology itself, or a lack of clarity and political will around regulation itself, make this a highly complex and volatile arena. International law expert Dr. Jack Kenny points out that only a “limited number of states” have made meaningful declarations on how they consider international law to apply to the intricacies of cyber operations (Cristiano 231). The covert nature of much of these operations, as well as conflict dynamics such as the problems with attribution and liability assignment, all play into a murky geopolitical reality as far as cyber-conflicts are concerned. The use of disinformation, deep fake attacks, interference in elections and various other sophisticated forms of cyberattack all make for advanced geopolitical conflict battlegrounds that need a new understanding of both the technology as well as the conflict principles involved. 


So-called “offensive AI” can be used by individuals, rogue groups, proxy opponents, commercial competitors and a range of other attackers to execute a similarly extensive range of attacks including theft, fraudulent activity, spying or infrastructure interference. Given AI’s capabilities and speed of such execution, the only really effective strategy with most of these attacks would be to fight AI with AI. This, as we have seen, can be problematical. Cyberattacks have also engineered its own range of brand new conflicts that require their own specialized knowledge and strategies. 


So we see, for example, the occurrence of data poisoning (where training data is manipulated and compromised), model stealing, where an attack can access an AI model and system, surreptitiously, and either interfere with its operation, simply steal benefits from such system, or use it for its own nefarious purposes, or large-scale denial of service attacks. Most of us are aware of the dangers of phishing (and there is smishing, and vishing…), and with AI’s increased abilities and ease in impersonating specific individuals such as corporate management figures and even people known to us, these new conflict frontiers affect us all. The speed at which these technologies are developing also make effective countermeasures essential and costly. These strategies would include effective system monitoring, AI behavioural analysis and the adequate and ongoing training of cybersecurity staff. Here we should see a modest growth in a few very specialized human jobs in order to effectively steer and safeguard systems and assets. 


Geopolitically distance and conventional borders are of course now hardly a barrier to these attacks. The statistics involved in these attacks tell a grim tale. Striking examples would include the fact that in the first half of 2020 no less than 36 billion data records were compromised, on average only about 5% of an organization’s files are protected against such attacks, and it can take up to six months to detect an attack, with another six months necessary to contain it, if the attack is sufficiently sophisticated. With the rapid and extensive adoption of AI across the various areas of human endeavour, the threat surface for these attacks and the resultant conflicts has of course increased exponentially and to a point where even understanding the threat level, much less the solutions, becomes a challenge. 


The end of the Cold War (should we, like with the World Wars, start talking about The First Cold War?), and the last three decades in particular, has seen many developments of offensive and defensive weaponry extend into cyberspace, and many countries have understandably entered this new and highly specialized potential battlefield. This now however also brings into play all of the related challenges and risks of modern cyberwarfare, such as the fluidity of the truth, the shifting borders of misinformation, reality being shaped and technology being abused in new and creative ways, in a new world where monitoring of your opponent’s capabilities and movements have become a very different challenge, and where even accountability and responsibility assignment have taken on new facets. 


All of these developments bring new capabilities and new vulnerabilities. The lines between conventional commercial espionage or attacks and more military attacks have clearly started to merge, with the hybrid nature of some of these giant tech companies making this rather inevitable. Cyberwarfare, as it is often referred to, is distinguished from more conventional warfare (even in the AI age) in that these attacks are often not designed or intended to cause conventional damage, with other goals such as information theft, financial or infrastructural disruption and so on, as we have seen. 


Conflicts in cyberspace have also started blurring the boundaries between the more conventional and easily managed understanding of war. These conflicts often defy easy classification into war, or peace, and end up in what Lucas Kello calls a state of “unpeace”. This has very important consequences for these new conflicts, and AI (specifically quantum computing, machine learning, natural language processing, deep learning and neural networks) has opened vast and quickly developing new frontiers of attack and defence, so much so that even these traditional concepts (attack and defence) are often interlinked in practice. AI’s possible convergence with quantum computing (using 5G and beyond technologies) all add vast complexities to these conflicts, especially to the various potential outcomes involved in the AI arms race. Here Google’s own quantum computer and its impact on AI systems is a current point of interest to monitor. 


AI is perfectly equipped for cyberattacks, with its advanced pattern recognition, incredible speeds and the capacity for machine learning. The debates about cyber warfare involve diverging issues and disciplines such as the automation of information (and the possible reduction of human interaction and control), the arguments about the development and use of lethal autonomous weapon systems (LAWS) and decision making and the limitation of access to civilian data. All of these are clear examples of the creation of new levels of conflict, or the recalibrating of existing ones. Again, as we have seen before, some of the tried and trusted conventions, rules and strategies will have to be reconsidered and rewritten. 


These conflict realities will logically change not just geopolitical assumptions, but also other linked arenas like the modern workplace in important respects, including a range of traditional workplace conflict areas such as confidentiality management, restraints of trade, intellectual property considerations, the monitoring of staff and various disciplinary management categories. It further follows that these capabilities and the strategic importance of these technologies play into the larger AI arms race in all of its manifestations and levels involving various allegiances and positioning, ongoing geopolitical conflicts and the very important developing question of digital sovereignty. Geopolitics and cyberwarfare have become vastly more complex, just as some of its original operational challenges may have been removed or eased. Cyberwarfare experts regard it as perfectly feasible that such attacks could include the hacking of electronic and water grids, the hijacking of healthcare computers and so on (see eg. Payne loc 1515). 


The fact that cyberwarfare so clearly involves national and international security, and the fact that the systems involved as targets may so easily include civilian computer infrastructure at some level, is bound to increase our conflicts at various levels of exposure, risk and complexities. As we have noticed, new technologies bring new conflicts. The building, development and maintenance of these systems must, nearly by design, create new actual and potential conflicts, some with far-reaching consequences across a range of conflict areas. With new abilities come new vulnerabilities. A few simple statistics on one important geopolitical development, that of undersea cables, should give us an illustrative example of how serious some of these vulnerabilities are, and what the consequences of such conflicts may be. Undersea cables transport some 95% of global internet data. NATO reports that approximately half of those cables are deemed critical. These cables, largely unprotected and vulnerable to even crude attacks, carry approximately $10 trillion (US dollars) per day in financial transactions. 


The SeaMeWe-6 cable system is an example of such an information system. It will be a brand new system that carries data between Asia and Europe via Africa and the Middle East, and this project will exacerbate various geopolitical tensions and conflicts, especially between the US and China. The cutting of two undersea communication cables between Taiwan and its Matsu islands in early 2023, disconnecting 14 000 Taiwanese citizens on the islands from the internet, with all of the suspicions and stage whispered accusations that followed, gives us a glimpse of the intensity of these conflicts and what may very well be a not-so-rare occurrence in the future. Here also we again see the merging of military, geoplitical and civilian interests. These subsea cables could very well become one of the major cybersecurity battlegrounds between the US and China (and several other formal and informal other players) in the near future. The design and deployment of these cables are said to allow for eavesdropping and interception of information, even without actual disruption. In March 2023 Qin Gang, the Chinese Foreign Minister, thought it necessary to remark that China and the US will remain locked in “conflict and confrontation” until such time as the US ceases its philosophy of “containment and suppression” of China. 


These cyber-capabilities have changed the applicable conflict strategies on all levels of political and diplomatic interaction, and accordingly have changed the dynamics of the very relationships and alliances involved. It is inconceivable that any major nation would not at some level participate in this arms race, in keeping at least somewhat in touch with these capabilities, even if that is only defensively. These technologies and systems exist and here in particular the die is cast for these countries, and opting out seems a quaint, even irresponsible consideration. While some experts (Payne loc 1532) believe that we are still in the early days of cyberwarfare developments, it is undeniable that we have set foot on some very dangerous new ground in our conflict history. 


5.4 Cyberattacks and the attribution problem 

One of the many new facets that cyberwarfare brings to conventional conflict is the attribution problem. While international law applies to cyberspace conflicts, a conflict where you cannot accurately determine your opponent or assailant makes the point a rather academic one in certain instances. We have earlier dealt with this developing phenomenon in general, and here we can note that in the geopolitical arena this reality adds several new nuances and complexities to a variety of conflicts and diplomatic engagements. Given the stealth capabilities of some AI systems, the emergence of proxy wars and a range of informal conflict actors, often dispersed globally, the simple attribution of a cyberattack could be a practical, diplomatic and operational challenge with new dimensions and considerations. How do you know who attacked you? 


As Angela Merkel’s complaint about Russian interception of her emails in 2015 and the Mueller inquiry into foreign interference in 2016 show, as well as the general allegations and confusion surrounding interference in the 2020 US elections, hacking and other cyberwarfare interference, including the allegations of such activities, are already happening, and some very sophisticated technology is required to defend against such attacks. While the art of interfering in elections is a very old and popular state hobby, and we need not get too involved in the who did what to whom and when part of the debate, we can accept that such efforts will simply continue, at a much more sophisticated level. 


Tracing and accurately ascribing responsibility for a cyberattack can be a remarkably difficult task, with the possibility of misdirection and false flag operations being accepted as very probable eventualities. Even locating an attack to a specific geographical area or digital address may have limited practical use to the victim of such an attack. Assuming that a future motive exists to attack corporations or individuals at this level, what possible chance would there be to detect or defend against such an attack? If countries and large corporations are struggling to keep up and maintain efficiency against some of these cyberattacks, what possible chance does the average citizen have against such assaults? 


Would this become a future political expectation, a human right – to be protected and kept safe from such interference and harm? To what extent can, and should, we expect our digital gatekeepers to protect us from these threats? So far adequate institutional protection has been seen more in the promises than the delivery departments. In conventional geopolitical engagement a more effective and, arguably more probable, development would be some form of cyber balance of power, some form of upgraded system of the Cold War tightrope walking strategies that would see nations arm themselves in these capabilities while at the same time trying to achieve some form of responsible limitation and deterrence argument. This could include attribution protocols that could, to a limited degree, assist in such a problem. We have had some meaningful debate and regulatory developments in cyberspace conflicts (war in particular). 


As far back as 2013 NATO, through its ambitiously titled Cooperative Cyber Defence Centre of Excellence (the CCDCOE), drafted a document that eventually came to be known as the Tallinn Manual. This document sought to set out the boundaries of humanitarian law (as it was then understood) on the conduct of various types of wars (ius ad bellum and ius in bello, in particular) in cyberspace. Understandable difficulties in the accurate drafting of this relatively new form of conflict, technological developments and practical difficulties and anomalies arising from that drafting then caused a necessity of the redrafting of such document, with the 2017 redraft of particular value, and which process then culminated in the 2021 Tallinn Manual 3.0 Project, as it is known, a realistic five year project trying to keep up with developments and the resultant or preventative drafting of this document. This draft recommended the establishment and maintenance of an international regulatory body (named the AIIP – the Agency for Information Infrastructure Protection), and in 2021 various UN experts and working groups acknowledged the dynamics and challenges of, on the one hand, accepting that humanitarian and international law apply to cyber conflicts, but on the other hand grappling with how this plays out and should be applied in practice. 


Various measures and strategies to build confidence, trust and to enhance cooperation have resulted from these activities. From this, can some sort of cyber balance of power, an effective level of deterrence be established and maintained as a conflict tool? While the detection and monitoring (and dismantling) of nuclear warhead sites were relatively easy to justify and execute, how would this be practically managed nowadays? The potential of direct harm and disruption flowing from cyberattacks are clear enough. Years ago the Stuxnet attack already showed the level of sophistication available in targeting very specific, defined targets. There are of course also potentially indirect conflict outcomes that could be extremely disruptive. One of the recognized strategies to defuse or prevent a cyberattack is for a target nation to disconnect its systems, which would include its internet and other cyber services and facilities. Even a successful defence can then, in this example, prove to have adverse consequences for the general public. Cyberattacks, now driven by AI, brings new possibilities to our conflicts, some of which I believe we have not yet properly understood and taken into consideration. 


5.5 The influence of artificial intelligence on political worldviews 

Crucial to our understanding of upcoming global political conflicts, and its inevitable direct and indirect influences on us personally, is a clear understanding of the differences in how AI is viewed, created, managed and applied by different political worldviews. This is, maybe more correctly, a two-way street of mutual influence and power, as worldviews also influence AI and the direction it can and does take. Here again we need not speculate. The current dominant AI environments, in simple forms, show us, as one example, the US with its democratic values, largely free market and capitalist corporations who have (at least in theory) no obligation or contact with the government of the day, and where a range of constitutional rights and specific norms are upheld by the rule of law. 


An environment that sees organizations like Google and Meta running into conflict with government oversight and regulatory efforts, where claims of free speech and privacy can be heard, and where the courts and legislators can play an oversight role. An individualistic society, where there are no real limits to personal developments and rights, where the individual is regarded as the ultimate arbiter of her or his best interest, with limited state intervention. In the other corner, we have China (and we can include, to a limited extent, Russia), a state where collectivist values are paramount, the rights and interests of the individual are regarded as secondary to the interests of the nation (as represented by the state), where rights of privacy and private property are curtailed greatly in comparison to the US or European experience, and where the boundaries between state and corporation are often more of a pretence than the reality we see in the US. In the collectivist state, the state will not be bound by the same limitations in place in the other, more libertarian state. Questions of copyright, access to and protection of intellectual property, security and data breaches, limitations on the gathering of data, and the eventual use of some of these AI applications will all be severely more developed, strict and a factor to be considered in the US than in China. 


These divisions will play itself out in a myriad of other countries across the globe. The political, social and economic worldviews held by a nation, or even more regionalized community, will increasingly play a role in how AI is received and applied in that area. These worldviews in themselves will be the causes of future conflicts and efforts to resolve them, as we will see. This underscores our earlier observation that technology is not to be seen as truly value neutral, and that is viewed through a variety of perceptual lenses, creating or preventing conflict in the process. These worldviews will have a dynamic and ongoing bearing on the application of these technologies. One community may accept and even welcome the development and presence of some form of facial recognition technology in a commercial area, but fully expect transparency and limitations in its application, while another community may, in contrast accept the use of such technology and have far less expectations as to its transparent use. 


Our worldviews shape our identities, and our identities shape our conflicts. Governments, politicians and other community leaders will increasingly come up against the great benefits (from their perspectives) arising from AI systems used in the process of worldview manipulation versus the boundaries of Western ideas of freedom of speech, privacy and other civil and constitutional rights. In instances where they prefer the former category I have no doubt that we will be seeing subtle but very effective campaigns at social and worldview (on specific topics) engineering taking place to align these two concepts closer. This, I believe, will for a decade or so require a variety of social, security and constitutional attacks on human and other rights in the West. The depth and complexity of these conflicts can be grasped from the outset, when we realize that such efforts will all be done “for the common good” and “for our benefit”. 


As much as Prof Shoshana Zuboff talks convincingly of “surveillance capitalism” in her wonderful book “The Age of Surveillance Capitalism“, these security and constitutional breaches are often more directed at the general public than against the developers and implementers of AI technology. In democratic states the so-called will of the people may find expression in who they appoint as their leaders, but that can, and will be manipulated in ways that will make disinformation campaigns like the 2016 and 2020 US elections look like the good old days. 


Democratic states, in all the hues and shades of that concept, may also be hamstrung by privacy laws and oversight institutions to further complicate the creation and maintenance, as well as the expression of, these worldviews.  Free speech, as limited as this may be, and even if it is just via social media or a free press, would  limit such a democratic country from succumbing to some of the larger temptations on offer from advanced technology. A more autocratic or fascist state may not have such limitations, and this could of course give them an undeniable edge in the development and deployment of AI, an environment where an advantage of a millimetre could grow to a mile in a very short space of time (alliteration wins out over metrication in this instance). Worldviews, in turn, give expression to a variety of various lived political systems and realities. 


So for example, conflict studies show that AI can link to populism in a variety of ways (Coeckelbergh 2022 106). Once we have a variety of AI driven and enabled processes interfering with the general public’s ability to discern truth from falsehood, once the possibility of informed decisions become diminished beyond a certain point, democracy can start to look very different from what we may be used to. 


We tend to think of democracy as a freely allowed, only necessarily circumscribed expression of our democratic wills, but what if those wills, those worldviews, are constantly interfered with, influenced and  polluted by technology, mostly without us even knowing that this is being done? How far is democracy away from simply being the opportunity to choose our own highly curated, socially engineered version of someone’s reality? What proportion of our worldviews are still really what we want to believe and accept? AI has made the creation of worlds possible, and we are going to freely live in those worlds. 


At a certain simplified point, the worldview, cultural, religious  and societal differences found at various different local levels can simply be respected (to the extent that they are respected at this stage), and AI need not interfere with these expressions. But, as I suspect, the interconnected nature and disparate socio-political expectations will inevitably come into contact and conflict with each other now that we become even more interconnected. Worldviews will clash in ways that we could up to now avoid by geographical isolation. We may be increasingly called upon to accept and co-exist with worldviews and their expressions that are accepted in one part of the globe and rejected in another. One example could be the Saudi app called “Absher”. This is a freely available app (at the time of writing I accessed it via Google’s Playstore), designed by the Saudi Interior Ministry, where Saudi Arabian men can exercise their “guardian rights” over women by accessing and tracking their locations and blocking or restricting their ability to travel, access and execute financial transactions and even to obtain certain medical procedures (Kanaan 217). Millions of people will simply agree with the premise and execution of the app, while for millions more it may prove to be offensive and in breach of those women’s rights. An app making pornography freely available probably offends those groups in inverse order. 


At a local level these questions seem easy enough to manage: people should have the right to self-determination and to conduct their affairs as their cultures, religions or societies determine. Questions of sovereignty and jurisdiction, self-determination and localised solutions all fit well, in practice. But AI will dissolve these neat and convenient borders even more effectively than the internet and social media have done. Social responsibility and issues of social justice, brand management and incompatible moral objections will be heightened and brought to the fore more than ever. Would governments, corporations or national platforms be able to boycott certain countries or groups because of these real or perceived injustices and abuses? Can a corporation continue to make its AI products available to such a country or group even though its own home country disapprove? Where do proprietary rights and pressure groups take these arguments? 


What are the implications of data gathering, storage and distribution for those disparate communities? These worldview differences, and the resultant domestic conditions in which AI is applied and implemented, can have tremendous future consequences for our global conflicts. Wartime atrocities committed by a country where a free press could report on it, and where images of such atrocities could cause a government to collapse, would be a far cry from those atrocities committed by a country where there is no free press, and where its population do not know of such acts, do not care about them, or are not effectively allowed or able to speak out against them. AI assisted technologies already make this distinction a reality, with everything from “alternative facts”, signal suppression, synthetic realities or deep fakes and so on making new levels of society control and propaganda manipulation possible that would have had Joseph Goebbels grinning. 


Freely available technology makes it possible to create alternative realities for mass consumption, where revered leaders and hated enemy leaders can be created and displayed over and over until the truth becomes just another option. We have automated disinformation, and people continue to build their worldviews on these manipulated tales. Many of the modern challenges facing politicians, including those in the West, can very efficiently be addressed by using AI-driven systems and machines. Opinion formation and maintenance, influencing, intelligence gathering, crime and crowd control, communication with various levels of individuals and groups – these are all problems for which AI has cost-effective, unique solutions. With new technology comes new ethical and moral questions, and it is here, with these new and often unprecedented new frontiers that we face some of our biggest future conflicts. As mentioned earlier, I predict that with these new conflict causes and parameters, we will soon be facing new ethical conflict boundaries and quandaries in democratic societies. 


We already seem to be highly offended if our own telephone communications are intercepted by the state when at home, but rather indifferent when it happens to those bad people over there, a continent away. We do not want the RQ-170 Sentinel drone over our neighbourhood, but it is acceptable when it conducts surveillance of Osama bin Laden’s hideout in Pakistan. With the power of surveillance and destruction that AI already possess, the necessary boundaries between states, groups and individuals will be tested like never before. Some very difficult decisions will have to be made by modern citizens living in the age of AI, and some of those difficult decisions are going to be made for us, without us being at the table. 


Many of these conflicts, in the end, will be with ourselves and our consciences. As always, our worldviews may very well be the final arbiter in what our future societies will accept and what not. And who will be the puppet masters in shaping those worldviews when AI starts showing its dark benefits to besieged politicians and geopolitical leaders, how difficult will it really be to use these new technologies to align our worldviews with what is perceived as beneficial social behaviour, the national interest and other favourite political slogans? What tremendously harmful conflicts lie ahead when these explosive debates will need to occur in an already heavily polarized society? 


Not convinced of the new challenges? Let us assume that an unmanned AI-assisted drone hovers over your neighbourhood for a few days, and in the process, picks up a telephonic conversation between your neighbours plotting the assassination of a senior politician. Can this information be used to arrest and detain them, and as evidence in their trial? If one of those participants was your son? What if the overheard crime was the illegal sale of a small envelope of crack? Where would you draw the line? Can an insurance company use a heart murmur picked up from your exercise watch to affect your insurance premium? Can your college application be influenced by video footage of your sexual activities that you regarded as private but collected by our friendly neighbourhood drone? Can we use those technologies against “the enemy”? Where do those fences run? International terrorists? Insurrectionists? Racists? People who are “clearly wrong”? 


Your answers to all of these existing questions all flow from your worldview. Our worldviews are important to us, and they have real world consequences, maybe now more than ever. Again we need to remember the short causal chain we are working with in these conflict considerations. A worldview can, and probably inevitably will, influence the algorithms designed and implemented by the AI operators in any given country. This will then, again rather inevitably, lead to bias intended or unintended, in how AI assisted technologies either make decisions for us or give us information on which we base our decisions, and here again we inevitably end up with the product that we started building when we started writing the algorithm. 


It will take a particularly hardy type of naivety to believe that the algorithms designed in a totalitarian dictatorship will always lead to the same place as one designed under the scrutiny of a democratic state. Algorithms and other AI “minds”, from chess games to facial recognition programs, are all written with a specific or discernible goal in mind, and those goals are not the same, our worldviews dictate them, on conscious and subconscious levels. Add the black box problem (where we are not able to discern how a result or decision was arrived at), and worldviews become essential drivers of AI outcomes and their resultant conflicts. For now at least, understanding how a decision was arrived at may be impossible or extremely difficult, and we may need to show with a high degree of accuracy and certainty how such a process unfolded, whether that is in military assessments, legal or security attribution and liability questions, fairness and inclusion inquiries and so on. In these few examples of very probable or even already existing scenarios we see how worldviews aid and abet the causation and drivers of our conflicts. 


Bergstrom and Jevin point out that    “Understanding the intricate details of how a specific algorithm makes decisions is a different story. It is extremely difficult to understand how even the most straightforward of artificial intelligence systems make decisions. Trying to explain how deep learning algorithms or convolutional neural networks do so is more difficult still.” (Bergstrom 198) Here we see the clash of worldviews and bias, intended or otherwise. How are our geopolitical leaders going to deal with these realities? 


We have seen the extensive range of crucial decision making processes and real world areas that they are going to be able to manipulate, so here again we need to be constantly alert as to these conflict causes and drivers. Arguments about the supposed purity and objectivity of AI decision processes are often a misunderstanding of the basics of algorithm design, or a tired abdication of responsibilities that belong with us. Even at its highest current levels, and where deep learning is occurring and the machine develops its own “thinking” about a specific problem, that decision making process is untroubled by conscience, by ethics or an understanding of consequences. In the popular computer game Fallout 4 a robot is given the task of reducing suffering in the human community. This it “achieves” by ending such suffering by killing all humans it encounters. The good and lofty intentions written into the coding of AI programs may not always translate into similarly lofty results. When AI becomes an unprecedented tool in the hands of those with extreme worldviews, we start redefining our conflicts. 


Worldviews, and their resultant competencies and limitations as this manifests in the hands of states, multinational platforms and interest groups, will continue to shape and steer AI development and application. While these differences may seem insignificant at first, I believe that we will see increasing divergences in both those spheres, and that ultimately, both the development and application of these technologies will converge again, led by the lowest common denominator and survival instincts of the least constrained of these participants. This will create a unique conflict environment. These developments, this AI arms race, will shape those conflicts, some of our ethical boundaries, our politics and even the understanding of democracy and a free society in itself. What is this free and open society in the new age of AI? 


If, through these modern technological strategies, such a society gets manipulated and nudged into conformity, is it still a free society, would truly free and dissenting voices still be heard? I believe that AI will do more for autocracy than for democracy, and that this will create important conflict disparities, at the very least in the global and geopolitical arenas. While there is a popular perception of the superiority of democratic and free societies in all of these global conflicts, the evidence in the application of AI technologies in democratic vs authoritarian states shows that these perceived democratic benefits may be negated by the use of such technology in more autocratic states (Reuben 116). 


This is a complex geopolitical conflict reality that will need extensive scrutiny and research over the next two decades. This again may drive some very uncomfortable debates and political conflicts in the West, at least for a while. Yuval Harari states bluntly that “technology favours tyranny”. This may not always have been the case, but as our investigation shows, AI may very well be making this troubling viewpoint demonstrably true. 


I am not convinced (yet) that the much vaunted Western entrepreneurial spirit and perceived endless commercial possibilities still exist at such a level that at least China cannot compete and exceed those influences, such as they may be at this stage. Kai Fu Lee speaks of China’s “entrepreneurial gladiators”, and China’s performance shows that the story of the West as the unfettered, unparalleled home of invention and progress is in dire need of reassessment. We should also expect geopolitical clashes involving human diversity in all of its forms, as it grapples with the AI revolution in the near future. 


Much of the value and attraction of AI in a geopolitical sense lies in its standardization of processes and its predictability of outcomes. That predictability naturally feeds back in to standardization. How will AI affect human expressions of diversity, of culture, of our wonderful differences? If we ourselves (or at least some of us) struggle so much to make peace with cultural diversity, what do we expect AI to bring to these conflicts? Regional cultural and identity expressions of such AI power is good and well, but it does not begin to deal with the inevitable areas of our new world where these local solutions will not work. Of course algorithms can be designed to respect and give expression to cultural diversity, to differences of opinion and worldviews. This should technically not be difficult at all, and we already see AI being able to target political communication to groups that hold widely diverse points of view. But will this allure of homogenization, of “the beauty of gray”, of a one size fits all population always be resisted? 


Are we not already seeing geopolitical efforts in that direction, with subtle social engineering, with the carrot and stick behaviours employed by certain governments in resolving internal conflicts, and with China’s social good citizenship program? Personal, community and at least national conflict of a wide array of methods and motivations are bound to follow, at least in the next decade or two, as people start noticing and pushing back against such standardization policies. 


GEOPOLITICS: CONFLICT OPPORTUNITIES  

AI will most probably do some of its most positive disruptive work in the political arena. Correctly and responsibly applied, AI capabilities should be able to assist leaders to make informed, timeous and effective decisions involving everything from traffic congestion to crop cycles, from migrant movements to environmental developments and impacts, using ethically obtained data for real-time decision making and policy implementation. The threat of cybersecurity breaches, in all of its many forms, is clearly understood by many nations worldwide, and organized responses have been devised to deal with this fast-moving threat. 


So we see, as a well-known example, the US Cybersecurity and Infrastructure Security Agency (CISA), an agency of their Homeland Security Department, established in 2018. This response was one of the measures flowing from the Cybersecurity and Infrastructure Security Act of that same year. These increased efficiencies could hopefully also lead to a reduction in some of the vast bureaucracies needed to run some of our modern governments, although of course these developments could lead to an increase in unemployment. This could lead to more money being available for socioeconomic projects, a reduction or even termination of conventional taxation, and in an increase in service delivery efficiency. 


The logical conclusion of such developments would of course mean that increased decision making processes run by AI or AI assisted processes may reduce the need for and eventually do away with the position of politicians as we know them today. If global and national decisions can be done better and cheaper by machines, if a corruption free environment, free from the costs and disruption of regular elections can be established in time, would we assent to it? Where would our democracy be in that event? 


How will state sovereignty be affected if it makes most of its decisions based on AI processes? How will it be affected if it makes most of its decisions based on information it gets from another country, based on AI processes that is dependent on data from other countries or multinational corporations? In the next few decades, at least, human-machine collaboration between government actors and AI will become a fascinating focus of study, among others for the conflict management field. 


But could AI in time really reduce or remove our reliance on politicians? This notion is certainly not as far-fetched as it may sound to some of us. Speaking at the 2023 Wall Street Journal CEO Council Summit, Elon Musk speculated that he believes that it is possible that AI could eventually take over the control of humanity for our protection, as a sort of “uber-nanny” (see also the Suggested Reading section for a link to a fascinating discussion with robots on this topic). I have no doubt that, at least in the next two or three decades, each society will have to have its own relationship with AI, each will follow its own wisdoms and processes in dealing with these tremendously important questions. Different local and global problems will see different solutions being tried, trial and error will be an inherent part of at least the next few years. 


Much will depend on results, on how future democracies will be shaped and manipulated. Self-interest, that subjective conflict driver of old, will loom large over these questions. The margin for error will be very small, with consequences that could have enduring socioeconomic effects. To what extent will we have access to decision making processes in these geopolitical and regional developments? If the original process initiators do not have a complete, or any meaningful, idea of the processes and considerations that may have led to a decision or recommendation delivered by an AI system, what can they convey to the public? How much would they want to share in any event? Will we be allowed to even have these decisions, source information and processes placed before us? 


Are we returning, in practice if not in theory, to royal decrees delivered from on high? These are all vivid examples of the thin edge of advantage / disadvantage that we are walking with these real and potential geopolitical AI conflicts. Can progress made in the establishment of responsible and transparent AI bring us the benefit of more responsible and transparent government? Here we already see various conflicts, of various intensities, coming to the fore, as the potential for benefit and abuse of AI systems are becoming better understood. It is encouraging to see early awareness and monitoring of AI in various spheres being developed in Africa, with good work already being done by organizations such as Research ICT Africa (see for example As Global Cyber Conflict Breaks Out, AI Technologies Bring New Risk to Africa – Research ICT Africa ) Africa, with its unique potential and vulnerabilities, will need a very specific and nuanced approach to the AI revolution and its conflicts, as we will consider below. 


I also look forward to the development of AI in the role(s) of peacekeeper in the various ongoing and increasingly important geopolitical peacebuilding work that lie ahead. The UN has developed an impressive framework of conflict resolution and mediation regulations, best practices and practitioners over the last decade, and the synergies between these systems and the help afforded by AI systems show inspiring promise for, amongst others, the various regional African conflicts. There is another important point that we should however not lose sight of in all of this exciting pursuit of AI possibilities in the geopolitical arena, and that is the reminder that the debates and work around AI regulation, development and application should not become replacements, distractions or proxies for crucial work that remains to be done as far as humanitarian and inequality challenges are concerned. This is of course not a zero sum challenge, but one of focus and a proper allocation of priorities and resources.   


GEOPOLITICS: CONFLICT COMPETENCIES  

AI brings about geopolitical new rules in diplomacy, information gathering and conflict solutions that would have made George Smiley very jealous. The spy novels and movies of the future will be very different from James Bond’s heyday. Edward Snowden’s revelations about the US National Security Administration spying on the American public through the internet has given us a glimpse of modern life and abilities under the surveillance microscope.


 Here again, having an updated knowledge of what is happening around us, and seeing the causes and triggers for the variously relevant conflicts, including our role in them and the risk we expose ourselves to with those processes, is in itself a good point of departure as a conflict strategy. In complex conflicts, a high level of simple awareness has its benefits. Geopolitical conflicts and developments are still influenced by democratic processes, and here we can shape such events, even if that may be in a limited manner. Our votes, our activism, our lobbying and general influence on how AI is developed and used, at least in our own environments, would all be practical measures that we can take, and remain involved in, to play a role in the future of such events and how they impact our daily lives. 


Here I agree with Christopher Blattman when he reminds us of the caution necessary in our own geopolitical participation: “But a stable and successful society must take a dimmer view of humankind, leaders especially, and build our systems for the worst of them.” (Blattman 48) An above-average knowledge of conflict and AI, and their interplay, granting or withholding informed consent, awareness of how these influences play out in our specific professional and personal lives, and an alertness to the various ways that geopolitical abuse start and become entrenched in societies should round off a good foundation in our preparation in this particular category.   


 REFERENCES AND SUGGESTED READING (CHAPTER 5)    

1. Angela Kane and Wendell Wallach’s handy primer on AI and geopolitics can be found here Artificial intelligence is already upending geopolitics | TechCrunch 

2. Can AI-enabled robots run the world better than we can? Let’s ask them AI robots at UN reckon they could run the world better (yahoo.com) 

3. For an extensive analysis of cyberattack statistics see for example 166 Cybersecurity Statistics and Trends [updated 2022] (varonis.com) 

4. Tortoise Media publishes a regular AI Global Index, which seeks to show the various components and results in the AI arms race. This handy index can be accessed at AI arms race: This global index ranks which nations dominate AI development | ZDNET  


This post is an excerpt from "Hamlet's Mirror: Conflict and Artificial Intelligence", by Andre Vlok, published by Paradigm Media (2023). All rights reserved. 


The book is available via Amazon, the publishers or the author direct. 


Andre Vlok can be reached at andre@conflictresolutioncentre.co.za or via the publishers at enquiries@paradigmcommunication.co.za 






Comments
* The email will not be published on the website.