25 min read
03 Aug

The final manuscript of Hamlet’s Mirror: Conflict and Artificial Intelligence: Understanding the intersection of technology and human conflict has now been sent off to the publishers, with publication anticipated in late August 2023. Here readers can have an advance read of the first few pages of the book, in its pre-publication format. The book will be available on Amazon, and from the publishers (Paradigm Media (SA)) or the author directly, in hardcopy and e-book formats.


CHAPTER 1   

INTRODUCTION     

It is a capital mistake to theorize before one has data. Insensibly, one begins to twist the facts to suit theories, instead of theories to suit facts.

 Sherlock Holmes, in Arthur Conan Doyle’s “A Scandal in Bohemia” (1892)   


The Traveler had leaned his ear towards the Officer and, with his hands in his coat pockets, was observing the machine at work. The Condemned Man was also watching, but without understanding.  


Franz Kafka, in his short story “In the Penal Colony” (1919)


Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound. 


Nick Bostrom, Oxford philosopher   


The AI age needs its own Descartes, its own Kant, to explain what is being created and what it will mean for humanity.  


Henry Kissinger, Eric Schmidt and Daniel Huttenlocher   


AI is a disruptive technology with widespread influence that may cause: transformation of employment structures; impact on legal and social theories; violations of personal privacy; challenges in international relations and norms; and other problems. It will have far-reaching effects on the management of government, economic security, and social stability, as well as global governance.  


State Council of the People’s Republic of China, “Next (New) Generation Artificial Intelligence Development Plan,” July 20, 2017


Few people doubt that machines will one day surpass most distinctively human capabilities; the disagreements are about the rate of travel, not the direction.  


Martin Rees 


We ourselves create the reality of human experience with the questions we ask and the procedures that we undertake to find the answers to them. 


Paul Levy 


1.1 Holding up the mirror 

1.2 Our perspective and focus 

1.3 The structure of the book 

1.4 Our strategy 


1.1 Holding up the mirror 

In William Shakespeare’s Hamlet, Prince Hamlet tells his mother, Queen Gertrude,

Come, come, sit you down; 

and you shall not budge; 

You go not till I set you up a glass 

Where you may see the inmost part of you.  

(Act 3, Scene 4) 


What will we see when we look into Hamlet’s glass? This is really a book about us humans, our stumbling, beautiful, awful journey to here, and what lies ahead. It is of course set against the backdrop of artificial intelligence: how these enormous developments occurring all around us will influence our conflicts, and what we can do about that. But it is ultimately a story about us, with artificial intelligence playing the role of Hamlet’s mirror, reflecting the many images we need to take a hard look at throughout this book. In working with these images, and with what they do and may mean to us, we will come face to face with a few uncomfortable realities, and we will have some decisions to make.


I hope to show you that this process is necessary, for all of us. It is urgent, it is complex, it is contentious, it is important. It is possibly one of the most important processes that most of us will ever be a part of. If you are sceptical as to AI’s ability to harm us considerably and permanently, even on a personal level, I intend to convince you of a more realistic appraisal of the situation. If you are concerned that AI is about to do immense harm to humanity, I intend to show you where I agree with you, where we need not be too concerned, and what we can do about the real risks that we face. For both ends of that AI spectrum, and all points in between, I hope to bring the calm assurance of an informed, reasoned assessment of the AI age, its conflicts and what we can do about all of this.


I have no doubt that we are entering an age that will be judged very closely and critically in decades to come. The things we do, and the things we don’t do, in the next ten or so years will have far-reaching consequences for ourselves and for generations to come. Whether we like it or not, whether we asked for it or not, whether we are ready or not, what happens now matters. We each stand to gain or lose much, and we each have a role to play in this process.


What do we know about artificial intelligence? How can we make sense of the avalanche of developments, apps, and doomsday or cheery scenarios occupying our screens and discussions? How can we, how dare we, speculate about what artificial intelligence is doing to our societies, to the value and importance of being human? Surely these developments do not concern us; surely other people are dealing with it? Read on, dear reader.


Fortunately, it turns out that there is a lot we can do about the uncertainty, the hype, the threats and risks, the debates and the information overload. As we will see, the book takes a simple but very practical approach to all of these moving parts, and it is my hope that at the end of this book we are all more knowledgeable, more capable of meeting what lies ahead, and more confident about our abilities to survive and thrive despite the conflicts arising in the coming months and years of the AI age.


Parts of our journey may prove to be rather disconcerting, even disorienting, and we may be tempted to step back instead and go and ask the Netflix algorithm what to watch next. This is understandable, and as our own personal experiences with conflict may show, a sense of trepidation or concern before we engage with a conflict often leads to a better assessment of it, improved performance during the conflict and better conflict outcomes. These particular conflicts, those caused or exacerbated by artificial intelligence and the related technologies that we will be studying, are of course also quite new, strange and difficult for many of us to assess properly. There is an effective solution for the occasional disorientation brought on by the surrounding technological sophistication, and that is to educate ourselves, to a level appropriate to our professional or personal needs, on the philosophies and systems that drive all of the smaller moving parts.


Our goal here is ultimately a practical one. What we are dealing with is important: important to humanity, to our values, our ways of life, and to our personal interests, safety and security. On that rather ominous note I can at the same time assure you that I will not be asking you to accept any specific worldview or political or socioeconomic position in the work that we will be doing here. To the contrary, we will be charting a course that will bring you home regardless of the views you may hold, whatever they are. As someone who works with conflict every day, I invite you to approach the material ahead with an open, inquisitive and even critical mind. Disagree with me where you wish to – that is exactly the spirit we need for the coming years.


As you will see, our investigation, our assessment and our eventual conflict strategies do not depend on what new AI development we find in the news or on our mobile phones; they do not depend on placing our bets on the right horse. In a truly modern conflict management approach we will expect that uncertainty and complexity, we will value dissent and disagreement, and we will construct strategies that make it possible for us to be effective and resilient in these coming years and beyond, regardless of where and how these winds may blow.


As we will see, this is not the normal tech talk that we have become used to, and (as I have found, to my ongoing fascination) it is not even always normal conflict management talk. We are dealing with new horizons, new challenges, new opportunities, new conflicts. Disparate issues and fields are starting to line up in these conflicts, while other traditional conflict relationships and assumptions are proving to be outdated or of lesser value. To borrow a wonderfully descriptive phrase, we are “… integrating nonhuman intelligence into the very basic fabric of human activity” (Kissinger 81).


This will cause confusion, dissent and conflict. We are breaking new ground here, and it is to be expected that we struggle for solid and familiar answers, for assurance and comfort. Some of you may even be struggling to see how any of this amounts to conflict in the first place, and, even if these are conflicts, how they relate to our professional or private lives. This is what our gaze into Hamlet’s mirror will show us. As modern people we are fully justified in asking “What does this all have to do with me, what are my risks, what do I gain here?” Clarity is a wonderful tool in conflict, whether we are always comfortable with what we see or not. And yet it is exactly those questions, natural as they are, that deeply trouble me. We are dealing with very sophisticated, complex and nuanced ways in which our lives are being changed, for better or worse. We will study disruptive technologies that have already done us harm, disruptive technologies that only the truly naïve could ignore. This is not “something to do with computers”; this does not belong only in the tech section of your newspaper. We will be looking at a practical explanation of what conflict entails, how it relates to us, and how AI is bringing about these conflicts.


We can, and should, come to our own conclusions as to how to deal with these conflicts as they impact us, but the one thing we cannot do is look away, or deny that these are crucial, unique, new conflicts that will impact our lives. In an important sense, we face these times by acquiring new skills and improving on existing ones. This will take a measure of work and application, and we will each decide what level of such work is needed to serve our interests optimally. Artificial intelligence is already in the process of affecting our lives, as we will see very clearly throughout the book. Denying this simple and demonstrable fact, minimizing it, or even delaying our preparation and response too long, will have direct, adverse consequences for us as individuals, groups, countries and for humanity itself.


If we can get past our initial fears and scepticism, and if we can resist the temptation to look the other way, we will find that there is much that we can do to improve the situation, that we need not be hapless victims of these changes, or spectators at our own downfall. Somewhere between our general fascination with AI and a sense of its inevitable march towards the control of humanity, in studying the Janus-faced presentations of AI, we notice a wide array of very human responses.


These responses, the potential consequences of AI in our human conflicts, the dangers we face and the dangers we create, the opportunities we face and the opportunities we miss, these all cry out for an urgent and comprehensive study and consideration of, and response to, this simple question: how will artificial intelligence influence our human conflicts, and how can we adequately prepare ourselves, professionally and personally, for that? The common denominator running through all of these questions and concerns is simply that: our human conflicts. One of the best ways we can engage with Hamlet’s mirror is to ask questions. Preferably informed, focused questions, but ultimately and for now, any question that can move us forward in the process. 


I believe that we are still at a stage in the development of artificial intelligence where intelligent questions in their own right have an important role to play in our efforts at understanding and mastering the forces inherent in this crucial debate. You will certainly encounter more questions than answers in this book. Maybe that is the way it has to be at the dawn of something so monumental in our human journey. Maybe, for once in that journey, we can learn not from our continued mistakes, but from our questions, from our debates, from our disagreements.  


In having this in-depth look at the new conflicts that AI will bring into our lives, how it will change existing conflicts and how we can best prepare ourselves to apply our newly required conflict competencies, we need not be experts in artificial intelligence itself. We need not understand the intricate workings of computers and AI systems, or the many technical aspects arising from the development and implementation of these systems. But we can observe certain trends and conflict patterns, we can deal with actual, current realities, and we can learn from current and past lessons on how to be conflict ready for what needs to be done. To that assessment and stocktaking we can then apply the latest research and best practices in conflict studies and conflict management. We all have, to some extent, experience with conflict, and it is on that experience that we will build our strategies and take our stands. In doing so we need not indulge in speculation as to exactly what AI is, when and whether it will be achieved, or what the timeframes for these probable developments may be. We can remind ourselves that human conflict is ubiquitous, that it will inevitably follow life-changing technologies, and that much of our conflict will still be created and steered by existing biological and psychological causes and drivers.


When John F. Kennedy announced that the US would be the first nation to land on the moon, the technology that would in fact put it there did not yet exist. In preparing for world-changing disruptions and conflicts, it is best if we plan and prepare for where these moving targets are going to be, not where they are when we start. In the AI revolution, that time is now. We will remind ourselves not to be unduly drawn into the flash and hype surrounding AI discussions, but to keep our eye on how AI creates, affects, changes and perpetuates our human conflicts. That is where those of us outside actual AI development will encounter AI, that is where we will be forced to deal with it, and that is where AI will really and immediately be relevant to our lives. And that is where we will be successful in adapting to it, or be harmed by it. That is what this book is about.


In a certain sense, then, all of the developments and speculations, fears, anxieties and hopes surrounding AI are relevant to our discussion, but only to a limited degree. Our study accepts the inevitability, and even current existence, of these technological developments and realities, and on those realities we base our assessment, seen through the lenses of the inevitable human conflicts. But our assessment will also notice an important disparity between the slow pace at which humans develop (and arguably want to develop) their more important traits, such as wisdom and creativity, on the one hand, and the blurring speed at which AI and its rollout are being developed on the other. We should notice how certain of the dynamics involved in our study move and develop at different speeds, and be aware that this will have an impact on the conflict strategies that we will need to adopt. The speed at which some of these developments are coming at us requires that we understand these conflict dynamics, and that we do not deal with these realities at the pace of the slowest moving part.


We need to take a calm and dispassionate look at the influence of AI on our conflicts, but we also need to do so urgently. If we read enough artificial intelligence fact or fiction (not always that easy to distinguish from each other), we may be excused for thinking of AI as some alien force, some external persona, an opponent. While I may at times in the book refer to AI in that manner, this is really unintentional, or a sardonic use of the concept. Let us be clear and honest about this: our AI conflicts are primarily conflicts with ourselves, with our need for security, expansion, dominance, our commercial needs, our greed, selfishness, and maybe even our conflicting needs for discovery and improvement. 


There are some really intense, grim doomsday scenarios and descriptions of the coming of the machines out there, and not all of them are to be found in our fictional creations. These are often very picturesque descriptions of our impending doom and destruction, with doses of our slavery and eternal bondage thrown in for clicks. In May 2023, for example, the Center for AI Safety, a San Francisco-based research non-profit, issued a brief statement simply saying “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Signatories to this warning include OpenAI CEO Sam Altman, professors of computer science Dawn Song and Yoshua Bengio, and others (“Statement on AI Risk,” CAIS, safe.ai).


Personally, and for reasons that I will explain in the book, I do not subscribe to the full doomsday spectrum. For example, I do not agree with Yuval Noah Harari when he states that AI has “hacked the operating system of human civilization”, and I am not too concerned that AI will write and start a new religion, as he has speculated. We have enough of those events in our own camp; we need not fear competition.


There is nevertheless something about AI that quite easily sets the most reasonable people off on these speculative trips. And often, as we will see, they have perfectly good reason for such concerns and speculation. In this preparation of ours, some speculation and thought experimentation is good, even necessary. The question is at what stage in any given conflict such speculation becomes harmful to our interests. But, while being carried away by excessive speculation, science fiction and Chicken Little scenarios is to be avoided, we should also stop being wilfully squeamish and conflict avoidant about the possibilities of harm arising from AI developments. Turning away from reality because it makes us feel uncomfortable is an awful conflict strategy. There are more than enough smart, stable, responsible, knowledgeable people raising alarms of various levels of tone and pitch for us to be justified in taking the qualified, focused look at artificial intelligence that we will be taking in this book.


And they hardly come more responsible and knowledgeable than Dr. Geoffrey Hinton. In April 2023, Dr. Hinton (who somehow ended up with the media tag of the “Godfather of AI”) quit his job at Google, indicated regret at his life’s work and warned about AI causing harm to humanity (“‘The Godfather of AI’ Quits Google and Warns of Danger Ahead,” The New York Times, nytimes.com). He has since expanded on and repeated this warning. As Hinton’s example, and so many others we will be looking at here, underscore beyond any reasonable doubt: we can remain realistic, we can prepare, we can acquire relevant skills, but we cannot say that we were not warned, and we cannot ignore warnings pitched at a certain reasonable, plausible level. As one of the practical skills we want to acquire, we should aim at the ability to spot the meaningful issues among the piles of distractions, to be able to distinguish the relevant from the irrelevant. I hope the book assists you in that process as well.


This will be an important ability, ensuring that we can distinguish which facts and developments are relevant to our own development and conflict needs. For example, many of the more incendiary, cautionary and alarming quotes and statements on the dangers of AI fail to distinguish between “narrow” AI and so-called artificial general intelligence (AGI) or superintelligence (see Chapter 2 for explanations of these concepts). These quotes, from Stephen Hawking, Bill Gates, Nick Bostrom and Elon Musk, to name but a few, are generally concerned with AGI / superintelligence, a debatable eventuality that may be very far off in the future. Some of these debates also become derailed by a failure to distinguish between allowing a specific technology to exist and what that technology is used for. Critics and sceptics of a particular application or use of AI are quickly met with boos and hisses of “Luddite” and of being “anti-tech”. We will, throughout our preparation, be mindful of these boundary lines to our enquiries. As we will see, we can get our job done whatever side we hold to, or even if we do not pick sides at all.


I have often, throughout the writing of this book, felt an instinctive need to apologize to readers for some of the harsh topics that we are dealing with, some of my unvarnished opinions on what lies ahead and what needs to be done. I am not convinced, months later, that such an apology is appropriate. If your discomfort, your concern, even your anger, prepares you better for these conflicts, then this may be of lasting value to you. I would love to receive an email from you, years from now, berating me for crying wolf unnecessarily. If we are going to understand our options and our solutions then we need to understand our problems, challenges and limitations clearly, and our assessment of what this entails will depend on us seeing these challenges (if not the conclusions) in largely the same way. This is simply solid, modern conflict management in action.


In my conflict practice, both in the execution of mandates on behalf of clients and in coaching work, I make a point of not sugar-coating or suppressing conflicts. Conflict avoidant behaviour, as a mountain of case studies and practical experience shows us, leads to inaccurate conflict assessments, poor differentiation and preparation, inadequate conflict strategies, cyclical conflicts, cynicism and distrust. Nevertheless, no part of our enquiry should be seen as fear-mongering or panic in the face of the AI revolution. As experience with complex conflicts will show any of us, an accurate assessment of the challenges ahead is simply a responsible, wise approach to take to any meaningful conflict. There is a clear, important line running between panic and prudence, and we will walk that line. We need to face the wolf at the door, not the dragon or the lamb as they are often misrepresented to us.


Our general reading on AI in the popular media will, on the one hand, tell us that all will be well, that we stand to benefit from AI in unimaginable ways, and that to criticize or be scared of the AI revolution is unnecessary or even quite harmful. We must not hold up progress, we must not be alarmist. Some of this may well be true, and all of it is understandable. The darker our environment, the more we appreciate the promise of a new light, and humanity has been going through some difficult times recently. A digital solution to all of this – who would not be enthusiastic about this? On the other hand, we will be told about the catastrophic and irreversible harm that AI and related technologies can do to us. And these “sides” are both right – they are both dealing with scenarios that are most certainly still on the table, as we will see. The seeds of many different results ranging across a spectrum of these possibilities all still exist, all awaiting further developments and decisions. 


These decisions are, in many important respects, intensely personal; they affect us in important ways, and they can change our lives in ways that we would want to keep as much control over as possible. Picking a side, or ignoring these developments, are understandable options, but I suggest that we have an even better, more effective strategy open to us. These disruptive technologies are already doing us harm. There is, even now, a strong case to be made that complex conflicts have already become part of our societies, conflicts that require urgent attention. Ignoring or diluting these threats would be quite irresponsible, as we will see. An unduly rose-tinted look at AI will serve us badly. This “digital utopianism”, to use Max Tegmark’s phrase, is tempting and understandable and fun and easy, but as I hope to show in the book, it could do us as much harm as overestimating the risks inherent in the AI revolution. It follows that I deplore unfounded alarmism and fear-mongering just as much as I do unfounded optimism, not least for purely technical conflict management reasons.


Panic and sustained negativity in complex conflicts lead to conflict paralysis or overreaction, setting off their own patterns of overcorrection, loss of confidence and scepticism. Case studies show how these influences negatively affect not just the conflict decisions that we need to make, but our actual ability to gather and internalize the information necessary for those purposes. Unfounded optimism and an inaccurate assessment of conflict risks lead to conflict avoidant behaviour, poor preparation and a lack of sufficient time and options when the unexpected in fact transpires. In conflict management terms, whether you miss the target to the right or to the left, you have failed in your goal.


Consequently, our best approach to all of this is founded on a clinical, unemotional and, above all, accurate look at where our best interests lie. We need to map and then follow a course that is not dependent on the roll of some global dice that no-one seems completely in control of. And, of course, while we are setting up these scales, and given that we are dealing with human conflict, we should also stop approaching this debate as if humanity’s best interests were one homogeneous lump of rights and a list of benefits. If only it were that simple, as we will see. We are at an inflection point in human history, and it is important at this stage to be asking questions. Allow yourself to ask questions, even if you are in search of peace and clarity. An honest, detailed Socratic stroll through these questions may gift you just that.


The right questions define our problems accurately, without which benefit we cannot arrive at the correct solutions. And, as with all complex conflicts, there are crucial time and sequence components involved in our task. There will come a time when some of these questions no longer make sense, or where they have become of academic value only, where our joint or individual ability and opportunity to meaningfully shape and affect these energies may have passed, where it is too late to protect ourselves. As it is, and as we will explore, I believe that we have reached the late stage of several important questions already, such as those relating to regulation, industry protection, income replacement and so on.  


AI-based systems have been helping humans deal with huge volumes of data, and make decisions and diagnoses, since the 1980s, but we have reached an important crossroads in this development, one where developments and the application of AI have started having real consequences for humanity as a whole. We have already started implementing several of these advanced technologies without too many questions having been asked. The wheels of progress seem to be turning faster.


In this assessment we should, throughout, bear in mind that there will be short-term questions and answers and more long-term questions and answers, and that in each question, each conflict possibility, we should approach our solution with that dual timeframe in mind. Many of us simply want to know that everything will be OK in these days of AI, and we seek answers, simple and easy ones, and this makes us vulnerable. Depending on the specific details of our professional and personal positions, we should rather see our engagement with AI conflicts as processes, not once-off events that can be disposed of with one or two decisions or neat answers. As we move through the book, keep brief notes of specific areas that may affect you in one of the spheres we deal with, and see for yourself how different conflict challenges will require, by their very nature, different timeframes.


Kai-Fu Lee, the Chinese venture capitalist and entrepreneur, in his eye-opening book “AI Superpowers”, distinguishes between four “waves” of the AI revolution. Whether we agree with his categorization of these waves or not, his larger point is well made: this revolution, this blurring of the lines between the physical and the digital world, is happening in stages, and it is not a once-off, static event (Lee loc 1732).


1.2 Our perspective and focus 

From this perspective, then, whether artificial general intelligence happens or not, whether the world successfully regulates these developments, whoever wins the AI arms race, whether we lose our current jobs in the process, what is to become of income derived from the workplace, and a hundred other fascinating and crucial questions facing humanity all lead into our one focus point: these are all manifestations of human conflict, and we need to ask how these realities do, and will, shape our human conflicts, and how we can best prepare ourselves to survive and thrive in the midst of such conflicts.


We will (with a few entertaining exceptions) then leave the speculation on these momentous questions to others, and focus on the one thing that we cannot afford to get wrong: our survival in all the conflicts and in all of the areas that are important to us as human beings. We do not intend to deny or over-analyse the coming storm; we are simply readying ourselves to meet it as a reality, and to remain standing, thriving, once it has passed. We will therefore focus on the AI revolution only in those areas we need in order to understand its impact on our conflicts.


The entire AI debate could do with more focus and more nuance, not just in assessing AI’s real or perceived impact and consequences, but in asking on whom that impact falls. It helps to see the questions we need to ask as an Indra’s Net not just of inquiries, but also of answers and solutions, and of a refinement of the many generalizations encountered in AI debates. In Chapter 2 we will specifically investigate what we mean by these conflicts, and we will see that anyone, or anything, any process that directly or indirectly affects our best interests is in conflict with us. This will be key in our assessment of AI and in the process of making sense of what is happening all around us.


As we will deal with later in the book, the very idea of artificial intelligence suffers its own wonderfully muddled lack of consensus in the popular mind as to what the idea entails, what it implies, whether we have reached a stage where we can even talk meaningfully of AI, and what this means for all of us. While the popular use of Matrix- and Terminator-like images and tropes may be as amusing as it is easily dismissed, we also have a few far more important questions and decisions to deal with as a result of what, for now, we can simply refer to as artificial intelligence and its impact on that perennial favourite: human conflict.


And here at the outset, let us quickly deal with one of the many temptations available to us in facing the AI revolution: how inevitable is the arrival of all of these developments and trajectories? How much of this can we ignore or simply not engage with? We will deal with those tempting strategies in detail later on, but for now let’s make a few introductory observations to prevent you from deciding that “I really don’t need to read this book”. 


What many of us accept as the inevitable march of the machine, some reject. Computer scientist Erik Larson, in his “The Myth of Artificial Intelligence” (Larson 1), rejects this inevitability, and submits that the future of AI is limited by what we humans know of intelligence. He argues that much of what is wowing us now is nothing but low-hanging fruit, that AI progress need not be incremental, that there are ditches and valleys that must be crossed before AI becomes a serious problem, and that these distances may never be traversed.


Larson states that no computer has really passed the Turing test, and that the intelligence we are seeking to recreate and unlock is, in reality, not a specific thing but situational and contextual, situated in our societies and cultures (Larson 9). Others disagree, and I would suggest that programs like ChatGPT-4 and a few other LLMs could with some conviction be regarded as having passed the original concept of the Turing test.


I look forward to taking part in the AI debate, in all of its fascinating facets, but arguments like Larson’s are part of why this book has such a specific focus. The sceptical reader is invited to throw out each and every speculative argument, discussion and debate in this book; the conflict framework and strategies will remain as relevant to the AI and advanced technology conflicts that are already with us as they will be if and when these speculations (or some of them) become reality. The debate (which in itself, of course, ironically causes and drives conflict) and its outcomes simply determine the specifics and parameters of our future conflicts.


While I disagree with Larson’s vision of the limits and potential of AI as a technology, we can grant him that opinion, and as the book will show, we are still facing a troubling series of very new conflicts that must affect us in our professional and personal lives. In addition to my focus being on our conflicts, too much speculation (wherever placed on the utopian / dystopian spectrum), especially here in the early days of the revolution, can lead to a type of self-fulfilling prophecy taking hold, making our responses less dynamic and less nimble when things change. If AI is a myth, are we not in conflict with the purveyors of such myths? AI sold as a threat to our way of life that never materializes as such would in itself cause a lot of harm over the next few years. Even in that event, we are facing a complex range of conflicts ahead. And, if some of the AI speculations are not in fact mere myths, then we have to deal with those developments that impact on our best interests.


As we will see throughout the book, we do not serve our own interests well by over- or under-assessing the threat and risk levels. In perceiving AI in its present forms as near omnipotent, as an inevitable juggernaut rolling over our futures, in viewing our own roles in the process as limited and maybe futile, and in getting involved in too many speculative adventures as to superintelligence and AGI and when we will reach them, I believe that we get the AI risk-reward calculus wrong in several respects. I believe that philosophy professor Mark Coeckelbergh is a lot closer to the right direction when he warns that

“There seems to be a real risk that in the near future the systems will not be smart enough and that we will insufficiently understand their ethical and societal implications and nevertheless use them widely.” (Coeckelbergh 2020, 14). 


The risks, the threats, lie less in the machines than in the humans who build and apply them. And, once we have cleared the lenses through which we view these conflicts in this manner, we find ourselves dealing with good old human nature, something that, complex as it may be, we are much more comfortable with.


We can try to read the AI tea leaves correctly, or we can prepare ourselves in a dynamic way to deal with whatever outcome we end up facing. I love the description of these next few months and years of AI development as “The Between Times” (Agrawal 12). We can see the potential, the risks and the dangers, we can map some of the trajectories, and we may not be sure exactly where this will lead.


Conflicts of various shapes, sizes, complexities and life-changing, world-changing intensities are guaranteed, and if we focus on those conflicts in terms of the roadmap we have designed here, we can remove much of the real and imagined uncertainty, the risks and the threats inherent in this process. These “Between Times” need not be a stumbling around in the dark. It is not justified to argue that it is too early to become actively involved in our AI conflicts, or that time will tell. The need to look into that mirror at all is complicated in some respects by the unique nature of the AI age and its challenges. It is tempting, and quite understandable, for most of us to simply leave the decisions and developments to developers, corporations and our governments.


As history so eloquently reminds us, however, the wrong technology in the hands of our leaders hardly ever ends well, and humanity has a knack for finding new ways to harm each other. As we will see, and need to consider in our conflict preparation, AI is not just another new fad or phase in technology, a new iteration of cell phone or software program. It is not comparable to the step up in technology from the broadsword to the musket, from the horse to the tractor, from candles to electricity. It is a paradigm shift in humanity’s future, and it is crucial that we understand as much as we can about this new development, not for the sake of the technology, but for its inexorable influence in our lives. 


The responsibility for much of this lies in our own individual and community hands. In many respects this AI revolution is already here. It is not tomorrow’s problem; it is not the next generation’s problem. Our study of its effect and impact on our human conflicts can deal simply with existing conflicts, and it is already necessary, regardless of the probable future conflicts that we will be looking at. As Kevin D. Williamson says about our propensity, and the temptation, to minimize the risks and harms that we are facing, “Defining danger down consists mainly in elevating the importance of hypothetical evils over real evils.” Whether we see the current level of AI development, as you read this page, as “evil” or otherwise, it is here, it is affecting our conflicts, creating new ones, adding layers of complexity to others, resetting some of the progress we have made in past conflicts, and creating new battlefields where the uninitiated, the unskilled, will be at a terrible disadvantage.


And here I would suggest that we already find a curious observation: we seem to be reacting to many of AI’s developments without any sense of co-ordination or phased planning, on any level. We place it on the agenda for the next board meeting while the newly installed algorithms hum in the background; we argue about regulation using the latest app on our smartphones. We have already seen the destruction or devaluation of jobs and industries, the direct influence and prejudice that artificial intelligence has brought, even here at what will probably prove to be the relative Early Days of this development. And for what? So that we can play around with impressive book covers that we can “design” in seconds? So that AI can do our homework essay for us? What do we understand about the cost / benefit numbers at play, what are the trade-offs that lie ahead, who wins, who loses, and what is won, what is lost? If we stop AI development right now, are we dealing with a net gain or a net loss to humanity?


AI is going to be difficult to confront on an emotional level for many of us. It is a reality manifesting all around us, it is our cultural zeitgeist, we see articles and tweets about it everywhere, it may already be knocking on our workplace doors – and yet it still feels alien and distant to some. Stephen Hawking told us that the invention of AI could be the worst event in our history; Elon Musk says that we have “summoned the demon” and that humanity will probably go extinct if we do not merge with the machines or escape to Mars. Others tell us more soothing stories of a future filled with hope and a better life for all. Unlike any other topic, AI places pressure on our existing conflicts, unfinished business, weaknesses and structural defects, and we see cracks appearing everywhere. Of course this must lead to anxiety in many people.


How do we deal with that reality? In conversations with involved role-players, clients and friends, I often hear that disconnect between AI’s glitzy, glamorous, fascinating hype and the need to really deal with it, especially on a regional and personal level. With new frontiers broken by AI, with digital manipulation and augmentation, our realities and experiences are no longer extensions of what we already know and are familiar with. And while there is a cacophony of AI information to deal with on what feels like a daily basis, clear and reliable information is also available, often presented in formats that are easy to understand and work with (see e.g. the very helpful guide “2023 State of AI in 14 Charts” from Stanford University, stanford.edu). I hope to convince you that the very fact that these realities are changed and manipulated, that we are nudged in directions benign and otherwise, already places us in conflict, or potential conflict, with those putting us through those experiences.


As we will see from our clarification of the term “conflict” in Chapter 2, the AI revolution has, as it is, in its current state, and as you can see from switching on your cell phone, already brought conflict to all our shores. The only remaining question is what we do about that. The energy expended in the creation and application of AI, and in advocating for or against it, is certainly not matched when we get to actually doing something about the best or worst case scenarios. We look for our fiddles while Rome may be set alight. The size of the tasks ahead, the real or perceived complexity of these conflicts, and the perception that this is a unique challenge all contribute to the sense of decision-making paralysis that we observe.


For some, this whole process feels like something being done to them, not like a process that they are a part of, or one that they can opt out of or influence positively. As a result, these people ignore these developments, seeking to understand as little as possible, waiting for others to do what is right and to tell them, eventually, what follows next. This is ignoring the problem as a conflict strategy, a popular but ill-conceived approach, especially in dealing with a complex conflict such as the AI revolution. Looking after ourselves in the AI age will take more than simply waiting for a digital Godot.


I promise the reader that I am in no manner anti-tech, and none of my arguments and speculations will come from that futile place. I would very much like to live in the times envisaged by our best utopian science fiction, where we all have tons of free time, no financial concerns, where great equality reigns and where we all amuse ourselves in some way while AI takes care of all the drudgery and dreariness that we currently complain about. 


But the stakes are incredibly high. While I do not believe that I will one morning open my office door only to be fired by an AI overlord, or that I will ever have a holiday in a Westworld type of environment, I do firmly believe that we are on our way to allowing harm to human society through the haphazard way in which we are letting AI, such as it is, seep into our ways of life and, importantly, into our conflicts. But seeing the challenges clearly need not lead to panic or denial.


Because of the conflict strategies that I discuss in the book, I am not anxious about what lies ahead, and neither do I want you to be. I do not so much see AI overlords walking the streets as I see us doing immense harm to our economies, to our interpersonal relationships, to our ability to step back from this precipice and to remain in control of some of the conflicts that could arise out of a misunderstanding or a mismanagement of this watershed moment in human history. AI may very well hold an existential threat to humanity, but not in the way that our movies, novels and some experts depict it. 


A big part of my confidence in this future world stems from my belief that once we start working with the important clues and puzzle pieces in all of this, we will prevail, and that our tasks will be much simplified, so much so that we need not suffer much harm in the process. But none of that will be a sugar-coated fairy-tale; there is much urgent, important work that lies ahead. As we mentioned earlier, if you are sceptical about this position, I value that, and I invite you to consider the case that I will present here in this book, and I ask that you engage with the facts as we have them. If none of these discussions convince you that we have real cause for concern, or that our conflicts, new and existing, are changing rapidly in ways that could change or destroy important parts of the fabric of society, I envy you your peace. If you are in agreement with me, I invite you to join me in our assessment of how we can improve our conflict knowledge and skills in practical, meaningful ways. If you believe that human society is doomed because of AI, then I ask that you suspend your judgment until the end of the book. On whichever point in the spectrum you end up, this is necessary work. Accept or reject the situation, but do so as an informed exercise of your obligations to your own best interests.


Are you still tempted to rationalize the threats, or look away? Do you remember the Y2K soap opera? As January 1, 2000 approached, the world had a few paroxysms of technophobia and end-of-days narratives about Y2K. While the fears of those brief days are nothing comparable to the sophistication and reach of the AI challenge, the episode may serve as a small, limited reminder of how we need to get our facts correct before we start expecting the sky to fall. And one way to see to it that we make use of accurate facts, and that we deal with all plausible eventualities, is to treat the AI revolution as a sort of high-tech modern Pascal’s Wager for the future of humanity. If we prepare for the worst, within reasonable limits, we stand to gain more (or lose less) than if we ignore the worst and all or some of it transpires. If AI ends up a damp squib, with none of these debates ever proving of any relevance and importance in the real world out there, you can file this book with your other science fiction books and movies, and we can all have a relieved laugh at how wrong I (and a few others) had it.
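
To make the wager’s arithmetic concrete, here is a minimal sketch; the symbols and the framing are my own illustration, not figures from the book or any study. Let p be the probability of serious AI-driven disruption, L the loss we suffer if it arrives and finds us unprepared, R the portion of that loss which preparation averts, and C the cost of preparing. Then:

expected loss if unprepared = p × L

expected loss if prepared = C + p × (L − R)

Preparation pays off whenever p × R > C, that is, whenever even a modest probability of serious harm, multiplied by the harm that preparation averts, outweighs the cost of preparing. Note that the wager does not require certainty about p; it only requires that p not be negligible while the cost of preparing stays within reasonable limits.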


If, however, we get most of this right, we will have much to do, and much to gain in return for our vigilance and our preparedness. The philosophical premise of the book flows from that, and it is a simple but critically important, and urgent, one: our society has the privilege, and the responsibility, of being in charge of disruptive technology that outstrips the potential for harm or benefit of anything we have had to deal with up to now, such as nuclear power, space travel or the internet. We are the collective, often reluctant custodians of what happens next.


To deal with all of this, however, as primarily a set of technological challenges is to treat the symptoms and not the cause of what we need to prepare for. All of this, every important part of it, can be traced back to the complexities of human conflict. And, as we will see, in that realization lies our best chance of effectively dealing with all of this. As our histories show, we seem to create best when we create through the furnaces of our conflicts.


We will see that, however it is described in the media or around the dinner table, the practical heart of the AI debate, for the vast majority of people, lies in the conflicts that are heading our way, the conflicts that are already here, and in how we will cope with them, how we will survive them and thrive under these new circumstances.


I find great value in mathematician Alan Turing’s observation that we need not focus on the mechanism of machine intelligence, but rather on the manifestation of that intelligence (Kissinger 50). We need to keep our eyes on the important parts of this, and not be distracted by the scary or the shiny parts presented to us by a variety of agenda-driven interest groups. I believe that much of the AI inertia that we see around us, at least up to this point in history, stems from the curious balance between an absolutely unique, revolutionary technology and all that it entails, on the one hand, and the sheer banality of much of its everyday use on the other. It is a sleep-enhancing app here, a game there, some new tech in the car, one or two wearables and a lot of hype. It is multifaceted, driven by so many moving parts, so many opinions, so many possibilities. It just seems so easy for some of us to close our eyes and let all these shiny, mundane waves wash over us. Everything will be OK, right?


So we can start our new conflict skill journey not so much by learning about AI itself, but by promising ourselves to simplify, to pare down the questions and arguments, to distil what is of real importance, and then to use that wisely. The entire AI debate, fascinating as it no doubt is and will be, is of little interest to most of us unless we view it through the lens of one of humanity’s oldest activities: our human conflicts. If AI will not create new conflicts, complicate existing ones and require a new type of response, then all of this will be of mostly academic interest. These technologies, as we will see, create and influence our human conflicts across a very wide spectrum of activities and interests, and we need to understand and prepare ourselves accordingly, or in other words, become conflict competent to meet, survive and be skilled in these conflicts, some of which are rewriting conventional conflict management textbooks. That conflicts, new and expanded, are all but guaranteed flows from the observable fact that the various state, commercial and personal interests that we will need to define and try to manage effectively are, in many instances, incompatible.


Some of these conflicts are going to affect our lives not because they are driven by evil caricatures on mountaintops, but because they flow from smart, innovative ways that perfectly ordinary people have designed to improve their (and purportedly our) lives. Some very creative conflict management days lie ahead for humanity.


We need not fear AI; we can develop it (some would say help it develop) to its full potential, but we need to be crystal clear on the dangers, as well as the potential benefits, that it can hold for humanity, especially insofar as our conflicts are concerned. This is about far more than Alexa getting your playlist right on Spotify or a more accurate picture on your Google Maps search. This clear-eyed assessment, of necessity including some wise speculation and informed best guesses, must be done before we move much further down the road of the AI adventure. In addition to the more obvious strategic and everyday questions that AI brings into our lives and our conflicts, we will also need to keep an eye on our own motivations in what we allow, in what we build and in what we break.


While there is the understandable yearning for improvement and advances in all sorts of fields of human endeavour, from medicine to engineering, from the future of work to the end of warfare, I fear that a tired, disillusioned humanity may also seek to abdicate some of its responsibilities in a variety of our conflicts, to pass the responsibility of difficult, costly and unpopular conflict outcomes to Something Else. 


Our motivations, at all of these various levels of the next few years, matter, and as in all complex conflicts we will need to keep an eye on that as we make progress. Mixing profit, huge fortunes, the interests of others, societal versus individual benefits, various worldviews, identity conflict clashes and already existing conflict fault-lines into the uncertainty and anxieties that may exist around AI is all but guaranteed to increase the conflict levels across the spectrum. 


It is also conceivable, to put it politely, that there will be political and corporate abuse in the development and application of current or future AI to address current conflicts and problems involving, say, regional wars, migration, incarceration of offenders, crowd control, information gathering, surveillance and law enforcement. Recent examples, such as the Covid-19 pandemic, of how certain politicians and leaders conduct themselves when there is big and easy money to be made from a crisis are burned into our collective consciousness. How responsibly, efficiently and fairly are our politicians using existing technologies, even before we get to artificial intelligence assisted technology? How well will current or future political leaders deal with the temptations or competitive pressures of the various forms of digital authoritarianism made ever so possible by these new technologies? We will investigate these conflict dynamics as well.


In preparing ourselves to be conflict competent for these modern times, we will consider quite a few global AI developments that, at first consideration, may seem remote and unconnected to our own interests. As we will see, however, the AI revolution affects a variety of interconnected systems, and chaos theory’s butterfly effect should always be in the back of our minds. A few opening questions, then, to get your mind working in these new directions.


What is AI really, and who is building and shaping it? Is it a monolithic concept capable of regulation and responsible, measured application? Who would be responsible for the assessment, policing and general guidance of AI, assuming such ambitions are even possible to realize? If we want to regulate Facebook, should we not regulate AI? Can we ignore these questions, or limit our interaction with AI, and what are the consequences of such efforts? What are the time horizons for these developments? How can a vendor’s decision to sell facial recognition software to a shop-owner in a remote country impact on my civil rights? What happens if I am not really making use of computers? As you can see, dear reader, it is not just AI’s intelligence that will be expanding in the next while.  


These technologies are forcing change, and change brings conflict. Some of us hate change in itself; we are uncomfortable with the very idea. One of our new required (and fun) conflict skills will be to carefully monitor relevant changes and to interact with such change in an informed, focused way: to be comfortable with a certain level of change, not because we like change, but because that limited level of engagement with it preserves other important parts of our lives.


We can focus on the ten thousand shiny pieces of AI development and change as they enter our lives, or we can focus on the handful of conflict causes, drivers and solutions that we need to master during these times.


Artificial intelligence is shaping our society, and it is in turn being shaped by our various societies. This has an inevitable, deeply serious and urgent impact on our existing and future conflicts. This blitzkrieg of decisions, regulations, apps and problems is already here, and we do not get to unplug the machine. Pandora’s Box (or jar, for the purist) is wide open, as we wait to see what was in the box in the first place. We do, however, get to shape and deal with it; we get to shape and deal with our role in this brave new world. The Covid-19 pandemic has “dramatically accelerated and deepened the impact of digital scale, scope, and learning on the world economy and society” (Iansiti and Lakhani xxv), and we now get to participate, one way or the other, in this important era.


These are some of the questions we will be focusing on in this book. Together with AI researcher Eliezer Yudkowsky we should also have a healthy dose of humility in this process and remember that “By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” This, among several other conflict management mechanisms, is one of the good reasons why our focus should not be on AI but on human conflicts caused by AI in the present, and as much as we can see of its near-future trajectories. 


Complex, urgent, scary, exhilarating stuff. I am glad you could join me.

