

Synthetic reality, or "deep fakes", has become ubiquitous, largely unregulated and available to practically anyone for the asking. These advanced computer systems and programs have made possible a comprehensive assault on relatively settled concepts and understandings of truth and reality, open to abuse and to individual or crowd manipulation in the geopolitical, commercial and other spheres of our lives. The following is an excerpt from "Hamlet's Mirror: Conflict and Artificial Intelligence", in which we take an in-depth look at the existing and new conflicts created by these developments, and at what we can do to prepare ourselves to deal effectively with these new realities.


(Excerpt from Chapter 8 of "Hamlet's Mirror", published by Paradigm Media, 2023)


One of the more dramatic conflict frontlines of these new technologies in the next few years will be the so-called deep fakes, or synthetic media products. When the philosopher Jean Baudrillard wrote (in "Simulacra and Simulation", 1981) about reality and representation blurring, a concept he called "hyperreality", he could hardly have imagined how accurately it would come to describe our AI-generated realities. This is a concept beyond a good copy or a convincing fake, such as we would find in art or currency scams; this is the influencing and, at times, the creation of a new version of reality.


Artificial intelligence (and related applications) is already making technologies available that are, in a very real sense of the phrase, changing reality, asking us to accept alternative versions of it. Telling the difference between the two is becoming increasingly hard. It amounts to image manipulation, often in real time, at levels of production, scale and realism that seemed quite impossible to reach until recently. The commercialization of these technologies is putting them in the hands of the general public, with no meaningful regulation or control at present. It is a vivid example of how we have started seeing, and are being persuaded to see, history, knowledge and truth itself, in what has been described as "a move toward a postmodern form of war and conflict and security" (Reuben 21). It also makes something of a mockery of certain stern regulatory efforts when this type of technology is already available to, and being used by, teenagers and various interest groups.


We are openly invited to question and doubt the nature and boundaries of truth itself. Much of conventional conflict, at least on a subjective level, is built on foundations of real or perceived truth, of right and wrong, of a level of moral and factual certitude. These dynamics, and the brutally efficient way in which identity conflicts are manufactured and manipulated in the modern world, will add new levels of possibility and challenge to conflict management.


This is not Photoshop 2.0: new "augmented reality" events can be created or added to existing videos in real time. These augmented reality technologies are already creating conflicts simply by being used, as we saw in the 2023 dispute and discord over the proposed use of augmented reality glasses in the Bayreuth Festival's production of Richard Wagner's opera "Parsifal". This shows how such conflicts arise over and above the technology's actual use, when merely its availability and proposed application become contentious. The use of this technology will, at least for a while, bring cost implications that force tough commercial, creative and even ethical decisions, which will in themselves lead to conflicts wherever these technologies are used. With algorithms shifting advertising and campaign focus from the macro-targeting of earlier days to individualized micro-targeting, the work of the modern propagandist can be very specific, tailor-made for the group or individual target. Advert and information campaigns, based on your actual digital footprint, can be designed by an AI system in seconds and then used to shape your opinions and preferences, creating personalized spam that will feel like a conversation with someone who knows you very well.


This new conflict frontier includes, to use the term that Baudrillard employed so effectively, scientific and other simulacra: representations without a real-world example, without an original. For practical conflict purposes these would include scientific and other papers and studies that (often in an extremely convincing tone) contain various levels of made-up, non-existent "facts" and "findings", including fictitious authors and attributions. The modern messenger of Truth (however defined at the time) can spread disinformation, fear, anger, doubt, false certainty, hate and very specific methods of manipulation, at times reaching a level close to actual control of a target group.


The very value of truth, apart from its identification, is eroded in the process, with "your truth" and "my truth" and a mess of relativist options offering a seemingly reasonable (if lazy) way out of this challenge. The chilling power of these new technologies is best understood by looking at a few actual examples. In March 2023 a Twitter user posted a series of photos depicting the arrest of former president Donald Trump. These depictions are now all easily accessible on the internet, and show the remarkable realism achievable in creating an entirely (at the time) fictional event. The photos, depicting events that did not occur as presented, are remarkably realistic, extremely unflattering and possibly defamatory, and must be quite disturbing to Trump's supporters or family members. The images were created by a relatively simple process that can be repeated by a lay person using nothing but commercially available computer equipment. Not surprisingly, the images created a storm of arguments and discussions about the "arrest", and for much of these online debates it was as if the arrest had actually occurred. Similarly, we can search for and study Putin's "trial" and Biden's "speech", all very convincing and none of which happened in reality. For those interested, the excellent Ben Chanan / BBC One television series "The Capture" vividly shows how real-time image manipulation can develop into unique and unprecedented conflicts.


When we see these created events, does it really exonerate their users if we know, and if the creators openly admit, that they are fakes? I would suggest that their mere use is already inflammatory and harmful, and that some of these legal grey areas will be abused in political and perhaps even commercial campaigns of the very near future. Voice samples of actual persons can be manipulated, crowd simulations can create the impression of large crowds at, say, a political rally; the creation and manipulation of apparent reality seems rather endless. Once we know how the average voter or shopper makes up their mind, and by what they are swayed, we can see the endless abuse that these synthetic realities can bring about.


While these may be the pioneering days of the dark art of the deep fake, this technology is already among us, and it already has far-reaching conflict consequences. Case studies have shown how difficult it is for most humans to distinguish real from fake news, and an hour spent on any social media platform will confirm this observation. And even if we believe that we will do well in telling fact from propaganda, are we now to subject each and every news item we consume to this testing process? Some images can have a tremendous and lasting emotional impact on us, and I have little doubt that the makers of deep fake adverts and political propaganda will hesitate only briefly before crossing some of those lines; we remain vulnerable to manipulation even when we are aware that the observed scenes did not happen in real life. The reader can easily imagine a few scenarios that would be difficult to unsee once a purported leak or advert has been aired.


And a lie told in this most convincing fashion, even once detected, would need undoing. The liar's dividend, and the uncertainty that follows even an admitted deep fake event, could be all that a politician or other campaigner needs to manipulate public opinion. This arms the modern propagandist with weapons that would have made George Orwell rewrite "1984". If millions of people can be swayed with a simple tweet during, say, the pandemic debates, how incredibly effective will deep fakes be in swaying vast numbers of people? What will remain of the ideals of democracy if we manufacture opinion at this level? As Nita Farahany states, the time may have come when "We must update our concept of liberty". And these synthetic reality conflicts will blight the globe, as we can see from reports such as the Reuters story "'Deepfake' scam in China fans worries over AI-driven fraud".


Consider the completely fake video "interview" between financial adviser Martin Lewis and Elon Musk, advising on the wisdom of a particular investment, which paraded on Facebook with these two individuals portrayed as real human beings, to the point where even people who know Lewis could only assume it was fake because Lewis would never advise support of such a product. And these conflicts will have complex ethical boundaries as well. Can deep fakes be used at some level if we know something about an opponent but have no evidence (as was explored in the television series "The Capture")? Are we allowed to do it if they started it? Is it acceptable for a propaganda piece to tell a lie if we tell viewers that it is a lie? Can we create these pieces against the will of the "actors" involved, as was the case with Putin, Biden and Trump? What happens if your neighbour creates one of these videos, featuring you, and it goes out on his successful YouTube channel?


The conflict causes are new and seemingly endless. Our political landscapes will change in important respects as well. Even OpenAI's chief executive Sam Altman, testifying before the Senate Judiciary Subcommittee on 16 May 2023, expressed concern at large language models' ability "to manipulate, to persuade, to provide one-on-one interactive disinformation".


During his own testimony before this law-making body, Gary Marcus, professor emeritus of psychology and neural science at NYU and machine learning specialist, raised similar concerns about deep fakes, the extent of the disinformation that can be spread, and its impact on democracy itself when he said:

“Fundamentally, these new systems are going to be destabilizing, they can and will create persuasive lies at a scale humanity has never seen before. Outsiders will use them to affect our elections, insiders to manipulate our markets and our political systems. Democracy itself is threatened.”


Earlier that same month, President Biden and Vice President Harris met with the leaders of Google, Microsoft, Anthropic and OpenAI to discuss the responsible use of AI, ensuring that its benefits are realized and shared equitably and that its harms are avoided or minimized. These concerns and high-level talks highlight the urgent work that will need to be done by, inter alia, the US government insofar as regulation and guidance are concerned.


We have studied the complex challenges that such processes, well-intentioned as they may be, will encounter. Bipartisan US legislation designed to make the development and use of AI safe for all involved has been showing good progress since 2023, with a positive reception and collaboration from some of the big role players. How these US efforts will enhance and strengthen, or compete with, the European Union's more developed regulatory work will be an important dynamic in the regulation debate.


The EU of course has a small head start, having commenced in 2021 with a range of draft discussion documents, guidelines and ethical requirements for the development and implementation of AI and related technologies. At the speed with which AI is moving at this stage, however, an earlier start may not be the advantage one might think. The EU, for example, still has very little of practical value to show for its hard work, with its actual legislative process probably adding years to the good intentions and excellent drafting skills on display. A measure of personal data protection, privacy and data autonomy is taking shape as a result of EU efforts, but we will need more time to see how this works out in practical terms. The standoff between the EU and Canada (and others) and Meta over data privacy and autonomy should provide us with some helpful direction and lessons in these conflicts. And it is particularly naïve to think that the answer to deep fakes and other forms of image manipulation lies in the simple detection of these fakes.


We can rigorously interrogate reality as it is presented to us (for most of us a new frontier in itself), and still be left with justified doubts. Bergstrom and West caution us against a simplistic acceptance of mere alertness: "After all, extreme content is highly effective at drawing an audience and keeping users on a platform. Technologically, the same artificial intelligence techniques used to detect fake news can be used to get around detectors, leading to an arms race of production and detection that the detectors are unlikely to win." (Bergstrom 35)


Good work is being done in developing watermarks to determine and confirm that an image was created by AI, but even if this proves to be an effective technological barrier, it hardly deals with some of the other concerns raised here, such as the impressions left and harm done by these created images even where we know they are fake.


Behind the obvious story conveyed by the manipulated material lies a deeper, darker conflict cause: an attack on the very nature and content of truth. As we have already seen in other, far less sophisticated conflicts, one of the goals and results of such manipulation is simply to leave the public exhausted from trying to separate fact from fable, until they give up and accept their daily dose of "truth" from a trusted source, however misplaced such trust may be. This is conflict at an advanced level, conflict that bends reality, that creates a modern-day hall of mirrors where you can get anyone to say, and believe, anything, where the actual truth becomes re-enacted, re-interpreted, packaged and sold to a target market. It is a process that ultimately seeks to destroy the truth so that it can be rebuilt, rebranded and commodified. As we have seen with nearly all of the areas we are focusing on, here too there is that crossroads of opportunity and harm.


These virtual reality simulations can be used to great positive effect in training and coaching programs across a range of areas, with cost and efficiency benefits among the positive outcomes, especially in the modern workplace. We are already seeing some established artists arguing in favour of deep fakes as far as their own creative output is concerned, and a variety of business models are being developed based on full consent to the creation and use of such AI images. This is a complex topic that needs a great deal of experimentation and research, and one that will doubtless generate several conflict causes in the near future.


And regulation? Regulation in this area seems to be an absolute fool's errand. Leaving aside the fact that the technology is so easily acquired already, what would regulation here even look like? What is the practical difference between an algorithm used to create a movie in whole or in part, one that falsely convinces voters of an event, and one used in a scam defrauding shoppers of their money? How practical would conventional debates around intent and result be, given these risks? Needless to say, here again we will end up with attribution problems, and, as we have seen, merely regulating social media platforms has extremely limited use against really serious attacks.


(end of excerpt)


