4 min read
20 Sep

One of the many arenas where AI stands to have an important impact on life as we know it is our current understanding and practice of democracy. Hamlet's Mirror deals with these current and probable future developments, and suggests a few modern conflict strategies to help us prepare for them. The following is an excerpt from Chapter 9 of the book.


(vi) AI and democracy 

Our reflections on the new parameters and levers of free will should of course at some stage lead to the popular and public expression of an important section of that free will: democracy. What should our societies do if a future suitably competent algorithmic policy direction, advice or action (intended or completed) is out of kilter with the democratic vote (say, as determined by a referendum)? Do we accept that AI knows best, or do we limit the advice that we accept? What if we cannot agree on those questions, or on the competency levels of AI? What if sufficient numbers of us concede the superiority of AI in decision-making processes, but insist that humans must remain in control of our national political decisions? What if we make some of these decisions and then later wish to change our minds? How do we draw those lines? If we accept that AI knows best (generally or specifically), what place does a conventional democracy have left in our society? Do we then need politicians at all?


It is certainly not an exaggeration to regard AI as a threat to democracy once it becomes sufficiently adept at the skills and processes used in that sphere of human activity, and once its use and implementation have become ubiquitous. For the moment, AI seems to struggle with forms of independent thought and strategic thinking, but progress is being made steadily in all of those fields. Even now, and disregarding an “AI for president 2032” campaign, AI can be (and is) used to assist and influence a wide range of political decisions, and again the abovementioned potential conflicts and questions would arise.


I believe that the next ten to twenty years (if that long) will see the very concept of Western democracy come under severe and repeated tests and debates as to its actual utility and value beyond rhetoric and nostalgia. With AI ethicist, lawyer and philosopher Nita Farahany, I believe that we will need to update our conceptualization of ideas such as liberty if we want to maximize the good and minimize the bad from neuro-technology (Farahany 5) and AI, but I also believe that we are in the process of having those conceptions changed for us, nearly imperceptibly, without us putting up much of a fuss, and in some instances even knowingly participating in the process. If we attend to these processes mindfully and responsibly, AI can act as a catalyst for some much needed change and improvement in some of our received wisdoms.


Democracy is one of those important human arenas where we will be forced to at least reconsider ideas such as what it means when we speak of the will of the majority, what value consensus has if it is manufactured or manipulated beyond a certain level, whether some of our democratic processes are still our best options, and so on. The conflicts and questions arising from these developments are either already here, or they seem imminent, or at worst, some of them no longer seem all that far-fetched. For example, if neuro-technology is going to start practically reading our minds, will there be protection against self-incrimination? Can our brain data be sold, stored, judged, censored or altered? Who does it belong to, and when do we gain and lose these rights? Where would such developments leave our right to cognitive liberty, to use Farahany’s beautiful phrase, or neuroscientist Rafael Yuste’s “neurorights”? The 2024 US elections, to take one very prominent example, will upon analysis have much to teach us about the influence of AI on democracy and its underpinnings.


Our thoughts and our emotions are still private, our own property, but machine learning algorithms are getting closer and closer to accessing and translating these processes. A confluence of the gravitational pull towards conformity, predictability and the eradication of conflict so prevalent in geopolitical and AI processes could lead to democracy being retained in name only, leaving little more than a watered-down semblance of what it can and should be, in order to appease Western populations. While democracy has come to focus on equality and a balancing of rights, much of it is still supposed to be a safety net: for getting complex decisions wrong, for avoiding abuse and dictatorship, for ensuring equality. We would rather allow, in theory at least, the entire population to take part in decisions because, so we tell ourselves, that ensures a measure of control and transparency over those important questions.


But what if, at some future time, we generally accept that we are not the best equipped to make those decisions, that in every important respect machines can take better decisions for us as far as housing, defence, the running of our economies and so on are concerned? We may gradually move out, and want to move out, of the territory of democracy and into that of a hopefully benevolent dictator. Here we should be seeing some very creative and constructive human-machine collaborations in the near future. While I am not (yet) much interested in the Blade Runner-type debates about extending citizenship to our AI friends (as Saudi Arabia did in 2017 when it extended national citizenship to a robot called Sophia), I am greatly interested in (and concerned about) the content and boundaries of our current and imminent citizenship rights and obligations as these foundational rights may be affected by AI and related technologies.


Once we understand that some of the more attractive features of a political AI-driven machine will not always be conducive to human and civil rights, and that the politicians of the near future will continuously be faced with these parameters and temptations of application, we start understanding the conflicts that lie ahead. Where do we go with democracy, as a valued tool in the hands of a responsible, elected government, when we contrast, for example, US democracy with Uganda’s 2023 anti-homosexuality bill, the latter also a product of a functioning democracy? We continue to be reminded, even in the AI age, that different values and goals drive our conflicts, and that the conformity that AI may need and drive may not always be forthcoming or easily achievable. Different AI algorithms will lead to different results. Democracy means different things to different people.


An essential part of this question about the integration of democracy and AI should also include a wider question as to what type of society we are building at the moment, and what type of society we want to have at the end of all of this. This is a complex conflict in itself.


Mark Coeckelbergh gets the enquiry off to a good start: 

“In order to avoid AI totalitarianism (and to maintain democracy), it is not sufficient to point to the responsibility of people in tech companies and governmental organizations and say that they should improve the design of the technology, the data and so on. It is also necessary to ask the questions: what kind of social environment could be created that supports people to exercise this responsibility and makes it easier for them to question, criticize, or even resist when resistance is the right thing to do? What barriers can be created to hinder the described shifts from democracy to totalitarianism? And how can we create the conditions under which democracy can flourish?”

(Coeckelbergh 2022: 122)


How attractive will a future democracy be in the marketplace of ideas if other, less democratic societies keep up or excel in crucial competitions like the AI arms race, general commercial prosperity and/or military prowess? Would there be much qualitative difference between the traditional type of dictatorship that we have come to know in parts of the world and one simply run by or on AI-enabled systems? Here a human-machine collaboration seems to hold much promise, and demands that we get this equation right, at least in this important arena. The short-term solution and soothing political reply is of course to delineate certain areas where humans-in-the-loop will retain autonomy and ultimate responsibility, either independently or as advised and guided by AI systems. Would such a system and compromise last for long, and in what direction would a gradual chipping away at those boundaries take us?


Even at this early stage of these crucial conflict considerations, can we say whether these trajectories are taking us towards or away from an improved form of democracy?


