5 min read
16 Sep

We will publish a few short extracts from "Hamlet's Mirror: Conflict and Artificial Intelligence" here from time to time, showcasing various conflict topics dealt with in the book. 


The book is available in e-book format from Amazon, or in e-book and hard copy from the publishers (Paradigm Media) via enquiries@paradigmcommunication.co.za


The Amazon link can be found at Amazon.com: HAMLET'S MIRROR: Conflict and AI: Understanding the intersection of technology and human conflict eBook : Vlok, Andre: Kindle Store 


In this first excerpt we take a brief look at some of the influences of artificial intelligence on the legal profession.



It seems that, in the practice of law, William Shakespeare may very well get his way, metaphorically speaking, when he suggested (in Henry VI, Part 2) that “The first thing we do, let's kill all the lawyers.”


The AI revolution is in the process of radically altering the legal profession, if not necessarily the content and application of much of our laws, and we can accept that much more of that is on the way. A few years ago I still believed that the practice of law would be largely unaffected by artificial intelligence. No machine can replace the human skills of logic, reasoning and decision-making, right? Well, let’s put that belief in my Top 50 list of AI Errors.


The legal profession is already undergoing seismic changes due to AI. Legal research, drafting and case studies have changed drastically, and with that, of course, the time spent, the ways that fees can be charged, and who gets involved in these processes and how they are applied. Legal decision-making has been greatly affected, and in the US the COMPAS system has been used to predict who is more likely to reoffend, with its predictions used in bail and sentencing arguments (Coeckelbergh 2020, 3).


In China, the legal system uses systems such as iFlyTek to apply AI to making past cases, evidence management and so on available to judges and other court personnel; speech recognition and natural language processing are used to assist with evidence, and even sentencing is included in this developing field. As with all other areas affected by AI, the practice of law is no different, and in many respects it reflects some of our more serious ethical and practical dilemmas when dealing with the AI revolution. We consider a few of these.

An important part of the power of the generally accepted legal process is its transparency. As a general framework, evidence gets tested in a public, open court, using transparent (if imperfect) processes of evidence evaluation, and then a public judgment is delivered, with reasons (most often in written form) for the decision-making process and any resultant sanction. Factual or legal errors in these judgments are subject to review and appeal processes. A finely tuned system of checks and balances exists that strives to protect our interests as far as possible.


With some of the current AI decision-making processes, and no doubt increasingly so as AI takes on more decision-making in the legal profession, we may no longer have sufficient insight into those processes. Decisions taken by AI or AI-assisted processes may or may not be perfectly valid and logical, but they can be completely inscrutable. Even a written judgment at the end of such a process may not reveal all of the factors taken into account by a purely mechanical process. But AI will not just cause disruption and conflict at these levels. Some of the very foundational concepts of responsible jurisprudence will have to come in for major overhauls. The well-established concept of causality is one of these. In law, in general terms, a causal chain must link intent (or negligence), action and result. With effectively a third party (AI decision-making processes) now often involved in major decisions on the battlefield and in the boardroom, where do we place human accountability?


If a human acts on advice given by an AI process, and ex post facto that turns out to have been a wrong decision, how does this affect our existing views of negligence, of intent? Who gets held accountable, who gets punished? The offender, the corporation or state, the algorithm? Are we about to see an increase in strict liability legislation and decisions, where accountability will follow a set of predetermined sanctions linked to proscribed results, and how will this change our normal conflict dynamics? 


We may eventually have to agree with Kenneth Payne that “The only way to get satisfactory justice for the performance of machines [is] to hold humans to account” (Payne 230), and the intended Section 230 protection exclusion for AI in US plans and legislative measures (see e.g. First look: Bipartisan bill denies Section 230 protection for AI (axios.com)) seems to be heading in that direction. Existing legal solutions will continue to try to contain the AI conflicts while legislation and case law development try to catch up. I suggest that the field of autonomous vehicles will soon provide us with its own specialized share of these legal conflicts. If an autonomous vehicle injures or kills a pedestrian, where does the liability lie? How does it affect the calculus if the vehicle acted contrary to its algorithm, or if it simply followed the logic inherent in that algorithm? Do we hold the author of the code responsible? The ambulance chaser in his new guise should be an interesting specimen.


What do our legal systems do with a military or law enforcement commander who has the power of oversight in a security process, and who uses her common sense and experience to override an AI suggestion, which suggestion then turns out to have been correct? It seems rather naïve to even expect human capabilities to keep up with AI processes, to monitor and police their workings and output. These questions will have tremendous importance in these legal situations, as well as in the human-machine collaboration debates. If we allow AI to gather information in the practice of law, and we then base our decisions on that information and evidence (a seemingly reasonable and modest point of departure), the next step, again quite logical, would be to let AI make some decisions for us.


In the beginning, maybe a few minor motion applications, minor bail applications, a few administrative decisions. Some of these processes can be codified to a large degree: to get to a specific result the applicant must have these three or those six requirements fulfilled. So far so good. It seems inevitable that this relatively simple and uncontentious phase will be reached, developed and applied. Depending then on the abilities of AI, we should see a slow creep of delegation of these decisions, and it is really not too far-fetched, I believe, to visualize a time where machines will be making important trial decisions, even acting as representatives, court orderlies and so on. If we do reach this stage, the research, document drafting and decisions based on codified, step-by-step processes should be a great boon to the legal profession.
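To make the codification point concrete, here is a minimal, hypothetical sketch of such a rule-driven check. The requirement names and conditions are invented for illustration only; they do not come from the book or from any actual statute or court rule:

```python
from dataclasses import dataclass

@dataclass
class MotionApplication:
    """Invented, simplified inputs for a codifiable minor-motion decision."""
    filed_within_deadline: bool
    notice_served_on_all_parties: bool
    filing_fee_paid: bool

def meets_codified_requirements(app: MotionApplication) -> bool:
    """Grant only if every predetermined requirement is satisfied.

    This mirrors the 'these three or those six requirements' idea in the
    text: the outcome is a mechanical conjunction of explicit conditions,
    with no room for discretion, mercy or nuance.
    """
    return (
        app.filed_within_deadline
        and app.notice_served_on_all_parties
        and app.filing_fee_paid
    )

# Example usage with invented facts:
application = MotionApplication(
    filed_within_deadline=True,
    notice_served_on_all_parties=True,
    filing_fee_paid=False,
)
print(meets_codified_requirements(application))  # False: one requirement fails
```

The point of such a sketch is precisely its limitation: it can decide only what was written down in advance, which is why the discretionary stages discussed below resist this treatment.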


Overburdened legal and justice systems, overstretched legal aid defence attorneys, poor access to legal representation, trial backlogs and a long list of other negatives, factors which in themselves cause and exacerbate everyday human legal conflicts, should be alleviated or become a thing of the past. An improvement in the affordability of legal costs and in access to competent legal representation will in itself have far-reaching and welcome effects on our more formal, legal conflicts. Just as in education, we may see a split between machine-based representation and human-based representation acting as a market force, at least for a while. The legal profession would, in my view, be at the forefront of some of the more important developments in AI conflicts. Some of the building blocks used in the pursuit and practice of justice may not be all that easy to tame and program into a machine. Discretion, reconciliation, mercy, distinctions based on justifiable nuances in case law: these may all remain the particular skill domain of humans.


Here we may, very soon, have a clash between a push towards simplification and over-codification of legal practice and jurisprudence into manageable algorithms on the one hand, and an understanding of the necessary human touch in law on the other, at least as far as the sanction stages of criminal law processes are concerned. As a result of these dynamics, the legal profession may very well be one of the first areas of complex human behaviour where we can field-test human-machine collaboration. Several conflict strategies come to mind, such as ring-fencing certain levels of discretion and nuanced thinking, for example certain judgments, sentencing and so on. And what conflicts may arise from being on the receiving end of such AI or AI-assisted jurisprudence? When humans get an adverse decision in a civil or criminal trial, when liberty or property rights get lost or compromised, when a decision from such a forum becomes inscrutable or difficult to understand, how will people react to such outcomes?


The introduction of AI in legal decision making processes will have to be a very gradual one in order for these results to become better established and accepted by participants. Of particular interest here will be the public acceptance or rejection of such machine-based decisions, even given the presumed objective and unbiased delivery of such processes and decisions. Carefully managed, timed and sequenced introduction strategies may have to be applied in order to slow down actual technological advances. A legal system that is technologically advanced but that does not have general acceptance and credibility may cause serious conflicts of its own, including a few very troubling political ones. If we accept the dictum that law and morality lag behind technology, we are in for some unpleasant conflicts in the next few years. 


As we see with so many other areas of AI conflict, here too AI is not just one of the conflict dynamics but also a part of the story itself. Canadian judges, for example, must be advised by lawyers if AI was used in the presentation and arguing of cases (in the drafting, accessing and arguing of case law, etc.). The legal profession will also, like most other commercial activities, have to go through its initial period of trial and error, with some errors happening during trials, such as the two New York attorneys who ran into disciplinary problems for citing non-existent case law, provided by ChatGPT, during a trial (New York lawyers face sanctions for using Chat GPT for legal research, citing fake cases - Washington Times).


(From Chapter 9 of "Hamlet's Mirror") 

