
INTRODUCTION 

Early 2023 saw artificial intelligence step onto the world stage with renewed energy and focus, this time bringing real, everyday benefits to the general public: help with difficult work tasks, personal and professional organization, and the effortless drafting of everything from novels to term papers, business plans to legal documents. The visible face of this technology was ChatGPT (Chat Generative Pre-trained Transformer), a chatbot that immediately became the fastest-growing app in history. Using deep learning and natural language processing, it responds to simple prompts from its human interlocutors, an unprecedented tool that had most of us experimenting with it and many of us incorporating it into our everyday personal and business use. For some, even in this brief space of a few months, the technology has already become indispensable, and a world without ChatGPT (or any of its growing list of competitors) is becoming increasingly difficult to imagine.


But, as we see with the development and application of AI and other emerging technologies, ChatGPT is not all benefit and gain. There are a few very clear risks and conflicts to be aware of, and to manage effectively, especially in the modern workplace. In this article we take a brief look at a few of these new technological challenges. In doing so we deal with the tension between an anti-tech, fearmongering approach on the one hand and, on the other, the effective management of these technologies so as to gain maximum benefit from them while avoiding the conflicts and adverse consequences that may flow from their incorrect use. This understanding and management of AI conflicts is fast becoming one of modern management's most important challenges, requiring new skillsets and approaches. We will not discuss the important question of job losses or gains here, as I deal with that in greater detail elsewhere. Here we simply deal with the technology as it is, as a reality in our work lives.


A FEW PERIPHERAL CONSIDERATIONS 

We need not pause here to extol the wonders and virtues of AI in general and ChatGPT and related products in particular; they are already in ubiquitous use, and we are discovering the boundaries and new benefits of this technology as we go. Before we get to the main potential conflicts and risks inherent in the workplace use of ChatGPT, I believe that employers and management team leaders should, by way of introduction, bear in mind a few other consequences flowing from the commercial use of these models that may not be all that apparent in the first blush of this exciting new technology. While it is of course still extremely early to assess the full impact of this innovation, a few potentially troubling consequences can already be traced. Most of us have already run into difficulties with the accuracy of some of these replies. The way in which these models gather and produce their work makes it, at least at this stage of their development, rather inevitable that some results will contain errors and inaccuracies, often delivered with seemingly great authority and detail. We have already seen a number of lawyers getting into hot water for submitting inaccurate legal arguments to courts, and several other industries are experiencing similar problems. The modern employer will have to address this with sufficient integrity in its data production or acquisition processes, a vetting or cross-checking process, adequate training of the personnel involved, and the clear marking of work or decisions generated or influenced by AI input.


A second area that now requires some monitoring and creative management is the effect this technology may have on certain areas of production and creativity. I look forward to studies showing how the availability and use of these systems affects our creative and productive processes, our thinking and decision-making abilities, and the results of those processes. Should we accept the argument that ChatGPT will free management and creatives from humdrum drudgery, allowing them to focus on other, more productive and lucrative work, or will the knowledge that we can leave these processes to AI stunt our imaginations and creativity? We will undoubtedly become more free, but free to do what?


Thirdly, and related to legal and other compliance risks, is the ethical question of the workplace use of these models. As we can see from complaints worldwide, the work we receive may infringe the copyright and other intellectual property rights of the creators of parts of that work. This places an additional responsibility on management to ensure ethical workplace practices in the use of ChatGPT.


I also anticipate wide-ranging consequences in other related areas that we may not be focusing on as yet in our haste to keep up and compete, such as the valuation of our work, the methods of remuneration we use, the structuring of management teams, and collaboration between humans and machines. I deal with those crucial issues at greater length in my upcoming book Hamlet's Mirror: Conflict and Artificial Intelligence. There are also complex risks involved in the use of ChatGPT in modern conflicts such as phishing, the creation of malicious code and synthetic reality abuses (the so-called "deep fakes"), but those aspects require a separate focus.


We now turn to the main focus of this article: the dangers inherent in the indiscriminate use of ChatGPT in the workplace, and a few remedies that employers can apply to manage and limit those risks.


THE USE OF CHATGPT IN THE WORKPLACE 

We do not need a particularly advanced knowledge of how AI in general and ChatGPT in particular work in order to understand and manage the risks inherent in this application. All we need to understand for present purposes is that these processes run on huge amounts of data (information, if you prefer) when handling our instructions and prompts. If we ask the program for a business plan for our engineering business, or a strategy to increase sales in a particular area, the process scours the data on that topic that it can access and then provides us with an answer based on its algorithms, its programming, the data available to it and the parameters of our prompts. We receive, in simple terms, an amalgamation of ideas, gathered from other work, designed to comply with the parameters of our prompts.
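By way of illustration only, a minimal sketch (in Python, using the OpenAI client library as it stood in mid-2023; the model name, prompt and API key are placeholders, not a recommendation) shows how little stands between an employee's keyboard and the external service:

```python
# A minimal, illustrative sketch of a workplace prompt sent to ChatGPT
# via the OpenAI Python client (as at mid-2023). Everything typed into
# the prompt leaves the organization as part of this request.
import openai

openai.api_key = "YOUR-API-KEY"  # credentials for the external service

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {"role": "user",
         "content": "Draft a three-year business plan for an engineering firm."}
    ],
)

# The answer is an amalgamation produced from the model's training data,
# shaped by the parameters of the prompt above.
print(response.choices[0].message.content)
```

The point of the sketch is not the code itself but the direction of travel: the prompt, and anything pasted into it, is transmitted to a third party's servers before any answer comes back.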


The source material used by ChatGPT to provide you with your quick and easy answer is proving to be very problematic now that we are starting to understand the consequences of some of our new toys, with a few very complex conflicts heading our way in the near future. While OpenAI, the creators of ChatGPT, undertake not to sell your information, the very mechanism by which these systems learn and produce means that your information can become part of their digital experience, be used in future projects, or be accessed in ways that would concern most users. In the US we have already seen the start of litigation based on unauthorized AI access to the work of others, and for our specific point of discussion here this becomes very relevant to the employer's best interests.


Employees need specific solutions for their operational responsibilities. They do not need general advice; they need a specific job done. They need a three-year business plan for Acme Company, particulars of claim for Mrs. Ndlovu's divorce summons, or various construction options for the floorplan of the new mall. In order to get that result, they need to enter the specifics of their task into the process. They feed the information in, it gets processed, and a result comes back. This often happens with or without management's approval or knowledge, and of course management uses these systems itself.


The problem snaps into clear focus once we understand how these machines, these processes and the companies operating them store and use our data. The data used to give you your result was probably someone else's sensitive and confidential data. The data that your employees shovel into the mouth of the machine daily now forms part of the data used to answer other people's enquiries. Your data becomes part of that system's expanded range of knowledge; it becomes a form of digital experience for that system. Companies currently handle the storage and dissemination of that data in different ways, but as we can see from the regulatory and litigation difficulties experienced by Google, Meta and a range of other data platforms, all is not well with such storage and dissemination. In addition, then, to the problems we touched on earlier relating to the accuracy of these data manipulation processes, we now come to the really important issue flowing from our workplace use of these systems, sitting there in plain sight.


Given the faceless nature of ChatGPT we hardly pause to wonder where our data ends up, and for how long. We enter sensitive data into "our" computer systems without considering the downstream consequences of the practice. Our sensitive, confidential data becomes part of the building blocks of the answers available to the AI system. In a way we contribute, with our data, to the next level of responses the machine has access to. Needless to say, this has far-reaching consequences for the employer, for workplace information systems and for the handling of clients' confidential information. It raises current and future legal conflicts involving intellectual property concerns, data and privacy concerns, confidentiality obligations, professional accountability and liability, and a range of specific offences in terms of existing legislation (in South Africa we can look at the POPIA obligations, the Consumer Protection Act and a long list of others).
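To make the risk concrete, here is a minimal, illustrative sketch of a redaction pass run before any text leaves the organization. The patterns, labels and example prompt are assumptions made for the illustration; a real deployment would need a far broader pattern set, and redaction alone does not remove all risk:

```python
# An illustrative redaction pass applied before text is sent to an
# external AI service. The two patterns below (email addresses and
# 13-digit South African ID numbers) are examples only; names and many
# other identifiers would still pass through unredacted.
import re

REDACTION_PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[ID-NUMBER]": re.compile(r"\b\d{13}\b"),
}

def redact(text: str) -> str:
    """Replace every match of each pattern with its placeholder label."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(label, text)
    return text

prompt = ("Draft particulars of claim for Mrs. Ndlovu, "
          "ID 8001015009087, contactable at ndlovu@example.com.")
print(redact(prompt))
# -> Draft particulars of claim for Mrs. Ndlovu, ID [ID-NUMBER],
#    contactable at [EMAIL].
```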


Once we sit with the cold realization of how these systems work and what we are in effect doing when we type our information into ChatGPT, the ripples extend even wider, with the potential for some complex professional indemnity and disciplinary processes, insurance claims and evidence-gathering conflicts on the horizon. Still not convinced of the risks? Companies like Google have started banning or limiting the use of these models at work, so this is certainly not a baseless concern. So do we stop using ChatGPT?


A FEW SUGGESTED SOLUTIONS 

Some of the questions, risks and problems touched on here can be viewed as the growing pains of an exciting, life-changing technology, something to be expected and something that will in time find its own level and solutions. Extensive global regulatory efforts are ongoing, with comprehensive and complex policy and legislative frameworks being designed to deal with these modern technological challenges. Professional bodies have started engaging with these risks, and it is good to see even the companies involved in the creation, development and operation of these systems increasingly joining the effort to deal with them practically and effectively. This includes limits on the accessing of our private data and on its sharing and storage. A few of the answers being sought are themselves technological. The next year or two should bring a measure of effective resolution to all or most of these pressing concerns.


In the meantime, though, the modern business leader has a few practical measures available to address the current situation, including the following:


(1) Ensure workplace clarity as to how these systems work. So far the majority of risks are caused by ignorance, not malice. Once employees understand how the systems work and how they expose your sensitive information, a large part of the problem is often addressed.


(2) Limit the workplace use of these systems to the teams or individuals that need them. Clear channels of communication limit errors, create greater accountability and make effective data and risk management more robust.


(3) Have clear, written policies on the use of these systems: who may use them, when and for what, and what data may be accessed and used. Where relevant, make use of layered information divisions, regulating employee access to information and hence what reaches these systems (a minimal sketch of such a policy gate follows this list). Design clear systems that ensure accountability, effective auditing and the best use of information. Prevent the formation of so-called black-box data systems, where decisions are made by, or on the strength of advice received from, AI systems in ways that make auditing or management more difficult. Use modern dispute system design to synchronize your workplace, preventing departments from working against each other or unproductive work becoming part of the new system. These policies should include suitable sanctions for transgressions, and should be fully integrated into the wider HR and workplace discipline and conflict codes.


(4) Clearly mark work that has been generated or influenced by these AI systems, and take industry-specific precautions to ensure ultimate management control and accountability. Train the relevant personnel in the level of human-machine collaboration appropriate to the workplace at that time, and update this as required.


(5) Ensure that clients are properly advised of these processes and of the risks that may be inherent in their particular circumstances. Be transparent about the use of AI processes and their limitations.
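As a closing illustration of points (2) to (4), here is a minimal sketch of such a policy gate. The team names, log format, provenance marker and the call_ai_service wrapper are all assumptions made for the example, not a prescription:

```python
# An illustrative policy gate for workplace AI use, combining an
# allow-list of approved teams (point 2), an audit-log entry to avoid
# black-box decisions (point 3) and a provenance marker on AI-assisted
# output (point 4). All names and structures here are assumptions.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)

APPROVED_TEAMS = {"legal-drafting", "marketing"}  # example allow-list

def call_ai_service(prompt: str) -> str:
    # Placeholder standing in for the real API call.
    return f"(model response to: {prompt})"

def submit_prompt(team: str, user: str, prompt: str) -> str:
    if team not in APPROVED_TEAMS:
        raise PermissionError(f"Team '{team}' is not approved for AI use.")
    # Keep an auditable record of who asked what, and when.
    logging.info("AI request | team=%s user=%s prompt=%r", team, user, prompt)
    draft = call_ai_service(prompt)
    # Mark the output so downstream readers know AI was involved.
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return f"[AI-ASSISTED DRAFT - REQUIRES HUMAN REVIEW - {stamp}]\n{draft}"

print(submit_prompt("legal-drafting", "a.smith", "Draft a lease addendum."))
```

The design point is simply that access control, auditing and provenance marking can live in one small layer between the employee and the AI service, rather than being left to individual discretion.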


CONCLUSION 

Technology has always demanded proper, informed and responsible use, from the first horse-drawn plough and the musket through to the internet and cell-phone technology. In those instances, though, we remained in control of the information and processes we were dealing with. AI brings new dimensions to our workplace frontiers, and with them new risks and new conflicts. There will be some very expensive lessons to learn in this process of growth and adaptation. We are encountering new boundaries and challenges to data privacy and security, and we need to adapt to these workplace challenges effectively and urgently.


(Andre Vlok can be contacted on andre@conflictresolutioncentre.co.za for any further information.)


(My book Hamlet's Mirror: Conflict and Artificial Intelligence is due for publication at the end of August 2023, and will be available on Amazon, directly from the publishers or from myself. Suitable adverts will provide the details.)

Andre Vlok 

August 2023
