Generative AI and ChatGPT: navigating the risks

In a four-part viewpoint series I’m going to look at generative AI in detail and the contribution in-house legal teams can make.

Steve Bynghall on 22/03/24

In this first part, I’m going to look at the risks surrounding generative AI. 

ChatGPT and generative AI are transformative technologies that are going to impact the way we work, including the potential for in-house legal teams to develop innovative new approaches and internal services. But they also come with considerable levels of risk, and in-house legal teams have an important role in guiding organisations to minimise their risk exposure when using ChatGPT.

Generative AI (GAI), and more specifically ChatGPT, has received unprecedented attention over the past year or so, from the media, from inside organisations and from regulators too. To some observers ChatGPT is arguably the fastest-growing business app ever, and it looks set to change the way we work, as well as impact other areas of life.

2024 will undoubtedly see GAI seeping into the technology we use every day, particularly in the use of Microsoft products, where generative AI – branded as Copilot – is being embedded into multiple tools.

The risks around generative AI

GAI offers many opportunities and positives, but it comes with multiple risks. Much of this is because of the lack of control we have over the usage and governance of generative AI: nobody within the organisation really owns it, employees can (and will) use it, and there is a general lack of transparency surrounding it. Moreover, the landscape is evolving at breakneck speed and the pace of change is likely to accelerate. To say this is a journey into the unknown is a bit of a cliché, but it is true.

In-house legal teams have an important role to play in helping organisations to navigate the risks, put the right guardrails in place and provide the confidence and governance to allow businesses to innovate and take advantage of this exciting technology.

Before we explore the long list of risks that come with the territory, it is worth defining a couple of terms.

Generative AI can be considered a branch of AI that generates text, images, code and other media, based on requests written in natural language. These tools are usually based on Large Language Models (LLMs) that have been trained to understand and respond with natural language; to achieve this level of understanding, LLMs are trained on vast amounts of data. GPT-4 is an example of an LLM, while ChatGPT is an application that leverages the GPT-4 LLM. Other LLMs exist, and some companies are even developing their own internal LLMs.
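For readers who want to see what the distinction between an LLM and an application that leverages it looks like in practice, here is a minimal, illustrative sketch in Python. It assumes the widely used openai client library; the model name and prompt are purely illustrative, not a recommendation. The "application" is simply code that sends a natural-language prompt to the model and uses the text it generates.

```python
# Minimal illustration: an "application" sends a natural-language prompt
# to an LLM and receives generated text back.
# Assumes the openai Python client library; model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",  # the underlying Large Language Model
    messages=[
        {"role": "user", "content": "Summarise the key clauses of a standard NDA."}
    ],
)

print(response.choices[0].message.content)  # the generated text
```

ChatGPT is, in effect, a much more sophisticated version of this pattern wrapped in a chat interface, which is also why anything typed into the prompt leaves the organisation's control (see the data privacy risk below).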

Let’s explore some of the common risks associated with GAI.

Data privacy

One of the biggest areas of concern about generative AI is data privacy and the protection of sensitive information. When you enter information into a prompt on ChatGPT – for example with an instruction to rewrite a report or clean up a transcript – you are effectively submitting that information so it can be used to further train the model. This means you may breach GDPR, breach a duty of care to safeguard employee and client data, or share confidential information. For example, Samsung banned the use of ChatGPT after sensitive internal code was accidentally shared with the tool.

Protection of intellectual property

Another associated issue is generative AI’s lack of protection for intellectual property and copyrighted material. AI and copyright already involve multiple grey areas, and GAI only adds to them, with LLMs potentially being trained on and then regurgitating copyrighted material, from passages of text to lines of code; the source is also seemingly never revealed.

Unintentional plagiarism

Linked to the lack of protection of intellectual property, anyone taking information gleaned from a response from the public version of ChatGPT and then republishing or using it – such as incorporating code into a software build – might unwittingly be committing plagiarism. Here the liability is unclear.

Accuracy

Large Language Models (LLMs) are not infallible and come with the potential for factual errors. Much has been written about ChatGPT’s potential for “hallucinations”, where incorrect facts are presented as true. Some tech experts have speculated that while LLMs will likely get more accurate, errors will always be an inherent risk. A related issue is that ChatGPT, for example, presents facts so convincingly and confidently that we want to believe them. Using information from ChatGPT in presentations or articles, or relying on it for decision-making, is an ongoing risk that can only really be mitigated by educating users about the errors GAI can make.

Ethical considerations

Because generative AI is so powerful, there is the potential to use it for a huge range of use cases and processes. Organisations can also develop new products, services and approaches. But its use can infringe data privacy, and it can even be exploited by bad actors such as cyber criminals. The need to use generative AI in an ethical way has never been more important, with some organisations creating ethical frameworks and principles for AI use. These are welcome, but the risk of GAI being used in unethical ways remains.

Cybersecurity

There are fears that generative AI will create new cybersecurity risks on several different fronts:

  • from employees using new generative AI tools and being more vulnerable to attack as they get used to them;
  • from organisations deploying new generative AI applications that might open new vulnerabilities;
  • from bad actors using generative AI itself to create more sophisticated phishing emails and other similar scams.

At the same time, it is hoped that generative AI will also help identify cybersecurity threats, so GAI may prove to be a double-edged sword in this respect.

Bias

LLMs, and AI in general, are also open to multiple types of bias in the responses they give, for example undermining diversity and inclusion. Bias can emerge from who has built the model, the data set and content it has been trained on, and even the prompts it has learnt from. Bias may not be deliberate, but it can lead to risks – from generating stereotypical images to undermining fairness in the selection of recruitment candidates. The lack of transparency (see below) makes bias hard to detect.

Knowledge management and document management

GAI can support in-house legal teams through sophisticated document automation that has the potential to build model documents such as employment contracts and non-disclosure agreements. We’re going to cover these opportunities in a later post in this series.

However, there is also a risk that business users will start to use ChatGPT to build their own contracts or documents that need a legal review, effectively bypassing the in-house legal team. For example, it only takes a few seconds for generative AI to build what looks like a sophisticated NDA tailored to a particular organisation or client. 

Impact on training and learning 

The automation of simple tasks within certain professions – including the legal profession – has led to concerns that people starting out on their careers miss out on valuable learning and experience by not “getting their hands dirty” with some of the more basic, routine work. The use of generative AI seems likely to reignite that debate as more and more basic tasks are automated, with new entrants to a profession perhaps missing out on the work that teaches them to appreciate the finer nuances that support problem solving and creative thinking.

Impact on other processes 

Generative AI has a potential impact across a number of other organisational processes too, for example how we conduct meetings, write reports for clients, and even how we draft emails. The impact will hopefully be positive, but there could also be negative consequences that are unintended and hard to predict.

Impact on roles

Of course, a key risk is the impact of generative AI on different roles, leading to potentially substantial job losses or the need for significant reskilling. The extent to which this will happen is still hard to determine, but McKinsey estimates that up to 70% of current business activities could be automated by 2030. The perception that there will be job losses can also affect attitudes toward using GAI.

Lack of transparency 

One of the problems with LLMs and other AI-driven products – especially where the technology and algorithms are proprietary – is a lack of transparency about the algorithms, the way the model has been trained and the sources that have been used. This makes it very difficult to be sure about the accuracy and reliability of responses, whether data privacy or intellectual property has been protected, and the level of bias involved.

Unrealistic expectations

The debate about AI has tended to be couched in either utopian or dystopian terms, bringing a lot of noise to discussions about its use. While the use of GAI in organisations has focused on practical issues, such as how we can actually make it work and navigate the risks, there are still unrealistic expectations about what GAI can do and how quickly it can do it, ignoring the foundational work that is sometimes required to reduce risks. It’s important that everyone goes into using GAI with their eyes open, with a realistic view of what it can deliver and the work required to get the very best out of it.

Changing regulatory and legal landscape 

All of the above is subject to change, particularly as the regulatory and legal landscape continues to evolve. Governments, the EU and data authorities are all taking a keen interest. For example, there is the EU AI Act as well as a proposed UK government framework. At the same time, various legal cases in progress may further change the situation. Ensuring that organisations keep abreast of these changes is key, and it is an area where in-house legal teams play an obvious role.

That’s a lot of risks!

All of the above might look like a slightly overwhelming list of risks, but a lot of progress is already being made in reducing them, in particular with organisations creating their own private versions of ChatGPT, or even proprietary LLMs, that sidestep some of the concerns over data privacy and protection of intellectual property.

Employees and business functions are also getting to know the technology much better, and a lot is being learnt. Regulators, too, are starting to set out their stall.

But there is still a lot of work to do, and in the next post I’ll explore the significant contribution in-house legal teams can make to maximise the use and minimise the risks of this transformative technology.
