The essential role in-house counsel can play in supporting the governance of generative AI

In a four-part Viewpoint series I’m going to look at generative AI in detail and the contribution in-house legal teams can make.

Steve Bynghall on 05/04/24

In the first part we looked at risks, and this second part covers the essential role that in-house legal teams have in establishing the governance and guardrails surrounding generative AI.

ChatGPT and generative AI more broadly are transformative technologies that are going to impact the way we work, including the potential for in-house legal teams to develop innovative new approaches and internal services. But they also come with considerable risk, and in-house legal teams have an important role in guiding organisations to minimise their risk exposure when using ChatGPT and similar tools.

What do in-house legal teams need to do around generative AI?

Last time we saw how generative AI is changing the way we work, but also that it carries multiple risks that need to be managed in a way that minimises exposure without stifling innovation and adoption. That’s a difficult balancing act, and one that many organisations are still struggling with. Increasingly, banning all use of generative AI is not seen as a sustainable approach; it is like trying to stop a juggernaut, and in my view the middle ground between risk and reward is the way to go.

In-house legal teams have a significant role to play in minimising risks, but this also needs to be an ensemble effort with colleagues from IT, HR, Operations, Business Lines, Security and Leadership functions. Taking this cross-functional view, and accepting that governance requires consensus, is one of the keys to success.

Below are some of the areas where I believe in-house legal teams can and should make their voice heard in supporting the governance of generative AI and minimising its risks.

1. Being a sensible voice in the rush to use generative AI

When it comes to AI, and generative AI in particular, there is a tendency towards hyperbole that can drift into utopian-type discussion. At the same time, there can be a distinctly dystopian flavour to some of the predictions about where AI is heading, up to and including forecasts of the end of humanity.

Although there are huge benefits as well as risks, it is important to plan for generative AI with a level head and a clear picture of both the opportunities and the issues, and, perhaps most importantly, a realistic view of what needs to be done to get the best out of generative AI. For example, there may be foundational work and preparation needed to make generative AI work for your organisation.

Generative AI is an area of frantic activity and strong opinions. In-house legal teams can play an important role by being a sensible, level-headed voice that helps steer the best path.

2. Contributing to ethical and strategic frameworks

Generative AI has many exciting use cases, but there are risks it will be used in ways that compromise standards, particularly around data privacy and related compliance. Tech giants such as Microsoft and Google have already issued their own ethical frameworks and principles for the use of generative AI, and other major organisations have published their own standards. While many of these are intended to be consumer-facing, a strategic or ethical framework that acts as a reference point for more granular processes and subsequent guidelines can help steer wider use and spread awareness of desired usage and risks. Again, in-house legal should have a seat at the table in setting the ethical dimensions of generative AI.

3. Influencing and reviewing usage policies and guidelines for employees

There will be a need for robust usage policies and accompanying guidelines for employees to minimise risks and encourage the best use of generative AI, and in-house legal teams have an obvious role in drafting and reviewing these. In the early days of ChatGPT some legal and compliance teams forced through a ban on using it, but many organisations now accept that this approach is neither practical nor easy to enforce. Usage policies and guidelines may also look very different for a public LLM like ChatGPT compared to a private deployment that keeps your data within your own Microsoft tenant, for example.

As use cases shift, products evolve and the regulatory landscape changes, perhaps quickly, usage policies and guidelines will need to be reviewed and revised regularly.

4. Influencing procurement policies for new AI products

Most organisations will already have mature procurement processes that cover due diligence on suppliers and their products, usually with a checklist of technology requirements covering aspects such as information security. With the rush to build generative AI into software, procurement processes will need to be reviewed to cover generative AI, its use and its potential risks. For example, the public version of ChatGPT would likely fail a due diligence review at many organisations. The in-house legal team will have a role in shaping and reviewing any checklist and the related process.

5. Reviewing new AI software development (in-house or custom)

As well as a checklist for procuring products, a similar approach will be needed for new AI software development, whether commissioned as custom development or built in-house. Some of this development will be for internal use, while other efforts will create commercial products and services.

The extent to which organisations are creating their own AI solutions varies dramatically, with some, such as Thomson Reuters, even creating their own Large Language Model. Microsoft recently announced the launch of Azure AI Studio, which will open up the opportunity to create custom generative AI solutions to many more organisations.

6. Reviewing commercial contracts that include AI

Of course, procuring or creating new generative AI products will involve the review or creation of commercial contracts that will need to come under the scrutiny of the in-house legal team. There are many elements to consider here, including liability and IP protection, some of which are issues that are still evolving.

7. Ensuring data privacy is protected

One of the major risks in the use of generative AI is breaching data privacy compliance, for example by entering private data into the public version of ChatGPT. If a person wanted a report containing Personally Identifiable Information (PII) to be rewritten, translated or summarised, and pasted the report text into a prompt in public ChatGPT, it could well breach your organisation’s commitments to data privacy and the GDPR.
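
To make this risk concrete, below is a minimal, illustrative sketch in Python of the kind of guardrail a technical team might put in front of a public LLM: a redaction step that masks obvious PII before any text leaves the organisation. The patterns shown and the send_to_llm function are hypothetical placeholders rather than a real API, and regex-based redaction is far from exhaustive; a production system would lean on a dedicated PII-detection service.

    import re

    # Hypothetical, illustrative patterns; real PII detection needs a dedicated service.
    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    }

    def redact_pii(text: str) -> str:
        """Mask obvious PII before the text is sent to any external service."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    def send_to_llm(prompt: str) -> str:
        # Stub standing in for a call to an external LLM API.
        return "(LLM response for: " + prompt[:40] + "...)"

    def summarise_report(report_text: str) -> str:
        # Redact first, so PII never appears in the outbound prompt.
        prompt = "Summarise the following report:\n" + redact_pii(report_text)
        return send_to_llm(prompt)

    print(summarise_report("Contact Jane Doe at jane.doe@example.com or 020 7946 0958."))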

AI-powered products or services developed or commissioned by your organisation that rely on personal data could also infringe data privacy rules.

This is an area where generative AI presents a particularly persistent risk, and in-house legal teams can play a role in continuing to stress both the risk and what needs to be done to ensure compliance. One area where progress is being made is the creation of AI services where all your data stays within the organisation, for example by creating what is effectively your own private instance of ChatGPT.

8. Keeping an eye on the regulatory and legal landscape

The regulatory and legal landscape around generative AI is likely to keep changing, particularly as the services and products on offer continue to transform rapidly. Regulators are hovering, and the direction of proposed legislation such as the EU AI Act remains unclear at the time of writing. Court rulings may also influence the landscape. In-house legal teams will be expected to keep up to date with developments and to advise accordingly.

Generative AI is effectively a multi-headed beast that will impact various areas of the law. An article published by LexisNexis lists the following areas that teams will need to monitor: commercial contracts, consumer products, products liability, IP and copyright, privacy and data security, bankruptcy, antitrust and employment law.

9. Reviewing use cases for AI and providing guardrails where necessary

There are multiple use cases for generative AI, and it is an area of continuing innovation and evolution. Some use cases may have grey areas around ethics or compliance where risk is heightened. There is potential value in setting up a review and approval process for use cases involving generative AI that includes a risk assessment, which could also ascertain whether any additional guardrails or guidance need to be put in place. A review process may be needed anyway to prioritise budget and effort around generative AI.

10. Reviewing products for in-house legal use

Generative AI has exciting potential to support the work of in-house legal teams and automate some work, particularly in areas such as creating contracts and other model documents. There will likely be a growing number of products aimed at the legal market, but it’s also going to become easier and easier to develop custom solutions. We’re going to cover this area in more detail in the fourth part of this series.

Conclusion

In-house legal teams can play an essential role in minimising risks around generative AI, in ways that don’t necessarily stifle innovation or opportunities to make productivity gains. In the next part of the series, I’ll look at the landscape of generative AI products and services, which is both fast moving and potentially confusing.
