In this last post in the series, we're going to look at some of the main trends and players in the space that we think are worth keeping an eye on.
However, this post comes with a major caveat: it is very likely to go out of date, as events will almost certainly have moved on by the time you read this.
Generative AI is a transformative technology that is going to impact the way we work, including the potential for in-house legal teams to develop innovative new approaches and internal services.
But it also comes with considerable levels of risk, and in-house legal teams have an important role in guiding organisations to minimise their risk exposure in using ChatGPT.
In this four-part ViewPoint series, I'm looking at generative AI in detail and the contribution in-house legal teams can make. In the first part we looked at the risks, and the second covered how in-house legal teams can establish governance. The third part looked at how generative AI can support in-house legal teams. In this final part we look at some of the trends in this fast-evolving space.
Generative AI has emerged as one of the most significant technology trends in recent times, capturing the imagination of the media, the business world and the public; ChatGPT has generated huge numbers of headlines and has been recognised as one of the fastest-growing applications of all time.
AI is very likely to continue to dominate the headlines as the technology evolves at breakneck speed, with businesses investing heavily in the sector, new products evolving and emerging, a regulatory landscape that remains volatile, and wider debates about the future direction and ethics of AI.
The generative AI landscape
In considering movement across the AI landscape, a useful article from McKinsey defines the generative AI value chain, providing an overview of how the AI model and product landscape is evolving:
- Specialised hardware such as accelerator chips optimised to support AI.
- Cloud platforms that support AI.
- Foundation models on which generative AI applications can be built, for example OpenAI’s GPT large language model.
- Model hubs and machine learning operations (MLOps) that provide the tools to curate, build, manage and fine-tune applications or the foundation models themselves.
- Applications that use the foundation models to deliver AI-powered capabilities e.g. ChatGPT is an application that is built on top of the GPT model.
- Services around leveraging generative AI, such as training and development.
OpenAI and ChatGPT
Since late 2022 the surge in interest in generative AI has been led by OpenAI with the release of the GPT series of Large Language Models, first GPT-3.5 and then GPT-4. The most popular AI applications, ChatGPT and the DALL-E AI image generator, are based on OpenAI's models.
At some stage GPT-5 will be released, and there is also a mysterious “Q*” project which is rumoured to have resulted in a significant breakthrough in AI technology.
Microsoft and Copilot
To a certain extent the events at OpenAI and Microsoft are intertwined, as Microsoft has invested $10bn in OpenAI. A representative of Microsoft also now sits on the OpenAI board in an observer role.
Microsoft continues to promote products and “model hubs” that leverage the GPT models, including services within Azure that allow organisations to create their own applications on top of them.
Copilot is the overarching branding for the various AI-powered assistants that will start to appear across different Microsoft products; the ChatGPT-powered search experiences Microsoft Bing Chat and Bing Chat Enterprise are also being rebranded as Copilot. A new Copilot Studio is also encouraging developers to create their own custom versions of Copilot that are specific to their organisation. Copilot will continue to evolve and expand during 2024.
In all Microsoft’s developments it means most organisations are now able to effectively create their own private instances of ChatGPT that ensure data and intellectual property is now protected, side stepping some of the multiple risks associated with using the public version of ChatGPT.
Google Bard and Gemini
Google has been in the AI space for well over a decade, chiefly through the activities of DeepMind, the AI research lab which was acquired by Google in 2014 and currently operates as a subsidiary. It has been developing various large language models for a number of years, including LaMDA. Google announced the launch of Bard, which users can now try, as a direct response to ChatGPT in February 2023, although its release received an underwhelming reception from some industry observers.
In December 2023 Google released Gemini, a new AI language model which is being trumpeted as its “most capable yet” and more powerful than OpenAI's GPT model. Gemini will power various products, including a new improved version of Bard and, inevitably, Google's search. One way Gemini seeks to differentiate itself from other AI models is its multimodality, learning from non-text sources such as images, video, audio and code. It also comes in three flavours: Ultra for highly complex tasks, Pro as a highly scalable model, and Nano for performing AI tasks on devices such as mobile phones.
Other foundation and large language models
It’s not just Microsoft and Google that are behind foundation models; many others exist or are being worked on. For example, Elon Musk has founded his own xAI company, which is developing its own model and an accompanying bot called Grok. Some large companies that provide AI services and products, including Thomson Reuters and EY, are also developing their own proprietary LLMs. This will be a highly active space in the years to come.
Applications
Perhaps where we are going to see the most movement is in the development of applications that use generative AI. This applies both to existing products that are having generative AI added, whether through proprietary foundation models such as Adobe’s Firefly in products like Photoshop or by utilising ChatGPT, and to new products coming to market.
The rapid evolution of the application landscape can be seen in what’s happening around products marketed to the legal sector, which we covered in the last ViewPoint. A similar process will happen in applications marketed to other professional groups, areas of software and industry verticals.
Formation of the AI Alliance
Of course, the development of generative AI at OpenAI, Microsoft, Google and other providers can be seen as a race to potentially make billions of dollars. Above this sits a higher-level philosophical debate about whether AI development should be more open source and transparent, a move which advocates argue not only limits some of the risks but also allows for wider innovation.
In December 2023 a new AI Alliance group was formed, featuring over fifty leading organisations including Meta, IBM, Dell, Sony, Intel and several start-ups and academic institutions. It advocates for an open-source approach to AI development, pointing towards a growing debate about the future direction of AI. IBM, for example, is a longer-standing supporter of open-source development than Microsoft, although its stance may also be informed by commercial interests.
Regulatory landscape
As well as the emergence of new models and products, the regulatory landscape around generative AI is due for further change. A hugely influential piece of legislation will be the EU’s AI Act, which has gone through various rounds of discussion in coming to an agreement. The first drafts were actually written prior to the release of ChatGPT and the current evolution of generative AI, so to a certain extent regulators have had to play catch-up to make sure the Act relates to current developments.
There had been some differences of opinion around the extent to which those who produce foundational generative AI models should be allowed to self-regulate or should comply with different rules, as well as the extent to which AI can be used by law enforcement organisations to identify individuals in public spaces. An agreement was finally reached on the AI Act in December 2023 that:
- establishes different classifications of risk for AI
- provides some exceptions for law enforcement
- clarifies special rules for foundational models
- establishes a new governance layer for AI
- establishes penalties
- requires a fundamental rights impact assessment for high-risk AI systems, which will need to be registered in an EU database, with citizens who are exposed to them having the right to be informed
- introduces measures to support innovation.
The Act still needs to be voted on in the European Parliament, and it may not come into law until at least 2025.
Meanwhile in the UK, the government has created a proposal for AI regulation that is designed to be “pro-innovation” and sets out five principles:
- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress.
The paper is vague about when this will be taken forward into legislation, adopting what is essentially a “wait and see” attitude: the government says it does not want to stifle innovation, potentially positioning the UK as a place where AI innovation can thrive.
Meanwhile, there are also movements in the US, where President Joe Biden has signed an executive order that:
- Creates new security and safety standards
- Establishes guidelines to protect consumers
- Defines best practices to prevent AI from being discriminatory, including how AI systems relate to the justice system
- And more.
Conclusion
The introduction of generative AI is a significant technology trend where events are moving quickly. Over this series we’ve covered in depth the risks and the role in-house legal teams have to play, as well as the opportunities to drive productivity. I think that, whatever happens, generative AI is here to stay and will influence how we work. In-house legal teams have an important voice that needs to be heard in the generative AI journey in the short, medium and long term.