Why your organisation needs to consider an ethical framework for AI

Artificial intelligence and related technologies such as machine learning are consistently in the media spotlight, capturing the imagination through their enormous potential but also stoking fears about the change they bring.

Steve Bynghall on 23/11/21

While it can be difficult to separate media hype from what is actually going on, virtually all medium-sized and large organisations are already likely to be using AI and related technologies in some way, probably within existing software products.

They are also almost certainly eyeing AI's potential to deliver automation and efficiency in the short term, and possibly deeper transformation and the development of new products and services in the medium to longer term. This may involve applying products and platforms that are already commercially available, or innovating to create something completely new.

Dripping with risk

Whether you think the promise of AI is completely overblown or that it's the most exciting invention since the wheel, we are entering uncharted waters. AI takes us into various grey areas relating both to how it is used and to the data that powers it. It's an area that is evolving rapidly and where experimentation is rife, meaning the emergence of even greyer areas is inevitable.

To say this area is dripping with potential risk is an understatement. While I don't subscribe to the notion that AI and related automation are leading us to a doomed, dystopian future, the propensity for AI to cross a line relating to data privacy or ethical usage is high. The line between what is acceptable and unacceptable can be fine and open to interpretation, especially around the use of individuals' data. There can be a tension between what an AI initiative aims to achieve and its ability to use the personal data required to achieve that aim ethically and responsibly, however laudable the intentions. There is also scope for misunderstanding and misinterpretation; good intentions are not always recognised as such.

No central control over AI

One problem with AI as it is currently used in organisations is that there is little or no central oversight or structured governance to mitigate risks. Different functions are using different products that include AI, various divisions are producing their own apps that might be powered by AI, and other teams might be trying out new AI innovations. Overall, nobody owns AI. While IT might say they do, in reality their role may be peripheral.

For example, let's say an HR function chooses to use a product that utilises AI to filter candidates for positions based on video interviews. In practice, IT may offer some kind of technical due diligence, but it is unlikely to be heavily involved in the decision. Similarly, a client-facing part of your business may engage an external digital agency with AI expertise to create an app for your customers, with the technology function playing only a limited (or non-existent) role in reviewing the activity.

Ethics and risk

Because AI has such huge potential and such a wide range of uses, and because it often relies on data about people, many of the risks around AI are ethical in nature. They span a number of different areas:

  • Data privacy: How do we ensure we protect the privacy of people’s data?
  • Diversity and inclusion: Bias is a real issue in AI that can have serious repercussions, for example, in facial recognition technology.
  • Impact on employment and professions: AI-powered automation can impact specific roles and threaten livelihoods, and can also undermine training within professions as trainees miss out on valuable on-the-job experience.
  • Use cases: What is the aim of your AI? Bots and AI analysing social media profiles have already been accused of undermining democracy.
  • Transparency and accountability: Algorithms are often proprietary and hidden, particularly in products, and who is responsible is not always clear.
  • Future trends: We don’t quite know where AI is headed and, as already noted, more grey areas are inevitable.

The need for ethical standards around AI

AI activity is already covered by regulations such as GDPR, but more AI-specific regulation is inevitable, as signalled by the proposals the EU has released. However, organisations should not wait for this before considering the ethics of AI.

To mitigate the risks and provide clarity for those working with AI, it is essential to set up ethical standards that offer guidance and guardrails for its use. A documented code that has the backing of different stakeholders, establishing clarity and consensus, is a good start.

The ethics of AI is a growing area of interest that has attracted considerable academic attention. Several organisations and bodies have issued, or are working on, their own sets of ethical standards and frameworks, including:

  • The EU’s Ethics guidelines for trustworthy AI
  • UNESCO’s draft recommendations on the ethics of AI
  • The UK government’s range of data ethics and AI guidance

Other organisations are issuing their own ethical frameworks, both for internal use and to showcase their commitments externally, such as Google's AI principles and Microsoft's responsible AI approach.

Creating your own ethical framework

Having an ethical framework for AI is a good starting point for driving responsible use, although ensuring different teams actually adhere to it is another matter. There is some justified scepticism about the ethics frameworks published by technology firms, for example where there are no processes in place to police the framework or where it acts only as a guideline. Nevertheless, producing some kind of written code is a good way to drive awareness and discussion of the ethics of AI, in turn leading to better practices.

There are no hard and fast rules for creating your own ethical standard around the use and development of AI, but using a pre-established framework as a starting point can give you a head start.

Ultimately, a framework needs to:

  • Be easily accessed and understandable
  • Map to or align with existing codes and governance frameworks covering data use, technical standards and other areas which have ethical considerations, such as Diversity & Inclusion
  • Have cross-functional support from major stakeholders to drive accountability
  • Ideally have an associated review process or checklist that helps teams adhere to the guidelines. 

The role in-house legal teams play

In-house legal teams have a critical role to play in defining the ethics of AI. They are uniquely placed to start the right conversations with different stakeholders and to highlight areas of concern. They can draw attention to cases where ethical issues around the use of AI have led to problems, and share what peers in other organisations are doing. They can help scope out what to include in a framework, advise on its articulation and write up the text. They can also help keep a framework or code current in a rapidly changing area.

The ethics of AI is an important area that is guaranteed to attract continued focus. If your organisation isn't considering it, it should be. It's essential to drive awareness and consensus around AI usage and to reduce the related ethical risks, so that AI can be used for good and to add value.
