
Artificial intelligence in your business: Data protection

How do you ensure that any artificial intelligence (AI) you are using or developing throughout your business has data protection 'baked-in'?

With novel and developing technologies, there is often a multitude of factors and risks to consider when deciding how they should be used across a business. AI is no different. In this article we will explore some key data protection implications of using AI in your business.

AI and data protection

Some AI applications do not involve personal data. However, many do: examples include automating a decision on whether to grant a loan to an individual, or whether to take a job applicant to the next stage of the recruitment process. If your use of AI involves personal data, you should be aware of the potential data protection implications.

Data protection needs to be considered throughout the whole lifecycle of an AI system. Even where no personal data is involved in the design and development of the AI system, it’s important to build in ‘data protection by design and by default’ principles at that point, particularly if personal data will feature later on in the lifecycle. This is to ensure that, for example, humans can provide meaningful oversight of the AI system, that individuals’ data protection rights can be met and that the risk of bias is minimised.

Key considerations for organisations using or considering using AI

At each stage of using AI, you should be ‘baking in’ data protection compliance, and, in almost all cases, ensuring that a data protection impact assessment (DPIA) is carried out. A DPIA involves identifying any high risks to individuals’ fundamental rights, including privacy rights and other human rights, and then mitigating and managing those risks. Possible risks are outlined below.

Accuracy

The accuracy principle in data protection requires personal data to be accurate and, where necessary, kept up to date. Depending on how the personal data is being used, if it's inaccurate, every reasonable step must be taken to delete or correct it without delay. The statistical accuracy of an AI system is not the same as the data protection accuracy principle. In AI, accuracy refers to the performance of the AI system itself, that is, the proportion of answers the system gets right or wrong, not the accuracy of the personal data it processes.

AI works on the basis of probability. It doesn't necessarily have to get it right 100% of the time. You need to assess the impact of the risk that the AI isn't always accurate, mitigate that risk and manage any residual risk. You need to be very clear that the output provided is a prediction, not a certainty. One way of doing that is to produce and make available confidence scores: for example, stating that the percentage likelihood of a particular outcome occurring is 85%. Once the AI system is in use, you need to evaluate its statistical accuracy throughout its lifecycle, because it can change over time.
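
To illustrate, here is a minimal sketch of how a confidence score might be surfaced alongside each prediction. It assumes a scikit-learn style binary classifier trained on synthetic stand-in data; the model, data and wording are illustrative assumptions, not a recommended implementation.

    # Minimal sketch: report a probability alongside each prediction so the
    # output is clearly a prediction, not a certainty. Synthetic data only.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic records standing in for real application data.
    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Surface a confidence score (e.g. "85% likely") rather than a bare yes/no.
    for prob in model.predict_proba(X_test[:3]):
        print(f"Predicted likelihood of a positive outcome: {prob[1]:.0%}")

Re-running this kind of evaluation on fresh data at regular intervals is one way of checking whether the statistical accuracy is drifting over the system's lifecycle.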

Explainability

You need to be able to explain how your AI works. That involves providing meaningful information about the logic involved, as well as what that means for the affected individuals, what the significance is and what the expected consequences are.
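
As a very simple illustration of 'meaningful information about the logic involved', the sketch below assumes an interpretable linear model and hypothetical loan-application features; the feature names and figures are invented for illustration and do not come from any real system.

    # Minimal sketch: expose which inputs carry most weight in a decision,
    # assuming an interpretable linear model (hypothetical features and data).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    feature_names = ["income", "existing_debt", "years_at_address"]

    # Tiny synthetic training set standing in for real applicant data.
    X = np.array([[30, 5, 2], [60, 1, 10], [45, 8, 1], [80, 2, 7]])
    y = np.array([0, 1, 0, 1])

    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Per-feature weights are one simple starting point for explaining the
    # logic; they still need translating into plain language for individuals.
    for name, weight in zip(feature_names, model.coef_[0]):
        print(f"{name}: {weight:+.2f}")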

Understanding the Legal Landscape of AI

If you use AI in your business, it is important to understand the ever-changing legislation. Read our recent article on the legal landscape of artificial intelligence, looking at both EU and UK law as it stands.


You need to consider the potential trade-off between security and explainability: the more information you publish about how the AI works and how it has come to a decision, the more accessible and explainable you make it, but also the less secure the AI system will be and the more vulnerable it will be to attack.

Fairness

Fairness in data protection means handling data in ways people would reasonably expect and not using it in ways that have unjustified adverse effects on them. There can be significant risks of bias and discrimination when you use AI, with ethical as well as legal implications. There are a number of ways this can happen. A key one is unbalanced training data (for example, unrepresentative or inappropriate data), or training data that already reflects past human discrimination. Bias can also arise from the way the data is labelled, measured and aggregated when training the model. Getting this right involves a lot of work, particularly at the design and development stage.
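
One practical first check, sketched below on the assumption that the training data can be loaded into a table, is to look at how well each group is represented and whether historical outcomes already differ between groups. The column names and figures are hypothetical.

    # Minimal sketch: check training data for imbalance across a protected
    # attribute before training a model (hypothetical columns and values).
    import pandas as pd

    df = pd.DataFrame({
        "sex":      ["F", "F", "M", "M", "M", "M", "F", "M"],
        "approved": [0,   1,   1,   1,   0,   1,   0,   1],
    })

    # Representation: what share of the training examples is each group?
    print(df["sex"].value_counts(normalize=True))

    # Historical outcome rates per group; a large gap may indicate that the
    # data already reflects past discrimination and needs closer review.
    print(df.groupby("sex")["approved"].mean())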

Human oversight

A key factor in ensuring AI systems protect individuals' privacy and other rights is to have meaningful human oversight and intervention. That's difficult to do for two main reasons. The first is that humans are fallible and often suffer from automation bias: we assume the computer or system must be right. That's typically addressed by training the human reviewers and then monitoring what they do in practice, for example whether they simply go along with what the AI system says or challenge its output. The second is that AI algorithms can be extremely complex, and the output can be difficult for a human to interpret. That's typically dealt with as part of the design phase, for example by ensuring the AI provides a confidence score against its output.
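
As a rough illustration of that monitoring, the sketch below assumes each case's AI recommendation and the reviewer's final decision are logged, and measures how often reviewers depart from the AI output. The field names are assumptions for illustration only.

    # Minimal sketch: measure how often human reviewers depart from the AI
    # recommendation, as one signal of possible automation bias.
    from dataclasses import dataclass

    @dataclass
    class ReviewedDecision:
        ai_recommendation: str  # e.g. "reject"
        human_decision: str     # what the reviewer actually decided

    def override_rate(decisions: list[ReviewedDecision]) -> float:
        """Share of cases where the reviewer departed from the AI output."""
        if not decisions:
            return 0.0
        overrides = sum(d.human_decision != d.ai_recommendation for d in decisions)
        return overrides / len(decisions)

    log = [
        ReviewedDecision("reject", "reject"),
        ReviewedDecision("reject", "approve"),
        ReviewedDecision("approve", "approve"),
    ]

    # An override rate persistently near 0% may suggest reviewers are simply
    # going along with the AI rather than providing meaningful oversight.
    print(f"Override rate: {override_rate(log):.0%}")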

An AI policy for your organisation

If your organisation already uses AI or intends to introduce it, an AI policy can provide useful guardrails. The policy should reflect and document:

  • Your organisation’s attitude to risk and how risk will be measured
  • How senior management will monitor and sign off projects using AI
  • How to decide whether the use of AI is necessary and proportionate on a case-by-case basis, distinguishing between traditional and generative AI, and
  • How your organisation will protect personal data when using AI.

If you would like data protection advice on your use of AI or help with drafting an AI policy, please contact Judy Baker, or another member of our commercial Data Protection team.

Please note that this briefing is designed to be informative, not advisory and represents our understanding of English law and practice as at the date indicated. We would always recommend that you should seek specific guidance on any particular legal issue.

This page may contain links that direct you to third party websites. We have no control over and are not responsible for the content, use by you or availability of those third party websites, for any products or services you buy through those sites or for the treatment of any personal information you provide to the third party.
