Artificial intelligence, in a nutshell, is a machine’s ability to think and reason in ways that mimic the human mind. This means machines can be creative and perceptive, interact with their environment, reach rational decisions, learn new skills, and solve problems.
AI is already part of everyday life: examples include Apple’s Siri, Google Now, Amazon Alexa, Microsoft’s Cortana, and chatbots such as ChatGPT and Bard. AI is also widely used across industries from finance to healthcare, powering robotic assistants, virtual assistants and booking tools. Its range of abilities is advancing fast.
It is not always easy to recognise when AI is being used within software and tech tools, or by suppliers, so it is important to follow our data protection impact assessment screening and registration procedure at the very outset of any new process, procurement, project or business change.
The dangers of AI
Many tech tools with AI capabilities are attracting the interest of regulators around the world, who are concerned about the pace of development and the privacy risks that arise when these tools process personal data. AI operates on underlying data sets that may produce harmful or biased results. Ethical and commercial risks can arise, and when personal data is involved, the requirements of the UK GDPR must also be met.
Key things to be aware of:
- it is not always easy to recognise when AI tools are processing personal data, and you may not have a legal basis for that processing. There are legal restrictions on profiling individuals and on making automated decisions about them. Specific rights, information and data safeguarding procedures for individuals (and the organisation) must also be in place
- AI used by service providers with whom we share personal data, or who take decisions on our behalf, could expose us to the risk of legal action
- AI can present security, commercial, and operational risks even if you are not using AI tools to process personal data
- national bans on particular AI tools can be imposed at any time (Italy recently issued a temporary ban on the use of ChatGPT), and new legislation is pending
- many compliance procedures can be impacted such as privacy notices, freedom of information requests and security accreditations
We operate an accessible and supported data protection impact assessment (DPIA) procedure to help you identify such risks and notify the relevant internal governance groups. You must always follow this procedure whenever you are considering a new process, project, procurement or business change. You will be asked an initial set of screening questions, although a full DPIA is likely to be needed when AI is involved. You can also use the procedure to review the risks of existing processing. If you have any questions, please contact your data protection liaison officer.