AI Safety: Data privacy

A description of common data risks when using generative AI, and how to protect your personal data.

Data privacy and data protection aim to give people control over their own personal data. AI systems are not always transparent about their use of your data, so it is useful to be aware of the risks and of how you can keep your information more private.

Why is AI data privacy important?

  • Depending on their data policy, AI systems can use images, voice recordings, and videos to create new content and train themselves further

  • System setting updates may change the privacy of prompts and information shared with AI tools without the user realising

  • So-called ‘AI jailbreaks’, such as prompt injection attacks, use specifically engineered prompts to manipulate generative AI into producing outputs that violate its own policies, for example revealing other people’s personal or identifiable information

Top tips to protect your data when using AI

  • Be careful about which information you share with generative AI systems
  • Before using a (generative) AI tool, familiarise yourself with its data protection and retention policies, and with its rights to further train on or reproduce the information (e.g. text, images, voice recordings) that you feed into it
  • Review the privacy settings on the AI tool you are using and limit permissions
  • Stick to internal AI systems where you can. The university’s recommendation is that staff use ELM over external tools.
  • Anonymise your own or others’ information as much as possible.
  • Do not jump on AI social media trends (like generating Studio Ghibli versions of yourself) without assessing the data this uses from you and others
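As a rough illustration of the anonymisation tip above, the sketch below redacts a few obvious identifiers from text before it is pasted into an AI tool. The patterns are illustrative assumptions only: simple pattern matching cannot catch names, addresses, or indirect identifiers, so this is a starting point, not a complete anonymisation method.

```python
import re

# Illustrative patterns only: real anonymisation needs far more care
# (names, addresses, indirect identifiers) than simple regexes provide.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    # A rough UK landline shape, e.g. 0131 650 1000 (assumption, not exhaustive)
    "phone": re.compile(r"\b0\d{3}\s?\d{3}\s?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tags like [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(redact("Contact Jo at jo.bloggs@ed.ac.uk or 0131 650 1000."))
# Note that the name "Jo" is NOT caught, which is exactly why
# regex-based redaction alone is not sufficient anonymisation.
```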

Relevant laws and legislation

AI systems and AI users in the United Kingdom must comply with UK GDPR and the Data Protection Act 2018. This means:

  • You must use generative AI systems responsibly and transparently
  • As with publicly posting images or text of or by another person or institution, you must make someone aware and obtain their permission before feeding their information into an AI system
  • Check that you are complying with the copyright licence of any material you wish to upload

What does it mean to use generative AI responsibly in practice?

To use AI systems responsibly and transparently, consider if your actions use the least amount of data possible, whether they are proportionate to your aims, and how they affect others. 

Example: transcription tools

AI-facilitated transcription tools, like the one built into Microsoft Teams, have become more popular. They can be a great accessibility tool. However, their use may infringe the data rights of other meeting attendees unless you get permission to transcribe their contributions. Consider whether a meeting transcript is necessary, and if so, make sure you check with the organiser first.

Checklist

Before using AI tools, ask yourself:

Do you need to use AI?

  • What is your aim for using (generative) AI?
  • Can your aims be met by other means? For example, local data analysis tools, a minute taker, or designers

Have you read and understood the relevant restrictions and policies?

  • If not, do you know where to find them?
  • What effect may they have on the data subject?

Do you know how to proceed carefully and responsibly?

  • How will you identify unexpected results? What is your contingency plan for them?
  • How are you ensuring transparency of using generative AI to process data? 

Data protection and the Edinburgh Language Model (ELM)

ELM retains data but does not share it with third parties, including OpenAI. Using ELM is generally safer than many other AI systems, but you are still responsible for how you use it and for which information you input. EDINA covers data privacy for ELM use in its guidance for staff and students (see Introduction above) and in its ELM privacy notice.

ELM Privacy Notice | EDINA 

More resources

Data Protection and AI (no date). Available at: https://www.reading.ac.uk/imps/Data protection/Data Protection and AI (Accessed: 2 March 2026). 


© Ricarda Fillhardt, University of Edinburgh, 2026, CC BY-SA 4.0, unless otherwise indicated.