A collection of common safety and ethical concerns associated with generative AI (Artificial Intelligence) and ways to address them. Topics include bias, data safety, hallucinations, and more.

The use of generative Artificial Intelligence (AI) has many benefits for your digital safety and wellbeing. It can help with wording to report online harassment and discrimination, free up time for creativity by taking on administrative tasks, or make digital content more accessible. As with any technology, generative AI also poses risks to your digital safety. Below is a collection of these risks, how to spot them, and what you can do to use generative AI more safely and ethically. The Digital Skills, Design and Training team are still adding topics to this webpage. If you have any questions or suggestions, please email is.skills@ed.ac.uk.

The topics below are interconnected. For example, biased or distorted data may mean that data processed with generative AI tools does not meet the transparency and accuracy requirements set by data protection standards, or deepfakes and inaccuracies (re)produced by generative AI tools may lead to the sharing of fake news and discriminatory disinformation.

The internal generative AI system for the University of Edinburgh is called ELM (Edinburgh Language Model). Its developer, EDINA, has published guidance for students and staff on how to navigate ELM safely. Generally, the University's safety recommendation is for staff to use ELM instead of other AI systems (e.g. Copilot, ChatGPT).

Using generative AI in your studies: guidelines for students (EDINA)
Using generative AI in your work: guidance for staff (EDINA)
EDINA

Whether you are using, implementing or developing (generative) AI, remember the core values of ethical AI: transparency, fairness, accountability, and privacy.
More resources

International AI Safety Report | Department for Science, Innovation & Technology and AI Security Institute
Generative AI: product safety standards | UK Government
Human rights approach to AI | UNESCO

AI Safety: Bias

AI Safety: Data privacy
A description of common data risks when using generative AI, and how to protect your personal data.

AI Safety: Accuracy and misinformation
An explanation of how to spot and mitigate inaccuracies in generative AI outputs.

© Ricarda Fillhardt, University of Edinburgh, 2026, CC BY-SA 4.0, unless otherwise indicated. This article was published on 2026-03-04.