A collection of common safety and ethical concerns associated with generative AI (Artificial Intelligence) and ways to address them. Topics include bias, data safety, hallucinations, and more.

It is important to understand that (generative) AI is biased, not objective. Biases are stereotypes, prejudices, or favouritisms against people and groups with certain characteristics (race, gender, class, age, etc.) and can be conscious (explicit) or unconscious (implicit). It is useful to review resources explaining what (unconscious) bias is and how to detect it in yourself and others:

Unconscious bias and implicit bias explained - University of Sussex
Understanding unconscious bias video - The Royal Society
Types and what you can do about unconscious bias - Imperial College

Biases make their way into AI outputs and decision-making through training data, algorithms, or human interpretation. When biased outputs are accepted and shared without reflection, we risk making stereotypes and prejudices look like the norm. Outputs are also not consistent, even when the same prompt is used multiple times, which makes bias even harder to track and identify.

It can be helpful to understand the different types of biases in generative AI systems. Please note that the examples provided only aim to clarify the definitions; each type of bias also applies in other situations.

Historical bias
Historical data may contain outdated information or information skewed by historical inequalities.
Example: Based on the gender pay gap, generative AI systems have suggested lower starting salaries for women.

Representation bias
The underrepresentation of certain groups in training data can mean they are falsely treated as statistically 'less important' and so can be disregarded in the outputs.
Example: Facial recognition software often has a higher success rate for people with lighter skin, indicating imbalanced data or time allocation during the tool's training stage.

Cultural bias
Many generative AI tools show a tendency to align with cultural norms from English-speaking and European countries, likely because of an imbalanced amount of training data from these countries. The decision about which data to include may be financial (obtaining English-language information may seem less resource-heavy), or the people making the data selection may favour English-language sources due to their own bias. Cultural bias does not only relate to different regions of the world. Within a culture, dominant 'in-groups' may be represented differently from marginalised groups, and the latter regularly experience harmful and derogatory stereotyping in generative AI outputs.
Example: Despite a prompt asking an AI image-generation tool to depict a visibly disabled person leading a meeting, the tool showed the disabled person in a listening-only role.

Algorithmic bias
Algorithms process information in different ways, depending on how they are designed to organise, select, and present their results. This can introduce a biased favouring of some solutions over others.
Example: The instruction to a generative AI to always give an answer can conflict with the instruction to always give correct answers. If the algorithm weighs the first instruction as more important, generated outputs may be incorrect (see hallucinations).

Deployment bias
While a generative AI tool may work well in isolation during testing, once it is deployed in the real world, outputs may be distorted by people using it for tasks the system was not intended for.
Another cause is overreliance on AI within an automated system. Essentially, an AI system can be developed for one task but then used for something else in practice, which can lead to bias.
Example: A CV-summarising system may work well during testing, and its training data may be demographically well balanced. Once deployed, employers start using it for hiring decisions, and outputs are biased because the system was not trained or tested for this task.

Interaction bias
The way people use an AI system can slowly shape how it behaves. Human interactions with generative AI systems often provide a source of additional training data for the tool. In turn, AI outputs affect people's preconceptions, which results in a feedback loop of reinforced bias. Here, bias does not come from the original training data alone, but from repeated user behaviour and feedback over time.
Example: If you consistently ask a generative AI model to check grammar only on documents written by people with non-English-sounding names, this kind of interaction can start reinforcing the idea that people with a non-English background are more likely to have poor language skills.

Data drift bias
If the generative AI is not continually retrained, for example for financial reasons, it may fail to reflect changing (societal) perspectives and new discoveries.
Example: If an AI system's training data predates 2020, outputs will not reflect the Covid-19 pandemic and its impacts. Answers to prompts asking for business, career, or policy advice will lack important references to remote work and related social, health, and economic developments.

Why is AI bias important?
Not accounting for bias in generative AI carries several risks:
Automated processes may trigger inequitable and unfair decisions, with or without us noticing.
Humans not only produce the training data; they also use and interpret generative AI outputs. Unchecked, this can create a feedback loop (see interaction bias) and reproduce harmful prejudices.
Therefore, we not only need to adjust algorithms to account for bias but also train ourselves and others to identify biased outputs and their harmful consequences.

How to address the problem - Detecting and mitigating AI bias
All people have unconscious biases through the way we are socialised and interact with the world. We cannot fully eradicate biases from our consciousness or from AI systems, but we can implement safety measures at all stages of the AI lifecycle to identify and mitigate them. Learn how to do so with our suggested resources and the actions you can take, based on your individual role in interacting with generative AI.

Users
Educate yourself on detecting and challenging biases in yourself, others, and AI outputs.
Becoming aware of personal biases | Edinburgh Global
Unconscious bias self-tests | Project Implicit
To address cultural bias, ask the generative AI to answer as if it were from another part of the world or had different experiences. Asking the model to adopt a specific cultural lens is called 'cultural prompting'. It may be useful to have a cultural prompting checklist, like the one in the toolkit resource below, or to take inspiration from the following examples (see also the sketch after this list):
Write a welcome talk outline for new students. Answer as if you were a postgraduate student from China.
Write a project plan for a teaching collaboration between two universities in Edinburgh and Cologne. Incorporate cultural norms from German higher education, like formal reporting and self-organised study.
Suggest talking points for a meeting with an employee returning from maternity leave. Avoid talking points that suggest difficulties balancing family and work life.
Reducing the cultural bias of AI with one sentence | Cornell Chronicle
Cultural bias in AI Toolkit | Psychology at Work on Substack
Anonymise identifiable characteristics like gender, age, or race in your prompts where they are not relevant.
Give feedback. Reporting harmful, stereotypical, and prejudiced answers from the generative AI system lets the developers know there is a problem. This can make a difference, especially when many users do so.
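The cultural prompting and anonymisation techniques above can also be scripted when you send many prompts programmatically. Below is a minimal, illustrative Python sketch; the function names and placeholder tokens are our own assumptions rather than part of any AI tool's API, and the resulting strings can be sent to whichever chat model you use.

```python
# Illustrative sketch only: function names and tokens are assumptions,
# not part of any particular generative AI library.

def with_cultural_lens(prompt: str, persona: str) -> str:
    """Prepend a one-sentence cultural lens to a prompt (cultural prompting)."""
    return f"Answer as if you were {persona}. {prompt}"

def anonymise(prompt: str, replacements: dict) -> str:
    """Swap identifiable details (names, gender, age) for neutral tokens."""
    for detail, token in replacements.items():
        prompt = prompt.replace(detail, token)
    return prompt

if __name__ == "__main__":
    base = "Write a welcome talk outline for new students."
    print(with_cultural_lens(base, "a postgraduate student from China"))

    # Hypothetical name and age, included only to show the substitution.
    cv_prompt = "Check the grammar of this cover letter by Aisha Nwosu, age 52."
    tokens = {"Aisha Nwosu": "[NAME]", "age 52": "[AGE]"}
    print(anonymise(cv_prompt, tokens))
```

Keeping the lens sentence and the substitution map explicit, rather than buried in the prompt text, makes it easier to review which characteristics you are revealing or reframing across a batch of prompts.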
Leaders
Educate yourself on the latest AI ethical guidelines to adhere to legal requirements and understand good practice, even if the guidelines only apply in a jurisdiction other than yours. Find relevant examples in the 'More resources' section below.
Have a conversation with your (potential) AI providers about how they account for biases within their systems.
Prioritise transparency, fairness, and accountability in your AI procurement and implementation strategies.

Developers
Conduct a bias audit of your training data. For example, use diverse testers, fairness metrics, human-in-the-loop review, ongoing A/B testing even after deployment, and sentiment analysis.
Checklist for AI Auditing | European Data Protection Board
Be vigilant during data collection, system coding, and testing for biases or inequalities.
Involve ethics experts and marginalised groups (people with lived experience) at all stages of the AI lifecycle, including diverse representation on development and beta-testing teams.
Employ bias-aware collection techniques and diversify your data pool, for example with synthetic or crowd-sourced data.
Implement fairness-aware machine learning techniques, for example data cleaning and balancing (a minimal sketch follows this list).
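To make the audit and balancing steps above concrete, here is a minimal, illustrative Python sketch under the assumption of a simple tabular dataset with a binary outcome. It computes per-group selection rates and a demographic parity gap (one common fairness metric), then naively oversamples underrepresented groups; real audits would use dedicated tooling and a wider set of metrics.

```python
import random

# Toy records: (group, positive_outcome), e.g. whether a CV was shortlisted.
# Groups "A" and "B" are hypothetical demographic categories.
records = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 10 + [("B", 0)] * 30

def selection_rates(data):
    """Share of positive outcomes per demographic group."""
    rates = {}
    for group in {g for g, _ in data}:
        outcomes = [y for g, y in data if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

# Audit step: a large gap in selection rates between groups is a red flag.
rates = selection_rates(records)
parity_gap = max(rates.values()) - min(rates.values())
print(f"Selection rates: {rates}; demographic parity gap: {parity_gap:.2f}")

# Balancing step: naively oversample smaller groups until group sizes match.
counts = {g: sum(1 for grp, _ in records if grp == g) for g in rates}
target = max(counts.values())
balanced = list(records)
for group, n in counts.items():
    pool = [r for r in records if r[0] == group]
    balanced.extend(random.choices(pool, k=target - n))
print("Group sizes after balancing:",
      {g: sum(1 for grp, _ in balanced if grp == g) for g in rates})
```

Note that oversampling equalises group sizes, not outcome rates; whether that is the right balancing target depends on the task, which is one reason human review belongs in the loop.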
More resources
Legislation and policies from the University, the UK Government, and the European Union:
ELM Acceptable use policy | EDINA
Generative AI: product safety standards | UK Government Department for Education
AI fairness and bias | Information Commissioner's Office
Understanding artificial intelligence ethics and safety | UK Government
EU AI Act | European Union
EU AI Act Explained | European Commission

Studies and toolkits to help you evaluate and counteract bias:
AI: Complex Algorithms and effective Data Protection Supervision | European Data Protection Board
Gender shades study on facial recognition software | MIT Media Lab

Cultural prompting:
Cultural bias and cultural alignment of large language models | PNAS Nexus

References
Livermore, D. (2025) 'How to Write Culturally Intelligent AI Prompts', David Livermore, 2 January. Available at: https://davidlivermore.com/2025/01/02/how-to-write-culturally-intelligent-ai-prompts/ (Accessed: 18 February 2026).
Reducing the cultural bias of AI with one sentence (no date) Cornell Chronicle. Available at: https://news.cornell.edu/stories/2024/09/reducing-cultural-bias-ai-one-sentence (Accessed: 18 February 2026).
Sorokovikova, A. et al. (2025) 'Surface Fairness, Deep Bias: A Comparative Study of Bias in Language Models'. arXiv. Available at: https://doi.org/10.48550/arXiv.2506.10491.
UCL (2024) Bias in AI amplifies our own biases, UCL News. Available at: https://www.ucl.ac.uk/news/2024/dec/bias-ai-amplifies-our-own-biases (Accessed: 18 February 2026).
Understanding bias in facial recognition technologies (no date) The Alan Turing Institute. Available at: https://www.turing.ac.uk/news/publications/understanding-bias-facial-recognition-technologies (Accessed: 18 February 2026).
Why AI-Generated Art Is Missing the Mark for People With Disabilities (no date) ATD. Available at: https://www.td.org/content/atd-blog/why-ai-generated-art-is-missing-the-mark-for-people-with-disabilities (Accessed: 18 February 2026).

© Ricarda Fillhardt, University of Edinburgh, 2026, CC BY-SA 4.0, unless otherwise indicated. This article was published on 2026-03-04.