Responsible AI & AI Safety

We acknowledge the challenges posed by generative AI and actively work on incorporating safety features, addressing risks, and implementing mitigations.

Introduction 

The aim of the Responsible AI & AI Safety resource is to provide a practical overview of the ethical issues associated with Generative AI use, together with a guide to the main risks and relevant mitigation strategies. 

This resource also includes the Responsible AI & AI Safety in practice section, where we highlight the actions we take to ensure that ELM remains the safer alternative to other generative AI systems and continues to serve our community. We also highlight how our community is using ELM for good in the ELM used for good section. 

Please note that this section does not cover what Generative AI or ELM is, nor does it provide guidance on effective strategies for human–AI collaborative workflows. If you would like to learn more about ELM and how to use it effectively, please refer to our “ELM Competency Centre” resource instead. 

 

Ethical considerations & safety

Engaging with generative AI carries a range of ethical implications and responsibilities. It is essential to understand these considerations to use the technology thoughtfully and safely. 

 

Bias and stereotypes

 

Due to the nature of Generative AI, it can generate unfair and stereotypical content. ELM has automated moderation in place, which aims to prevent offensive or potentially harmful content from being generated. 

It is crucial to remain critical when conversing with Generative AI. If Generative AI produces an output that is stereotypical, biased, or does not align with your views, this does not mean the system is smarter than you or that you should abandon your convictions. In the unlikely case that you are presented with such content, simply disregard it and do not proliferate it. 

Training data and its weighting may be based on skewed information, which is then amplified and reflected in Generative AI’s output. Generative AI does not hold opinions, nor does it have any moral standing. 

It is also good to remember that all of Generative AI’s outputs are biased to some extent. Even a question as simple as “How do I fix this issue with my code?” will yield a response that is not the single correct approach: the answer depends on the training data and reflects one particular view, which may not be the best one. 

Every use of Generative AI requires your diligent oversight and verification that you are not proliferating problematic, biased, or stereotypical content. 


This section is intended for staff and professional use cases. If you are a student, please refer to the University’s official “Guidance for working with Generative AI (“GenAI”) in your studies” (https://information-services.ed.ac.uk/computing/comms-and-collab/elm/guidance-for-working-with-generative-ai). 

Over-reliance on Generative AI

While Generative AI can be a powerful tool to support you in your work, there is a risk of becoming overly dependent on it. 

Using Generative AI influences human behaviour and inevitably affects how you engage with your work. Relying too much on AI-generation in your work may limit your development of critical thinking and problem-solving skills, which are essential for professional growth. 

There is no established guidance on how to mitigate this risk. Overall, staff are encouraged to use ELM whenever possible; however, we recommend periodically assessing how your ways of working have changed. For example, if you notice that you rely on your colleagues’ expertise less over time, it may be worth reassessing your new patterns of work. This depends on your personal judgement. 

Always take time to understand the content, verify the information, and practice applying the knowledge on your own. A balanced approach to working with ELM will help you maintain and improve your skills while benefiting from the advantages of AI. 


Moral and cultural values

Generative AI models are trained on vast amounts of data from many different cultures, societies, and perspectives. As a result, their outputs may sometimes unintentionally reflect values, norms, or language that conflict with your own moral beliefs or cultural background. 

It is essential to approach AI-generated content with awareness and respect for diversity. Just because the AI presents certain viewpoints or information does not mean they are universally accepted or appropriate in all contexts. 

Always consider the moral and cultural implications of the content you receive from Generative AI. Be critical and thoughtful about whether it aligns with ethical standards and the values of your community. When in doubt, seek guidance from trusted sources. 


Environmental impact

Generative AI models require substantial computing resources to train and operate, which can lead to high energy consumption and contribute to carbon emissions. This environmental impact is an important consideration when using AI technologies, especially large language models hosted on external cloud servers that often rely on energy-intensive data centres. 

When using ELM’s locally hosted models, you are using a university-hosted system that adheres to the University’s 2040 net zero strategy. ELM offers access to open-source, medium-sized Generative AI models that run locally on our own servers; they are marked with a green leaf in the model selection tool under Settings. By processing AI tasks internally and avoiding reliance on external cloud providers, we can optimise energy use and reduce our overall environmental footprint. 

While AI can be a valuable tool for learning and creativity, it is essential to use it responsibly and remain aware of its broader ecological implications. Our approach within ELM reflects a commitment to balancing innovation with sustainability. If you would like to learn more about it, please visit this page (coming soon). 


Privacy and data protection

When using Generative AI tools, it is essential to understand how your personal data is collected, processed, and protected. Safeguarding your privacy is one of ELM’s fundamental priorities. Unlike many commercial AI platforms that may track user behaviour, profile individuals, or share data with third parties for marketing or other purposes, ELM operates under strict data protection guidelines aligned with the University’s policies. You can find those policies outlined in our FAQ (link). 

ELM will not use your data to train the Generative AI models, nor will your data be retained by the external Generative AI model provider, as we have a zero-data-retention policy in place. 


 

Common misconceptions about Generative AI 

Below you will find a table containing a selection of commonly occurring misconceptions about AI. Being aware of these will make you better at using ELM. 

No | Misconception | Fact
1 | AI has “intelligence” in its name, so it works similarly to a human brain, just simulated by computers, and can do everything just as well as a human could. | Despite the name, Generative AI cannot do everything a human can do. It can complete some tasks well but will struggle with others; for example, it may struggle to provide strategic business insights or to understand complex organisational contexts.
2 | Since we have access to Gen AI, it no longer makes sense to consult colleagues or managers; I will just ask Gen AI for answers so I get them faster. | The way your colleagues and managers answer questions is tailored to your team’s context and based on years of professional experience. Generative AI cannot provide the same level of nuance or context-specific advice and may explain things in a way that adds confusion rather than clarity.
3 | AI only outputs true and verified information. | AI can “hallucinate”: generate false outputs that sound plausible. It is also capable of making up sources. Anything written by AI needs human verification.
4 | The entire functionality of Generative AI is interacting with a chatbot interface using natural language. | You can also use Generative AI to generate code in the language of your choice (such as Python), or to survey documents: for example, you can upload a document to ELM and request a critical review, or search for mentions of specific concepts across multiple documents at once.
5 | You can use AI and expect it to produce perfect, out-of-the-box responses. | AI-generated content typically requires human review and editing. It often takes an iterative process, where you provide further instructions or context so the AI can produce more relevant output.
6 | I can ask Generative AI for relevant reports or documents for my work and rely on them entirely. | If you request such documents, Generative AI will produce something, but it may not be relevant or accurate for your needs. ELM does not have access to any internal databases or folders, so it will usually lack the context needed to generate a relevant and useful document.
7 | Since I can cite Gen AI as a valid reference in my reports or presentations, I will stop using other sources. | Generative AI should be used as a supplementary tool alongside traditional sources such as official reports, policy documents, or industry publications.
8 | Generative AI learns and updates itself after every use. It will learn from my inputs and could share them with other users. | Most models are static until retrained by their developers; they do not learn continuously in real time. Generative AI will not become better at understanding you over time.
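Misconceptions 3 and 5 apply to code as much as to prose: AI-suggested code needs your own verification before you use it. As a minimal sketch (the helper below is hypothetical, not actual ELM output), you might paste an AI-suggested function into a scratch file and write a few quick checks of your own before trusting it:

```python
# Hypothetical AI-suggested helper, pasted from a chat session for review.
def normalise_whitespace(text: str) -> str:
    """Collapse runs of spaces, tabs, and newlines into single spaces."""
    return " ".join(text.split())

# Human verification (misconceptions 3 and 5): do not trust the output blindly.
# Exercise the edge cases yourself before using the code in real work.
assert normalise_whitespace("a  b\tc") == "a b c"
assert normalise_whitespace("") == ""                    # empty input
assert normalise_whitespace("  lead/trail  ") == "lead/trail"
print("all checks passed")
```

A few minutes spent writing checks like these is the iterative review the table describes: if a check fails, you feed the failure back to the AI as further context rather than shipping the code as-is.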


 

Risks of Gen AI use & mitigation strategies 

No | Risk | Impact | Mitigation
1 | Generative AI can sometimes spread false or misleading information. | Using inaccurate or outdated information can negatively affect business decisions, damage your team’s credibility, or lead to costly mistakes in projects or communications. | Always verify AI-generated content by cross-referencing credible sources such as official documents, trusted business databases, or subject matter experts.
2 | If you access Generative AI other than via ELM, external AI companies may not secure your data properly or may sell it, resulting in data leaks. | Sensitive business data or personal information could be leaked, sold, or made public, leading to reputational, legal, or financial consequences for your organisation. | Do not provide sensitive or confidential information to Generative AI systems unless you are certain about their data security policies. Use ELM for work-related queries to ensure your data is protected.
3 | AI might unintentionally reflect stereotypes or biases. | Biased outputs can perpetuate harmful stereotypes or misinformation, potentially offending colleagues, clients, or stakeholders and damaging your organisation’s reputation or inclusiveness. | Be vigilant for biased or offensive content. Critically evaluate AI-generated outputs and raise concerns with your team or manager if needed. ELM has strong moderation, but other AI tools may not uphold the same standards.
4 | AI tends to oversimplify complex topics. | Oversimplification can result in misunderstandings, missed details, or poor decision-making, especially when dealing with complex business processes or regulations. | Do not rely on Generative AI alone for understanding complex topics. Consult subject matter experts or official documentation for critical tasks. Use AI as a brainstorming or drafting tool, not as your main source of knowledge.
5 | Generative AI is not transparent about how it produces answers, making it difficult to verify the reasoning behind its outputs. | Relying uncritically on AI-generated content can lead to the use of faulty information, insecure code, or incorrect business practices. | Always thoroughly review and verify AI-generated content before using it in your work. Do not assume the AI’s reasoning is correct; apply your own professional judgement and seek expert input where necessary.
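As a small illustration of mitigation 1 (verify by cross-referencing), the sketch below shows one way to make an AI draft easier to check. It is a hypothetical helper, not part of ELM: it pulls the checkable claims out of AI-generated text (URLs and author-year citations) so that a human can verify each one against credible sources:

```python
import re

def extract_checkable_claims(ai_text: str) -> list[str]:
    """Collect URLs and "(Author, Year)" citations for manual verification."""
    urls = re.findall(r"https?://\S+", ai_text)
    citations = re.findall(r"\([A-Z][A-Za-z]+,\s*\d{4}\)", ai_text)
    return urls + citations

# Hypothetical AI-generated draft containing claims that need checking.
draft = ("Adoption rose 40% (Smith, 2023); see https://example.org/report "
         "for details.")
for claim in extract_checkable_claims(draft):
    print("verify manually:", claim)
```

The helper deliberately does not try to judge whether a claim is true; hallucinated sources can look entirely plausible, so the verification step itself stays with the human.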

 

Available training 

If you would like to learn more about how to use ELM in a safe, responsible, and effective way, the Digital Skills Team from The University of Edinburgh has a range of courses available. Please refer to the following links: 

  1. ELM & Generative AI specific courses: Available ELM & Generative AI Training | Computing | Information Services 
  2. Digital Skills Programme: Digital Skills Programme | Help | Information Services 

Authored by Bart Pohorecki