• The University supports responsible use of generative AI to enhance research, teaching and professional services, in line with our values.
• Where staff wish to use generative AI in their work, the University recommends ELM, the University's AI platform: it improves data security and ethics, reduces cost, and is supported by IT.
• Be transparent about whether and how you used AI. Never claim AI output as your own work. Always verify accuracy and appropriateness.
• Protect copyright, confidentiality and personal data. Assume anything put into third-party tools is shared externally. Do not use ELM with identifiable patient/clinical data or any identifiable research participant data.
• For systematic or large-scale adoption (including AI in third-party apps or the introduction of AI into an existing platform) for non-research activities, follow the approval process via the ISG Ethics Board and complete the required impact assessments. For research use of AI, follow your normal College Research Ethics Board process.
• Separate guidance for students is available and should be referenced in teaching: https://information-services.ed.ac.uk/computing/communication-and-collaboration/elm/generative-ai-guidance-for-students/using-generative

Generative AI at Edinburgh

As a University that values creativity, curiosity and the pursuit of knowledge, and as an early pioneer in AI teaching and research, we aim to use AI with integrity. Generative AI systems (e.g., ChatGPT, Microsoft Copilot, Claude, Llama, Google Gemini, and models available via ELM) can generate text, code, images, audio and video.
They can accelerate everyday tasks and open new ways of working, but they are non-deterministic, can be wrong or biased, and are no substitute for professional judgement or in-depth academic knowledge.

Use ELM, the University's AI platform, for safer access to multiple models, your personal chat histories, University-hosted open-source options with lower data-transfer risks and carbon footprint, non-sharing data agreements with third parties, and Application Programming Interfaces (APIs) for innovation. Help keep University data secure within the University by using ELM: https://elm.edina.ac.uk/

Core guidance

Use AI where it genuinely helps, but:
• In your teaching, make sure that students are familiar with the university-level guidance and with relevant College or School frameworks for acceptable use.
• In your research and writing, verify everything that matters. Check facts, currency and sources. Treat AI outputs as drafts or suggestions.
• Be transparent. If your output is published, cite the tool, version and date, and keep a brief record of prompts/outputs where a provenance trail may be needed. Many publishers and conference organisers publish their own advice about what is admissible for publication through their channels.
• Protect IP, confidentiality and personal data. Do not upload third-party copyrighted or confidential material unless you have the right to do so and comply with licence terms. Avoid personal data unless you are working in ELM and fully compliant with the Data Protection Policy and the Computing Acceptable Use Policy.
• The recommendation is that staff use ELM, the University's safer and more ethical AI platform, over external tools. Assume external providers may retain your inputs and use them to train models.
• Do not rely on AI detection tools to confirm authorship.
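ELM's APIs open the platform up for scripted use as well as the chat interface. As a minimal, hypothetical sketch of what calling such an API might look like, the snippet below assembles a chat-style request body; note that the endpoint URL, model name and payload shape are all assumptions modelled on widely used chat-completion APIs, not documented ELM details — consult the ELM documentation for the real interface.

```python
import json

# Hypothetical sketch only: the endpoint path, model name and payload shape
# are assumptions based on common chat-completion APIs, not documented ELM
# details. Check the ELM documentation before using any of these values.
ELM_API_URL = "https://elm.edina.ac.uk/api/chat"  # placeholder URL, not verified

def build_chat_request(prompt: str, model: str = "llama") -> str:
    """Assemble a JSON request body for a chat-style completion call."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)

# The body would then be POSTed to ELM_API_URL with your API credentials.
body = build_chat_request("Summarise this meeting note in three bullet points.")
print(body)
```

Keeping request construction separate from the network call, as here, makes it easy to review exactly what data would leave your machine before anything is sent.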
They misclassify both human and AI content.

Unacceptable uses (for staff)

• Presenting AI-generated content as your own original work.
• Uploading personal data or confidential information to external AI tools.
• Using ELM with identifiable patient/clinical data.
• Publishing AI-assisted content without appropriate attribution.
• Relying on AI for decisions where legal/ethical duties require human judgement.
• Ignoring funder, publisher or regulator rules on AI use.
• Any other use of AI in a manner which breaches University policies.

Key risks you should know about

AI can be inaccurate, biased or inappropriate, often with an authoritative tone. Modern models are opaque, which complicates explainability and accountability; they introduce new security risks (e.g., data poisoning, prompt-based exploits); and their training and use have environmental costs. Copyright remains complex: training data may include copyrighted material, and outputs can resemble protected works; users are likely responsible for infringements. AI also poses risks to research integrity and ethics, including academic misconduct. There is currently no reliable automated method to detect AI-generated content or verify provenance.

Cognitive offloading (latest findings)

Over-reliance on AI can reduce deep learning and critical thinking, weaken internal skills and create misplaced confidence. Use AI to support your work, not to replace engagement with complex tasks. Further reading:
• https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5082524
• https://bera-journals.onlinelibrary.wiley.com/doi/10.1111/bjet.13544
• https://www.microsoft.com/en-us/research/publication/the-impact-of-generative-ai-on-critical-thinking-self-reported-reductions-in-cognitive-effort-and-confidence-effects-from-a-survey-of-knowledge-workers/

Using ELM

ELM is our University-supported platform for general AI access and innovation.
It provides access to multiple models (including the latest ChatGPT and University-hosted open-source models), enforces non-sharing data agreements with third parties, keeps your chat histories and document downloads private to you in ELM, and is free to all staff and students. ELM must not be used with identifiable patient/clinical data.

Training and where to start

We recommend starting with the University's training and community resources:
• AI in Higher Education
• Digital Skills Programme | The University of Edinburgh
• The Edinburgh Futures Institute AI Ethics MOOC
• The AI in Society MOOC
• The Una Europa AI MOOCs led by University of Helsinki
• AI Open MOOCs
• Free JISC training on AI
• Generative AI for Higher Education Professional Services

Freedom of Information (FOI)

AI chat histories and outputs used for University business may be disclosable under Freedom of Information law, subject to applicable exemptions. Retain records appropriately and consult the University's Information Compliance Services team if you receive an FOI request.

Introducing AI through third-party apps (approval process)

Many vendors are adding AI features to existing software. Treat these as new capabilities that require governance. Do not enable them by default. For systematic or large-scale adoption (including AI in third-party apps or the introduction of AI into an existing platform) for non-research activities, follow the approval process via the ISG Ethics Board and complete the required impact assessments. For research use of AI, follow your normal College Research Ethics Board process. Consider using the AI Impact Assessment guidance to frame decisions, and complete, where appropriate:
• Data Protection Impact Assessment (DPIA)
• Equality Impact Assessment (EQIA)
• A suitable Privacy Statement/Notice.
Maintain an AI-related risk register and keep a watching brief on regulatory changes (e.g., the EU AI Act). Configure approved tools to minimise data transfer and retention.
For Research

AI can support many stages of research if used appropriately. It can help you brainstorm research questions, sketch outlines, draft summaries of texts you are permitted to process, generate small code snippets, translate pseudocode, explain errors, draft unit tests, structure sections, improve clarity, prepare lay summaries or cover letters, and suggest checklists, guidelines, survey items or initial draft text for applications that you then refine and verify.

Good practice:
• Verify claims against original sources; treat outputs as drafts.
• Keep a concise record for transparency where you will publish.
• Reference use where required.
• Do not upload confidential manuscripts, embargoed data or third-party copyrighted content unless permitted.
• Use ELM as the default; assume external AI platforms may retain and expose your inputs. Do not use personal or special category data.

Funder rules change frequently. Before submitting proposals or publications, check the latest requirements from your funder and venue (e.g., European Commission, UKRI, Wellcome Trust, Gates Foundation, and relevant publishers). The AI Adoption Hub collates resources and can signpost current links. For adoption at scale (e.g., tools handling personal/confidential data), consider using the AI Impact Assessment guidance and expect DPIA/EQIA and Research Ethics Committee review.

For Teaching

Familiarise yourself with the latest University guidelines for students, "Using generative AI in your studies": https://information-services.ed.ac.uk/computing/communication-and-collaboration/elm/generative-ai-guidance-for-students/using-generative. You should also make yourself aware of the discipline- and subject-specific guidance available in your School and College; speak to your Director of Teaching if you are in doubt. Be explicit with your students about what AI use is permitted in each activity or assessment, following your School's guidance.
You may also wish to create assessments that are process-oriented and authentic (for example, staged drafts, oral components, reflections). Consider the importance of AI literacy in your courses: the limits of accuracy, bias and opacity, and how to avoid cognitive offloading. Do not use AI detection tools; they misclassify content and should not be relied upon for academic integrity decisions.

For preparation and teaching, bear in mind that the University's student guidelines are clear about the risks of over-reliance on AI for tasks such as summarisation and translation, and of cognitive offloading in general. You may therefore also wish to avoid the use of generative AI in course and programme development, particularly around assessment and feedback. ELM can help generate lesson outlines, examples and analogies, but only with care. When handling student work or data, prefer ELM and comply with the Data Protection Policy. Reassure students that you and other University staff cannot access a student's ELM chat history to "check" their work; like their University-supplied email account, their ELM usage and history are private to them.

For Management

When introducing AI into a process, a system or a third-party application, treat it as a change requiring governance. Route proposals through the appropriate Ethics Committee, and consult the AI Impact Assessment guidance to frame the case; complete DPIA, EQIA and privacy documentation as required. Many vendors add AI features to existing products; review them through the same route and do not enable them by default. Maintain an AI risk register and scan for regulatory changes (e.g., the EU AI Act). AI can help teams work smarter.
Encourage ELM-based, day-to-day efficiencies: turning meeting notes into action lists, summarising long documents, drafting communications, shaping options and sections for decision papers, and developing first drafts of project plans, role profiles and training outlines, always with human review. Build capability by ensuring staff complete introductory training and share practice via the AI Adoption community.

For all staff

For everyday work, AI can help you brainstorm and organise ideas, overcome writer's block, improve clarity and tone, translate text, generate first drafts of slides or diagrams from bullet points, assist with small coding or data tasks, and plan agendas or schedules. Prefer ELM. If your output will be published, cite the tool, version and date, and keep a brief record of your interaction where provenance may be needed. Be open about your use of AI with colleagues, students, funders and the public where relevant. Consult the IT Helpdesk if you are unsure about appropriate use of AI in your work.

Copyright, data protection and confidentiality

Only upload copyrighted or confidential material if you have the right to do so and comply with licence terms. Generative AI tools do not check for copyright infringement or whether content is derivative, and outputs can resemble protected works. If infringement occurs, legal responsibility is likely to rest with the user, not the tool. Treat information given to any external AI (meaning any AI outside the ELM platform) as if you were posting it on a public site (e.g., a social network or a public blog). Avoid using personal data unless you are using ELM and are fully compliant with the Data Protection Policy and the Computing Acceptable Use Policy. ELM must not be used with identifiable patient or clinical data.

Environmental and social impacts

Generative AI consumes significant energy and resources.
There are concerns about labour practices in model development, potential effects on employment, misinformation, and harm to vulnerable individuals and communities. When making decisions, consider whether the envisioned benefits of AI justify these risks. Where possible, choose lower-impact options (e.g., smaller, University-hosted models in ELM such as Llama) and use AI purposefully rather than casually.

Links and references

• ELM: https://elm.edina.ac.uk/
• University Student Guidelines on using generative AI: https://information-services.ed.ac.uk/computing/communication-and-collaboration/elm/generative-ai-guidance-for-students/using-generative
• EU Artificial Intelligence Act: https://artificialintelligenceact.eu/
• EU Ethics Guidelines for Trustworthy AI: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
• Responsible use of GenAI in research (EU): https://research-and-innovation.ec.europa.eu/document/2b6cf7e5-36ac-41cb-aab5-0d32050143dc_en
• NCSC principles for the Security of AI: https://www.ncsc.gov.uk/collection/machine-learning
• OWASP security checklist reference: https://www.infosecurity-magazine.com/news/owasp-security-checklist/
• Cognitive offloading studies: see links in the "Cognitive offloading" section above

Key University policies

• University of Edinburgh Data Protection Policy: https://data-protection.ed.ac.uk/data-protection-policy
• University Computing Acceptable Use Policy: https://information-services.ed.ac.uk/about/policies-and-regulations/university-computing-acceptable-use-policy
• Use of Operational Data Policy: https://uoe.sharepoint.com/sites/PolicyRepository/Shared%20Documents/Forms/AllItems.aspx?id=%2Fsites%2FPolicyRepository%2FShared%20Documents%2FUse%5Fof%5FOperational%5FData%5FPolicy%2Epdf&parent=%2Fsites%2FPolicyRepository%2FShared%20Documents&p=true&ga=1

Feedback

We welcome feedback on this guidance. Please send comments or suggestions to the IT Service Desk:
https://information-services.ed.ac.uk/help-consultancy/contact-helpline

Note: This guidance covers everyday, general work-related use of GenAI. For research-specific use, consult "AI for Researchers."

This article was published on 2025-10-22