• The University trusts you to act with integrity in your use of generative AI for your studies.
• The University does not ban the use of generative AI in your research, though its use is restricted for work assessed by the University.
• Some programmes and disciplines may also restrict its use in other ways. Always check with your supervisors and any programme-level guidelines.
• If you are considering using generative AI for research in conference papers, posters and publications, you should also consult any guidelines provided by the conference organiser or publisher and make any appropriate declarations.
• If you are a funded student, check whether your funder stipulates any additional restrictions.
• The top-level guidelines in this document provide clarity on which ways of using generative AI are strictly prohibited and constitute academic misconduct.
• They also explain why you should be cautious about over-reliance on generative AI for your learning and research.
• These guidelines are general and set out the basics of the University’s position – it is essential that you discuss them with your supervisors and follow any further guidelines provided by your programme and potential publishers of your research.
Generative AI at Edinburgh

The University recognises that developing skills in the responsible use of generative AI is important and will likely be significant for your future life and work. It also recognises that there are times when you may want – or be asked – to use it in your current research. We want to ensure that you have the knowledge and skills to thrive in a changing world, and we recognise that generative AI can be used creatively, critically and with integrity.

These guidelines are general to postgraduate research students across the whole University. It is important that you are aware that each programme or subject area may have its own, more detailed guidelines, and that publishers and conferences may have specific restrictions or require particular forms of record-keeping. You should always check any local guidelines on this, and speak to your supervisors if you are unsure.

The University trusts you as research students to act responsibly in relation to the use of generative AI. It also recognises that you need clarity on when its use breaches the University’s rules on academic misconduct. These guidelines provide you with this clarity, and they should be read in conjunction with the University Research Misconduct Policy.

Unacceptable uses of generative AI for assessment

Passing off someone’s – or something’s – work as your own for an assessment is academic misconduct.
This could be failing to cite a source you have used in an assessed submission (e.g. annual review submission or MScR, MPhil, DClin or PhD examination), getting someone else to complete a section of your thesis, dissertation or portfolio for you, claiming authorship of machine-generated content, or presenting machine-translated work as your own.

If you submit a piece of work for assessment that is not your own original work, you risk being investigated under the University’s academic misconduct investigation procedures. This could have serious implications for you and your studies.

The following uses of generative AI are not acceptable and constitute misconduct; if you use generative AI in these ways you risk investigation and penalties:

• Presenting AI outputs as your own, original work.
• Using an AI translator to convert assessed work to English before submission: English is the language of teaching and assessment at Edinburgh – machine translation is treated as false authorship and is not acceptable.
• Submitting work for assessment which includes elements of AI-generated text without acknowledgment.
• Submitting work for assessment which includes AI-generated images, audio or video without acknowledgment.
• Submitting work for assessment which includes AI-generated mathematical formulae or reasoning, or computer code, without acknowledgment.
• Citing AI-found sources without reading and verifying them.

For more detailed information about the restrictions on using AI-supported online proofing tools, please see the University’s Guidance on Proofreading of Student Assessments (pdf).

Using generative AI to support your research

While we have these clear restrictions on the use of generative AI in your assessed work, the University understands that there are ways in which you may wish to use it to support your research.
This might include using it to:

• brainstorm ideas
• get quick definitions of concepts
• overcome writer’s block
• check your grammar
• organise or summarise information
• reformat your references

Some research programmes may encourage or even require you to use it in certain ways, while others may ask you not to use it at all. Again, it is important that you check any programme-level guidelines on this.

Reasons to be cautious about your use of generative AI

You should be aware that there are risks and disadvantages associated with over-use of generative AI to support learning and research.

Cognitive offloading

There is growing research evidence that over-use of generative AI can negatively affect your learning. You may want to look at studies which raise concerns over how ‘cognitive offloading’, ‘metacognitive laziness’ and a reduced capacity for critical thinking may be associated with over-reliance on this technology.

If you routinely use generative AI for breaking down and summarising long texts, for example, you will not be developing your own critical skills in the analysis of complex documents. You will not be practising and learning how to bring together complex ideas using the power of your own intelligence. Similarly, if you regularly use it to assist with mathematical reasoning, coding or translation, you are undermining your own ability to learn, troubleshoot and become expert at doing this yourself. To make the most of your time at university, embracing the hard work of learning is a better approach than looking for short-cuts.

Bias, inaccuracy and imitation

Generative AI models are not ‘intelligent’ in the way that humans are intelligent. They have been trained on more text than a human could ever read, but have different capabilities and make different mistakes.
While their output often appears convincing and reliable, their behaviour is strongly influenced by the data they are trained on, so they can perpetuate harmful biases, fabricate information and make errors. You will be held accountable for these errors and biases if you include them in assessed work.

Generative AI systems can be used to gather information about a particular domain or research topic, similar to the use of a regular search engine. Such use requires caution, as in most cases it will be impossible to trace the source of the information. You will need to check the generated information thoroughly for accuracy, and carefully check and consult references, bearing in mind that the AI model may have fabricated sources. If you then write a text yourself, you will need to provide references for the sources on which the information is based.

Higher education should help you develop advanced knowledge which is creative and rigorous, not generic and unreliable. For doctoral researchers, it is important to note that one of the core requirements of a PhD thesis is that it should demonstrate your capability of pursuing original research that makes a significant contribution to knowledge or understanding in the field of study. It is best to use your time at university to develop high-level skills that are going to help you both with your research and throughout life – original thought, engaging writing, critical use of evidence, creative risk-taking and innovation.

Plagiarism

If you use AI in an attempt to generate new ideas for a self-written paper, there is a good chance that the ideas will be generated based on existing work. If you do not quote and cite the ideas correctly, then this is plagiarism.
If the ideas generated by AI turn out to be innovative, then you will need to mention your use of the generative language model in your Declaration of Own Work, even if you have written the paper yourself.

You are advised to keep track of the AI’s raw output when using a model to generate new ideas or summarise information (rather than when you are using it to check grammar or spelling). You have a duty to keep an audit trail of how you arrived at something.

Ethics, copyright and intellectual property

Think carefully about what data you enter into generative AI models. Do not enter personal data or confidential information (for example, the development of original research ideas) on platforms that are not managed by the University of Edinburgh. If you introduce intellectual property that has not yet been protected (such as a new method, the description of a unique material, or another invention), there is a good chance that you will no longer be able to protect it. Do not disclose information covered by a signed non-disclosure agreement, for example in the context of a thesis or dissertation researched in collaboration with a company. The information entered is often kept by the owner of the AI tool, and it is unclear what happens to this information. Make sure you have the necessary permission or licence to enter copyrighted material into the AI application. If you are unsure about the confidential nature of the information, you can ask the provider of the information.

If you want to use generative AI to process fieldwork results or interview data for your research, you will need to seek prior approval in your ethics review, showing what precautions you would take to strip personal information from the dataset.

Using the University’s own generative AI platform (ELM)

All University of Edinburgh students have free access to ELM (Edinburgh access to Language Models), which offers you a secure gateway to a range of generative AI models.
The University encourages you to use ELM over other tools such as GPT, DeepSeek or Grok for the following reasons:

• In ELM your data is secure – it will not be retained by third-party services to train their models or for any other purpose.
• It is free to use for all staff and students, providing the same access for all and saving you money.
• ELM provides access to a range of language models, including a locally-hosted instance of Llama. This has an optimised architecture that can achieve faster response times and reduced power consumption. You can also choose other models such as GPT within ELM if necessary for your task.

You can access ELM and find out about training opportunities here.

Acknowledging your use of AI

If you choose to use generative AI for aspects of your assessed research – such as your doctoral thesis, MScR dissertation, or annual review submission – it is important to be transparent about how you have done so. You should include a brief acknowledgment in the Declaration of Own Work at the start of your thesis or other submission, for example:

• I used OpenAI o4 Mini via ELM to check grammar and spelling throughout my thesis.
• I also used the Create Image function in ChatGPT to generate the image on page 2.
• I used ELM to generate initial ideas for pathways to impact for my research.

Again – check with your supervisors and any programme-level guidelines if you are unsure what is required, as there may be specific things your supervisor or programme expects you to cover in your acknowledgment.

For research outputs not submitted to the University for assessment, check the conference, workshop, or publisher guidelines to see what form of generative AI declaration and record-keeping is required.

Citing your use of generative AI

If you use content generated by AI within your work, for example an AI-generated image or text from an AI chatbot, you will need to reference it.
This means including an in-text citation or footnote in the body of your work, and a corresponding reference in your reference list. The Library’s guide to using generative AI gives very useful guidance on this.

Environmental and social impact of generative AI

Many in our community are concerned about the negative impacts of generative AI in areas such as:

• Energy and resource use
• Exploitative labour practices
• Intellectual property

You may wish to read more about these via the links above. While there are ongoing efforts to reduce the environmental impact of the datacentres that make AI models work, it is clear that use of generative AI has a higher impact than, for example, a simple web search. If you are concerned about this, consider using the locally-hosted instance of Llama in ELM (see above). It is more efficient in terms of resource use and provides a more transparent alternative to OpenAI, meaning that it is easier for the University to measure and manage its power consumption. You might also try to limit your use of generative AI to purposeful rather than casual use.

Links to other sources

Principles
• Edinburgh Student Assembly principles on the use of generative AI
• Russell Group principles on generative AI use in education (pdf)

Further guidance
• Generative AI use for students (guidelines tailored to taught students)
• Library guide to using generative AI in academic work
• COPE (Committee on Publication Ethics): authorship and AI tools
• European Research Area Forum Guidelines on the Responsible Use of Generative AI in Research Training
• Generative AI self-study course for students (to follow)
• University of Edinburgh Digital Skills Programme

Academic misconduct guidance
• Further guidance on academic misconduct (including plagiarism) and how to avoid it
• University of Edinburgh Academic Misconduct Procedures
• University of Edinburgh Research Misconduct Policy

This article was published on 2025-09-16