Generative AI Guidance for Staff

November 2024

The University Strategy 2030 highlights our commitment to value creativity, curiosity and to pursue knowledge. As one of the first universities to teach and research the field of Artificial Intelligence (AI), it remains our ambition to be a global leader in AI with integrity. This includes making use of new AI technologies where they can benefit our research, teaching, and wider business operations.

Generative AI (also called GenAI) is a new technology that can be used by anyone to generate text, imagery and other types of data for a very broad range of purposes. GenAI systems are able to perform an impressive range of tasks that can be useful in our work at the University, but they also introduce several new risks that need to be considered. 

This guidance is intended to help staff use GenAI in responsible, safe and ethical ways while maximising the benefits it can create for our staff communities.

The technology, ethics and use of AI are fast-moving areas. This guidance is current as of November 2024 and will be updated as necessary. Separate guidance is provided for students here:

Guidance for working with generative AI in your studies

What is generative AI (GenAI)?

ChatGPT, made by US company OpenAI, is currently the most well-known GenAI system. Other examples are DALL-E, Microsoft Copilot, Claude, Llama and Google Gemini, as well as the University’s own platform ELM (Edinburgh Language Models), which is discussed later in this guidance. ChatGPT and systems like it are trained on massive amounts of data, including databases and millions of internet pages. Their huge statistical models allow users to enter natural-language queries or instructions, to which they respond with answers, ideas or other requested outputs. Most often this output is text, but it can also be images, sound, video or other types of data, for example programming code.

Many consider the capabilities of GenAI systems to surpass those of previous AI systems in impressive ways. However, they also have significant limitations: chiefly, they can produce inaccurate, biased, misleading or inappropriate outputs, and they perform poorly on certain types of task, such as those that require systematic logical reasoning.

The corporations that develop these systems have also been criticised for violating copyright law in the training of these models, and for the use of exploitative labour practices in their development. There is also significant concern about their environmental footprint, as they consume very large amounts of energy and other natural resources.

More about GenAI

Guiding principles

The University recognises the value of staff exploring potential uses of GenAI and is committed to supporting staff through the provision of training and the exploration of innovative and responsible uses of GenAI. In using GenAI tools for their work, staff are asked to adhere to the following guiding principles:

  1. Verification. Staff are advised to verify the correctness of generated output against original sources of evidence, as GenAI tools currently give no guarantees about the accuracy of their outputs.
  2. Transparency. Staff should ensure they are transparent with students, readers, the general public, funders and fellow staff about their use of GenAI, and follow any requirements set out by specific organisations in this regard.
  3. Respect for intellectual property (IP), confidential information and personal data. Staff must respect copyrighted material and confidential information (including unprotected IP): they must not import it into GenAI tools unless they have the right to use the material for that purpose, and they must comply with all copyright rules and licence terms for any materials uploaded. Staff are encouraged to use the University’s secure AI platform, ELM, as it retains data within the University. Staff should avoid uploading personal data (their own or anyone else’s) to a GenAI platform, unless using ELM and complying in all respects with the University’s Data Protection Policy and the University Computing Acceptable Use Policy.
  4. Understand and Explore. The University encourages all staff to understand AI and to explore where it can benefit their teaching, research or professional services work, as well as their personal lives. Familiarity with new technologies improves our understanding of them and allows us to leverage their capabilities.
  5. Responsibility. Staff should take responsibility for their use of GenAI and exercise professional judgement in assessing and mitigating any risks that may arise from its use.

Guidance

This guidance concerns the everyday use of widely available GenAI tools for general work-related tasks. Given the rapid evolution of the field, it is important to be aware of the specific features of any emerging AI technologies, tools and practices that you use. In general, staff may use GenAI systems unless their use is explicitly prohibited or regulated. In all cases, staff must not claim outputs generated by GenAI systems as their own work.

If you are using AI for research, the link below provides specific additional guidance:

AI for Researchers

Acceptable uses of GenAI

Please note that guidance on acceptable uses of GenAI does not preclude the use of AI tools in the context of a reasonable adjustment.

Before using GenAI in your work, check whether there are any specific restrictions. For instance, some funding bodies or local ethics policies may restrict the use of AI in your area of work. Some AI tools also restrict the types of data they may be used with; for example, the University’s ELM AI platform may not be used with identifiable patient or clinical data.

Some of the generally acceptable ways in which GenAI might be used include:

  • brainstorming ideas through prompts
  • getting explanations of difficult ideas, questions and concepts
  • self-tutoring through conversation with the GenAI tool
  • organising and summarising your work notes
  • planning and structuring your writing
  • summarising a text, article or book (check first that the copyright owner permits use of GenAI for this purpose)
  • helping to improve your grammar, spelling, and writing
  • translation of texts from or into other languages
  • overcoming writer’s block through dialogue with the GenAI tool
  • help with structuring, writing, and debugging code
  • planning and organising your agenda or work schedule

General guidance

The most important thing is to avoid assuming that any content produced by an AI system is accurate or appropriate for its intended purpose. Keep in mind that these systems may not have up-to-date information about events that occurred after their training was completed. You are responsible for your work outputs: always check anything an AI tool has created on your behalf.

Where the output is published, it should be referenced subject to the same rules as any other material from a third-party source. Include the name of the tool, the version used and the date the output was created, as systems may undergo ongoing modifications by their owners; for example: ‘Text generated using ChatGPT (GPT-4, OpenAI), 12 November 2024’. Note that as GenAI systems are non-deterministic, it is usually impossible to reproduce the exact output obtained for a specific prompt, so it is advisable to keep a separate record of interactions with the tool in case a full trail of provenance is required. Staff should always be able to explain and justify the ways in which they have used AI tools in their work.

Where GenAI systems are used for administrative and office tasks, and for outputs that will not be published, such referencing is not always necessary. In these contexts, the systems can be useful for generating new ideas, obtaining background information, and proofreading, structuring and editing draft documents. They can also be used to create supplementary content such as visualisations, diagrams and slide decks, and to assist with computer programming tasks.

GenAI systems do not verify whether their output infringes the rights of third parties or is derivative of a copyrighted work. Staff should be conscious of potential ‘hidden’ copyright infringements they may be propagating in their work, especially if the work is derivative of protected content. If an output is considered to infringe the intellectual property of a third party, this may result in a claim for copyright infringement; legal responsibility is likely to lie with the user, not the provider of the generative AI tool.

GenAI is a new technology, and expanding our knowledge and understanding of it is important to ensure we benefit from it. The University provides a range of relevant training courses, and this will continue to expand in the future.

AI Training

Using internal GenAI systems

ELM – Edinburgh (access to) Language Models

ELM is the University’s AI access and innovation platform: a central gateway to safer access to generative AI via large language models (LLMs). ELM is for everyone and provides some key benefits over other ways of accessing AI.

  • ELM provides access to a wider range of large language models, including the very latest and most powerful versions of ChatGPT; open-source LLMs will also be added.
  • Your data is secure and will not be retained by third-party services to train their models or for any other purpose, as the University has a Zero Data Retention agreement with OpenAI. All your chat histories and your document downloads are kept private to you on your instance of ELM.
  • Equity – ELM is free to use for all staff and students.
  • You can innovate on top of ELM, writing your own AI applications through our API (an illustrative sketch appears at the end of this section).
  • ELM is fully supported by the University through your local IT teams, EdHelp and the IS Helpline. Note: the ELM AI platform may not be used with identifiable patient or clinical data.

You can find the ELM support pages and more information here:

ELM support pages

For more detailed examples of how to use ELM with example prompts, please see:

How to use ELM with example prompts
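
For staff interested in building applications on top of ELM, the short sketch below illustrates the general shape of such a program. It is a hypothetical example only: it assumes an OpenAI-compatible chat endpoint, and the URL, model name, credential and response format shown are placeholders rather than the actual ELM API. Consult the ELM support pages above for the real interface before writing any integration.

# Minimal illustrative sketch only. The endpoint URL, model name,
# credential and response format below are placeholders, NOT the real
# ELM API; see the ELM support pages for the actual interface.
import requests

ELM_API_URL = "https://elm.example.ed.ac.uk/v1/chat/completions"  # placeholder URL
API_KEY = "your-elm-api-key"  # placeholder credential

def ask_elm(prompt: str) -> str:
    """Send a single prompt to a hypothetical OpenAI-compatible ELM
    endpoint and return the generated text."""
    response = requests.post(
        ELM_API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-4o",  # placeholder model identifier
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    response.raise_for_status()
    # OpenAI-compatible APIs return the reply at choices[0].message.content
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_elm("Summarise the key risks of generative AI in one paragraph."))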

Using external GenAI systems

A key point to remember when using external GenAI systems is that their owners may use any information you enter to train their AI systems in the future. They may also retain uploaded documents, as well as your chat histories and outputs. For this reason, we strongly recommend using University-hosted tools such as ELM.

GenAI-specific risks

It is important to be aware of a number of specific risks that may not be evident when interacting with GenAI systems:

  • Inaccurate and inappropriate content generation. GenAI systems may expose users to inappropriate, abusive or misleading content. This risk may be exacerbated by the semblance of intelligence or authoritativeness these systems create.
  • Algorithmic bias and discrimination. GenAI systems may replicate biases in historical data or introduce new ones as they learn patterns that would be considered inappropriate if applied in human decision-making.
  • Privacy and data protection. AI tools may enable new forms of ‘algorithmic surveillance’ that involve inappropriate use of personal data, and they may involve the inadvertent leaking of sensitive data from AI models when queried in specific ways.
  • Opacity. Many modern AI systems are built on extremely complex mathematical ‘black-box’ models that cannot be effectively inspected or interpreted. As a result, they cannot produce the explanations for their predictions and decisions needed to ensure they align with human values and objectives, or to satisfy legal requirements (for example, data protection legislation).
  • Security and integrity. While all IT systems may have security vulnerabilities, there are specific new ‘attacks’ on AI systems. These involve, for example, forcing them to produce flawed results by ‘poisoning’ their training data, or presenting them with highly unusual inputs that ‘trick’ them into producing the wrong output.
  • Sustainability. GenAI systems consume massive amounts of computing power to train and run at scale, and have significant impacts on other environmental resources. The environmental costs and sustainability of GenAI use must be weighed against its benefits.
  • Copyright infringement. GenAI systems have most likely been trained on copyrighted materials and may create ‘new’ content that is very similar to the material they were trained on.

Inappropriate uses

Staff should be aware that they may be exposed to inappropriate uses of GenAI by others:

GenAI makes it easy to generate or manipulate misleading, inaccurate, inappropriate and fake artefacts. Adversarial actors may deliberately use these systems to create such artefacts in very credible ways, but the same content may also be spread inadvertently, without bad intent, by users who have unknowingly consumed, processed or propagated it. Currently, no reliable technical methods are available to automatically detect AI-generated content or verify its source or accuracy.

Colleagues therefore need to be vigilant when they consume content, especially online, and to be aware of the ways in which it may harm the security, wellbeing, financial situation and reputation of our staff and students, of other people and communities, and of the institution and the wider higher education sector.

Globally increasing (geo)political tensions and socioeconomic challenges give bad actors many opportunities to use AI to exploit the vulnerabilities of people and communities. Associated with this are cybersecurity risks, as GenAI systems can be used to deploy new forms of attack on the integrity of computer systems, including new forms of social engineering.

On a more specific level, AI-supported plagiarism and other forms of misconduct pose threats to academic and research integrity. This can include anything from inappropriate use of AI in skills assessment (from university teaching to the professional and aptitude tests used for accreditation and employment) and in the preparation and review of publications and grant proposals, to the fabrication of evidence in research, teaching and business operations.

University staff frequently engage in tasks that involve interacting with wider communities where AI misuse may occur. It is therefore important to understand AI capabilities well enough to assess risks and apply due diligence when consuming or assessing the work of others.

Many inappropriate uses of AI may affect how we manage legal and contractual obligations, our approaches to data protection, equality, diversity and inclusion, bullying and harassment, climate action and sustainability, academic standards and policy, and much ongoing work on people and culture. Staff are encouraged to consider the broader impact of developments in AI when they contribute to these activities.

Systematic or large-scale use of AI systems in university processes

The University Digital Strategy, approved by Court, states: “The use of AI will be governed by an agreed set of guidelines or policy for each broad area where these technologies are used.”

Typically, your College Ethics Committee or your Professional Services Ethics Committee will need to approve the wider use of AI in any particular process or area. This includes the purchase and use of AI-enabled software for larger-scale administrative or other processes, as well as new AI-enabled modules or AI capability in existing platforms. The degree of review or approval will depend on the type of data being used and the outcomes. For large-scale use, especially where personal data, confidential data, copyright or other considerations are involved, it is expected that an AI Impact Assessment, an Equality Impact Assessment, a Data Protection Impact Assessment and a Privacy Statement or notice will need to be completed, then reviewed and approved by the appropriate ethics board. Ethics boards and applicants should consider the EU AI guidelines below when approving the use of large-scale AI systems. The University AI and Data Ethics Board (AIDE) is also available to provide guidance and advice on matters of AI ethics and evolving best practice.

EU Guidelines for AI Systems

While there is currently no formal impact assessment procedure for AI, the AI Adoption Hub provides AI Impact Assessment guidance that can be useful when you are considering the adoption of new AI tools:

AI Impact Assessment Guidance

Further considerations

An issue we are working to address is how data produced and published by the University is used by vendors training new AI models. This includes our research publications and our web and social media content, as well as information we publish through channels not owned by the University, for example as part of our reporting to government agencies.

Beyond copyright and IP issues, there are risks around how AI might exploit the work of students and staff without appropriate compensation, and around how future systems may present and profile their work.

Staff members are encouraged to contribute to these discussions, but also to alert managers to new issues arising, for example, in the context of academic publishers establishing legal agreements with AI companies to contribute their authors’ content to the training of AI systems.

Many of the vendors who supply the University with IT systems are introducing AI functionality in their products. Understanding whether the University should adopt these requires colleagues in procurement, technology experts, and specialist business units to work together in order to make informed, responsible and economically viable decisions.

There are wider concerns among academic and other communities that users may come to place excessive trust in these systems, with potentially detrimental impacts on individual and collective opinions, a devaluation of human labour, creativity and judgement, and a ‘homogenisation’ of human thinking.

Staff are encouraged to contribute their expertise to these conversations where relevant, but need to be aware that many of our staff and students will be using AI tools not provided by the University. Our future students and staff, in particular, will come to the University with their own expectations regarding the availability of AI at universities. This is an important factor to consider in discussions and consultations.

The long-term impact of AI on the higher education sector is hard to predict, but the University hosts an enormous range of experts across all relevant academic and professional fields. As a leading university in AI, we encourage staff to contribute to initiatives that amplify our position as thought leaders in this area.

Feedback

The University is open to feedback on this guidance. Please send any comments or suggestions to the authors:

Professor Michael Rovatsos 

Vice Principal Gavin McLachlan