This section identifies known risks in the use of Artificial Intelligence (AI) within Higher Education (HE) and the risk categories associated with them. To support the management of AI in Educational Technology (EdTech), known risks have been identified and collated with associated risk categories in line with the University Risk Policy and Appetite.

Risk Categories

Six risk categories have been identified for AI:

- Bias and Fairness Risks
- Reliability and Accuracy Risks
- Regulatory and Compliance Risks
- Ethical and Social Risks
- Business Risk
- Environmental Risk

Bias and Fairness Risks

Bias and Fairness risks can impact upon the University's reputation, compliance, people and culture, and student experience.

- Bias or lack of diversity in the training data: Where training data is biased, the AI system can learn and amplify those biases, leading to unfair decisions or outcomes.
- Lack of transparency, accountability and explanation: A lack of transparency can hinder efforts to detect and mitigate unfair or discriminatory outcomes, undermine trust in the reliability and accuracy of AI outputs, and potentially make it impossible to question an output. Conversely, it may lead to inappropriately uncritical acceptance if the AI interface appears confident in its output.

Reliability and Accuracy Risks

Reliability and Accuracy risks can impact upon the use of AI in education and related research and, in some instances, upon the University's reputation and compliance. They include:

- Incomplete, inaccurate or even false training data and outputs: If the training data is incomplete, outdated, or contains errors or inaccuracies, the AI system's outputs and decisions may be unreliable or inaccurate.
- Algorithmic instability or drift: AI algorithms can be sensitive to changes in input data or environmental conditions. Over time, the performance and accuracy of AI systems may degrade or drift, leading to unreliable or inconsistent results.
- Unpredictable output: The user may find it difficult to understand and explain AI-based decision-making processes and how the AI system produces its output (known as the "black box problem"). An emerging field of AI research, known as AI interpretability, aims to open up the black box of deep learning to some extent. However, when the derivation of an AI output cannot be explained, the University risks being unable to comply with some of its audit and accountability responsibilities, and may put its reputation at risk if it cannot fully justify its decisions.
- Adversarial attacks or manipulation: Hackers may intentionally manipulate input data to deceive AI systems. By making subtle modifications to input data, such as images or text, hackers can trick AI systems into misclassifying objects or providing incorrect information.
- Lack of human oversight and validation: Overreliance on AI systems without proper human oversight and validation can lead to errors or inaccuracies going undetected.
- Loss of essential service: The risk to teaching and learning delivery is heightened when an AI system becomes essential and then malfunctions or becomes unavailable.

Regulatory and Compliance Risks

The use of AI systems can sometimes involve collecting, processing, and storing large amounts of personal data, including student records, faculty data, and other sensitive information. Institutions must ensure compliance with relevant data protection and privacy laws, such as the Data Protection Act (DPA) in the UK and the General Data Protection Regulation (GDPR) in the European Union.
- Data privacy concerns: AI systems may obtain or process personal data about students and employees, and large volumes of personal data may be entered into an AI system. This risks making personal or sensitive research data available to people who are not meant to have access to it.
- Data security risks: Risks such as unauthorized access, data breaches, and cyber-attacks on AI systems containing sensitive information.
- Data transferred to third-party AI services or vendors: Risks associated with the transfer of data to a third-party AI system vendor, such as interception of the data in transit or potential exposure of sensitive information by the third party.
- Unknowns around AI and copyright: Many AI systems were trained on copyright material, and there is a risk that litigation by copyright owners may affect whether these AI systems can continue to run. Copyright in AI output will likely not belong to the University, so the University's further use of that output may risk breaching copyright. Where copyright content is entered into an AI system, the copyright owner is unlikely in practice to be able to enforce its rights in that content.
- Accessibility legislation: There is a risk that AI systems may not provide adequate consideration and functionality for staff and students who require reasonable adjustments to use the system and its output. An Equality Impact Assessment must be carried out before an AI system is released as a central digital service.
- Policy and training risks: A lack of internal governance and procedure around AI use, and poor understanding of related risks, legislation and policies by staff and students, would put at risk both compliance with legislation and the University's reputation as a trusted and reliable educator.

Ethical and Social Risks

The use of AI systems in higher education institutions also raises several ethical and social risks that need to be carefully addressed.

- Trust and transparency: A lack of transparency can lead to a lack of trust in the fairness and impartiality of AI-driven decisions, particularly in sensitive areas such as admissions, grading, or academic integrity.
- Alignment with institutional values: Even with a specific prompt, there is a strong risk that the output or decision of an AI system will not align with the University's stated values and ethical position.
- Academic freedom and autonomy concerns: The increasing reliance on AI systems in educational settings may limit academic freedom, autonomy, and creativity. AI algorithms could potentially constrain or influence the topics researched, the perspectives explored, or the pedagogical approaches adopted.
- Privacy and surveillance concerns: The use of AI for monitoring or surveillance purposes on campus could infringe on the privacy rights of students, faculty, and staff.
- Perpetuation of human biases: AI systems can perpetuate and amplify existing human biases present in their training data or algorithms. This can reinforce societal stereotypes, discriminatory practices, or unfair treatment of certain groups within the academic community. There is a significant risk that use of AI outputs where there is bias in the training data could lead to a breach of Equality legislation.
- Widening of inequalities: The adoption of AI technologies in higher education may exacerbate existing inequalities if access to these technologies is uneven or if they disproportionately benefit certain groups.
- Devaluation of human expertise: An overreliance on AI systems could lead to a devaluation or underappreciation of human expertise, experience, and judgment in academic contexts.
- Industrial relations: These may suffer if there is a perception or actuality of AI systems being used to replace staff rather than to improve their working conditions; if working conditions were, for whatever reason, to deteriorate because of the use of AI systems; or if staff believe AI outputs or uses in specific situations are illegal, unethical or unfair.

Business Risk

- Risk of falling behind: The University has been at the leading edge of AI research and teaching for many decades, and the four risk categories above must be balanced against the significant reputational risk of not creating, adopting and taking advantage of AI technologies to enhance teaching and learning; of not allowing colleagues to learn, experiment and innovate with AI; and of graduating students who are not experienced in using AI.

Environmental Risk

Consideration of whether a Generative AI solution is the right choice should take its environmental impact into account. Generative AI models run on hardware that is likely to require significant volumes of electricity, and of water for cooling, and a Generative AI approach may require several times the energy of an equivalent non-Generative AI method.

Mapping of AI Risk Categories to the University Risk Policy

The following table (also provided as a list below the table) shows where the six broad risk categories above may intersect with the categories used within the University's Risk Management Policy and Risk Appetite Statement. Likely intersections are indicated by an 'X'. For central digital learning and teaching services, the most relevant risk statement categories are likely to be Reputation, Compliance, Financial, Education and Student Experience, and People and Culture.

Document: Risk Management Policy and Risk Appetite Statement (391.57 KB / PDF)

| Risk Policy and Risk Appetite | Bias and fairness | Reliability and accuracy | Regulatory and compliance | Ethical and social | Business | Environmental |
| --- | --- | --- | --- | --- | --- | --- |
| Reputation | X | X | X | X | X | |
| Compliance | X | X | X | X | | |
| Financial | | | X | | | |
| Research | | | | X | X | |
| Education & Student Experience | X | X | | X | X | |
| Knowledge Exchange | | X | X | | | |
| International Development | X | | | | | |
| Major change activities | X | X | X | X | | |
| Environment and Social Responsibility | X | | | X | | X |
| People and culture | X | X | X | X | X | |

Alternative to the table of AI Risk Categories to the University Risk Policy

This alternative mapping of the risk categories against the University's Risk Management Policy and Risk Appetite Statement is presented as a list for those unable to interpret the table above. For each Risk Policy and Appetite category, the intersecting AI risk categories are listed.

- Reputation: Bias and fairness, Reliability and accuracy, Regulatory and compliance, Ethical and social, Business.
- Compliance: Bias and fairness, Reliability and accuracy, Regulatory and compliance, Ethical and social.
- Financial: Regulatory and compliance.
- Research: Ethical and social, Business.
- Education & Student Experience: Bias and fairness, Reliability and accuracy, Ethical and social, Business.
- Knowledge Exchange: Reliability and accuracy, Regulatory and compliance.
- International Development: Bias and fairness.
- Major change activities: Bias and fairness, Reliability and accuracy, Regulatory and compliance, Ethical and social.
- Environment and Social Responsibility: Bias and fairness, Ethical and social, Environmental.
- People and culture: Bias and fairness, Reliability and accuracy, Regulatory and compliance, Ethical and social, Business.

This article was published on 2025-03-28.