Below is the final Project Report, including the AI in EdTech Policy Requirements and Recommendations.

Introduction

Since the release of the latest versions of ChatGPT (v3.5 in November 2022 and v4 in March 2023), many technology companies have rushed to add artificial intelligence (AI) features to their services. Prominent examples include Microsoft's Copilot, available in the Bing search engine and newer versions of Windows; Google's Gemini on its latest mobile phones; and the AI Assistant in Adobe products. Educational technology (EdTech) services have not been immune to this wave of AI adoption. An AI detection feature was added to the similarity checking service Turnitin in April 2023. Various AI helper tools have been added to our virtual learning environment Learn since July 2023. Wooclap added an AI wizard to generate multiple choice or open questions in November 2023. All of these features are under the University's control to enable or not, and all are currently switched off.

The biggest barrier to adoption for AI tools is likely to be clear assurances from suppliers on the compliance of their AI features with University policy and legal obligations. A common process will allow us to be consistent in the evaluation and adoption of AI tools and features. The processes described below for the evaluation of AI tools are an extended reworking of existing processes for the introduction of new non-AI features into services. As such, adopting them and extending them to AI features should not place much of a burden on service teams.

It is important that decisions on the enabling of AI features are transparent to users. The AI Innovations Service Release Tracker, and the wider SADIE SharePoint site, will give the rationale behind the approach adopted and the decisions made. They will also provide advice on the risks of using a tool even where it has been made available. The adoption of AI tools and features will likely require a review of University policies, potentially including but not limited to the OER, Lecture Recording, Virtual Classroom and Learning Analytics policies, to take account of the risks identified as part of this project.

The Scoping AI Developments in EdTech at Edinburgh (SADIE) project was set up to standardise an approach for service teams to test and evaluate the utility and suitability of the AI tools and features being made available in the centrally supported EdTech services. The approach developed looked at the risks of adopting a particular feature and calls upon the expertise of learning technologists within the Schools, as well as that of the service managers in Information Services, in evaluating them.

AI Use Across the University and the Wider UK Higher Education Sector

Within the University of Edinburgh there is increasing interest in the use of AI for learning and teaching. Engagement with the learning technology community in Schools and Colleges made this interest clear, although familiarity with the array of risks around AI appears limited. The hesitation to turn on AI tools without a full risk evaluation was accompanied by some frustration around certain tools and features, within services like Learn, that are not currently enabled. The outcomes of the engagement sessions included many Schools and Colleges seeking guidance on the University of Edinburgh's approach towards AI, and clarity on why certain AI tools and features may not have been enabled.
There was an opportunity to investigate the wider higher education sector's approaches towards AI as part of a change impact programme from Advance HE – Preparing for an AI-Enabled Future. While some universities initially enabled most AI features offered by their supported services, further exploration of the tools uncovered areas of uncertainty around privacy and data protection. For example, the use of content from uploaded files to drive and develop AI has forced institutions to establish safeguards before enabling these tools and features again.

Identification of the Risks of Using AI

Service managers for the centrally supported EdTech at the University are part of the Learning, Teaching and Web (LTW) directorate of Information Services. Part of their duties is to ensure that the services we use are compliant with legal requirements such as data protection, copyright and accessibility. The introduction of AI features introduces risks to these obligations, as it is not always clear how well AI features adhere to those legal requirements. The project team identified six primary risk categories.

- Bias and fairness risks – unless trained on material uploaded to the AI model by University staff, the data used to train the AI, and on which it generates its output, are largely unknown. Unless carefully curated, this training data is likely to contain biases that can be amplified and may lead to bias or unfairness towards certain groups.
- Reliability and accuracy risks – AI output is subject to error, through misinterpretation of the data or amplification of mistakes in the data. In addition, it is common for output that is presented as fact, and delivered without any caveat, to be false or inaccurate. Without critical oversight by a subject matter expert, use of compromised output from AI could affect the reputation of the University.
- Regulatory and compliance risks – the use of AI systems can involve collecting, processing and storing personal data, including student records, faculty data and other sensitive information. Institutions must ensure compliance with relevant data protection and privacy laws, such as the Data Protection Act (DPA) in the UK and the General Data Protection Regulation (GDPR) in both the UK and the European Union. In many cases, AI systems have been trained on copyrighted material without the agreement of the copyright holder, so the output may be in breach of copyright legislation.
- Ethical and social risks – there are several ethical concerns with the use of AI in the production of materials used at the University. For example, the process that AI uses is not transparent and, in some circumstances, could be seen as unfair and not aligned with University values. There is also a danger of human experience and expertise being side-lined by an overreliance on AI.
- Business risks – conversely, the University has been at the forefront of AI research for many years and it would seem strange if it did not allow colleagues to innovate with AI. This must be balanced against the other risks outlined in this section.
- Environmental risks – as with all systems in use at the University, the environmental impact must be assessed. Generative AI models operate on hardware that requires significant volumes of electricity and water for cooling, and a Generative AI approach may require several times the energy of an equivalent non-Generative AI method.
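The six categories above feed into the risk assessment and RAG scoring described later in this report. Purely as an illustration (the class and field names below are assumptions, not part of any University system), they could be recorded as a small controlled vocabulary in Python so that concerns raised during testing are logged consistently:

```python
from dataclasses import dataclass
from enum import Enum


class AIRiskCategory(Enum):
    """The six risk categories identified by the SADIE project."""
    BIAS_AND_FAIRNESS = "Bias and fairness"
    RELIABILITY_AND_ACCURACY = "Reliability and accuracy"
    REGULATORY_AND_COMPLIANCE = "Regulatory and compliance"
    ETHICAL_AND_SOCIAL = "Ethical and social"
    BUSINESS = "Business"
    ENVIRONMENTAL = "Environmental"


@dataclass
class RiskConcern:
    """A single concern raised against an AI feature under review (illustrative only)."""
    feature: str              # e.g. "Wooclap AI question wizard"
    category: AIRiskCategory  # one of the six SADIE risk categories
    note: str                 # free-text description of the concern


# Example usage: recording a data protection concern against a feature under test.
concern = RiskConcern(
    feature="Learn AI helper tools",
    category=AIRiskCategory.REGULATORY_AND_COMPLIANCE,
    note="Unclear whether uploaded course content is retained as training data.",
)
print(concern.category.value)  # "Regulatory and compliance"
```

Using a fixed vocabulary rather than free text would keep later reporting, such as the AI Matrix described below, consistent across service teams.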
It may never be possible to fully mitigate these risks when using AI tools, but the service managers can remain vigilant about the risks posed by particular features and be transparent in communicating these to their users. The following table shows where the six broad risk categories above may intersect with the categories used within the University's Risk Management Policy and Risk Appetite Statement. Likely intersections are indicated by an 'X'.

| Risk Policy and Risk Appetite | Bias and fairness | Reliability and accuracy | Regulatory and compliance | Ethical and social | Business | Environmental |
| --- | --- | --- | --- | --- | --- | --- |
| Reputation | X | X | X | X | X | |
| Compliance | X | X | X | X | | |
| Financial | | | X | | | |
| Research | | | | X | X | |
| Education & Student Experience | X | X | | X | X | |
| Knowledge Exchange | | X | X | | | |
| International Development | X | | | | | |
| Major change activities | X | X | X | X | | |
| Environment and Social Responsibility | X | | | X | | X |
| People and culture | X | X | X | X | X | |

An expanded description of the identified risks can be found on the AI Risk Identification Categories page.

Alternative to the Table Mapping AI Risk Categories to the University Risk Policy

This alternative mapping of the risk categories against the University's Risk Management Policy and Risk Appetite Statement is presented as a list for those unable to interpret the table above.

- Reputation: Bias and fairness, Reliability and accuracy, Regulatory and compliance, Ethical and social, Business.
- Compliance: Bias and fairness, Reliability and accuracy, Regulatory and compliance, Ethical and social.
- Financial: Regulatory and compliance.
- Research: Ethical and social, Business.
- Education & Student Experience: Bias and fairness, Reliability and accuracy, Ethical and social, Business.
- Knowledge Exchange: Reliability and accuracy, Regulatory and compliance.
- International Development: Bias and fairness.
- Major change activities: Bias and fairness, Reliability and accuracy, Regulatory and compliance, Ethical and social.
- Environment and Social Responsibility: Bias and fairness, Ethical and social, Environmental.
- People and culture: Bias and fairness, Reliability and accuracy, Regulatory and compliance, Ethical and social, Business.

Determining Whether to Enable AI Features in Centrally Supported EdTech

Currently, most AI features can be disabled at the institution level – though this may change if use of AI becomes normalised. Most features are included in the licence costs, though this too may change over time as many large language models are pay per use. Given the rapid pace of development of AI features, a common process is required to monitor, test and evaluate AI developments in each service. The process is described in brief in the rest of this section and is available to read in full on the SADIE SharePoint site (appendix one shows a high-level process map of the process for determining whether AI features are enabled): SADIE: Management of AI in EdTech Process.

Service teams will identify any AI features or tools within their services – through supplier release notes and/or conversations with supplier account managers. The service team will then seek clarity from the supplier on the compliance of the feature(s) with University policy and applicable legislation.
If required, data protection and information security checks will be made. If the AI feature passes these checks, then a testing environment, or limited user access, will be set up to enable testing of the feature – this testing will include colleagues from the learning technology community across the University.

As a result of the testing by the appropriate service team, a Red, Amber, Green (RAG) status will be assigned to the feature. The RAG status is determined as the product of the likelihood of switching on a feature (on a scale of 1 to 5, where 1 is almost certain and 5 is almost certainly not) and an assessment of the consequences of switching it on (on a scale of 1 to 5, where 1 is insignificant and 5 is almost catastrophic). The RAG status will be used to determine the next steps – enable or escalate for a decision. The escalation route is given in the table below.

| Feature Score | RAG Status | Escalation Required |
| --- | --- | --- |
| 0 – 5 | Green (low risk) | Service team signs off in consultation with service manager. |
| 6 – 10 | Amber (medium risk) | Service manager signs off in consultation with service owner. |
| 12 – 16 | Red (high risk) | Service owner signs off in consultation with business service owner. |
| 20 – 25 | Red (extreme risk) | Business service owner consults with University governance groups. |

All decisions will be documented and recorded in a SharePoint site and communicated to users via the normal service channels. Once a feature has been approved, guidance on its use will be added to the service's existing documentation and, where appropriate, the feature will be subject to a training session or incorporated into existing training.

The service teams will continue to review all AI features within their service monthly – both disabled and enabled – to monitor and evaluate any changes as they occur. This will be captured in a full AI Matrix to allow service teams to regularly track progress and view the risks and RAG status of AI tools or features in centrally supported services. This structured approach to monitoring AI features provides transparency regarding the adoption, management and development of AI across all services. All decisions and rationale will be visible to users via the AI Innovations Service Release Tracker on the SADIE SharePoint site and communicated via the normal service channels.

AI in EdTech Policy Requirements

The wider adoption of AI tools will likely require examination of existing University policies to cover AI and the management of its risks. Likely requirements for a policy on enabling AI in central Learning Technologies would include:

- Scope. Clarification that the scope of the policy covers enabling Generative AI within central learning and teaching services, and how each AI tool or feature may be used if it is enabled. Clarification over whether the policy would need to cover discriminative AI.
- Governance. Policy that there should be a governance process to consider and approve recommendations on whether, and to what extent, AI tools or features should be made available for use within central University learning and teaching services. An outline of this process, and of the process to be followed where a vendor provides an AI tool or feature without the ability for the University to manage its availability. See: Management of AI in EdTech Process.
- Decision Factors. Details of the factors to be considered when deciding whether to enable an AI tool or feature, and whether subsequent training for staff or students should be mandated or recommended (see the scoring sketch below; the specific factors follow it).
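As a minimal, illustrative sketch only (the function and variable names below are assumptions, not part of any University system), the RAG scoring and escalation route described in the process section above could be expressed in Python as follows:

```python
def rag_status(likelihood: int, consequence: int) -> tuple[int, str, str]:
    """Return (score, RAG status, escalation route) for an AI feature.

    likelihood: 1-5, where 1 is "almost certain" and 5 is "almost certainly not".
    consequence: 1-5, where 1 is "insignificant" and 5 is "almost catastrophic".
    The score is the product of the two; the bands follow the escalation table above.
    """
    if not (1 <= likelihood <= 5 and 1 <= consequence <= 5):
        raise ValueError("likelihood and consequence must be between 1 and 5")

    score = likelihood * consequence
    if score <= 5:
        return score, "Green (low risk)", "Service team signs off in consultation with service manager."
    if score <= 10:
        return score, "Amber (medium risk)", "Service manager signs off in consultation with service owner."
    if score <= 16:
        return score, "Red (high risk)", "Service owner signs off in consultation with business service owner."
    return score, "Red (extreme risk)", "Business service owner consults with University governance groups."


# Example: a feature judged likely to be enabled (2) with moderate consequences (3).
print(rag_status(2, 3))  # score 6 -> Amber (medium risk): service manager signs off
```

The thresholds mirror the escalation table; scores such as 11 or 17 cannot arise as the product of two whole numbers between 1 and 5, so every achievable score falls into exactly one band.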
These factors may include, among other things:

- An agreed framework of risk criteria, tabulating at least the likely AI risk factors against the University's risk appetite in each area. See: Risk Identification Categories.
- Whether the construction of the AI, including the training data acquisition or human input practices, is ethical.
- The extent to which the AI is shared with third parties beyond the University, or between different EdTech services, and the extent to which the AI retains its inputs as further training data.
- Consideration of how we will know whether an output is AI- or human-produced. This is likely to be relevant regardless of whether the output is to be assessed.

- Service Register. Requirement to keep a register of central learning and teaching AI services, tools or features that the University has decided to make or not make available to staff and students, of those on which it has not yet decided, and of the reasons for each decision.
- User Notification. Policy or guidance on how staff and students are made aware that AI is available and is being used within a service.
- Training. Policy or guidance on how much University staff, visitors, students or learners need to understand about how an AI works before they use it. This should inform the training required before (i) using the AI at all and (ii) using the AI to enhance processes that involve personal data, such as marking. See: Recommendations.
- Data and Content. Clear criteria describing the circumstances in which it might be permissible for staff or student personal data or copyright content to be entered or absorbed into the AI model. It may also be relevant to consider other data or work, such as commercially sensitive data or unpublished research results. A University position on ownership of AI-generated content, on how AI-generated outputs are identified once produced, and on how these outputs may be used by members of the University or made available publicly.
- Incident Management. Consideration of the process for what happens when something goes wrong because of the use of a Generative AI system, and whether existing incident management and escalation procedures are sufficient.

Recommendations

- Services adopt the process for reviewing AI tools/features in EdTech. See: SADIE: Management of AI in EdTech Process.
- Consult with existing University AI groups, when appropriate, on high-risk decisions.
- Introduce a staff training course on the use of Generative AI for learning and teaching. This should equip users with practical knowledge and an understanding of the language surrounding AI so that they can interact with and use AI tools responsibly.
- Create and maintain an overview of relevant AI groups within Schools, Colleges and University-wide, to avoid duplication of effort and to encourage collaboration.

Policy recommendations:

- Innovation: Consider how each aspect of the policy and procedures can encourage responsible innovation in the use of Generative AI.
- Public AI: Consider whether to encourage a wider University policy on the extent to which public or semi-public AI services can be used for University teaching and assessment purposes.
- School and College Adoption: Consider whether the policy or procedures can be tailored to allow Schools and Colleges to adopt or repurpose them for services managed locally.

Concluding Remarks

The biggest barrier to adoption for AI tools is likely to be clear assurances from suppliers on the compliance of their AI features with University policy and legal obligations.
A common process will allow us to be consistent in the evaluation and adoption of AI tools and features. The processes described for the evaluation of AI tools are an extended reworking of existing processes for the introduction of new non-AI features into services. As such, adopting them and extending them to AI features should not place much of a burden on service teams.

It is important that decisions on the enabling of AI features are transparent to users. The AI Innovations Service Release Tracker, and the wider SADIE SharePoint site, will give the rationale behind the approach adopted and the decisions made. They will also provide advice on the risks of using a tool even where it has been made available. The adoption of AI tools and features will likely require a review of University policies, potentially including but not limited to the OER, Lecture Recording, Virtual Classroom and Learning Analytics policies, to take account of the risks identified as part of this project.

Appendix

High-level process map of the process for determining whether AI features are enabled (document, 55.38 KB).

This article was published on 2025-03-28