This section outlines the processes for managing AI developments in our Learning Technology Services at the University. These AI developments may take the form of new tools or functionality offered by our existing vendor partners, or new products offered to us by new vendors.

Processes

The processes below can be adapted and used for managing AI within your own services.

Process of Identifying the RAG Status of an AI tool or feature

This section outlines the process for identifying the Artificial Intelligence (AI) Risk Red Amber Green (RAG) status within Learning, Teaching and Web's (LTW) centrally supported learning technology services.

- The Service Team identifies upcoming tools or features from release notes or information provided by the supplier. If required, the Service Team asks the supplier a set of questions to clarify the level of compliance with UoE policy and with applicable legislation.
- The AI Matrix (access for service managers only; others can access the user-facing AI Innovations Service Release Tracker) is updated monthly to ensure an up-to-date record of upcoming or new releases and that all releases for each of the services are in one place, providing transparency across EdTech services. The AI Matrix is then used to populate the AI Innovations Service Release Tracker, providing transparency to all users.
- If necessary, the Digital Learning Applications and Media (DLAM) Service Team checks whether the Data Protection Impact Assessment (DPIA) needs to be updated. Where data from the student record may be involved, Student Systems is consulted to check whether using this data is safe and lawful. DLAM may also consult the Data Protection Officer if required.

NOTE: A DPIA will be needed if different data is being processed than before, for example (a short code sketch of these triggers follows at the end of this section):
- if AI is making any sort of automated decisions;
- if the location of processing has changed;
- if there is a new sub-processor;
- if any new data is being processed that wasn't before.

- The DLAM team sends an Information Security questionnaire to vendors. Once completed by the vendor, this information is received by Information Security, which evaluates it and raises any concerns or recommendations back to the DLAM team.

NOTE: This detailed check may not always be necessary, as the process may have been completed previously, in which case redoing a subsection of the questionnaire may be all that is required. DLAM may also choose to check any concerns with Legal Services if it feels this is necessary.

- If the tool or feature passes the DPIA process, a testing environment is created, or provided by the vendor, to help the service team identify and quantify risks.
- Service team members meet to review release notes and identify risks using the Risk Identification Categories, then evaluate the impact and likelihood of those risks using the AI Risk Evaluation Tables, from which a Red, Amber or Green (RAG) status is identified.

Once risks are identified, the following applies: the identification of the risk scores sits with each service team and is evaluated monthly against the AI Matrix to reflect the current RAG status. The service team then follows the appropriate Escalation Process depending on the status outcome.
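As an illustration of the DPIA triggers noted above, the following is a minimal Python sketch. The field names and the `dpia_update_needed` helper are hypothetical, not part of any UoE system; the trigger list itself is taken from the note above.

```python
# Hypothetical sketch of the DPIA update triggers listed above.
# Field names are illustrative only, not drawn from a real UoE system.

def dpia_update_needed(release: dict) -> bool:
    """Return True if any of the documented DPIA triggers apply to a release."""
    triggers = (
        release.get("makes_automated_decisions", False),   # AI making automated decisions
        release.get("processing_location_changed", False), # location of processing changed
        release.get("new_subprocessor", False),            # a new sub-processor is involved
        release.get("new_data_processed", False),          # data not previously processed
    )
    return any(triggers)

# Example: a release introducing a new sub-processor triggers a DPIA update.
print(dpia_update_needed({"new_subprocessor": True}))  # True
```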
AI Risk Evaluation Table

This outlines the process for service teams to identify the Red Amber Green (RAG) status of AI developments within Learning, Teaching and Web (LTW). This structured approach helps in making informed decisions regarding the adoption, management and development of AI tools and features within the LTW portfolio of centrally supported learning technology services.

Risk Score Table

The risk score is the product of the likelihood and consequence ratings. In the table below, rows give the consequence rating and columns the likelihood rating. For example, when the likelihood is 1 and the consequence is 1, the risk is low (Green); if both are 5, the risk is extreme (Red) and will trigger an escalation.

| Consequence \ Likelihood | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| 5 | LOW (Green) | MED (Amber) | HIGH (Red) | EXT (Red) | EXT (Red) |
| 4 | LOW (Green) | MED (Amber) | HIGH (Red) | HIGH (Red) | EXT (Red) |
| 3 | LOW (Green) | MED (Amber) | MED (Amber) | HIGH (Red) | HIGH (Red) |
| 2 | LOW (Green) | LOW (Green) | MED (Amber) | MED (Amber) | MED (Amber) |
| 1 | LOW (Green) | LOW (Green) | LOW (Green) | LOW (Green) | LOW (Green) |

- Likelihood x Consequence score 0-5 = Low
- Likelihood x Consequence score 6-10 = Medium
- Likelihood x Consequence score 12-16 = High
- Likelihood x Consequence score 20-25 = Extreme

Risk Evaluation Tables

To use these tables, follow the steps below:

1. Determine likelihood: assess the probability of the AI tool or feature being turned on using Table 1 below. Choose the rating (1 to 5) that best fits the current situation or forecast.
2. Assess consequences: evaluate the potential impact of AI tool or feature issues using Table 2 below. Select the rating (1 to 5) that best describes the severity of the consequences.
3. Calculate risk level: multiply the likelihood rating by the consequence rating to determine the overall risk level. For example, if the likelihood is 1 and the consequence is 1, the risk is low (Green); if both are rated 5, the risk is extreme (Red). Once risks are identified, the Escalation Process should be followed.

Service Team Action Based on Risk Level:

- Green (Low Risk): proceed to turn-on approval within the service; minimal concerns. The tool or feature adds a small improvement that extends existing functionality in teaching and learning. It has not been raised to us by users as an issue or in feedback.
- Amber (Medium Risk): identify and manage risks within the service. The tool or feature may extend a well-used feature within teaching and learning but does not represent a large change in how that feature is used. This may also be a release that enables new functionality but is not something that has been requested by the community. The addition of the tool or feature may make a workflow more efficient but does not fundamentally change teaching and learning.
- Red (High/Extreme Risk): escalate the issue to senior management via the existing escalation process to consider next steps. The release greatly affects a well-used tool or feature, or may be part of a change to a high-risk workflow such as assessment or marking. The tool or feature adds or extends functionality or a workflow that has been requested by a large number of users and affects teaching and learning.
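The scoring arithmetic above is simple enough to express directly. The following is a minimal Python sketch of the published bands; the `rag_status` function name and return labels are ours for illustration, while the thresholds come from the Risk Score Table.

```python
# Illustrative sketch of the Likelihood x Consequence scoring described above.
# The function name and labels are ours; the bands are from the Risk Score Table.

def rag_status(likelihood: int, consequence: int) -> str:
    """Map 1-5 likelihood and consequence ratings to a RAG risk level."""
    if not (1 <= likelihood <= 5 and 1 <= consequence <= 5):
        raise ValueError("ratings must be between 1 and 5")
    score = likelihood * consequence
    if score <= 5:
        return "Low (Green)"
    if score <= 10:
        return "Medium (Amber)"
    if score <= 16:
        return "High (Red)"
    return "Extreme (Red)"

print(rag_status(1, 1))  # Low (Green)
print(rag_status(3, 4))  # High (Red), score 12
print(rag_status(5, 5))  # Extreme (Red), triggers escalation
```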
Table 1: Likelihood of Turning AI On

This table categorises the probability of AI tools or features being turned on or adopted by the University of Edinburgh (UoE) within specified time frames. The likelihood is rated from 1 to 5, with corresponding descriptions and probabilities.

| Likelihood of turning AI on | Rating | Criteria | Probability |
|---|---|---|---|
| Almost Certain | 1 | AI tools or features will almost certainly be turned on by UoE this semester/academic year. | 80-99%. AI tool or feature will be turned on. Complies with all UoE policy and legal regulatory requirements. |
| Likely | 2 | AI tools or features are more likely to be turned on than off by UoE. It would be surprising if they were not turned on. | 61-79%. AI tool or feature requires some further clarification from the vendor before switch-on. |
| Possible | 3 | AI tools or features are more likely to be turned off than on by UoE. | 40-60%. AI tool or feature requires further investigation and clarification from the vendor on areas such as data retention and processing, to check it complies with all UoE policy and legal regulatory requirements. |
| Unlikely | 4 | Turn-on/adoption of the AI tool or feature in Educational Design and Engagement (EDE) is not anticipated. | 11-39%. AI is unlikely to be turned on, as it requires assurances from the vendor on areas such as third-party components, bias, inaccuracy, data retention and/or copyright issues to comply with all UoE policy and legal regulatory requirements. |
| Almost Certainly Not | 5 | It would be unlikely if the AI tool or feature was turned on. A combination of unlikely events would have to occur, as the risk/impact is too high. | 0-10%. Switch-on of the AI tool or feature is very unlikely. The AI does not comply with UoE policy and legal regulatory requirements, for example: equality and accessibility legislation, data retention, copyright and privacy-related issues, or it has a significant environmental impact. |

Table 2: Consequences

This table evaluates the impact or consequences of issues arising from the implementation of AI tools/features. The consequences are rated from 1 to 5, with criteria and examples provided for each rating.

| Consequence | Rating | Criteria / examples |
|---|---|---|
| Insignificant | 1 | Minor problems can be addressed internally at the service level by the service team. No escalation of the AI tool or feature issue to upper management levels is required. The issue does not generate any negative attention towards the University. No significant (student, faculty, etc.) interest, or only manageable interest, in the AI tool or feature. Low demand for the AI tool or feature. |
| Minor | 2 | Problems can be dealt with at service level but require notification to managers/leadership. Delays to release dates for the tool or feature. Medium demand for the AI tool or feature. |
| Moderate | 3 | Recovery from an AI tool or feature failure or incident requires cooperation across multiple services/departments/vendors. Budget overspends or delays in deploying/using the AI tool or feature, but generally within contingencies. Problems that may generate negative media attention for the University because of high demand for the AI tool or feature. |
| Major | 4 | Significant damage to the University's reputation or integrity from AI issues. An event that requires a major realignment of how AI tools or features are used for delivering education. A significant AI failure or incident with a long recovery period. Failure to deliver a major educational commitment due to AI tool or feature problems, e.g. assessment grades, feedback, failure to train users. Major budget overspend or missed deployment deadline related to AI. |
| Catastrophic | 5 | Major unresolvable problems with implementing or using AI tools or features across the institution, from which there is a high risk of no or long recovery, e.g. data issues. May mean the end of utilising AI tools or features in higher education. Complete inability to use AI within teaching and learning. |
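To show how the two rating scales plug into the scoring, here is a small hypothetical usage sketch that reuses the `rag_status` function from the earlier example. The label dictionaries simply restate the ratings from Tables 1 and 2; `evaluate_release` and the feature name are illustrative.

```python
# Hypothetical usage of the rating scales with the rag_status sketch above.
# The dictionaries restate Tables 1 and 2; evaluate_release is illustrative.

LIKELIHOOD_LABELS = {
    1: "Almost Certain (80-99%)",
    2: "Likely (61-79%)",
    3: "Possible (40-60%)",
    4: "Unlikely (11-39%)",
    5: "Almost Certainly Not (0-10%)",
}

CONSEQUENCE_LABELS = {
    1: "Insignificant",
    2: "Minor",
    3: "Moderate",
    4: "Major",
    5: "Catastrophic",
}

def evaluate_release(name: str, likelihood: int, consequence: int) -> str:
    """Combine the two ratings into a one-line evaluation summary."""
    return (
        f"{name}: likelihood {likelihood} ({LIKELIHOOD_LABELS[likelihood]}), "
        f"consequence {consequence} ({CONSEQUENCE_LABELS[consequence]}) "
        f"-> {rag_status(likelihood, consequence)}"
    )

# A feature needing vendor clarification (2) with moderate impact (3) scores 6: Medium.
print(evaluate_release("Auto-captioning", 2, 3))
```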
Review of AI in Learning Technology

The processes below provide guidance on how to review both new and existing AI developments within the portfolio of Learning Technology tools. To review new and existing AI developments, each member of the service team(s) must be aware of the current availability of AI tools and features within their service and of any upcoming or in-development AI tools or features. This information can be obtained through roadmaps, release notes or vendor meetings.

These developments must be discussed within the service team and the appropriate process followed:

- Process for identifying and reviewing new AI developments in learning tools and features.
- Process for the continued review of existing AI developments in learning tools and features.

Please note that these tools and features could be enabled/disabled by the University of Edinburgh, or enabled by default where we have no control over the release. These processes allow the service teams to review, document and escalate any development that they believe may pose a risk to the University of Edinburgh, or one over which we believe we have limited or no control.

Process for identifying and reviewing new AI developments in learning tools and features

Identification and documentation process

To undertake the identification and documentation of AI tools or features, the process below should be followed:

1. The service team identifies any new AI developments in the tools or features within the current portfolio of centrally supported Learning Technology.
2. Once identified, the new developments are investigated and tested by the service team and, when appropriate, representatives from each College, to determine the requirement, the Risk Identification Categories, the impact, and how these align with the criteria set within the AI Matrix (access for service managers only; others can access the user-facing AI Innovations Service Release Tracker) and the AI Risk Evaluation Table. To ensure consistency in testing of the tool or feature, test scripts should be written and agreed by the service team so that feedback can be captured in a structured manner.
3. Service teams record the information about the tool or feature within the AI Matrix, completing all fields following the set criteria. To do this, service teams must have fully completed any testing required.
4. The service team then decides whether the tool or feature should be enabled by creating a Decision Document containing a summary of the tool or feature, pros and cons, risk and impact, recommendations, a proposed release date if the tool or feature is to be enabled, and the rationale for the decision. If the tool or feature is to be enabled by default, the service team creates a Decision Document containing a summary of the tool or feature, pros and cons, risk and impact, recommendations and the scheduled release date.
5. The Escalation Process should then be followed for sign-off, and the Decision Document must be saved within the service data store for future reference.

Escalation Process

To undertake escalation of AI tools or features, the process below should be followed:

- If the RAG status is Green (Low Risk): the service team signs off the decision in consultation with the service manager.
- If the RAG status is Amber (Medium Risk): the service manager signs off the decision in consultation with the service owner.
- If the RAG status is Red (High Risk): the service owner signs off the decision in consultation with the business service owner.
- If the RAG status is Dark Red (Extreme Risk): the business service owner consults with University governance groups.

Once the decision has been signed off, the service team communicates the decision, and the rationale for it, to:

- the user community, via their service community channels;
- LTW representatives, for further distribution and awareness within the schools.

Any changes must be reflected on the AI SharePoint and will be checked by the service team once the AI Matrix has been updated. If the tool or feature is to be enabled, the Implementation Process should be followed.
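A minimal sketch of the sign-off routing above follows. The role strings and `escalation_route` helper are illustrative shorthand; the actual escalation is an organisational process carried out by people, not an automated one.

```python
# Illustrative mapping of RAG status to sign-off roles, as described above.
# Role strings are shorthand for a manual, organisational process.

SIGN_OFF = {
    "Green":    ("service team", "service manager"),
    "Amber":    ("service manager", "service owner"),
    "Red":      ("service owner", "business service owner"),
    "Dark Red": ("business service owner", "University governance groups"),
}

def escalation_route(rag: str) -> str:
    """Describe who signs off a decision and who is consulted, by RAG status."""
    signer, consulted = SIGN_OFF[rag]
    return f"{rag}: signed off by the {signer} in consultation with the {consulted}"

print(escalation_route("Amber"))
# Amber: signed off by the service manager in consultation with the service owner
```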
Implementation process

To undertake the implementation of AI tools or features, the process below should be followed. If an AI tool or feature is to be made available, or will be made available by default:

- at least three weeks' notice should be given to the user community, summarising the tool or feature;
- where possible, the tool or feature should be made available in a testing environment so users can familiarise themselves with it before the release;
- user guidance and documentation must be created, reviewed and approved via the normal content creation process(es) and made available by the service team;
- where appropriate, training sessions for the tool or feature must be created, reviewed and approved via the normal content creation process(es) and offered to the community;
- where appropriate, existing training sessions to which the tool or feature is relevant should be updated, reviewed and approved via the normal content creation process(es).

Process for the continued review of existing AI developments in learning technology tools and features

The service teams monitor and review the development of existing AI tools or features within the current portfolio of Learning Technology through roadmaps, release notes or vendor meetings, and discuss these as a team. If the service team has not been made aware of any developments to the tools or features, the review must still be carried out within the agreed time period of once a month. The review will highlight changes that have taken place that the service team and the community need to be aware of, or a change to the initial risk review following the AI Risk Evaluation Table. Each review must be documented by the service team, recording the date of last review within the AI Matrix (access for service managers only; others can access the user-facing AI Innovations Service Release Tracker).

To undertake the review of existing AI tools or features, the process below should be followed:

1. The service team reviews the AI Matrix to identify all known AI tools or features for their service(s). If any features are missing from the list, these should be captured via the process for identifying and reviewing new AI developments in learning technology tools and features.
2. Once identified, the service team reviews each tool or feature to ensure no changes have been made by the vendor that may impact on the original risk evaluation review and decision.
3. If no changes are identified, the 'date of last review' should be updated in the AI Matrix to show that the entry has been reviewed within the agreed timescales.
4. If changes have been identified, the process for identifying and reviewing new AI developments in learning technology tools and features should be initiated, and a new review undertaken and escalated appropriately.
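As a sketch of the monthly review cycle described above, the following flags AI Matrix entries whose last review is older than the agreed period. The entry fields, feature names and the 31-day threshold are assumptions for illustration, not the real AI Matrix schema.

```python
# Hypothetical check for overdue reviews, based on the agreed monthly cycle.
# The entry fields and the 31-day threshold are illustrative assumptions.

from datetime import date, timedelta

REVIEW_PERIOD = timedelta(days=31)  # "once a month", approximated as 31 days

def overdue_reviews(entries: list[dict], today: date) -> list[str]:
    """Return names of tools or features whose last review is older than the period."""
    return [
        e["name"]
        for e in entries
        if today - e["date_of_last_review"] > REVIEW_PERIOD
    ]

matrix = [
    {"name": "Auto-captioning", "date_of_last_review": date(2025, 1, 10)},
    {"name": "Quiz generator", "date_of_last_review": date(2025, 3, 1)},
]
print(overdue_reviews(matrix, today=date(2025, 3, 15)))  # ['Auto-captioning']
```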
AI Innovations Service Release Tracker

Below is an example of the AI Innovations Service Release Tracker that is shared with our users. This is a simplified version of the AI Matrix that is populated by the service teams.

[Image: example of the AI Innovations Service Release Tracker]

Resources

The resources below can be adapted and used as templates for managing AI within your own services:

- AI matrix template (20.96 KB / XLSX)
- AI Matrix Notes (23.11 KB / DOCX)
- Decision Document (24.19 KB / DOCX)
- AI Glossary (24.66 KB / DOCX)

This article was published on 2025-03-28.