The use of generative AI raises a number of legal issues of which staff should be aware. These include (but are not limited to):

Inputs

There are potential issues around the extent to which data can be used as inputs for generative AI. The UK Government is calling on the Intellectual Property Office to provide more guidance, particularly around the use of publicly available content. Laws which protect the ownership of databases, and contractual requirements which restrict the use of datasets, should be considered.

Intellectual property

IP rights need to be respected. If the output infringes the intellectual property of a third party, that party may take legal action against the person who makes the output available. Systems like ChatGPT do not verify whether their output infringes the rights of third parties, and if the output is identical or substantially similar to a copyright work, this may result in a claim for infringement of copyright. Legal responsibility for copyright infringement is likely to lie with the user, not with the generative AI tool.

Reliability

Currently, AI products carry broad disclaimers stating that outputs cannot be relied on. The underlying training data is also not up to date. Staff should also be aware that discriminatory outcomes resulting from the use of AI may contravene the protections set out in the Equality Act 2010.

Data protection law

AI systems are required by data protection law to process personal data fairly and lawfully. It is not currently always possible to delete data from a generative AI system, so these systems may not be compliant with relevant law such as the UK GDPR and the Data Protection Act. Staff should not input any personal data into any generative AI tool unless they are using the University's ELM platform and can ensure that any use of personal data will comply with the University's data protection policy.

If any of your teams are developing AI solutions, they should review the updated NCSC guidance, Principles for Security of Machine Learning, as part of their work.