Written evidence: Large language models – Dr Elena Abrusci, Dr Hayleigh Bosher and Dr Alina Miron

"LLMs require even more transparency than general AI, both in terms of sharing code for the technical community and in terms of producing accessible information to the general public," write Dr Elena Abrusci, Dr Hayleigh Bosher and Dr Alina Miron in the recently published written evidence. The authors examine large language models and what needs to happen over the next 1–3 years to ensure the UK can respond to their opportunities and risks. 

Key recommendations:

  1. The government should confirm that the text and data mining copyright exception will not be extended. Any further exceptions should be narrow and granted only if they clearly address a specific barrier to innovation.
  2. The government should confirm that the use of AI training data includes the use of copyright-protected content, for which rightsholders are entitled to be remunerated and creators are entitled to be acknowledged.
  3. The government should consider whether AI-generated works should be protected by copyright.
  4. If the government decides that AI-generated works can be protectable, it should consider the justification for, and extent of, the rights granted.

Read the full written evidence here.

This evidence was published on 18 October 2023 in response to the Large language models call for evidence.