Large Language Models inquiry takes evidence on copyright, data privacy, system safety and testing
Friday 3 November 2023
The House of Lords Communications and Digital Committee, which is currently undertaking an inquiry into Large Language Models (LLMs), will next week hold two evidence sessions. The first covers the impact of LLMs and AI on copyright and privacy. The second examines the trade-offs around open- and closed-source models, safety testing and auditing.
The Committee will hear from lawyers, trade associations, academic experts and technology industry leaders across the two sessions to be held on Tuesday 7 November and Wednesday 8 November.
Giving evidence to the Committee will be:
Tuesday 7 November from 2:30pm, Committee Room 4
- Dan Conway, Chief Executive Officer at the Publishers Association
- Arnav Joshi, Senior Associate at Clifford Chance
- Richard Mollet, Head of European Government Affairs at RELX
- Dr Hayleigh Bosher, Associate Dean and Reader in Intellectual Property Law at Brunel University London
Wednesday 8 November from 2:20pm, Committee Room 4A
- Dr Moez Draief, Managing Director at Mozilla.ai
- Irene Solaiman, Head of Global Policy at Hugging Face
- Professor John McDermid, University of York
- Dr Koshiyama, CEO at Holistic AI
In Tuesday’s session the Committee will cover the use of copyrighted works and personal data in frontier AI models. The Committee will examine arguments around the application of existing law, licensing regimes and the appropriate role of governments, regulators and the courts in addressing these issues.
Wednesday’s session will focus on the appropriate policy responses to open- and closed-source AI models; implications for the balance of power, safety and competition; how safety testing and auditing systems would work; and what lessons can be learned from other sectors with experience of complex safety and liability issues.