Regulating AI-driven epidemiological surveillance in the United Kingdom: Addressing regulatory fragmentation through comparative analysis and proposed reform

Abstract
This dissertation investigates the legal, public-health, and economic implications of the United Kingdom's regulatory approach to artificial intelligence (AI)-driven epidemiological surveillance. It argues that the UK's post-Brexit 'pro-innovation, principles-based' framework, whilst defensible for low-risk AI applications, creates regulatory fragmentation that is inadequate for high-stakes public health surveillance where algorithmic decisions affect population health, individual liberty, and public trust.
The research employs a comparative legal analysis, contrasting the UK's patchwork of existing instruments (the UK General Data Protection Regulation, the Data Protection Act 2018, the Medical Devices Regulations 2002, and voluntary guidance) with the structured regimes of the European Union's AI Act (2024) and Canada's Artificial Intelligence and Data Act (proposed under Bill C-27, not enacted). Two UK-based case studies, the National Health Service (NHS) COVID-19 Contact Tracing App and the NHS COVID-19 Data Store, illustrate how regulatory fragmentation manifests operationally, and a secondary review of economic trends examines relationships between regulatory uncertainty and health-technology investment patterns. The analysis is grounded in a theoretical framework combining responsive regulation theory with public health law principles.
The dissertation finds that the UK does not lack applicable law but suffers from coordination deficits: no single authority bears responsibility for AI surveillance governance, no risk classification taxonomy exists, and compliance pathways remain uncertain. The case studies reveal that pandemic emergency conditions exacerbated pre-existing structural weaknesses rather than creating unique problems: architectural decisions were resolved through platform market power, parameters were modified without regulatory oversight, and accountability was achieved only through litigation.
The comparative analysis identifies regulatory design elements absent from UK law: risk classification taxonomies, mandatory pre-deployment conformity assessments, and designated supervisory authorities. Neither comparator has been operationally tested; the EU AI Act’s high-risk provisions are not yet fully applicable, and Canada’s AIDA was not enacted.
The dissertation proposes a risk-based regulatory framework designating the Medicines and Healthcare products Regulatory Agency (MHRA) as lead supervisory authority, establishing tiered obligations calibrated to system risk, mandating algorithmic impact assessments for high-risk deployments, and incorporating emergency governance provisions with non-waivable safeguards. This framework addresses identified deficiencies whilst preserving the regulatory flexibility that characterises the UK's broader approach to AI governance.
| Original language | English |
|---|---|
| Media of output | Dissertation |
| Publisher | The University of Law, UK |
| Publication status | Published - 28 Mar 2026 |