
The Juridical and Ontological Crisis of the Artificial Mind: A Theoretical Analysis of AI in Clinical Decision-Making and Social Governance

Research output: Working paper / Preprint

Abstract

The rapid integration of Artificial Intelligence into healthcare, law, and social governance exposes a structural incompatibility between the ontological foundations of Western legal systems and the epistemic character of machine learning. This paper argues that this incompatibility constitutes not merely a regulatory gap but a foundational tension in the concepts of agency, negligence, and care upon which medical law and clinical ethics depend. Drawing on Judea Pearl's Ladder of Causation, Antonio Damasio's somatic marker hypothesis, Karl Friston's free energy principle, and Michel Foucault's archaeology of clinical knowledge, the paper develops a theoretical framework demonstrating that current AI systems operate at a level of associative reasoning that is categorically insufficient for the counterfactual and causal reasoning presupposed by legal doctrines of fault. The analysis proceeds through four domains. First, it examines the ontological status of AI systems through the lens of philosophy of mind, demonstrating why probabilistic automata cannot satisfy the legal concept of negligence. Second, it extends Foucault's concept of the medical gaze to theorise an 'Algorithmic Gaze' that reconstructs the patient as a high-dimensional data vector, occluding subjective experience. Third, it analyses the destabilisation of UK medical negligence doctrine (specifically the Bolam standard of care and the Montgomery informed consent requirements) when applied to opaque algorithmic systems. Fourth, it maps the emerging regulatory landscape, including the EU AI Act, the UK Data (Use and Access) Act 2025, and sectoral governance by the MHRA, CQC, and GMC. The paper concludes that the 'human in the loop' risks reduction to a ritualistic liability transfer mechanism, and proposes a shift from individual negligence to enterprise liability, mandatory explainability thresholds for clinical AI, and a reconceptualisation from 'human in the loop' to 'human on the loop' governance.
Original language: English
Publisher: SSRN (Elsevier)
Publication status: Published - 19 Mar 2026

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 1 - No Poverty
  2. SDG 3 - Good Health and Well-being
  3. SDG 10 - Reduced Inequalities

Keywords

  • Artificial Intelligence
  • Medical Negligence
  • Algorithmic Governance
  • Philosophy of Mind
  • Informed Consent
  • Liability Gap
  • Explainable AI
  • Foucault
  • Pearl Causation
  • UK Healthcare Law
