Safety and acceptability of a natural-language AI assistant to deliver clinical follow-up to cataract surgery patients: Proposal for a pragmatic evaluation

N. de Pennington, Guy Mole, Ernest Lim, Madison Milne-Ives, Eduardo Normando, Kanmin Xue, Edward Meinert*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Background: Due to an ageing population, demand for many services is exceeding the capacity of the clinical workforce. As a result, staff face a crisis of burnout from pressure to deliver high-volume workloads, driving increasing costs for providers. Artificial intelligence, in the form of conversational agents, presents a possible opportunity to enable efficiencies in the delivery of care.

Aims and Objectives: This study aims to evaluate the effectiveness, usability, and acceptability of Dora, an AI-enabled autonomous telemedicine call, for detecting post-operative cataract surgery patients who require further assessment. The study's objectives are to: 1) establish Dora's efficacy in comparison with an expert clinician, 2) determine baseline sensitivity and specificity for the detection of true complications, 3) evaluate patient acceptability, 4) collect evidence for cost-effectiveness, and 5) capture data to support further development and evaluation.

Methods: Based on implementation science, the interdisciplinary study will be a mixed-methods phase 1 pilot establishing the system's inter-observer reliability, usability, and acceptability. This will be done using the following scales and frameworks: the System Usability Scale; the assessment of Health Information Technology Interventions in Evidence-Based Medicine Evaluation Framework; the Telehealth Usability Questionnaire (TUQ); and the Non-Adoption, Abandonment and Challenges to the Scale-up, Spread and Sustainability (NASSS) framework.

Results: The results will be included in the final evaluation paper, which we aim to publish in 2022. The study will last eighteen months: seven months of evaluation and intervention refinement, nine months of implementation and follow-up, and two months of post-evaluation analysis and write-up.

Conclusions: The project's key contributions will be evidence on the effectiveness, usability, and acceptability of an artificial intelligence voice conversational agent.
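To make the evaluation targets in the objectives concrete, the sketch below illustrates, with purely hypothetical data not taken from the protocol, how baseline sensitivity and specificity against an expert clinician's triage decisions, and an inter-observer agreement statistic such as Cohen's kappa, could be computed from paired assessments. The function names, the choice of Cohen's kappa as the reliability measure, and the example labels are assumptions for illustration only.

```python
# Minimal sketch (not from the protocol): computing the headline metrics from
# paired triage decisions. 1 = complication flagged / needs further assessment,
# 0 = routine recovery. All data are hypothetical placeholders.

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity and specificity of binary 'needs further assessment' calls."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(rater_a, rater_b):
    """Inter-observer agreement between two binary raters (e.g. AI call vs. clinician)."""
    n = len(rater_a)
    observed = sum(1 for a, b in zip(rater_a, rater_b) if a == b) / n
    p_a1 = sum(rater_a) / n
    p_b1 = sum(rater_b) / n
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

# Hypothetical paired decisions for ten post-operative patients.
clinician = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]
dora      = [1, 0, 1, 1, 0, 0, 0, 0, 0, 0]

sens, spec = sensitivity_specificity(clinician, dora)
print(f"Sensitivity: {sens:.2f}, Specificity: {spec:.2f}")
print(f"Cohen's kappa: {cohens_kappa(clinician, dora):.2f}")
```

In this made-up example the output would be a sensitivity of 0.67, a specificity of 0.86, and a kappa of about 0.52; the study itself would report these against real paired assessments collected during the pilot.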
Original language: English
Journal: JMIR Research Protocols
Volume: 10
Issue number: 7
Early online date: 28 Jul 2021
Publication status: Published - 28 Jul 2021
