A Red Teaming Framework for Securing AI in Maritime Autonomous Systems

Mathew J. Walter*, Aaron Barrett, Kimberly Tam

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Artificial intelligence (AI) is being ubiquitously adopted to automate processes in science and industry. However, due to its often intricate and opaque nature, AI has been shown to possess inherent vulnerabilities that can be maliciously exploited with adversarial AI, potentially putting AI users and developers at both cyber and physical risk. In addition, there is insufficient comprehension of the real-world effects of adversarial AI and an inadequacy of AI security examinations; therefore, the growing threat landscape is unknown for many AI solutions. To mitigate this issue, we propose one of the first red team frameworks for evaluating the AI security of maritime autonomous systems (MAS). The framework provides operators with a proactive (secure by design) and reactive (post-deployment evaluation) response to securing AI technology today and in the future. This framework is a multi-part checklist, which can be tailored to different systems and requirements. We demonstrate this framework to be highly effective for a red team to use to uncover numerous vulnerabilities within a real-world MAS AI, ranging from poisoning to adversarial patch attacks. The lessons learned from systematic AI red teaming can help prevent MAS-related catastrophic events in a world with increasing uptake of, and reliance on, mission-critical AI.

Original language: English
Article number: 2395750
Journal: Applied Artificial Intelligence
Volume: 38
Issue number: 1
DOIs
Publication status: Published - 4 Sept 2024

ASJC Scopus subject areas

  • Artificial Intelligence

