Abstract
Contemporary maritime operations such as shipping are a vital component of global trade and defence. The evolution towards maritime autonomous systems, which often provide significant benefits (e.g., cost, physical safety), requires the use of artificial intelligence (AI) to automate the functions of a conventional crew. However, unsecured AI systems can be plagued by vulnerabilities inherent in complex AI models. The adversarial AI threat, which has primarily been evaluated only in laboratory environments, increases the likelihood of strategic adversarial exploitation of, and attacks on, mission-critical AI, including maritime autonomous systems. This work evaluates AI threats to maritime autonomous systems in situ. The results show that multiple attacks can be used against real-world maritime autonomous systems with a range of lethality. However, the effects of AI attacks in a dynamic and complex environment differ from those proposed in lower-entropy laboratory environments. We propose a set of adversarial test examples and demonstrate their use specifically in the marine environment. The results of this paper highlight security risks and deliver a set of principles to mitigate threats to AI, throughout the AI lifecycle, in an evolving threat landscape.
Original language | English
---|---
Journal | AI, Computer Science and Robotics Technology
Volume | 0
Issue number | 0
Early online date | Apr 2023
DOIs |
Publication status | Published - Apr 2023