Advancing Shared and Multi-Agent Autonomy in Underwater Missions: Integrating Knowledge Graphs and Retrieval-Augmented Generation

Heriot-Watt University, Edinburgh, UK
Open University of Cyprus, Nicosia, Cyprus
University of Girona, Girona, Spain
*Indicates Equal Contribution

Abstract

Robotic platforms have become essential for marine operations by providing regular and continuous access to offshore assets, such as underwater infrastructure inspection, environmental monitoring, and resource exploration. However, the complex and dynamic nature of underwater environments—characterized by limited visibility, unpredictable currents, and communication constraints—presents significant challenges that demand advanced autonomy while ensuring operator trust and oversight. Central to addressing these challenges are knowledge representation and reasoning techniques, particularly knowledge graphs and retrieval-augmented generation (RAG) systems, that enable robots to efficiently structure, retrieve, and interpret complex environmental data. These capabilities empower robotic agents to reason, adapt, and respond effectively to changing conditions. The primary goal of this work is to demonstrate both multi-agent autonomy and shared autonomy, where multiple robotic agents operate independently while remaining connected to a human supervisor. We show how a RAG-powered large language model, augmented with knowledge graph data and domain taxonomy, enables autonomous multi-agent decision-making and facilitates seamless human-robot interaction, resulting in 100% mission validation and behavior completeness. Finally, ablation studies reveal that without structured knowledge from the graph and/or taxonomy, the LLM is prone to hallucinations, which can compromise decision quality.

Overview of the RAG system

Overview of the RAG system, which integrates an Information Retrieval System and a Mission Behaviors Generator. The user-defined mission is processed by the ROS Node Info Retriever, which queries semantic knowledge (Knowledge Graph and Taxonomy) and runtime state to obtain relevant robot and environment data. This information guides the Behaviors Generator in producing adaptive Mission Behavior Trees, enabling execution and real-time feedback-driven updates.
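The retrieve-then-generate loop described above can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the `KnowledgeGraph` triple store, `build_context`, and the rule-based `generate_behavior_tree` stub (standing in for the LLM call) are all hypothetical names introduced here to show how retrieved facts ground behavior-tree generation.

```python
class KnowledgeGraph:
    """Toy triple store standing in for the semantic knowledge base."""

    def __init__(self):
        self.triples = set()

    def add(self, subj, pred, obj):
        self.triples.add((subj, pred, obj))

    def query(self, subj=None, pred=None, obj=None):
        # Return all triples matching the fields that are not None.
        return [t for t in self.triples
                if (subj is None or t[0] == subj)
                and (pred is None or t[1] == pred)
                and (obj is None or t[2] == obj)]


def build_context(kg, robots):
    """Retrieve per-robot facts to ground the generation prompt."""
    facts = []
    for robot in robots:
        facts += kg.query(subj=robot)
    return "\n".join(f"{s} {p} {o}" for s, p, o in sorted(facts))


def generate_behavior_tree(mission, context):
    """Placeholder for the LLM call: a simple rule sends low-battery
    robots to dock, mirroring how retrieved state constrains the plan."""
    tree = {"mission": mission, "children": []}
    for line in context.splitlines():
        subj, pred, obj = line.split(" ", 2)
        if pred == "battery" and float(obj) < 0.2:
            tree["children"].append({"robot": subj, "action": "dock"})
        else:
            tree["children"].append({"robot": subj, "action": "inspect"})
    return tree


kg = KnowledgeGraph()
kg.add("alpha", "battery", "0.15")
kg.add("beta", "battery", "0.90")
context = build_context(kg, ["alpha", "beta"])
tree = generate_behavior_tree("inspect pipeline", context)
```

In the full system the stub above would be an LLM prompt combining the user mission, the retrieved knowledge-graph facts, and the taxonomy, with runtime feedback triggering re-retrieval and tree updates.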



Full and shared autonomy use-case schema

Operational environments for the AUV: (a) the visible light communication (VLC) unit mounted on the docking station, (b) the docking station setup, and (c) the AUV docked for recharging or maintenance. Bottom: AUV Beta interacting with an object while Alpha recharges at the docking station.






Demo Video

BibTeX

@misc{grimaldi2025advancingsharedmultiagentautonomy,
      title={Advancing Shared and Multi-Agent Autonomy in Underwater Missions: Integrating Knowledge Graphs and Retrieval-Augmented Generation}, 
      author={Michele Grimaldi and Carlo Cernicchiaro and Sebastian Realpe Rua and Alaaeddine El-Masri-El-Chaarani and Markus Buchholz and Loizos Michael and Pere Ridao Rodriguez and Ignacio Carlucho and Yvan R. Petillot},
      year={2025},
      eprint={2507.20370},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2507.20370}, 
}