PhD opportunity [October 2026 start] on "Language-based agentic collaboration for endovascular acute stroke treatment"

Applications are invited for a fully funded 4-year PhD CDT STaR-AI studentship (including tuition fees, annual stipend and consumables) starting in October 2026.

Award details:

  • Focus: Language-based agentic collaboration for endovascular acute stroke treatment
  • Joint first supervisor: Tom Vercauteren
  • Joint first supervisor: Yali Du
  • Clinical champion: Thomas C Booth
  • Funding type: 4-year fully-funded CDT STaR-AI studentship including a stipend, tuition fees, a research training and support grant (RTSG), and a travel and conference allowance. We are only able to consider candidates who qualify for home fee status.
  • Project code: STaR-AI-15
  • Application closing date: 02 March 2026
  • Start date: October 2026
Mechanical thrombectomy procedure on a patient with symptoms of stroke as observed from the control room.

Project description

Stroke is the second most common cause of death and the third most common cause of disability (Feigin, 2025). Mechanical Thrombectomy (MT) has become the first-line treatment for acute ischaemic stroke. MT consists of inserting a flexible catheter from the groin to the brain and using it to physically remove the blood clot (a.k.a. thrombus) causing the stroke. MT is a complex procedure, and the number of physicians with the expertise to perform it, i.e. interventional neuro-radiologists (INRs), is too low. Owing to this shortage of INRs, fewer than 20% of eligible patients are currently treated (Aguiar de Sousa, 2019). Autonomous robotics and remote teleoperation are seen as potential solutions to this crisis. Current research in this area focuses on automating specific steps of the procedure (e.g. endovascular navigation) (Robertshaw, 2023). Yet, INRs operate in a collaborative environment involving multiple human agents. The radiographer, for example, plays a critical role by optimising X-ray fluoroscopic views to follow the instruments and capture anatomical context, by coordinating the injection of contrast agent, and by ensuring accurate real-time imaging to support the INR. Increasing the autonomy of MT therefore requires the development of multi-agent systems in which AI agents can seamlessly collaborate with human staff.

In this project, the PhD candidate will focus on the development of a collaborative framework between an INR and a radiographer agent to enable fluent endovascular navigation while maintaining strict safety constraints. Paving the way for flexible human-AI team composition (Yan, 2023), the project will develop a decentralised multi-agent system exploiting natural language as the basis for communication between the agents. The theoretical foundation draws from constrained and risk-sensitive reinforcement learning (RL) for safety-critical control (Gu, 2024), multi-agent coordination under partial observability, and human factors engineering for interventional workflows. Empirically, we will leverage high-fidelity computer simulation and physical mock labs to progressively validate agent behaviours, with clinical input shaping the safety envelopes and evaluation metrics.

In year 1, the PhD candidate will build expertise in safe multi-agent RL, world models for robotics (Wu, 2023), human-AI communication, and real-time fluoroscopy image analysis through a scoped literature review. This grounding will then be used to engage with clinical collaborators, patient groups, ethics experts, and the supervisory team to refine the proposed work plan through a co-creation exercise. The candidate will also expand a state-of-the-art computer simulation engine for MT (Karstensen, 2025) to account for the actions of a radiographer agent. Improvements in imaging physics realism and anatomical variability will be combined with the addition of novel communication channels and APIs to support natural language interaction between agents.

In year 2, the project will aim to formalise shared and role-specific goals (e.g. instrument tracking, target selection, contrast timing) under safety constraints (radiation dose, vessel injury, embolic risk). We will then develop training approaches and natural language protocols for agent-to-agent and human-AI communication, including intent clarification and fluoroscopy-based grounding of the decision making.

The resulting multi-agent system will be trained and benchmarked in computer simulation with anatomically realistic vasculature and imaging physics.

In year 3, the objective is to transition the system to our physical mock interventional suite to evaluate usability, teamwork quality, and task performance, with INR/radiographer user studies. On the methodological side, the candidate will seek to extend the latest safe RL research to multi-agent coordination with constraint satisfaction, uncertainty quantification, and risk-aware exploration.

By the end of the PhD, the candidate will produce a thesis supported by high-quality publications showing advances in safety-aware, language-mediated multi-agent systems, and demonstrating a credible path towards scalable human-AI teams in safety-critical environments such as MT.

References

  • Aguiar de Sousa, D., von Martial, R., Abilleira, S., Gattringer, T., Kobayashi, A., Gallofré, M., … & Fischer, U. (2019). Access to and delivery of acute ischaemic stroke treatments: a survey of national scientific societies and stroke experts in 44 European countries. European Stroke Journal, 4(1), 13-28.
  • Feigin, V. L., Brainin, M., Norrving, B., Martins, S. O., Pandian, J., Lindsay, P., … & Rautalin, I. (2025). World Stroke Organization: global stroke fact sheet 2025. International Journal of Stroke, 20(2), 132-144.
  • Gu, S., Yang, L., Du, Y., Chen, G., Walter, F., Wang, J., & Knoll, A. (2024). A review of safe reinforcement learning: methods, theories and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence.
  • Karstensen, L., Robertshaw, H., Hatzl, J., Jackson, B., Langejürgen, J., Breininger, K., … & Mathis-Ullrich, F. (2025). Learning-based autonomous navigation, benchmark environments and simulation framework for endovascular interventions. Computers in Biology and Medicine, 196, 110844.
  • Robertshaw, H., Karstensen, L., Jackson, B., Sadati, H., Rhode, K., Ourselin, S., … & Booth, T. C. (2023). Artificial intelligence in the autonomous navigation of endovascular interventions: a systematic review. Frontiers in Human Neuroscience, 17, 1239374.
  • Wu, P., Escontrela, A., Hafner, D., Abbeel, P., & Goldberg, K. (2023). DayDreamer: world models for physical robot learning. In Conference on Robot Learning (pp. 2226-2240). PMLR.
  • Yan, X., Guo, J., Lou, X., Wang, J., Zhang, H., & Du, Y. (2023). An efficient end-to-end training approach for zero-shot human-AI coordination. Advances in Neural Information Processing Systems, 36, 2636-2658.

Application Process


A candidate recruited to this project will be registered in the Department of Informatics at King’s College London; for entry requirements, see the Computer Science Research MPhil/PhD information.

For information on how to apply, see https://www.kcl.ac.uk/research/star-ai. Please take care to follow the instructions on how to apply, otherwise your application may not be considered.

About STaR-AI: King’s Prize Doctoral Programme in Safe, Trusted and Responsible Artificial Intelligence

The King’s Prize Doctoral Programme in Safe, Trusted and Responsible Artificial Intelligence (STaR AI) brings together leading researchers from across King’s College London to train the next generation of experts in responsible AI. The programme equips graduates to understand the technical challenges of building safe and trustworthy AI, to engage critically with its human and societal implications, and to work confidently across disciplines to ensure AI technologies have positive impact.

King’s longstanding strength in interdisciplinarity provides a distinctive environment for studying AI and its wider consequences. Building on the success of the UKRI Centre for Doctoral Training in Safe and Trusted AI, STaR AI is supported by specialists in AI methods, human-centred approaches, and legal and ethical frameworks from the Departments of Informatics and Digital Humanities, and the Dickson Poon School of Law. Students will gain both technical and non-technical expertise relevant to responsible AI development across sectors, and will be well prepared for diverse careers, including in academia, research and development, and policy.

The programme welcomes applicants from a wide range of disciplinary backgrounds. Multidisciplinary supervision teams support students working on diverse application areas, enabling cohorts that combine technical, social scientific and humanities perspectives. This diversity is central to developing well-rounded researchers able to meet the demands of a rapidly evolving national and international AI landscape.

A broad portfolio of PhD projects is available. Some focus on advancing core technical methods, such as new verification techniques for certifying AI system safety. Others examine human, social and legal dimensions, including the impact of AI on work and labour. Several projects span the socio-technical space, for example developing new approaches to human–AI collaboration.

At least four fully funded studentships for home fee candidates are available for entry in October 2026. STaR AI students join a collaborative research community and take part in regular cohort building and training activities.

Tom Vercauteren
Professor of Interventional Image Computing

Tom’s research interests include machine learning and computer-assisted interventions.