CAI4CAI

We are an academic research group focusing on Contextual Artificial Intelligence for Computer Assisted Interventions.

CAI4CAI is embedded in the School of Biomedical Engineering & Imaging Sciences at King’s College London, UK

About us

We are based at King’s College London
Surgical assistance

Our engineering research aims to improve surgical & interventional sciences

Multidisciplinarity

We take a multidisciplinary, collaborative approach to solving clinical challenges

London

Our labs are located in St Thomas’ Hospital, a prominent London landmark

Machine learning

We design learning-based approaches for multi-modal reasoning

Medical imaging

Medical imaging is a core source of information in our research

Computational biophotonics

We design intelligent systems exploiting information captured by safe light

Contextual AI

We strive to provide the right information at the right time to the surgical team and embrace human/AI interactions

Translational research

Strong industrial links are key to accelerating the translation of cutting-edge research into clinical impact

Open culture

We support open source and open access, and involve patients in our research

Recent posts

Collaborative research

CDT SMI

Through a comprehensive, integrated training programme, the Centre for Doctoral Training in Smart Medical Imaging trains the next generation of medical imaging researchers.

GIFT-Surg

The GIFT-Surg project is an international research effort developing the technology, tools and training necessary to make fetal surgery a viable possibility.

icovid

The icovid project focuses on AI-based lung CT analysis providing accurate quantification of disease and prognostic information in patients with suspected COVID-19 disease.

TRABIT

The Translational Brain Imaging Training Network (TRABIT) is an interdisciplinary and intersectoral joint PhD training effort of computational scientists, clinicians, and the industry in the field of neuroimaging.

Spin-outs and industry collaborations

Pathways to clinical impact

Moon Surgical

Moon Surgical has partnered with us to develop machine learning for computer-assisted surgery. More information is available in our press release.

Hypervision Surgical Ltd

Following successful in-patient clinical studies of CAI4CAI’s translational research on a computational hyperspectral imaging system for intraoperative surgical guidance, Hypervision Surgical Ltd was founded by Michael Ebner, Tom Vercauteren, Jonathan Shapey, and Sébastien Ourselin.

In collaboration with CAI4CAI, Hypervision Surgical’s goal is to convert the AI-powered imaging prototype system into a commercial medical device to equip clinicians with advanced computer-assisted tissue analysis for improved surgical precision and patient safety.

icometrix

icometrix is the consortium lead of the icovid project.

Intel (previously COSMONiO)

Intel is the industrial sponsor of Theo Barfoot’s PhD on Active and continual learning strategies for deep learning assisted interactive segmentation of new databases.

Mauna Kea Technologies

Tom Vercauteren worked for 10 years with Mauna Kea Technologies (MKT) before resuming his academic career.

Medtronic

Medtronic is the industrial sponsor of Tom Vercauteren’s Medtronic / Royal Academy of Engineering Research Chair in Machine Learning for Computer-Assisted Neurosurgery.

Open research

Exemplar outputs of our research

Learning joint Segmentation of Tissues And Brain Lesions (jSTABL) from task-specific hetero-modal domain-shifted datasets

Open source PyTorch implementation of “Dorent, R., Booth, T., Li, W., Sudre, C. H., Kafiabadi, S., Cardoso, J., … & Vercauteren, T. (2020). Learning joint segmentation of tissues and brain lesions from task-specific hetero-modal domain-shifted datasets. Medical Image Analysis, 67, 101862.”

Intrapapillary Capillary Loop (IPCL) Classification

We provide open source code and open access data for our paper “García-Peraza-Herrera, L. C., Everson, M., Lovat, L., Wang, H. P., Wang, W. L., Haidry, R., … & Vercauteren, T. (2020). Intrapapillary capillary loop classification in magnification endoscopy: Open dataset and baseline methodology. International Journal of Computer Assisted Radiology and Surgery, 1-9.”

NiftyNet: Open-source convolutional neural networks platform for research in medical image analysis and image-guided therapy

[unmaintained] NiftyNet is a TensorFlow 1.x based open-source convolutional neural networks (CNN) platform for research in medical image analysis and image-guided therapy. It has been superseded by MONAI.
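For readers migrating from NiftyNet, the snippet below is a rough, hedged sketch of how a comparable 3D segmentation network can be instantiated with MONAI, the platform that superseded it; the hyper-parameters are placeholders rather than a recommended configuration.

```python
# Minimal MONAI sketch (illustrative only; hyper-parameters are placeholders).
import torch
from monai.networks.nets import UNet

# A small 3D U-Net for single-channel volumes with two output classes.
model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
)

volume = torch.randn(1, 1, 64, 64, 64)  # dummy batch: (batch, channel, D, H, W)
logits = model(volume)                  # -> shape (1, 2, 64, 64, 64)
```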

Python Unified Multi-tasking API (PUMA)

PUMA provides a simultaneous multi-tasking framework that takes care of managing the complexities of executing and controlling multiple threads and/or processes.
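As a generic illustration of the simultaneous multi-tasking pattern that PUMA manages, the sketch below uses only Python’s standard library; it is not the PUMA API.

```python
# Generic concurrency sketch with the Python standard library; NOT the PUMA API.
from concurrent.futures import ThreadPoolExecutor

def process_item(item):
    # Placeholder for per-task work (e.g. grabbing, filtering or encoding a frame).
    return item * item

# Run several tasks concurrently and collect their results.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_item, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```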

GIFT-Grab: Simple frame grabbing API

GIFT-Grab is an open-source C++ and Python API for acquiring, processing and encoding video streams in real time.
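For illustration only, the hypothetical OpenCV loop below shows the kind of acquire-process-encode pipeline that GIFT-Grab targets; it does not use the GIFT-Grab API itself, and the device index, codec and processing step are placeholders.

```python
# Generic acquire-process-encode loop with OpenCV; NOT the GIFT-Grab API.
import cv2

capture = cv2.VideoCapture(0)  # placeholder capture device index
writer = None

for _ in range(300):  # grab a bounded number of frames for this sketch
    grabbed, frame = capture.read()
    if not grabbed:
        break
    if writer is None:
        height, width = frame.shape[:2]
        fourcc = cv2.VideoWriter_fourcc(*"mp4v")
        writer = cv2.VideoWriter("output.mp4", fourcc, 30.0, (width, height))
    processed = cv2.GaussianBlur(frame, (5, 5), 0)  # placeholder processing step
    writer.write(processed)

capture.release()
if writer is not None:
    writer.release()
```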

The CAI4CAI team

Principal investigators

Tom Vercauteren

CAI4CAI lead

Professor of Interventional Image Computing

King's College London, United Kingdom

Sébastien Ourselin

Professor of Healthcare Engineering

King's College London, United Kingdom

Jonathan Shapey

Clinical Academic and Consultant Neurosurgeon

King's College London, United Kingdom

King's College Hospital NHS Foundation Trust, United Kingdom

Research team members

Aaron Kujawa

Research Associate

King's College London, United Kingdom

Groups: Tom, Jonathan

Charlie Budd

Research Software Engineer

King's College London, United Kingdom

Group: Tom

Junwen Wang

Research Associate

King's College London, United Kingdom

Groups: Tom, Jonathan

Lorena Garcia-Foncillas Macias

MRes/PhD Student

King's College London, United Kingdom

Groups: Jonathan, Tom

Marina Ivory

Research Associate

King's College London, United Kingdom

Groups: Jonathan, Tom

Matthew Elliot

Neurosurgical Clinical Research Fellow and PhD Student

King's College London, United Kingdom

King's College Hospital NHS Foundation Trust, United Kingdom

Groups: Jonathan, Tom

Maxence Boels

PhD Student

King's College London, United Kingdom

Groups: Seb, Prokar

Meng Wei

PhD Student

King's College London, United Kingdom

CDT Smart Medical Imaging

Groups: Tom, Miaojing

Navodini Wijethilake

PhD Student

King's College London, United Kingdom

Groups: Jonathan, Tom

Oluwatosin Alabi

PhD Student

King's College London, United Kingdom

EPSRC CDT in Smart Medical Imaging

Groups: Miaojing, Tom

Oscar MacCormac

Neurosurgery Clinical Research Fellow and PhD Student

King's College London, United Kingdom

King's College Hospital NHS Foundation Trust, United Kingdom

Groups: Jonathan, Tom

Peichao Li

PhD Student

King's College London, United Kingdom

Groups: Tom, Jonathan

Silvère Ségaud

Research Associate

King's College London, United Kingdom

Groups: Tom, Jonathan, Yijing

Soumya Kundu

PhD Student

King's College London, United Kingdom

Groups: Jonathan, Tom

Tangqi Shi

PhD Student

King's College London, United Kingdom

Groups: Tom Booth, Tom

Theo Barfoot

PhD Student

King's College London, United Kingdom

Imperial College London, United Kingdom

EPSRC CDT in Smart Medical Imaging

Groups: Tom, Ben, Jonathan

Yanghe Hao

MRes/PhD Student

King's College London, United Kingdom

Groups: Tom, Christos

Yijing Xie

L’Oréal UK&I and UNESCO for Women in Science Rising Talent Fellow

King's College London, United Kingdom

Affiliated research team members

Helena Williams

Postdoctoral fellow

KU Leuven, Belgium

King's College London, United Kingdom

Groups: Jan, Jan, Tom

Martin Huber

PhD Student

King's College London, United Kingdom

Groups: Christos, Tom

Michael Ebner

CEO and co-founder, Hypervision Surgical

Hypervision Surgical Ltd, United Kingdom

King's College London, United Kingdom

Alumni

CAI4CAI alumni

See our previous lab members

Open positions

You can browse our list of open positions (if any) here, and get an insight into the types of positions we typically advertise by browsing through our list of previous openings. We are also supportive of hosting strong PhD candidates and researchers supported by a personal fellowship/grant.

Please note that applications for the listed open positions need to be made through the University portal to be formally taken into account.

Tom Vercauteren appointed Senior Editor for Medical Image Analysis (MedIA) Journal

Tom Vercauteren has been appointed as Senior Editor for the highly ranked Medical Image Analysis journal, an official journal of the MICCAI society. Sébastien Ourselin also features on the editorial board of the publication.

Science for tomorrow's neurosurgery: Patient & Public Involvement (PPI) group - September 2023 group meeting

On 21st September we held our fourth ‘Science for Tomorrow’s Neurosurgery’ PPI group meeting online, with presentations from Oscar, Matt and Silvère. The presentations focused on an update from the NeuroHSI trial, with a clear demonstration of the improvements in the resolution of the HSI images we are now able to acquire; this prompted real praise from our patient representatives, which is extremely reassuring for the trial going forward. We also took this opportunity to announce the completion of the first phase of NeuroPPEYE, in which we aim to use HSI to quantify tumour fluorescence beyond what the human eye can see.

Discussions centered on the theme of “what is an acceptable number of participants for proof-of-concept studies”, generating very interesting points of view. The conclusion was that there is no “hard number” from the patient perspective, as long as a thorough assessment of the technology has been carried out. This is extremely helpful in how we progress with the trials, particularly NeuroPPEYE, which will begin recruitment for its second phase shortly.

Once again, the themes and discussions were summarized in picture format by our phenomenal illustrator, Jenny Leonard (see below), and we are already making plans for our next meeting in February 2024!

Jonathan Shapey delivers the Hunterian Lecture at the Society of British Neurological Surgeons autumn congress

Jonathan Shapey had the great honour of delivering the Hunterian Lecture at the Society of British Neurological Surgeons autumn congress (SBNS London 2023). Jonathan presented his work on developing a label-free, real-time, intraoperative hyperspectral imaging system for neurosurgery.

MICCAI 2023 Presentation for Deep Homography Prediction for Endoscopic Camera Motion Imitation Learning

This video presents work led by Martin Huber. Deep Homography Prediction for Endoscopic Camera Motion Imitation Learning investigates a fully self-supervised method for learning endoscopic camera motion from readily available datasets of laparoscopic interventions. The work addresses, and tries to go beyond, the common tool-following assumption in endoscopic camera motion automation. This work will be presented at the 26th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2023).

Presentation Video for IEEE IUS

This video presents work led by Mengjie Shi focusing on learning-based sound-speed correction for dual-modal photoacoustic/ultrasound imaging. This work will be presented at the 2023 IEEE International Ultrasonics Symposium (IUS).

You can read the preprint on arXiv: 2306.11034 and get the code from GitHub.

Presentation Video for OpTaS

This video presents work led by Christopher E. Mower. OpTaS is an OPtimization-based TAsk Specification library for trajectory optimization and model predictive control. This work will be presented at the 2023 IEEE International Conference on Robotics and Automation (ICRA).

Our crossMoDA challenge to be held at MICCAI 2023 is now live!

CAI4CAI members and alumni are leading the organization of the new edition of the cross-modality Domain Adaptation (crossMoDA) challenge for medical image segmentation, which will run as an official challenge during the Medical Image Computing and Computer Assisted Interventions (MICCAI) 2023 conference.

Hypervision Surgical awarded Cutlers' Surgical Prize for HyperSnap hyperspectral imaging system

The four co-founders of Hypervision Surgical, a King’s spin-out company, have been awarded the Cutlers’ Surgical Prize for outstanding work in the field of instrumentation, innovation and technical development.

Hypervision Surgical receives the Cutlers’ Surgical Prize.

The Cutlers’ Surgical Prize is one of the most prestigious annual prizes for original innovation in the design or application of surgical instruments, equipment or practice to improve the health and recovery of surgical patients.

Video for the ROS-PyBullet Interface

This video presents work led by Christopher E. Mower. The ROS-PyBullet Interface is a framework bridging the reliable contact simulator PyBullet and the Robot Operating System (ROS), with additional utilities for Human-Robot Interaction in the simulated environment. This work was presented at the Conference on Robot Learning (CoRL), 2022. The corresponding paper can be found at PMLR.

Open-source package for fast generalised geodesic distance transform

Muhammad led the development of FastGeodis, an open-source package that provides efficient implementations for computing geodesic and Euclidean distance transforms (or a mixture of both), targeting efficient utilisation of CPU and GPU hardware. The package handles both 2D and 3D data, achieving up to a 20x speedup on a CPU and up to a 74x speedup on a GPU compared to an existing open-source library that uses a non-parallelisable single-thread CPU implementation. A further in-depth comparison of performance improvements is discussed in the FastGeodis documentation.
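As a minimal usage sketch based on the package’s documented generalised_geodesic2d entry point (argument names and defaults may differ between releases):

```python
# Minimal FastGeodis sketch; check the FastGeodis documentation for exact arguments.
import torch
import FastGeodis

image = torch.rand(1, 1, 128, 128)   # input image as (batch, channel, H, W)
seed_mask = torch.ones_like(image)   # 0 at seed pixels, 1 everywhere else
seed_mask[..., 64, 64] = 0           # single seed in the centre of the image

v = 1e10        # large value assigned to unseeded regions
lamb = 1.0      # 1.0 -> fully geodesic, 0.0 -> fully Euclidean
iterations = 2  # number of raster-scan passes

geodesic_dist = FastGeodis.generalised_geodesic2d(image, seed_mask, v, lamb, iterations)
```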

Open-access Spatio-temporal Atlas of the Developing Fetal Brain with Spina Bifida Aperta

Lucas led the development of the first fetal brain atlas for spina bifida aperta (SBA). This first-of-its-kind atlas will allow researchers to perform measurements of the brain’s anatomy and to study its development in a large population of unborn babies with SBA.

Navodini W. and Muhammad A. won second place in the WiM-WILL Competition at MICCAI 2022

WiM-WILL is a digital platform that enables MICCAI members to share their career pathways with the outside world in parallel to the MICCAI conference. Muhammad Asad (interviewer) and Navodini Wijethilake (interviewee) from our lab participated in this year’s competition and secured second place. Their interview focused on overcoming challenges in research as a student. The complete interview is available below and on YouTube.

Yijing Xie awarded Royal Academy of Engineering Research Fellowship

Yijing has been awarded the prestigious Royal Academy of Engineering Research Fellowship for her research in the development of tools to help neurosurgeons during surgery.

Multi-label Scribbles Support in MONAI Label v0.4.0

The recent release of MONAI Label v0.4.0 extends support for multi-label scribble interactions to enable scribbles-based interactive segmentation methods.

CAI4CAI team member Muhammad Asad contributed to the development, testing and review of features related to scribbles-based interactive segmentation in MONAI Label.

FAROS Integration Week at Balgrist University Hospital
Our team is getting ready to test FAROS technology in the operating room. From left to right: Tom, Martin, Anisha, and Matt.

The FAROS consortium had a fantastic and highly productive time working at the labs of Balgrist Campus AG and the operating room at Balgrist University Hospital this week.

Congratulations to Dr. Rémi Delaunay!

A great milestone today for Rémi Delaunay who passed his PhD viva with minor corrections! His thesis is entitled “Computational ultrasound tissue characterisation for brain tumour resection”.

Thanks to Pierre Gélat and Greg Slabaugh for their role in examining the thesis.

Meet Anisha, Martin and Mengjie, CAI4CAI PhD students from the CDT SIE cohort

The Surgical & Interventional Engineering Centre for Doctoral Training delivers translational research to transform patient pathways. Meet some of our talented PhD students in this programme who are engineering better health!

PhD opportunity on "Exploiting multi-task learning for endoscopic vision in robotic surgery"

Project overview:

Overview of the project objective. Laparoscopic image courtesy of [ROBUST-MIS](https://robustmis2019.grand-challenge.org/).

Project summary

Multi-task learning is common in deep learning, where clear evidence shows that jointly learning correlated tasks can improve on individual performances (a hypothetical sketch of such a setup is shown below). Nevertheless, in practice, many tasks are processed independently. The reasons are manifold:
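As a hypothetical sketch of the joint-learning setup mentioned above (a shared backbone feeding task-specific heads, trained with a weighted sum of per-task losses; all names, tasks and sizes below are placeholders):

```python
# Toy multi-task model: one shared encoder, two task-specific heads (placeholder tasks).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskNet(nn.Module):
    def __init__(self, num_seg_classes=2, num_phases=5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # Hypothetical tasks: tool segmentation and surgical phase recognition.
        self.seg_head = nn.Conv2d(32, num_seg_classes, 1)
        self.phase_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_phases)
        )

    def forward(self, x):
        features = self.encoder(x)  # shared representation
        return self.seg_head(features), self.phase_head(features)

model = MultiTaskNet()
frames = torch.randn(2, 3, 128, 128)  # dummy batch of endoscopic frames
seg_logits, phase_logits = model(frames)

# Joint training signal: a weighted sum of the per-task losses.
seg_target = torch.zeros(2, 128, 128, dtype=torch.long)
phase_target = torch.tensor([0, 1])
loss = F.cross_entropy(seg_logits, seg_target) + 0.5 * F.cross_entropy(phase_logits, phase_target)
loss.backward()
```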

CAI4CAI presenters at MICCAI 2021

CAI4CAI will be presenting their work at MICCAI 2021, the 24th International Conference on Medical Image Computing and Computer Assisted Intervention, held from 27 September to 1 October 2021 as a virtual event.

Join us at IEEE International Ultrasonics Symposium 2021

Join us at the IEEE International Ultrasonics Symposium where CAI4CAI members will present their work.

[IEEE International Ultrasonics Symposium](https://2021.ieee-ius.org/) runs 11-16 September 2021.

Christian Baker will be presenting on “Real-Time Ultrasonic Tracking of an Intraoperative Needle Tip with Integrated Fibre-optic Hydrophone” as part of the Tissue Characterization & Real Time Imaging (AM) poster session.

TRABIT Virtual Conference 7-10 Sept 2021

Join us for the TRABIT conference (7-10 Sept 2021) with outstanding speakers and fun networking events. Registration is free but mandatory.

TRABIT conference flyer.

You can check all the videos made by the PhD students of TRABIT to present their research projects on YouTube.

Our crossMoDA challenge at MICCAI 2021 is now live!

CAI4CAI members are leading the organization of the cross-modality Domain Adaptation (crossMoDA) challenge for medical image segmentation, which runs as an official challenge during the Medical Image Computing and Computer Assisted Interventions (MICCAI) 2021 conference.

New Partnership with Moon Surgical to Develop Machine Learning for Computer-Assisted Surgery

King’s College London, School of Biomedical Engineering & Imaging Sciences and Moon Surgical announced a new strategic partnership to develop Machine Learning applications for Computer-Assisted Surgery. The partnership aims to strengthen surgical artificial intelligence (AI), data and analytics, and to accelerate the translation of King’s College London research into clinical use.

Yijing Xie receives a Wellcome/EPSRC CME Research Fellowship award

Yijing will develop a 3D functional optical imaging system for guiding brain tumour resection.

Yijing presenting her work at New Scientist Live.

She will engineer two emerging modalities, light field and multispectral imaging, into a compact device, and develop novel image reconstruction algorithms to produce and display high-dimensional images. The CME fellowship will support her in carrying out proof-of-concept studies and starting critical new collaborations within and outside the centre. She hopes the award will act as a stepping stone towards future long-term fellowships and grants, and thus help her establish an independent research programme.

New King's Public Engagement award led by Miguel Xochicale

Miguel will collaborate with Fang-Yu Lin and Shu Wang to create activities that engage school students with ultrasound-guided interventions and fetal medicine. In the FETUS project, they will develop interactive activities using 3D-printed fetus and placenta phantoms, as well as integrate a simulator that explains the principles of needle enhancement in an ultrasound needle tracking system.

Congratulations to Dr. Luis Garcia Peraza Herrera!

A great milestone today for Luis Garcia Peraza Herrera who passed his PhD viva with minor corrections! His thesis is entitled “Deep Learning for Real-time Image Understanding in Endoscopic Vision”.

Screenshot from Luis’ online PhD viva.

Thanks to Ben Glocker and Enrico Grisan for their role in examining the thesis.

Contact