We are an academic research group focusing on Contextual Artificial Intelligence for Computer Assisted Interventions.
CAI4CAI is embedded in the School of Biomedical Engineering & Imaging Sciences at King’s College London, UK
Our engineering research aims to improve surgical & interventional sciences
We take a multidisciplinary, collaborative approach to solve clinical challenges (📖)
Our labs are located in St Thomas’ Hospital, a prominent London landmark
We design learning-based approaches for multi-modal reasoning
Medical imaging is a core source of information in our research
We design intelligent systems exploiting information captured by safe light
We strive to provide the right information at the right time to the surgical team and embrace human/AI interactions (📖)
Strong industrial links are key to accelerating the translation of cutting-edge research into clinical impact
We support open source, open access and involve patients in our research (👋)
Applications are invited for a fully funded 3.5-year CDT DT4Health PhD studentship (including tuition fees, annual stipend and consumables) starting in October 2025.
The 25th of September saw the latest meeting of “Science for Tomorrow’s Neurosurgery,” our now well-established PPI group. As always, there was plenty of exciting and valuable discussion, with updates from Oscar on the (nearly complete!) NeuroHSI recruitment as well as Matt announcing the official opening of NeuroPPEYE phase 2!
Applications are invited for a fully funded 1+3-year MRes+PhD or 4-year PhD MRC DTP studentship (including home tuition fees, annual stipend and consumables) starting in October 2025.
A prospective observational study to evaluate the use of an intraoperative hyperspectral imaging system in neurosurgery.
A prospective observational study to evaluate intraoperative hyperspectral imaging for real-time quantitative fluorescence-guided surgery of low-grade glioma.
The EPSRC Centre for Doctoral Training in Advanced Engineering in Personalised Surgery & Intervention
(CDT AE-PSI) is an innovative three-and-a-half-year PhD training programme aiming to deliver translational research and transform patient pathways.
Through a comprehensive, integrated training programme, the Centre for Doctoral Training in Smart Medical Imaging
trains the next generation of medical imaging researchers.
The Functionally Accurate RObotic Surgery
(FAROS) H2020 project aims to improve functional accuracy by embedding physical intelligence in surgical robotics.
The GIFT-Surg
project is an international research effort developing the technology, tools and training necessary to make fetal surgery a viable possibility.
The icovid
project focuses on AI-based lung CT analysis, providing accurate quantification of disease and prognostic information in patients with suspected COVID-19.
The King’s-China Scholarship Council PhD Scholarship programme
(K-CSC) offers up to 100 joint scholarship awards per year to support students from China who are seeking to start an MPhil/PhD degree at King’s College London.
The integrated and multi-disciplinary approach of the MRC Doctoral Training Partnership in Biomedical Sciences
(MRC DTP BiomedSci) to medical research offers a wealth of cutting-edge PhD training opportunities in fundamental discovery science, translational research and experimental medicine.
The Translational Brain Imaging Training Network
(TRABIT) is an interdisciplinary and intersectoral joint PhD training effort of computational scientists, clinicians, and industry in the field of neuroimaging.
The Wellcome / EPSRC Centre for Medical Engineering
combines fundamental research in engineering, physics, mathematics, computing, and chemistry with medicine and biomedical research.
Pathways to clinical impact
Moon Surgical
has partnered with us to develop machine learning for computer-assisted surgery. More information in our press release.
Following successful in-patient clinical studies of CAI4CAI’s translational research on a computational hyperspectral imaging system for intraoperative surgical guidance, Hypervision Surgical Ltd
was founded by Michael Ebner, Tom Vercauteren, Jonathan Shapey, and Sébastien Ourselin.
In collaboration with CAI4CAI, Hypervision Surgical’s goal is to convert the AI-powered imaging prototype system into a commercial medical device to equip clinicians with advanced computer-assisted tissue analysis for improved surgical precision and patient safety.
Intel
is the industrial sponsor of Theo Barfoot’s PhD on Active and continual learning strategies for deep learning assisted interactive segmentation of new databases.
Tom Vercauteren worked for 10 years with Mauna Kea Technologies
(MKT) before resuming his academic career.
Medtronic
is the industrial sponsor of Tom Vercauteren’s Medtronic / Royal Academy of Engineering Research Chair in Machine Learning for Computer-Assisted Neurosurgery.
Exemplar outputs of our research
We support open source and typically use GitHub to disseminate our research. Repositories can be found on our group organisation (CAI4CAI) or individual members’ GitHub profiles.
This work makes significant strides in the field of monocular endoscopic depth perception, drastically improving on previous methods. The task of monocular depth perception demonstrates a nuanced understanding of the surgical scene and could act as a vital building block in future technologies. To achieve our results, we leverage large vision transformers trained on huge natural image datasets and fine-tuned on our ensembled meta-dataset of surgical videos. Read more about it in our pre-print, or get our model here.
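For readers who want to experiment with the general technique, the sketch below loads a publicly available ViT-based monocular depth model (MiDaS DPT, used here purely as an illustrative stand-in; it is not our released model, for which see the links above) and runs it on a single endoscopic frame:

```python
import cv2
import torch

# Load an off-the-shelf vision-transformer depth model from torch.hub.
# Illustrative stand-in only -- not the fine-tuned surgical model described above.
device = "cuda" if torch.cuda.is_available() else "cpu"
midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large").to(device).eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").dpt_transform

# Read one endoscopic frame (path is a placeholder) and convert BGR -> RGB.
frame = cv2.cvtColor(cv2.imread("endoscopic_frame.png"), cv2.COLOR_BGR2RGB)

with torch.no_grad():
    prediction = midas(transform(frame).to(device))  # relative (up-to-scale) depth
depth = prediction.squeeze().cpu().numpy()
```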
FastGeodis is an open-source package that provides efficient implementations for computing Geodesic and Euclidean distance transforms (or a mixture of both), targeting efficient utilisation of CPU and GPU hardware.
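As a minimal sketch of how the package is used (based on its documented 2D interface; see the FastGeodis docs for exact parameter conventions), one can compute a geodesic distance map from a single seed point as follows:

```python
import torch
import FastGeodis

device = "cuda" if torch.cuda.is_available() else "cpu"
image = torch.rand(1, 1, 128, 128, device=device)  # N x C x H x W input image

# Soft seed mask: 0 at the seed location, 1 elsewhere.
mask = torch.ones_like(image)
mask[..., 64, 64] = 0

v = 1e10        # weighting applied to the seed mask
lamb = 1.0      # 1.0 -> pure geodesic, 0.0 -> pure Euclidean, in-between -> mixture
iterations = 2  # number of raster-scan passes

geodesic_dist = FastGeodis.generalised_geodesic2d(image, mask, v, lamb, iterations)
```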
We make publicly available a spatio-temporal fetal brain MRI atlas for spina bifida aperta (SBA). This atlas can support future research on automatic segmentation methods for brain 3D MRI of fetuses with SBA.
We have recently published the paper “Garcia-Peraza-Herrera, L. C., Fidon, L., D’Ettorre, C., Stoyanov, D., Vercauteren, T., & Ourselin, S. (2021). Image Compositing for Segmentation of Surgical Tools without Manual Annotations. IEEE Transactions on Medical Imaging (📖)”. Inspired by special effects, we introduce a novel deep-learning method to segment surgical instruments in endoscopic images.
We contribute to MONAI, a PyTorch-based, open-source framework for deep learning in healthcare imaging, part of PyTorch Ecosystem.
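As a small illustration of the kind of workflow MONAI enables (API names follow recent MONAI releases; exact arguments may vary across versions), the following builds a 3D U-Net and evaluates a Dice loss on dummy data:

```python
import torch
from monai.losses import DiceLoss
from monai.networks.nets import UNet

# A small 3D U-Net for two-class segmentation.
model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)

x = torch.rand(1, 1, 64, 64, 64)             # one single-channel volume
y = torch.randint(0, 2, (1, 1, 64, 64, 64))  # dummy label map
loss = loss_fn(model(x), y)
```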
Open source PyTorch implementation of “Dorent, R., Booth, T., Li, W., Sudre, C. H., Kafiabadi, S., Cardoso, J., … & Vercauteren, T. (2020). Learning joint segmentation of tissues and brain lesions from task-specific hetero-modal domain-shifted datasets. Medical Image Analysis, 67, 101862 (📖).”
DeepReg is a freely available, community-supported open-source toolkit for research and education in medical image registration using deep learning (📖).
We provide open source code and open access data for our paper “García-Peraza-Herrera, L. C., Everson, M., Lovat, L., Wang, H. P., Wang, W. L., Haidry, R., … & Vercauteren, T. (2020). Intrapapillary capillary loop classification in magnification endoscopy: Open dataset and baseline methodology. International Journal of Computer Assisted Radiology and Surgery, 1-9 (📖).”
NiftyMIC is a Python-based open-source toolkit for research developed within the GIFT-Surg project to reconstruct an isotropic, high-resolution volume from multiple, possibly motion-corrupted, stacks of low-resolution 2D slices. Read “Ebner, M., Wang, G., Li, W., Aertsen, M., Patel, P. A., Aughwane, R., … & David, A. L. (2020). An automated framework for localization, segmentation and super-resolution reconstruction of fetal brain MRI. NeuroImage, 206, 116324 (📖).”
PUMA provides a simultaneous multi-tasking framework that takes care of managing the complexities of executing and controlling multiple threads and/or processes.
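For readers unfamiliar with the pattern PUMA automates, the sketch below shows the equivalent bookkeeping done by hand with Python’s standard library; it illustrates the concept only and does not use PUMA’s own API:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def process(item):
    # Stand-in for a unit of work (e.g. processing one frame or one file).
    return item * item

# Submit ten tasks to a pool of four worker threads and collect results
# as each task completes, in whatever order they finish.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(process, i) for i in range(10)]
    results = [f.result() for f in as_completed(futures)]
```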
GIFT-Grab is an open-source C++ and Python API for acquiring, processing and encoding video streams in real time (📖).
You can browse our list of open positions (if any) here, as well as get an insight into the type of positions we typically advertise by browsing through our list of previous openings. We are also supportive of hosting strong PhD candidates and researchers supported by a personal fellowship/grant.
Please note that applications for the listed open positions need to be made through the University portal to be formally taken into account.
On 21st September we held our fourth ‘Science for Tomorrow’s Neurosurgery’ PPI group meeting online, with presentations from Oscar, Matt and Silvère. Presentations focused on an update from the NeuroHSI trial, with clear demonstration of improvements in resolution of the HSI images we are now able to acquire; this prompted real praise from our patient representatives, which is extremely reassuring for the trial going forward. We also took this opportunity to announce the completion of the first phase of NeuroPPEYE, in which we aim to use HSI to quantify tumour fluorescence beyond that which the human eye can see.

Discussions were centered around the theme of “what is an acceptable number of participants for proof of concept studies,” generating very interesting points of view that ultimately concluded that there was no “hard number” from the patient perspective, as long as a thorough assessment of the technology had been carried out. This is extremely helpful in how we progress with the trials, particularly NeuroPPEYE, which will begin recruitment for its second phase shortly. Once again, the themes and discussions were summarized in picture format by our phenomenal illustrator, Jenny Leonard (see below) and we are already making plans for our next meeting in February 2024!
This video presents work led by Martin Huber. Deep Homography Prediction for Endoscopic Camera Motion Imitation Learning investigates a fully self-supervised method for learning endoscopic camera motion from readily available datasets of laparoscopic interventions. The work addresses and tries to go beyond the common tool-following assumption in endoscopic camera motion automation. This work will be presented at the 26th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2023).
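To make the core geometric operation concrete, the snippet below applies a 3x3 homography to a frame using kornia; in the paper the homography would instead come from the self-supervised predictor, so this is an illustrative sketch rather than the authors’ pipeline:

```python
import torch
from kornia.geometry.transform import warp_perspective

frame = torch.rand(1, 3, 256, 256)  # B x C x H x W endoscopic frame (dummy data)
H = torch.eye(3).unsqueeze(0)       # 3x3 homography; identity used as a placeholder

# Warp the frame under the homography, e.g. to imitate a predicted camera motion.
warped = warp_perspective(frame, H, dsize=(256, 256))
```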
This video presents work led by Mengjie Shi, focusing on learning-based sound-speed correction for dual-modal photoacoustic/ultrasound imaging. This work will be presented at the 2023 IEEE International Ultrasonics Symposium (IUS).
You can read the preprint on arXiv:2306.11034 and get the code from GitHub.
Recently, we organized a Public and Patient Involvement (PPI) group with Vestibular Schwannoma patients to understand their perspectives on a patient-centered automated report. Partnering with the British Acoustic Neuroma Association (BANA), we recruited participants by circulating a form within the BANA community through their social media platforms.
CAI4CAI members and alumni are leading the organization of the new edition of the cross-modality Domain Adaptation (crossMoDA) challenge for medical image segmentation, which will run as an official challenge during the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2023 conference.
The four co-founders of Hypervision Surgical, a King’s spin-out company, have been awarded the Cutlers’ Surgical Prize for outstanding work in the field of instrumentation, innovation and technical development.
The Cutlers’ Surgical Prize is one of the most prestigious annual prizes for original innovation in the design or application of surgical instruments, equipment or practice to improve the health and recovery of surgical patients.
This video presents work led by Christopher E. Mower. OpTaS is an OPtimization-based TAsk Specification Python library for trajectory optimization and model predictive control. The code can be found at https://github.com/cmower/optas. This work will be presented at the 2023 IEEE International Conference on Robotics and Automation (ICRA).
We are working to develop new technologies that combine a new type of camera system, referred to as hyperspectral, with Artificial Intelligence (AI) systems to reveal to neurosurgeons information that is otherwise not visible to the naked eye during surgery. Two studies are currently bringing this “hyperspectral” technology to operating theatres. The NeuroHSI study uses a hyperspectral camera attached to an external scope to show surgeons critical information on tissue blood flow and distinguish vulnerable structures which need to be protected. The NeuroPPEye study is developing this technology adapted for surgical microscopes, to guide tumour surgery.
This video presents work led by Christopher E. Mower. The ROS-PyBullet Interface is a framework between the reliable contact simulator PyBullet and the Robot Operating System (ROS) with additional utilities for Human-Robot Interaction in the simulated environment. This work was presented at the Conference on Robot Learning (CoRL), 2022. The corresponding paper can be found at PMLR.
Muhammad led the development of FastGeodis, an open-source package that provides efficient implementations for computing Geodesic and Euclidean distance transforms (or a mixture of both), targeting efficient utilisation of CPU and GPU hardware. This package is able to handle 2D as well as 3D data, where it achieves up to a 20x speedup on a CPU and up to a 74x speedup on a GPU as compared to an existing open-source library that uses a non-parallelisable single-thread CPU implementation. Further in-depth comparison of performance improvements is discussed in the FastGeodis documentation.
WiM-WILL is a digital platform that enables MICCAI members to share their career pathways with the outside world in parallel to the MICCAI conference. Muhammad Asad (interviewer) and Navodini Wijethilake (interviewee) from our lab group participated in this competition this year and secured second place. Their interview focused on overcoming challenges in research as a student. The complete interview is available below and on YouTube.
The recent release of MONAI Label v0.4.0 extends support for multi-label scribbles interactions to enable scribbles-based interactive segmentation methods.
CAI4CAI team member Muhammad Asad contributed to the development, testing and review of features related to scribbles-based interactive segmentation in MONAI Label.
We are actively involving patients and carers to make our research on next-generation neurosurgery more relevant and impactful. In early February 2022, our research scientists from King’s College London and King’s College Hospital organised a Patient and Public Involvement (PPI) meeting with support from The Brain Tumour Charity.
Multi-task learning is common in deep learning, where clear evidence shows that jointly learning correlated tasks can improve individual task performance. Nevertheless, in practice, many tasks are processed independently. The reasons are manifold:
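As background, a minimal sketch of the standard hard-parameter-sharing setup (shared encoder, one head per task, a fixed weighted sum of per-task losses; all names, dimensions and weights here are illustrative, not a specific published model):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskNet(nn.Module):
    """Hard parameter sharing: one shared encoder, one small head per task."""

    def __init__(self, in_dim=32, hidden=64, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.cls_head = nn.Linear(hidden, n_classes)  # e.g. a classification task
        self.reg_head = nn.Linear(hidden, 1)          # e.g. a regression task

    def forward(self, x):
        z = self.encoder(x)
        return self.cls_head(z), self.reg_head(z)

net = MultiTaskNet()
x = torch.randn(8, 32)
y_cls = torch.randint(0, 10, (8,))
y_reg = torch.randn(8, 1)

out_cls, out_reg = net(x)
# Joint objective: weighted sum of per-task losses (the weight is illustrative).
loss = F.cross_entropy(out_cls, y_cls) + 0.5 * F.mse_loss(out_reg, y_reg)
loss.backward()
```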
Join us at the IEEE International Ultrasonics Symposium where CAI4CAI members will present their work.
Christian Baker will be presenting on “Real-Time Ultrasonic Tracking of an Intraoperative Needle Tip with Integrated Fibre-optic Hydrophone” as part of the Tissue Characterization & Real Time Imaging (AM) poster session.
King’s College London, School of Biomedical Engineering & Imaging Sciences and Moon Surgical announced a new strategic partnership to develop Machine Learning applications for Computer-Assisted Surgery, which aims to strengthen surgical artificial intelligence (AI), data and analytics, and accelerate translation from King’s College London research into clinical usage.
Yijing will develop a 3D functional optical imaging system for guiding brain tumour resection.
She will engineer two emerging modalities, light field and multispectral imaging, into a compact device, and develop novel image reconstruction algorithms to produce and display high-dimensional images. The CME fellowship will support her in carrying out proof-of-concept studies and starting critical new collaborations within and outside the centre. She hopes the award will act as a stepping stone towards future long-term fellowships and grants, enabling her to establish an independent research programme.
Miguel will collaborate with Fang-Yu Lin and Shu Wang to create activities to engage school students with ultrasound-guided intervention and fetal medicine. In the FETUS project, they will develop interactive activities with 3D-printed fetus and placenta phantoms, as well as integrate a simulator that explains the principles of needle enhancement in an ultrasound needle tracking system.