CAI4CAI

We are an academic research group focusing on Contextual Artificial Intelligence for Computer Assisted Interventions.

CAI4CAI is embedded in the School of Biomedical Engineering & Imaging Sciences at King’s College London, UK


About us


We are based at King’s College London
Surgical assistance

Our engineering research aims to improve surgical & interventional sciences

Multidisciplinarity

We take a multidisciplinary, collaborative approach to solve clinical challenges (📖)

London

Our labs are located in St Thomas’ Hospital, a prominent London landmark

Machine learning

We design learning-based approaches for multi-modal reasoning

Medical imaging

Medical imaging is a core source of information in our research

Computational biophotonics

We design intelligent systems exploiting information captured by safe light

Contextual AI

We strive to provide the right information at the right time to the surgical team and embrace human/AI interactions (📖)

Translational research

Strong industrial links are key to accelerating the translation of cutting-edge research into clinical impact

Open culture

We support open source, open access and involve patients in our research (👋)

Recent posts

Collaborative research

CDT SMI


Through a comprehensive, integrated training programme, the Centre for Doctoral Training in Smart Medical Imaging trains the next generation of medical imaging researchers.

GIFT-Surg


The GIFT-Surg project is an international research effort developing the technology, tools and training necessary to make fetal surgery a viable possibility.

icovid


The icovid project focuses on AI-based lung CT analysis providing accurate quantification of disease and prognostic information in patients with suspected COVID-19 disease.

TRABIT


The Translational Brain Imaging Training Network (TRABIT) is an interdisciplinary and intersectoral joint PhD training effort of computational scientists, clinicians, and the industry in the field of neuroimaging.

Spin-outs and industry collaborations

Pathways to clinical impact

Moon Surgical


Moon Surgical has partnered with us to develop machine learning for computer-assisted surgery. More information on our press release.

Hypervision Surgical Ltd


Following successful in-patient clinical studies of CAI4CAI’s translational research on a computational hyperspectral imaging system for intraoperative surgical guidance, Hypervision Surgical Ltd was founded by Michael Ebner, Tom Vercauteren, Jonathan Shapey, and Sébastien Ourselin.

In collaboration with CAI4CAI, Hypervision Surgical’s goal is to convert the AI-powered imaging prototype system into a commercial medical device to equip clinicians with advanced computer-assisted tissue analysis for improved surgical precision and patient safety.


icometrix

icometrix is the consortium lead of the icovid project.

Intel (previously COSMONiO)


Intel is the industrial sponsor of Theo Barfoot’s PhD on Active and continual learning strategies for deep learning assisted interactive segmentation of new databases.

Mauna Kea Technologies


Tom Vercauteren worked for 10 years with Mauna Kea Technologies (MKT) before resuming his academic career.

Medtronic


Medtronic is the industrial sponsor of Tom Vercauteren’s Medtronic / Royal Academy of Engineering Research Chair in Machine Learning for Computer-Assisted Neurosurgery.

Open research

Exemplar outputs of our research

Learning joint Segmentation of Tissues And Brain Lesions (jSTABL) from task-specific hetero-modal domain-shifted datasets


Open source PyTorch implementation of “Dorent, R., Booth, T., Li, W., Sudre, C. H., Kafiabadi, S., Cardoso, J., … & Vercauteren, T. (2020). Learning joint segmentation of tissues and brain lesions from task-specific hetero-modal domain-shifted datasets. Medical Image Analysis, 67, 101862 (📖).”

Intrapapillary Capillary Loop (IPCL) Classification


We provide open source code and open access data for our paper “García-Peraza-Herrera, L. C., Everson, M., Lovat, L., Wang, H. P., Wang, W. L., Haidry, R., … & Vercauteren, T. (2020). Intrapapillary capillary loop classification in magnification endoscopy: Open dataset and baseline methodology. International Journal of Computer Assisted Radiology and Surgery, 1-9 (📖).”

NiftyNet: Open-source convolutional neural networks platform for research in medical image analysis and image-guided therapy


[unmaintained] NiftyNet is a TensorFlow 1.x based open-source convolutional neural networks (CNN) platform for research in medical image analysis and image-guided therapy (📖). It has been superseded by MONAI.

Python Unified Multi-tasking API (PUMA)


PUMA provides a simultaneous multi-tasking framework that takes care of managing the complexities of executing and controlling multiple threads and/or processes.
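As a rough illustration of the kind of multi-tasking such a framework coordinates, the sketch below runs placeholder acquisition and processing tasks concurrently using only Python's standard library. It deliberately does not use PUMA's actual API (the task names and functions here are hypothetical), and it shows threads only, whereas PUMA also manages processes and their lifecycles.

```python
# Minimal concurrency sketch using the standard library only.
# Illustrates the kind of multi-tasking PUMA coordinates;
# this does NOT use PUMA's actual API.
from concurrent.futures import ThreadPoolExecutor

def acquire_frame(camera_name):
    # Placeholder for an I/O-bound task, e.g. grabbing a video frame.
    return f"{camera_name}: frame acquired"

def process_frame(n):
    # Placeholder for a compute task, e.g. filtering an image.
    return sum(i * i for i in range(n))

def run_tasks():
    # The executor handles scheduling, completion and error propagation,
    # which is the bookkeeping a multi-tasking framework takes off your hands.
    with ThreadPoolExecutor(max_workers=4) as pool:
        acquisitions = pool.map(acquire_frame, ["cam0", "cam1"])
        computations = pool.map(process_frame, [10, 100])
        return list(acquisitions), list(computations)
```

Calling `run_tasks()` returns the acquisition messages and the computed sums once all tasks have completed.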

GIFT-Grab: Simple frame grabbing API


GIFT-Grab is an open-source C++ and Python API for acquiring, processing and encoding video streams in real time (📖).

The CAI4CAI team

Principal investigators


Tom Vercauteren

CAI4CAI lead

Professor of Interventional Image Computing

King's College London, United Kingdom


Sébastien Ourselin

Professor of Healthcare Engineering

King's College London, United Kingdom


Jonathan Shapey

Clinical Academic and Consultant Neurosurgeon

King's College London, United Kingdom

King's College Hospital NHS Foundation Trust, United Kingdom

Research team members


Aaron Kujawa

Research Associate

King's College London, United Kingdom

Groups: Tom, Jonathan


Charlie Budd

Research Software Engineer

King's College London, United Kingdom

Group: Tom


Junwen Wang

Research Associate

King's College London, United Kingdom

Groups: Tom, Jonathan


Lorena Garcia-Foncillas Macias

MRes/PhD Student

King's College London, United Kingdom

Groups: Jonathan, Tom


Marina Ivory

Research Associate

King's College London, United Kingdom

Groups: Jonathan, Tom


Matthew Elliot

Neurosurgical Clinical Research Fellow and PhD Student

King's College London, United Kingdom

King's College Hospital NHS Foundation Trust, United Kingdom

Groups: Jonathan, Tom


Maxence Boels

PhD Student

King's College London, United Kingdom

Groups: Seb, Prokar


Meng Wei

PhD Student

King's College London, United Kingdom

CDT Smart Medical Imaging

Groups: Tom, Miaojing


Navodini Wijethilake

PhD Student

King's College London, United Kingdom

Groups: Jonathan, Tom


Oluwatosin Alabi

PhD Student

King's College London, United Kingdom

EPSRC CDT in Smart Medical Imaging

Groups: Miaojing, Tom


Oscar MacCormac

Neurosurgery Clinical Research Fellow and PhD Student

King's College London, United Kingdom

King's College Hospital NHS Foundation Trust, United Kingdom

Groups: Jonathan, Tom


Peichao Li

PhD Student

King's College London, United Kingdom

Groups: Tom, Jonathan


Silvère Ségaud

Research Associate

King's College London, United Kingdom

Groups: Tom, Jonathan, Yijing


Soumya Kundu

PhD Student

King's College London, United Kingdom

Groups: Jonathan, Tom


Tangqi Shi

PhD Student

King's College London, United Kingdom

Groups: Tom Booth, Tom


Theo Barfoot

PhD Student

King's College London, United Kingdom

Imperial College London, United Kingdom

EPSRC CDT in Smart Medical Imaging

Groups: Tom, Ben, Jonathan


Yanghe Hao

MRes/PhD Student

King's College London, United Kingdom

Groups: Tom, Christos


Yijing Xie

L’Oréal UK&I and UNESCO for Women in Science Rising Talent Fellow

King's College London, United Kingdom

Affiliated research team members


Helena Williams

Postdoctoral fellow

KU Leuven, Belgium

King's College London, United Kingdom

Groups: Jan, Jan, Tom


Martin Huber

PhD Student

King's College London, United Kingdom

Groups: Christos, Tom


Michael Ebner

CEO and co-founder, Hypervision Surgical

Hypervision Surgical Ltd, United Kingdom

King's College London, United Kingdom

Alumni


CAI4CAI alumni

See our previous lab members

Open positions

You can browse our list of open positions (if any) here, as well as get an insight into the type of positions we typically advertise by browsing through our list of previous openings. We are also supportive of hosting strong PhD candidates and researchers supported by a personal fellowship/grant.

Please note that applications for the listed open positions need to be made through the University portal to be formally taken into account.

Tom Vercauteren appointed Senior Editor for Medical Image Analysis (MedIA) Journal

Tom Vercauteren has been appointed as Senior Editor for the highly-ranked Medical Image Analysis journal, an official journal of the MICCAI society. Sébastien Ourselin also features on the editorial board of the publication.

PhD opportunity [October 2024 start] on "Text promptable semantic segmentation of volumetric neuroimaging data"

Applications are invited for a fully funded, 4-year full-time PhD studentship (including home tuition fees, annual stipend and consumables) starting in October 2024.

Award details:

Text promptable semantic segmentation of volumetric neuroimaging data.

Project Overview

Semantic segmentation of brain structures from medical images, in particular Magnetic Resonance Imaging (MRI), plays an important role in many neuroimaging applications. Deep learning based segmentation algorithms now achieve state-of-the-art segmentation results but currently require large amounts of data annotated under predefined segmentation protocols and data inclusion/exclusion criteria. The rigidity of such approaches precludes natural interaction with humans and thus limits their usefulness for non-routine questions.

Science for tomorrow's neurosurgery: Patient & Public Involvement (PPI) group - September 2023 group meeting

On 21st September we held our fourth ‘Science for Tomorrow’s Neurosurgery’ PPI group meeting online, with presentations from Oscar, Matt and Silvère. Presentations focused on an update from the NeuroHSI trial, with clear demonstration of improvements in resolution of the HSI images we are now able to acquire; this prompted real praise from our patient representatives, which is extremely reassuring for the trial going forward. We also took this opportunity to announce the completion of the first phase of NeuroPPEYE, in which we aim to use HSI to quantify tumour fluorescence beyond that which the human eye can see.

Discussions were centered around the theme of “what is an acceptable number of participants for proof of concept studies,” generating very interesting points of view that ultimately concluded that there was no “hard number” from the patient perspective, as long as a thorough assessment of the technology had been carried out. This is extremely helpful in how we progress with the trials, particularly NeuroPPEYE, which will begin recruitment for its second phase shortly.

Once again, the themes and discussions were summarized in picture format by our phenomenal illustrator, Jenny Leonard (see below) and we are already making plans for our next meeting in February 2024!

Jonathan Shapey delivers the Hunterian Lecture at the Society of British Neurological Surgeons autumn congress

Jonathan Shapey had the great honour to deliver the Hunterian Lecture at the Society of British Neurological Surgeons autumn congress (SBNS London 2023). Jonathan presented his work in developing a label-free, real-time intraoperative hyperspectral imaging system for neurosurgery.

MICCAI 2023 Presentation for Deep Homography Prediction for Endoscopic Camera Motion Imitation Learning

This video presents work led by Martin Huber. Deep Homography Prediction for Endoscopic Camera Motion Imitation Learning investigates a fully self-supervised method for learning endoscopic camera motion from readily available datasets of laparoscopic interventions. The work addresses, and tries to go beyond, the common tool-following assumption in endoscopic camera motion automation. This work will be presented at the 26th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2023).

Presentation Video for IEEE IUS

This video presents work led by Mengjie Shi focusing on learning-based sound-speed correction for dual-modal photoacoustic/ultrasound imaging. This work will be presented at the 2023 IEEE International Ultrasonics Symposium (IUS).

You can read the preprint on arXiv: 2306.11034 and get the code from GitHub.

Research Associate / Research Fellow in "Computational Hyperspectral Imaging"

Post overview:

State-of-the-art fluorescence imaging system for neurosurgical guidance.

Job description

We are seeking an interventional image computing researcher to design and translate the next generation of AI-assisted hyperspectral imaging systems for surgical guidance using quantitative fluorescence. The postholder, based within the Department of Surgical & Interventional Engineering at King’s College London, will play a key role in NeuroPPEye, a collaborative project with King’s College Hospital, and work closely with the project’s industrial collaborator Hypervision Surgical, a King’s spin-out company. A clinical neurosurgery study underpins this collaboration. The successful candidate will work on the resulting neurosurgical data as well as controlled phantom data. They will also have the opportunity to provide insight on how to best acquire prospective data.

Research Associate / Research Fellow in "Trustworthy Artificial Intelligence for Surgical Imaging and Robotics"

Post overview:

A view on a lumbar microdiscectomy surgery (Source: DVIDS Public Domain Archive).

Job description

We are seeking a Post-doctoral Research Associate to develop novel trustworthy artificial intelligence (AI) algorithms able to extract actionable information from surgical imaging data.

Presentation Video for OpTaS

This video presents work led by Christopher E. Mower. OpTaS is an OPtimization-based TAsk Specification library for trajectory optimization and model predictive control. This work will be presented at the 2023 IEEE International Conference on Robotics and Automation (ICRA).

Our crossMoDA challenge to be held at MICCAI 2023 is now live!

CAI4CAI members and alumni are leading the organisation of the new edition of the cross-modality Domain Adaptation (crossMoDA) challenge for medical image segmentation, which will run as an official challenge during the Medical Image Computing and Computer Assisted Interventions (MICCAI) 2023 conference.

Hypervision Surgical awarded Cutlers' Surgical Prize for HyperSnap hyperspectral imaging system

The four co-founders of Hypervision Surgical, a King’s spin-out company, have been awarded the Cutlers’ Surgical Prize for outstanding work in the field of instrumentation, innovation and technical development.

Hypervision Surgical receives the Cutlers’ Surgical Prize.

The Cutlers’ Surgical Prize is one of the most prestigious annual prizes for original innovation in the design or application of surgical instruments, equipment or practice to improve the health and recovery of surgical patients.

Video for the ROS-PyBullet Interface

This video presents work led by Christopher E. Mower. The ROS-PyBullet Interface is a framework bridging the reliable contact simulator PyBullet and the Robot Operating System (ROS), with additional utilities for Human-Robot Interaction in the simulated environment. This work was presented at the Conference on Robot Learning (CoRL), 2022. The corresponding paper can be found at PMLR.

[Job] Research Coordinator - King's College Hospital NHS Foundation Trust

We are seeking a motivated research nurse/coordinator to support our NeuroHSI and NeuroPPEye projects.

CRN King’s Neurosurgery research coordinator

The post is for a Band 6 level neurosurgery-affiliated research nurse/coordinator working within the neuroscience division at KCH. This is a full-time post, initially until the end of August 2023, with a view to being extended by 6 to 12 months. The successful applicant will work across several neurosurgery sub-specialities with a particular focus on neuro-oncology and translational healthcare technology in neurosurgery. The applicant will work on research and clinical trials listed in the Department of Health national portfolio, principally involving the development and evaluation of advanced smart camera technology for use during surgery. The post holder will work under the supervision of Mr Jonathan Shapey (Senior Clinical Lecturer and Consultant Neurosurgeon) and Professor Keyoumars Ashkan (Professor of Neurosurgery), and the management of Alexandra Rizos, Neuroscience Research Manager. Some experience in clinical research and knowledge of good clinical practice would be beneficial.

Open-source package for fast generalised geodesic distance transform

Muhammad led the development of FastGeodis, an open-source package that provides efficient implementations for computing Geodesic and Euclidean distance transforms (or a mixture of both), targeting efficient utilisation of CPU and GPU hardware. This package is able to handle 2D as well as 3D data, where it achieves up to a 20x speedup on a CPU and up to a 74x speedup on a GPU as compared to an existing open-source library that uses a non-parallelisable single-thread CPU implementation. Further in-depth comparison of performance improvements is discussed in the FastGeodis documentation.
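To give a flavour of the underlying idea, the sketch below computes a generalised geodesic distance on a 2D image with simple raster scans, mixing a Euclidean spatial term and an intensity term via a weighting parameter. This is a slow, illustrative re-implementation of the concept, not FastGeodis's optimised code or API; the function name and the `lamb` parameter are assumptions for illustration.

```python
import numpy as np

def geodesic_distance_2d(image, seed_mask, lamb=1.0, iterations=2):
    """Raster-scan generalised geodesic distance (illustrative sketch).

    image: 2D float array; seed_mask: 2D bool array (True at seeds).
    lamb weights spatial vs intensity terms: 0 -> Euclidean-like
    chamfer distance, 1 -> pure (intensity) geodesic distance.
    """
    H, W = image.shape
    dist = np.where(seed_mask, 0.0, np.inf)
    # neighbour offsets for the forward pass; the backward pass mirrors them
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]
    for _ in range(iterations):
        for offsets, rows, cols in (
            (fwd, range(H), range(W)),                      # forward sweep
            ([(-dr, -dc) for dr, dc in fwd],
             range(H - 1, -1, -1), range(W - 1, -1, -1)),   # backward sweep
        ):
            for r in rows:
                for c in cols:
                    for dr, dc in offsets:
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < H and 0 <= cc < W:
                            spatial = np.hypot(dr, dc)
                            grad = image[r, c] - image[rr, cc]
                            cost = np.sqrt((1 - lamb) * spatial ** 2
                                           + lamb * grad ** 2)
                            dist[r, c] = min(dist[r, c], dist[rr, cc] + cost)
    return dist
```

The parallel raster-scan strategy that FastGeodis implements follows this recurrence but evaluates whole rows (or planes, in 3D) at once on CPU or GPU, which is where the reported speedups come from.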

PhD opportunity [February 2024 start] on "Incorporating Expert-consistent Spatial Structure Relationships in Learning-based Brain Parcellation"

Project overview:

Human-AI trust can be defined as the belief that the AI system will satisfy a set of contracts of trust. This project will establish contracts of trust about the spatial relationships across brain structures.

Aim of the PhD Project

  • Develop trustworthy deep learning-based brain segmentation/parcellation.
  • Formalise trustworthiness as contracts on spatial relationships between labels that the algorithm must fulfil.
  • Establish mathematical/algorithmic frameworks to guarantee that the proposed segmentation/parcellation respects the contracts of trust.
  • Implement, validate, and disseminate the proposed algorithms using open-access datasets.
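To make the notion of a contract on spatial relationships concrete, the toy check below verifies one hypothetical contract on a label map: lesion voxels must never directly touch the background, i.e., lesions must be contained within brain tissue. The label codes and the specific contract are illustrative assumptions, not the project's actual formulation.

```python
import numpy as np

BACKGROUND, BRAIN, LESION = 0, 1, 2  # hypothetical label codes

def lesion_inside_brain(parcellation):
    """Contract check: no LESION voxel may have a BACKGROUND neighbour
    (face-connected). Returns (contract_holds, n_violations)."""
    lesion = parcellation == LESION
    background = parcellation == BACKGROUND
    violations = 0
    for axis in range(parcellation.ndim):
        for shift in (-1, 1):
            neighbour_is_bg = np.roll(background, shift, axis=axis)
            # np.roll wraps around; invalidate the wrapped border slice
            idx = [slice(None)] * parcellation.ndim
            idx[axis] = 0 if shift == 1 else -1
            neighbour_is_bg[tuple(idx)] = False
            violations += int(np.count_nonzero(lesion & neighbour_is_bg))
    return violations == 0, violations
```

A segmentation network could report such violation counts at validation time, or the contract could be enforced structurally; formalising and guaranteeing such properties is exactly what the project sets out to do.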

Project summary

Automated segmentation and labelling of brain structures from medical images, in particular Magnetic Resonance Imaging (MRI), plays an important role in many applications ranging from surgical planning to neuroscience studies. For example, in Deep Brain Stimulation (DBS) procedures used to treat some movement disorders, segmentation of the basal ganglia and structures such as the subthalamic nucleus (STN) can help with precise targeting of the neurostimulation electrodes being implanted in the patient’s brain. Going beyond segmentation of a few discrete structures, some applications require a full brain parcellation, i.e., a partition of the entire brain into a set of non-overlapping spatial regions of anatomical or functional significance. Brain parcellations have notably been used to automate the trajectory planning of multiple intracranial electrodes for epilepsy surgery and to support the assessment of brain atrophy patterns for dementia monitoring.

PhD opportunity on "Artificial intelligence-driven radiosurgery planning for brain metastases"

Project overview:

Automated detection and segmentation of brain metastases using MRI for radiosurgery planning.

Aim of the PhD Project

  • Implement learning-based registration to curate a spatially-normalised dataset of MR images previously used to deliver stereotactic radiosurgery to brain metastases
  • Develop data-driven deep learning frameworks to automatically detect and segment brain metastases while allowing for interactive corrections
  • Develop imaging biomarkers to predict tumour response and behaviour following treatment

Project summary

Approximately 25,000 patients are diagnosed with a brain tumour every year in the UK. Brain metastases affect up to 40% of patients with extracranial primary cancer. Furthermore, although there are presently no reliable data, metastatic brain tumours are thought to outnumber primary malignant brain tumours by at least 3:1. Patients with brain metastases require individualised management, which may include surgery, stereotactic radiosurgery, fractionated radiotherapy and chemotherapy, either alone or in combination.

PhD opportunity [February 2024 start] on "Accurate automated quantification of spine evolution — it’s about time!"

Project overview:

Aiming at characterising and quantifying the changes occurring in an individual’s spine over time.

Aim of the PhD Project

  • Complement radiological expertise with automated analysis of longitudinal spine changes
  • Development of longitudinal registration algorithms to align spine images from different imaging modalities
  • Development of processing tools robust to the presence of metal artefact and various fields of view
  • Extraction of imaging biomarkers of spine degeneration

Project summary

Back pain arises from a wide range of conditions in the spine and is often multifactorial. Spine appearances also change after surgery, and postoperative changes depend on the specific interventions offered to patients and on human factors such as healing and mechanical adaptations, which are unique to each patient. When reviewing medical images for diagnostic, monitoring or prognostic purposes, radiologists are required to evaluate multiple structures, including bone, muscles and nerves, as well as the surrounding soft tissues and any instrumentation used in surgery. They must use their expertise to visually assess how each structure has evolved over time. Such readings are therefore both time-consuming and reliant on dedicated expertise, which is limited to large regional spine centres throughout the UK.

PhD opportunity [February 2024 start] on "Physically-informed learning-based beamforming for multi-transducer ultrasound imaging"

Project overview:

Multiple transducer delay-and-sum beamforming scheme (Peralta et al, 2019). It requires accurate transducer geometry to calculate the time-of-flight between each element and focal point, and apply proper time delays to each radio-frequency channel.

Aim of the PhD Project

  • Pursue sparse solutions to handle the channel count required to coherently operate multiple ultrasound transducers, and design and implement machine learning strategies to avoid sparsity-related artefacts in the images.
  • Develop advanced beamforming techniques using machine learning approaches informed by ultrasound physics to address the challenge of flexible geometry in a multi-transducer imaging system and achieve unprecedented image quality.
  • Explore the application of the techniques on healthy volunteers.

Project description

Medical Ultrasound (US) is a low-cost imaging method that is long-established and widely used for screening, diagnosis, therapy monitoring, and guidance of interventional procedures. However, the usefulness of conventional US systems is limited by physical constraints, mainly imposed by the small size of the handheld probe, that lead to low-resolution images with a restricted field of view and view-dependent artefacts.
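The delay-and-sum scheme illustrated above can be sketched in a few lines: compute the time of flight from each element to a focal point, delay each radio-frequency channel accordingly, and sum. The sketch below is a deliberately minimal single-focal-point version with assumed variable names; practical beamformers additionally handle two-way travel times, apodisation and sub-sample interpolation.

```python
import numpy as np

def delay_and_sum(rf, element_x, fs, c, focus):
    """Delay-and-sum beamforming for a single focal point (minimal sketch).

    rf: (n_elements, n_samples) radio-frequency channel data
    element_x: (n_elements,) lateral element positions in metres
    fs: sampling frequency in Hz; c: speed of sound in m/s
    focus: (x, z) focal point position in metres
    """
    fx, fz = focus
    # one-way time of flight from each element to the focal point
    tof = np.sqrt((element_x - fx) ** 2 + fz ** 2) / c
    # convert the time delays to (nearest) sample indices per channel
    delays = np.round(tof * fs).astype(int)
    n_samples = rf.shape[1]
    out = 0.0
    for ch, d in enumerate(delays):
        if d < n_samples:
            out += rf[ch, d]  # coherent sum of the delayed channels
    return out
```

With flexible, multi-transducer geometries the element positions are no longer known precisely, which breaks the time-of-flight computation above; this is the gap the proposed physics-informed learning-based beamforming aims to close.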

Open-access Spatio-temporal Atlas of the Developing Fetal Brain with Spina Bifida Aperta

Lucas led the development of the first fetal brain atlas for spina bifida aperta (SBA). This first-of-its-kind atlas will allow researchers to perform measurements of the brain’s anatomy and to study its development in a large population of unborn babies with SBA.

Navodini W. and Muhammad A. won second place in the WiM-WILL competition at MICCAI 2022

WiM-WILL is a digital platform that enables MICCAI members to share their career pathways with the outside world in parallel with the MICCAI conference. Muhammad Asad (interviewer) and Navodini Wijethilake (interviewee) from our lab participated in this year’s competition and secured second place. Their interview focused on overcoming challenges in research as a student. The complete interview is available below and on YouTube.

PhD opportunity on "Artificial intelligence-driven management of brain tumours"

Project overview:

Exemplar cases and task for AI-driven management of brain tumours.

Project description

Approximately 25,000 patients are diagnosed with a brain tumour every year in the UK. Meningiomas and pituitary adenomas are the first and third most common primary brain tumours, together accounting for over 50% of all primary brain tumours. Brain metastases affect up to 40% of patients with extracranial primary cancer.

Yijing Xie awarded Royal Academy of Engineering Research Fellowship

Yijing has been awarded the prestigious Royal Academy of Engineering Research Fellowship for her research in the development of tools to help neurosurgeons during surgery.

Dr Yijing Xie, Research Fellow & recipient of the Royal Academy of Engineering Research Fellowship.

Dr Xie says that currently there is a lack of effective ways to assess brain functions in real-time, particularly during brain surgery. During such a procedure, the surgeon must remove all cancerous tissue while preserving surrounding brain tissue and regions that serve important functions.

Research Associate in "Biomedical Optics - Hyperspectral Imaging"

Post overview:

AI-assisted hyperspectral imaging systems for surgical guidance using quantitative fluorescence.

Job description

We are seeking a biomedical optics researcher to design and translate the next generation of hyperspectral imaging systems for surgical guidance using quantitative fluorescence. The postholder, based within the Department of Surgical & Interventional Engineering at King’s College London, will play a key role in a collaborative project with King’s College Hospital and work closely with the project’s industrial collaborator Hypervision Surgical, a recently founded King’s spin-out company. A clinical neurosurgery study has been set up to underpin this collaboration.

Multi-label Scribbles Support in MONAI Label v0.4.0

The recent release of MONAI Label v0.4.0 extends support for multi-label scribbles interactions to enable scribbles-based interactive segmentation methods.

CAI4CAI team member Muhammad Asad contributed to the development, testing and review of features related to scribbles-based interactive segmentation in MONAI Label.

FAROS Integration Week at Balgrist University Hospital
Our team is getting ready to test FAROS technology in the operating room. From left to right: Tom, Martin, Anisha, and Matt.

The FAROS consortium had a fantastic and highly productive time working at the labs of Balgrist Campus AG and the operating room at Balgrist University Hospital this week.

PhD opportunity [June 2022 start] on "Computational approaches for quantitative fluorescence-guided neurosurgery"

Applications are invited for a fully funded, 4-year full-time PhD studentship (including home tuition fees, annual stipend and consumables) starting on 1st June 2022.

Award details:

AI-assisted hyperspectral imaging systems for surgical guidance using quantitative fluorescence.

Aim of the project

This project aims at enabling wide-field and real-time quantitative assessment of tumour-specific fluorescence by designing novel deep-learning-based computational algorithms. The project will leverage a compact hyperspectral imaging (HSI) system developed by Hypervision Surgical Ltd initially designed for contrast-free imaging.

Congratulations to Dr. Rémi Delaunay!

A great milestone today for Rémi Delaunay who passed his PhD viva with minor corrections! His thesis is entitled “Computational ultrasound tissue characterisation for brain tumour resection”.

Thanks to Pierre Gélat and Greg Slabaugh for examining the thesis.

Meet Anisha, Martin and Mengjie, CAI4CAI PhD students from the CDT SIE cohort

The Surgical & Interventional Engineering Centre for Doctoral Training delivers translational research to transform patient pathways. Meet some of the talented PhD students in this programme who are engineering better health!

Research Associate in "Real-time Computational Hyperspectral Imaging"

Post overview:

Hyperspectral Imaging (HSI) in the operating room.

Job description

We are seeking an interventional image computing researcher to design and translate the next generation of real-time AI-assisted hyperspectral imaging systems for surgical guidance. The postholder, based within the Department of Surgical & Interventional Engineering at King’s College London, will play a key role in a collaborative project with King’s College Hospital and Hypervision Surgical, a recently founded King’s spin-out company. A clinical neurosurgery study has been set up to underpin this collaboration. The successful candidate will work on the resulting neurosurgical data as well as retrospective data. They will also have the opportunity to provide insight on how to best acquire prospective data.

Research Associate in "Computational Hyperspectral Imaging"

Post overview:

AI-assisted hyperspectral imaging systems for surgical guidance using quantitative fluorescence.

Job description

We are seeking an interventional image computing researcher to design and translate the next generation of AI-assisted hyperspectral imaging systems for surgical guidance using quantitative fluorescence. The postholder, based within the Department of Surgical & Interventional Engineering at King’s College London, will play a key role in a collaborative project with King’s College Hospital and work closely with the project’s industrial collaborator Hypervision Surgical, a recently founded King’s spin-out company. A clinical neurosurgery study has been set up to underpin this collaboration. The successful candidate will work on the resulting neurosurgical data as well as controlled phantom data. They will also have the opportunity to provide insight on how to best acquire prospective data.

Research Associate or Fellow in "Real-time AI for Surgical Robot Control"

Post overview:

Overview of the H2020 [FAROS project](https://h2020faros.eu).

Job description

We are seeking a highly motivated individual to join us and work on [FAROS](https://h2020faros.eu), a European research project dedicated to advancing Functionally Accurate RObotic Surgery, in collaboration with KU Leuven, Sorbonne University, Balgrist Hospital and SpineGuard.

PhD opportunity on "Joint learning of stereo vision reconstruction and hyperspectral imaging upsampling for binocular surgical guidance"

Project overview:

Hyperspectral imaging (HSI) data can now be acquired in real-time using compact devices suitable for surgery. By combining HSI, high-resolution RGB, and multi-view geometry and by designing novel machine learning approaches, this PhD project will for the first time allow optimal display of HSI-derived information for binocular guided surgeries.

Project summary

Optimal outcomes in oncology surgery are hindered by the difficulty of differentiating between tumour and surrounding tissues during surgery. Real-time hyperspectral imaging (HSI) provides rich high-dimensional intraoperative information that has the potential to significantly improve tissue characterisation and thus benefit patient outcomes. Yet taking full advantage of HSI data in clinical indications performed under binocular guidance (e.g. microsurgery and robotic surgery) poses several methodological challenges which this project aims to address. Real-time HSI sensors are limited in the spatial resolution they can capture. This further impacts the usefulness of such HSI sensors in multi-view capture settings. In this project, we will take advantage of a stereo-vision combination of a high-resolution RGB viewpoint and an HSI viewpoint. The student will develop bespoke learning-based computational approaches to reconstruct high-quality 3D scenes combining the intuitiveness of RGB guidance and the rich semantic information extracted from HSI.
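To make the resolution gap concrete: a real-time HSI sensor typically captures far fewer pixels per band than the RGB camera it is paired with, so each spectral band must be resampled onto the RGB grid before the two views can be fused. The sketch below is our own toy illustration of this baseline step (plain per-band bilinear upsampling in numpy), not the learning-based method the project will develop.

```python
import numpy as np

def bilinear_upsample(band, out_h, out_w):
    """Bilinearly upsample a single 2-D spectral band to (out_h, out_w)."""
    in_h, in_w = band.shape
    # Fractional coordinates in the input grid for each output pixel.
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1)
    x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]   # vertical interpolation weights
    wx = (xs - x0)[None, :]   # horizontal interpolation weights
    top = band[np.ix_(y0, x0)] * (1 - wx) + band[np.ix_(y0, x1)] * wx
    bot = band[np.ix_(y1, x0)] * (1 - wx) + band[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def upsample_hsi_to_rgb_grid(hsi, rgb_shape):
    """Resample every band of a (H, W, C) HSI cube onto the RGB pixel grid."""
    out_h, out_w = rgb_shape
    return np.stack(
        [bilinear_upsample(hsi[..., c], out_h, out_w) for c in range(hsi.shape[-1])],
        axis=-1,
    )

# Toy example: a 4x4 cube with 16 spectral bands, brought to a 32x32 RGB grid.
hsi = np.random.default_rng(0).random((4, 4, 16))
up = upsample_hsi_to_rgb_grid(hsi, (32, 32))
print(up.shape)  # (32, 32, 16)
```

Such naive interpolation ignores the RGB image entirely; the point of the PhD project is precisely to replace this step with learned, RGB-guided and geometry-aware reconstruction.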

PhD opportunity on "Exploiting multi-task learning for endoscopic vision in robotic surgery"

Project overview:

Overview of the project objective. Laparoscopic image courtesy of [ROBUST-MIS](https://robustmis2019.grand-challenge.org/).

Project summary

Multi-task learning is common in deep learning, and there is clear evidence that jointly learning correlated tasks can improve performance over learning each task individually. In practice, however, many tasks are still processed independently. The reasons for this are manifold.
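The joint-learning idea can be sketched in a few lines: a shared trunk produces one representation that feeds several task-specific heads, and training minimises a weighted sum of the per-task losses. The numpy forward pass below is a minimal illustration under our own assumptions (hypothetical head sizes and loss weights), not the architecture this project will use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared encoder: one linear layer mapping input features to a common representation.
W_shared = rng.normal(size=(64, 32)) * 0.1
# Two hypothetical endoscopic-vision heads sharing that representation,
# e.g. instrument segmentation logits and tool-presence logits.
W_seg = rng.normal(size=(32, 10)) * 0.1   # hypothetical: 10 segmentation classes
W_cls = rng.normal(size=(32, 3)) * 0.1    # hypothetical: 3 tool categories

def forward(x):
    """Shared trunk followed by two task-specific heads."""
    h = np.maximum(x @ W_shared, 0.0)     # ReLU shared representation
    return h @ W_seg, h @ W_cls

def joint_loss(seg_logits, cls_logits, seg_target, cls_target,
               w_seg=1.0, w_cls=0.5):
    """Weighted sum of per-task losses: the standard multi-task objective."""
    seg_loss = np.mean((seg_logits - seg_target) ** 2)
    cls_loss = np.mean((cls_logits - cls_target) ** 2)
    return w_seg * seg_loss + w_cls * cls_loss

x = rng.normal(size=(8, 64))              # a batch of 8 feature vectors
seg_logits, cls_logits = forward(x)
loss = joint_loss(seg_logits, cls_logits,
                  np.zeros_like(seg_logits), np.zeros_like(cls_logits))
print(seg_logits.shape, cls_logits.shape)  # (8, 10) (8, 3)
```

Because gradients from both losses flow through `W_shared`, each task acts as a regulariser for the other; choosing the loss weights and deciding which layers to share are among the open questions such a project has to address.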

CAI4CAI presenters at MICCAI 2021

CAI4CAI will be presenting their work at MICCAI 2021, the 24th International Conference on Medical Image Computing and Computer Assisted Intervention, held from 27 September to 1 October 2021 as a virtual event.

Join us at IEEE International Ultrasonics Symposium 2021

Join us at the IEEE International Ultrasonics Symposium where CAI4CAI members will present their work.

[IEEE International Ultrasonics Symposium](https://2021.ieee-ius.org/) runs 11-16 September 2021.

Christian Baker will be presenting on “Real-Time Ultrasonic Tracking of an Intraoperative Needle Tip with Integrated Fibre-optic Hydrophone” as part of the Tissue Characterization & Real Time Imaging (AM) poster session.

TRABIT Virtual Conference 7-10 Sept 2021

Join us for the TRABIT conference (7-10 Sept 2021) with outstanding speakers and fun networking events. Registration is free but mandatory.

TRABIT conference flyer.

You can check all the videos made by the PhD students of TRABIT to present their research projects on YouTube.

Our crossMoDA challenge at MICCAI 2021 is now live!

CAI4CAI members are leading the organization of the cross-modality Domain Adaptation (crossMoDA) challenge for medical image segmentation, which runs as an official challenge during the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2021 conference.

New Partnership with Moon Surgical to Develop Machine Learning for Computer-Assisted Surgery

King’s College London, School of Biomedical Engineering & Imaging Sciences and Moon Surgical announced a new strategic partnership to develop Machine Learning applications for Computer-Assisted Surgery. The partnership aims to strengthen surgical artificial intelligence (AI), data and analytics, and to accelerate the translation of King’s College London research into clinical use.

Yijing Xie receives a Wellcome/EPSRC CME Research Fellowship award

Yijing will develop a 3D functional optical imaging system for guiding brain tumour resection.

Yijing presenting her work at New Scientist Live.

She will engineer two emerging modalities, light field and multispectral imaging, into a compact device, and develop novel image reconstruction algorithms to produce and display high-dimensional images. The CME fellowship will support her in carrying out proof-of-concept studies and starting critical new collaborations within and outside the centre. She hopes the award will act as a stepping stone towards future long-term fellowships and grants, and thus towards establishing an independent research programme.

New King's Public Engagement award led by Miguel Xochicale

Miguel will collaborate with Fang-Yu Lin and Shu Wang to create activities to engage school students with ultrasound-guided intervention and fetal medicine. In the FETUS project, they will develop interactive activities with 3D-printed fetus and placenta phantoms, as well as the integration of a simulator that explains the principles of needle enhancement in an ultrasound needle tracking system.

Congratulations to Dr. Luis Garcia Peraza Herrera!

A great milestone today for Luis Garcia Peraza Herrera who passed his PhD viva with minor corrections! His thesis is entitled “Deep Learning for Real-time Image Understanding in Endoscopic Vision”.

Screenshot from Luis’ online PhD viva.

Thanks to Ben Glocker and Enrico Grisan for their role examining the thesis.

Contact