Science In Silico: Medical Imaging, AI & High Performance Computing.

10th August 2024 - Scott Marshall and Andrew Holway

In 2016, Professor Geoffrey Hinton famously quipped that “people should just stop training radiologists now. It’s just completely obvious that in five years deep learning is going to do better than radiologists”.

Professor Hinton’s prediction may have been a little hasty, but in this article we look at 8 companies developing AI diagnostic tools, and see how they are working alongside radiologists to create better clinical outcomes.

Medical diagnostics have long been at the forefront of adopting revolutionary new computer technologies. In the 1970s, the first computed tomography (CT) machines harnessed the power of computing to build advanced imaging systems, and the industry continues to embrace high performance computing (HPC) technologies today.

Whilst there are a number of different diagnostic areas and techniques, medical imaging is undoubtedly one of the most computationally heavy. In fact, HPC is so integral to the field that standard modern radiological technologies like CT and MRI could not function without it.

The original MRI machine had a magnetic field strength of 0.09 tesla, whereas the industry standard today is 3.0. This dramatic increase in magnetic power means that modern scanners are able to produce extremely focused, high resolution images, but they rely on increasingly powerful computational resources to reconstruct them.

The continued symbiotic improvements in HPC resources and diagnostic capabilities are visible in the recent generative AI boom, which has seen a multitude of companies developing HPC enabled, AI-enhanced diagnostic tools. These products rely on increasingly sophisticated algorithms and require significant computational resources to develop and run, but there are already countless examples where they are proving to be an important and effective part of front line medicine.

The Diagnostic Process

Radiological techniques such as CT and MRI produce huge numbers of individual medical images, which must be reconstructed into a diagnostically useful display. This is computationally complex, especially with the demand for ever increasing scan resolutions.

Reconstruction algorithms convert the raw data into the industry standard DICOM file format, which combines text, visual images and embedded patient data. These files are then transferred to a Picture Archiving and Communication System (PACS), a central image repository which may be hosted either on-premises or in the cloud.
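
To make the file handling concrete, here is a minimal sketch of the kind of sanity check a PACS ingest step might perform before accepting an upload. The DICOM file format opens with a 128-byte preamble followed by the four bytes "DICM"; the helper name and the synthetic header below are illustrative, not taken from any product mentioned in this article.

```python
def looks_like_dicom(raw: bytes) -> bool:
    """Check the DICOM 'magic': a 128-byte preamble followed by b'DICM'."""
    return len(raw) >= 132 and raw[128:132] == b"DICM"

# Beyond this header, a real DICOM file stores pixel data alongside patient
# metadata as tagged (group, element) pairs, usually read with a library
# such as pydicom rather than by hand.
fake_header = bytes(128) + b"DICM" + b"..."  # synthetic stand-in, not a real scan
assert looks_like_dicom(fake_header)
assert not looks_like_dicom(b"not a dicom file")
```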

Traditionally, these DICOM images will then be assessed by a radiologist. However, companies like the ones in this article are driving a push towards using computationally intensive techniques to digitally screen the images.

Brainscan’s diagnostic workflow, where the need for HPC resources is baked into every step.

One of the more common techniques utilised by diagnostic AI companies is the convolutional neural network (CNN), a deep learning architecture whose primary focus is processing and analysing visual data. This differs from something like a large language model (LLM) such as ChatGPT, which is tuned to analyse and produce text and language based data.

A CNN processes an image starting at the individual pixel level, identifying key elements as it goes. Its first layers may only pick out very broad features, such as edges or textures, but successive convolution layers apply additional filters, allowing the network to build an increasingly intricate analytic picture. Eventually it reaches the level of detail needed to produce the required output, for example identifying an anomaly in a CT scan that requires further attention.
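
The building block of those early layers can be shown in a few lines. Below is a deliberately tiny, pure-Python sketch of a single convolution pass: a 3x3 edge-detecting kernel slides over a toy "scan" and responds strongly where pixel intensity jumps. The kernel values and the 4x4 image are made up for illustration; real CNNs stack many such layers with learned, not hand-written, kernels.

```python
def convolve2d(image, kernel):
    """Slide a small kernel over the image (valid padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

# A vertical-edge kernel: responds where intensity rises left-to-right,
# as at the boundary of a bright region in a scan.
edge_kernel = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]

# Toy "scan": dark left half, bright right half.
scan = [[0, 0, 10, 10]] * 4
response = convolve2d(scan, edge_kernel)  # strong response at the boundary
```

A flat region of constant intensity produces a zero response from this kernel, which is exactly the "only broad features at first" behaviour described above.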

Diagnostic algorithms are narrow in scope. They are trained only to detect specific pathologies within a specific sequence of scans, often from a specific model of scanner. These tools are therefore unlikely to replace radiologists any time soon, but they already deliver a huge amount of value by automating large parts of the results reporting stage and significantly reducing the clinician's cognitive load.
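
In deployment, that narrowness typically shows up as an eligibility check run before a model is ever invoked. The sketch below is hypothetical: the field names, supported values and scanner model strings are invented for illustration and do not describe any specific vendor's gating logic.

```python
# Hypothetical guard: the tool only accepts the exact modality, body part
# and scanner models it was trained and validated on.
SUPPORTED = {
    "modality": "CT",
    "body_part": "HEAD",
    "scanner_models": {"ScannerX-3000", "ScannerX-4000"},  # made-up names
}

def eligible_for_model(meta: dict) -> bool:
    """Return True only if the scan's metadata falls inside the validated scope."""
    return (
        meta.get("modality") == SUPPORTED["modality"]
        and meta.get("body_part") == SUPPORTED["body_part"]
        and meta.get("scanner_model") in SUPPORTED["scanner_models"]
    )

assert eligible_for_model(
    {"modality": "CT", "body_part": "HEAD", "scanner_model": "ScannerX-3000"}
)
# An MRI of the same body part is out of scope; it goes straight to the radiologist.
assert not eligible_for_model(
    {"modality": "MR", "body_part": "HEAD", "scanner_model": "ScannerX-3000"}
)
```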

Oxipit’s Chestlink workflow

Medical imaging AI diagnostics companies are not attempting to position their products as replacements for physicians, but rather as tools that help streamline workflows, acting as double reading aids that effectively give radiologists a second opinion on their image analysis. The terms augmented intelligence and intelligence amplification are increasingly used in the industry to acknowledge the limitations of AI, and to reinforce the importance of keeping a trained human as the ultimate decision maker.

The Cohort

A study of 100 CE approved AI diagnostic products gives a good overview of the types of scan and body parts the tools work with.

In this article we focus on what are generally considered to be radiological diagnostic techniques, specifically looking at products that work with scans such as x-rays, CTs or mammograms. This means companies like Aiforia, who focus on microscopic histology technologies, or Ultromics, who offer an ECG analysis product, don’t make the cut. We will, however, be featuring alternative diagnostic methods in future articles.

Example output screens of AZMed’s product.

Paris based AZMed provides an augmented intelligence solution which scans X-rays for potential bone fractures and prioritises radiologists’ workloads, presenting them with comparative DICOM images to support their diagnosis. The system has been successfully rolled out in over 1000 hospitals across the world since its 2021 EU MDR approval.

BeholdAI’s red dot® CTH V1 workflow for brain CTs.

Red Dot is a computer aided diagnosis platform created by London based BeholdAI. It has two core anatomical focuses: detecting cancer in chest x-rays, and haemorrhages and infarctions in brain CTs. The products are currently being integrated across the NHS after successful trials at local trusts, and are also in operation in healthcare facilities across India and the US.

A selection of the lesions that Brainscan’s technology is designed to identify.

Brainscan is a Polish startup focusing on using AI to assess CT scans of the brain for lesions which could indicate pathological changes, before outputting an infographic for human review. Initially trained on over 250,000 scans, the platform gained CE approval last year and is currently being used in a handful of Polish hospitals.

An example of Contextflow’s diagnostic report that is presented to radiologists.

A spinout of the Medical University of Vienna, Contextflow's software focuses on the detection of chest disorders from CT scans. Their AI algorithms look for potential regions of interest which may indicate lung cancer, interstitial lung disease (ILD) or chronic obstructive pulmonary disease (COPD), then deliver results through a series of visualisation tools which radiologists can use to augment their own analysis.

Gleamer’s diagnostic AI product suite.

Similar to AZMed, Gleamer is a French company offering a platform for assisting in the X-ray diagnosis of bone fractures. Spun out of the PSL Research University in 2017, they currently have four CE approved products, offering tools which process X-Rays and provide radiologists with DICOM overlays, as well as automating bone age predictions and musculoskeletal measurements. Alongside their focus on skeletal data, they also have an algorithm which assesses chest scans for signs of disorders such as a collapsed lung, and a product in development that aims to interpret mammograms.

Image Biopsy Lab’s product workflow.

Image Biopsy Lab offers both an overarching musculoskeletal imaging analysis platform, as well as a number of CE marked and FDA approved products focusing on individual bones and joints. The Vienna based firm’s products analyse x-ray and CT scans for anomalies before presenting doctors with image and text based reports. Their products are currently being installed in over 100 healthcare institutions.

A selection of the AI solutions Kheiron Medical Technologies offers.

Kheiron Medical Technologies' flagship Mia platform is at the forefront of modern breast cancer diagnostics. Their CE certified suite offers AI driven products which aim to improve the scheduling and visual quality of initial mammograms, before analysing these scans for signs of cancer and notifying radiologists of any anomalies. The company has been working with the NHS for the past five years, incorporating the technology across the service.

Oxipit’s retrospective auditing workflow.

The Lithuanian company Oxipit offers a CE approved imaging tool which works as a double reading aid, analysing chest x-rays marked as healthy by a radiologist to validate their accuracy and alert them to any potential misdiagnoses. Additionally, they have a retrospective, comparative analysis tool which uses AI to audit doctors' diagnostic reports. After the analysis, the healthcare institution is provided with analytical breakdowns of how accurate their radiologists' recent diagnoses have been.

Deeptech Deep Dive

The diagnostic products that the companies on this list have created are built on deep learning, the same family of machine learning techniques behind the generative AI boom, in which models learn from a previously analysed dataset. The process generally consists of two parts: first a training phase, where the diagnostic model is created. Once training is complete, the model moves into the inference stage, where it is put to use and begins generating results.

The initial training phase involves ingesting large quantities of data, which for medical diagnostics are usually publicly available scientific training datasets such as PROSTATEx. The model is then given parameters describing the desired output, in this case a potential anomaly in a radiological image. The algorithm analyses the data and attempts to produce results which match the parameters, continually optimising and iterating until it reaches the desired level of accuracy.
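
The "optimise until the desired accuracy" loop can be sketched in miniature. Below, each "scan" is reduced to a single made-up anomaly-score feature and a simple linear model stands in for a deep network; the data, learning rate and accuracy target are all assumptions chosen for illustration. Real training fits millions of pixel-level parameters, but the loop shape is the same: predict, measure the error, nudge the weights, repeat.

```python
# Toy labelled dataset: feature x in [0, 1), label 1 when an anomaly was marked.
data = [(x / 100, 1 if x > 60 else 0) for x in range(100)]

w, b, lr = 0.0, 0.0, 0.5  # weights start untrained

def predict(x):
    # Threshold the fitted linear score at 0.5 to get a yes/no flag.
    return 1 if w * x + b > 0.5 else 0

accuracy = 0.0
for epoch in range(5000):
    grad_w = grad_b = 0.0
    for x, label in data:
        err = (w * x + b) - label   # how far the prediction misses the label
        grad_w += err * x
        grad_b += err
    w -= lr * grad_w / len(data)    # gradient descent update
    b -= lr * grad_b / len(data)
    accuracy = sum(predict(x) == y for x, y in data) / len(data)
    if accuracy >= 0.95:            # stop once the target accuracy is reached
        break
```

The stopping criterion is the point the paragraph above describes: training halts when the model's outputs match the labelled parameters well enough.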

Inference is where a new piece of data is given to the AI model, and a decision or prediction is made by comparing the new data to the knowledge it has boiled down from its training. In medical diagnostic terms, this means analysing x-rays or CT scans to detect anomalies which may indicate certain pathologies. 

The bulk of scans will return negative for the specific condition, but if the algorithm finds something that a clinician needs to check, it will create infographics or additional DICOM layers alerting the doctor to its findings.

The initial training stage is many orders of magnitude more computationally intensive than an individual inference. The training phase will generally require HPC clusters, but a single inference uses only a tiny fraction of those resources, and so can be performed on much less powerful machines.
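
A back-of-envelope calculation shows the shape of that asymmetry. Every figure below is an assumption chosen for illustration, not a measurement of any product in this article; the "2 FLOPs per parameter" rule is a crude heuristic for a dense forward pass, and convolutional layers reuse weights, so real counts are higher.

```python
# Illustrative, assumed figures only.
params = 25e6                           # a mid-sized network
flops_per_inference = 2 * params        # ~2 FLOPs per parameter per image (rough)
training_images = 250_000               # e.g. the size of Brainscan's stated training set
epochs = 50                             # full passes over the dataset
backward_cost = 3                       # a training step costs ~3x a forward pass

training_flops = flops_per_inference * training_images * epochs * backward_cost
ratio = training_flops / flops_per_inference
# Under these assumptions, one training run costs as much compute as tens of
# millions of inferences: hence HPC clusters for training, ordinary
# hospital hardware for inference.
```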

The upfront compute spend required in the training phase is borne by the creators of the model. These models continually need to be updated and refined as new data and techniques become available. Once trained however, the costs of running inferences on the model will normally be covered by the medical institution as part of the operational budget. The software may run on top of a hospital's internal infrastructure, or the data may be sent to a third party server for inferences.

The cost of creating diagnostic AI algorithms varies with the scale and complexity of the data and the level of validation required for regulatory compliance. These tools are regulated as ‘medical devices’. Compared to a field like drug discovery, where getting a product onto the market can cost upwards of $2 billion and take over a decade, EU Medical Device Regulation (MDR) and FDA approval is much faster and cheaper. The majority of the companies on this list were able to successfully take a product from the bench to the bedside within five years with relatively modest fundraising efforts.

The publicly available financial data for this group of companies is too inconsistent to allow in depth analysis. However, it does appear that once a company has successfully obtained CE and/or FDA approval for a product, they can quickly leverage this success to obtain further funding. Diagnostic AI companies also appear to have strong cases for securing grants to fund their work, with the European Innovation Council (EIC) in particular providing support to a number of companies in the sector.

Who’d Be a Radiologist?

Radiology is a complex and difficult field of medicine. There is currently a global shortage of trained radiologists, and those in the profession are overworked. Radiologists sit in front of computer screens all day examining scans, looking for signs of disease in grainy images. The large caseloads and the repetitive nature of the task inevitably lead to high levels of stress and, unfortunately, mistakes.

It’s estimated that the global real-time diagnostic error rate is between 3% and 5%, meaning around 40 million diagnostic imaging errors happen every year. The results of these mistakes can be catastrophic, both for patients and for the doctors who make them.

Staggeringly, over 70% of radiologists in the USA have been named in a malpractice lawsuit, which can only add to their stress levels and the potential for future errors. The cost to the healthcare service, both reputationally and financially, is also significant. The NHS paid out £71 million for radiology malpractice claims in 2021, so it’s no surprise to see the organisation keen to adopt new technologies which can help improve diagnostic accuracy.

Diagnostic AI is already bearing fruit for radiology, and is beginning to transform the practice. Within a generation or two, Professor Hinton’s prediction from the start of this article could come true. We can envisage an artificial general intelligence able to independently test a sequence of scans against a range of diagnostic algorithms. And, due to the iterative nature of these techniques, continual increases in their precision could validate their widespread integration.

Currently, however, these are tools that can only be used in the trained hands of radiologists. These AI models require a large volume of properly labelled and organised images and human prompting in order to be successfully trained, and the keen eye of a clinician to ensure the accuracy of their results.
