
How to develop an AI medical imaging platform
Developing an artificial intelligence system that performs flawlessly, perhaps even on par with human specialists, may seem like a daunting endeavor, and it does indeed require a great deal of work.
The problem of properly validating the system, to ensure safe and effective use in clinical practice, emerges once development moves past the initial research stage and the system begins to show promise. In many instances, it is actually beneficial to lay out the validation process even before building the system itself. Data collection and clinical validation are the two most time-consuming and expensive phases in the creation of artificial intelligence systems for medical practice.
From hardware to software
In the early years, software systems were mostly embedded in hardware medical devices, so the primary concern was guarding against physical harm, with attention to aspects such as the transmission of energy and/or substances to or from the body, the degree of invasiveness, and the proximity to sensitive organs.
This was reflected in many of the regulatory directives and guidelines, which often did not offer specific indications for software developed as a standalone clinical aid. As the use of such applications steadily grew, the need for specific guidance became apparent and several contributions were developed.
Risk management principles play an important role in many disciplines, as they provide a framework for minimizing the probability of adverse outcomes, which typically translate into harm for users, whether patients or healthcare providers. The extent of a software product's effect on a patient has multiple dimensions: the significance of the information the software contributes to the healthcare decision, and whether that information merely informs clinical management, drives it, or is used directly to diagnose or treat.

The transition from concept to practice
When thinking about the clinical validation of a medical system, one of the first factors to consider, even before building the system itself, is how it will be integrated into the clinical workflow. Applications are sometimes used as measurement instruments, for example when estimating the size or diameter of lesions, and in those situations it is necessary to confirm the accuracy of the measurements.
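As a rough illustration, a minimal sketch of such a measurement check might look like the following; the diameter values, variable names, and agreement metrics are illustrative assumptions, not a prescribed validation protocol.

```python
# Minimal sketch: checking agreement between AI-reported lesion diameters and
# reference diameters (e.g. expert consensus). All values here are illustrative.
import numpy as np

reference_mm = np.array([12.4, 8.1, 21.0, 5.6, 15.3])   # reference diameters (mm)
ai_mm = np.array([11.9, 8.6, 22.4, 5.1, 14.8])          # AI-reported diameters (mm)

differences = ai_mm - reference_mm
mean_bias = differences.mean()                # systematic over- or under-estimation
mean_abs_error = np.abs(differences).mean()   # typical magnitude of the error
loa = 1.96 * differences.std(ddof=1)          # half-width of Bland-Altman limits of agreement

print(f"Mean bias: {mean_bias:+.2f} mm")
print(f"Mean absolute error: {mean_abs_error:.2f} mm")
print(f"95% limits of agreement: {mean_bias:+.2f} mm ± {loa:.2f} mm")
```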
At the current state of the art, delivering a final diagnosis or a recommendation for therapy, as in the case of triage systems, is still less common. In the majority of cases, the software provides indications to a clinician or healthcare operator. As a result, the clinical evaluation must take into account the clinician's performance when using the software.
An AI system that flags suspect lesions and has demonstrated perfect sensitivity in a standalone validation still cannot guarantee that all true lesions will appear in the final report, because the clinician may decide not to confirm the suggested lesions.
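To make the distinction concrete, the toy numbers below contrast standalone sensitivity with the sensitivity of the final, clinician-confirmed report; the counts are invented purely for illustration.

```python
# Minimal sketch: standalone sensitivity vs. sensitivity of the final report.
# The counts below are invented for illustration.
true_lesions = 100          # lesions actually present in the test set
ai_detected = 100           # true lesions flagged by the AI (perfect standalone sensitivity)
clinician_confirmed = 92    # flagged true lesions the clinician accepted into the report

standalone_sensitivity = ai_detected / true_lesions
report_sensitivity = clinician_confirmed / true_lesions

print(f"Standalone sensitivity: {standalone_sensitivity:.0%}")   # 100%
print(f"Final-report sensitivity: {report_sensitivity:.0%}")     # 92%
```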
It is exceedingly difficult, if not impossible, to completely eliminate sources of bias because there are numerous elements that are intrinsically present in the data that may affect the outcome of a performance assessment.
How to avoid bias
Spectrum or selection bias may occur whenever the sample dataset is not representative of the population on which the AI system will be deployed. As a result, the population to which the device will be applied must be defined in the intended use, and further evaluations might be required when the use of the device is expanded to new populations.

This is especially true when the intended use is extended to include new subsets, such as pediatric subjects. It may also be crucial when the new use case appears to be less critical, for example when the intended use is expanded from a symptomatic to a screening population, as the lesions in the latter case are frequently smaller and less conspicuous.
There may also be indirect, population-related consequences: for instance, dose must be minimized in screening examinations, and this in turn affects image quality. The clinical pathway may also matter for other reasons, since the target the system aims at may differ depending on the application.
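One common way to check for this kind of bias is to stratify performance by subgroup rather than reporting a single pooled figure. The sketch below assumes a simple list of (subgroup, ground truth, prediction) records; the subgroup names and labels are made up for illustration.

```python
# Minimal sketch: per-subgroup sensitivity and specificity to look for spectrum bias.
# Subgroup names and case labels are illustrative, not real data.
from collections import defaultdict

# Each case: (subgroup, ground_truth, prediction), where 1 = disease present.
cases = [
    ("adult_symptomatic", 1, 1), ("adult_symptomatic", 1, 1), ("adult_symptomatic", 0, 0),
    ("screening",         1, 0), ("screening",         1, 1), ("screening",         0, 0),
    ("pediatric",         1, 0), ("pediatric",         0, 0), ("pediatric",         0, 1),
]

counts = defaultdict(lambda: {"tp": 0, "fn": 0, "tn": 0, "fp": 0})
for group, truth, pred in cases:
    if truth == 1:
        counts[group]["tp" if pred == 1 else "fn"] += 1
    else:
        counts[group]["tn" if pred == 0 else "fp"] += 1

for group, c in counts.items():
    sensitivity = c["tp"] / (c["tp"] + c["fn"])
    specificity = c["tn"] / (c["tn"] + c["fp"])
    print(f"{group:18s} sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```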
All of the characteristics linked to image acquisition and processing are crucial when working with medical images, and they must be specified precisely. Most imaging scanners are not fixed, calibrated measuring instruments: they offer a broad range of acquisition, reconstruction, and post-processing options, frequently using specialized image-processing methods that were originally designed for human rather than computer interpretation.
Unique cases require different settings
The quality of the images and the capacity to identify the disease may be significantly affected by the imaging technology and acquisition methodology, including elements such as resolution, acquisition angles, and the number of images.

Dose has a significant impact on image quality, particularly on the contrast-to-noise ratio. In response to concerns about the risk of radiation-induced cancer, particularly in healthy subjects or in frequently repeated examinations, imaging protocols and reconstruction techniques have been introduced to reduce dose while preserving image quality. However, the impact on image readability may be very different for humans and for computerized systems.
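As a concrete example of one such metric, the contrast-to-noise ratio (CNR) can be estimated from a lesion region and a background region; the synthetic regions below stand in for real reconstructions, and the specific definition used here is just one common convention.

```python
# Minimal sketch: contrast-to-noise ratio (CNR) between a lesion ROI and background.
# Lower dose raises the noise standard deviation and therefore lowers the CNR.
# The synthetic regions below stand in for real image data.
import numpy as np

rng = np.random.default_rng(0)
background = rng.normal(loc=100.0, scale=15.0, size=(64, 64))   # background ROI
lesion = rng.normal(loc=140.0, scale=15.0, size=(16, 16))       # brighter lesion ROI

cnr = abs(lesion.mean() - background.mean()) / background.std()
print(f"CNR: {cnr:.2f}")
```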
Because they rely on quantitative features extracted directly or indirectly from the images, artificial intelligence (AI) systems are frequently far more sensitive than humans to minor fluctuations in image intensity distributions. This may change in the future as more data becomes available and training algorithms become more robust to context changes. For the same reason, variations in the image post-processing chain can also have a significant influence on system performance: image reconstruction, in the case of volumetric scans, or post-processing for image enhancement may alter both the images and the output of the AI systems applied to them.
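One simple mitigation, shown here only as a sketch, is per-image intensity normalization before the data reaches the model; z-scoring is an assumed preprocessing choice, not a prescribed standard, but it illustrates how an affine intensity shift between acquisitions can be removed.

```python
# Minimal sketch: per-image z-score normalization to reduce sensitivity to
# intensity shifts between scanners or protocols. The preprocessing choice is
# an assumption for illustration, not a recommended standard.
import numpy as np

def zscore_normalize(image: np.ndarray) -> np.ndarray:
    """Rescale an image to zero mean and unit variance."""
    return (image - image.mean()) / (image.std() + 1e-8)   # epsilon guards flat images

rng = np.random.default_rng(1)
scan_a = rng.normal(100.0, 20.0, size=(128, 128))   # one acquisition
scan_b = scan_a * 1.3 + 25.0                        # same content, different intensity scaling

a_norm, b_norm = zscore_normalize(scan_a), zscore_normalize(scan_b)
print(f"Max difference after normalization: {np.abs(a_norm - b_norm).max():.2e}")
```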
AI technology is evolving rapidly, and it would be impossible to list all directions for future improvement.
At present, healthcare AI applications are implemented as static, “locked” systems that may be upgraded but do not learn automatically, despite their sophistication. Expert radiologists in the field anticipate research on self-supervised and active learning systems that can change over time as fresh training data becomes available, possibly even specializing in the particular demographics or acquisition process of each healthcare institution.

The future of AI
Understanding how an AI system functions, at least to a certain extent, is helpful for addressing the complicated, multifaceted issue of trust in automation. Long-term trust also depends, to a large extent, on the perceived performance and reliability of the system.
We believe that, with or without improvements in implementation, clinical performance will be the main driver of AI adoption in the radiology profession. Because of this, the prospects for the future are very promising, considering the extraordinary speed of progress seen in recent years.

Did you know that XVision AI-based software can detect and locate over 100 pathologies on chest X-rays? Or that we automatically detect and measure pulmonary nodules on lung CT? How about saving up to 30% of the time on every medical image we analyze?
If we made you curious, let’s meet at #ECR2022, the most innovative event in the scientific community, dedicated to radiologists from all over Europe.
Learn about the tools that empower radiologists, enhance their skills, and increase their productivity. Schedule a discussion with our team HERE.