Omina Technologies is proud to be a gold sponsor of the Automa+ Healthcare Edition online conference.
Bas Jansen will contribute to Session 6: Transforming Healthcare with AI and Robotics.
Modern healthcare relies on a combination of many available clinical tests (e.g., the complete blood count) and the knowledge of healthcare practitioners (HCPs) to interpret their results. Many of these clinical tests were developed along a single medical dimension, such as the genetic information of the population. A complete blood count result, for example, is usually compared to the population average, enabling an HCP to assess that an individual’s white blood cell count is below the ‘acceptable’ range.
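Such a single-dimension test boils down to checking one value against a population reference range. A minimal sketch of that comparison is below; the function name and the range bounds are illustrative assumptions, not clinical values.

```python
# Sketch of a single-dimension clinical test: an individual's white blood
# cell (WBC) count compared against a population reference range.
# Bounds are hypothetical placeholders (units: 10^9 cells/L).

def flag_wbc(wbc_count: float, low: float = 4.0, high: float = 11.0) -> str:
    """Return 'low', 'normal', or 'high' relative to the reference range."""
    if wbc_count < low:
        return "low"
    if wbc_count > high:
        return "high"
    return "normal"

print(flag_wbc(3.2))  # below the 'acceptable' range -> low
```

The limitation the article points at is visible here: the assessment uses one dimension and one population average, with no room for the multi-dimensional context discussed next.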
However, it is now possible to continuously measure a person’s health on multiple dimensions, ranging from basic blood tests to more complex dimensions such as post-translational modifications or (epi-)genetics. This enables a multi-dimensional representation of a population and spurs precision medicine. Multi-dimensional datasets have been successfully studied in academia; for example, Croatian scientists matched genomic and glycomic data for various islands in the Adriatic. However, the use of data with more than three dimensions remains rare due to its complexity.
The ability to handle multi-dimensional data is critical, on the one hand, to produce novel multi-dimensional biomarkers that combine multi-omics data and, on the other hand, to explain how a given biomarker arrived at its prognosis/diagnosis. A key barrier to the adoption of complex, multi-dimensional AI-based biomarkers is the lack of explainability, especially for the HCP and the patient.
A combination of interpretable and explainable AI can help alleviate this problem. Interpretability illustrates how a biomarker arrives at its prognosis/diagnosis and could refer to a mathematical representation of the model, such as f(x) = (a*x + b*y*z) / (a*y*z + b*x). While important, interpretability alone does not make an AI-based biomarker digestible for the HCP. Explainability refers to methods of illustrating why the biomarker yielded a specific prognosis/diagnosis for a given patient, by showing which features drove the model and in which direction. However, interpretability and explainability are still not sufficient, as they do not take the knowledge of the HCP into account. It is therefore necessary to add context to the explanation by including domain knowledge. For example, if the explainability component shows that, for a given patient, an increased expression of protein X and a shift in protein X’s modifications are the main drivers of a positive diagnosis, the HCP should be given an actionable explanation. Actionability can be achieved by searching the scientific literature, identifying the conditions that correlate with the key features, and reporting these to the HCP.
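The idea of showing which features drove a model, and in which direction, can be sketched for the simplest interpretable case: a linear model, where each feature’s contribution relative to a population baseline is its weight times its deviation from that baseline. The feature names, weights, and baseline values below are hypothetical, chosen only to mirror the protein X example.

```python
# Hedged sketch of per-patient feature attribution for a linear model:
# contribution_i = w_i * (x_i - baseline_i). All names and numbers are
# illustrative assumptions, not a real biomarker.

baseline = {"protein_X_expression": 1.0, "protein_X_modification_shift": 0.0}
weights  = {"protein_X_expression": 2.5, "protein_X_modification_shift": 4.0}

def attributions(patient: dict) -> dict:
    """Signed contribution of each feature relative to the population baseline."""
    return {f: weights[f] * (patient[f] - baseline[f]) for f in weights}

patient = {"protein_X_expression": 1.8, "protein_X_modification_shift": 0.5}

# Report features in order of impact, with their direction.
for feature, contrib in sorted(attributions(patient).items(),
                               key=lambda kv: -abs(kv[1])):
    direction = "towards positive diagnosis" if contrib > 0 else "away from it"
    print(f"{feature}: {contrib:+.2f} ({direction})")
```

For non-linear models the same kind of signed, per-feature report is typically produced with attribution methods such as SHAP; the point of the sketch is only the shape of the output an HCP would see, before domain knowledge is layered on top.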
In conclusion, there is a clear opportunity to evolve from the traditional healthcare system to a holistic one: if we can utilise more complex patient data by leveraging explainable AI, we can empower HCPs and their patients.