Enabling the patient to take an active role in health decision-making and health management requires transparent, interpretable, and explainable AI solutions. Black-box models are not interpretable, and post-hoc explanations are often not faithful to the model's actual reasoning, so they cannot be relied on to support shared health decision-making.
Promoting health equity is also challenging. Health disparities can be exacerbated by biases in the data used to train an AI algorithm and in the algorithm’s intent. For AI to be used effectively to promote health equity, existing biases in healthcare services and systems based on race, ethnicity, age, and gender, which are encoded in the data used to train algorithms, must be addressed (WHO, Guide on Ethics and Governance of Artificial Intelligence for Health, June 2021). Obermeyer et al. (2019) found that a widely used commercial prediction algorithm exhibited significant racial bias in predicting which patients needed extra care. A particular challenge in the application of AI to cardiovascular medicine lies in the intrinsic and extrinsic biases that affect quality control, and thus clinical outcomes. The bulk of current evidence-based medicine in cardiology might not apply to women and minority groups, because they are historically under-represented (Tat et al., The Lancet, 2020). A systematic review of COVID-19 prediction models uncovered widespread and serious shortcomings, particularly with respect to potential biases (Wynants et al., BMJ, 2021). Amid the frenzy to quickly adopt AI in the fight against COVID-19, the risk of producing biased prediction models due to unrepresentative datasets and other limitations in model development is higher than ever (Röösli et al., JAMIA, 2021).
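The kind of group-level disparity described above can be surfaced with a simple audit. The sketch below (hypothetical data, illustrative only; the function names `true_positive_rate` and `audit_by_group` are our own, not from any cited work) compares, for each demographic group, the fraction of truly high-need patients that a risk model flags for extra care:

```python
# Illustrative bias audit (hypothetical data): compare the true positive
# rate of a risk model across demographic groups. A large gap suggests
# the model under-serves one group's high-need patients.

def true_positive_rate(y_true, y_pred):
    """Fraction of actual positives (y_true == 1) the model flags (y_pred == 1)."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    return sum(p for _, p in positives) / len(positives)

def audit_by_group(records):
    """records: iterable of (group, y_true, y_pred). Returns TPR per group."""
    by_group = {}
    for group, t, p in records:
        ys, ps = by_group.setdefault(group, ([], []))
        ys.append(t)
        ps.append(p)
    return {g: true_positive_rate(ys, ps) for g, (ys, ps) in by_group.items()}

# Toy example: group B's high-need patients are flagged far less often.
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 1, 1),
    ("B", 1, 0), ("B", 1, 1), ("B", 0, 0), ("B", 1, 0),
]
rates = audit_by_group(records)
print(rates)  # {'A': 1.0, 'B': 0.3333...}
```

In practice such audits are run with stratified, representative evaluation data and across several complementary metrics (false negative rate, calibration), since a single aggregate score can hide exactly the disparities described above.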
Conventional AI solutions often fail to respect data privacy, human agency, explainability, transparency, and fairness. These failures erode trust and ultimately prevent adoption of AI solutions that support health management decisions.