Author: Sara Habibipour
Interview with Peter Szolovits, PhD. Dr. Szolovits is Professor of Computer Science and Engineering in the MIT Department of Electrical Engineering and Computer Science (EECS) and an associate faculty member in the MIT Institute for Medical Engineering and Science (IMES) and its Harvard/MIT Health Sciences and Technology (HST) program. He is also head of the Clinical Decision-Making Group within the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).
***
Artificial intelligence (AI) has transformed, and continues to transform, the practice of medicine. Machine learning algorithms have made it possible for doctors to detect lung cancers on CT scans, assess the risk of cardiac failure from echocardiograms and cardiac MRI, classify skin lesions, find early signs of diabetic retinopathy, and much more. With AI’s heightened ability to support diagnosis, doctors can move forward with accurate treatment plans, ultimately increasing patient survival rates.
But, as AI improves healthcare for some, does that mean that it improves it for all?
We all know that healthcare is an imperfect practice; many patients are “left behind” because of low socioeconomic status, racial and gender disparities, and other factors. Because of this, researchers have raised concerns that these biases will be carried over into AI algorithms, amplifying existing health disparities.
But what if these algorithmic biases could be recognized? What if doctors could work together with AI to identify the sources of these algorithmic biases and improve models through better data collection and model refinement? These are a few of the questions posed by Dr. Peter Szolovits in a paper written with Dr. Marzyeh Ghassemi and Irene Chen and published in the AMA Journal of Ethics. The paper reports two case studies in which machine learning algorithms were trained on clinical and psychiatric notes to predict ICU mortality and 30-day psychiatric readmission, and their accuracy was compared across race, gender, and insurance payer type (a proxy for socioeconomic status). The study shows that “differences in prediction accuracy and therefore machine bias are shown with respect to gender and insurance type for ICU mortality and with respect to insurance policy for psychiatric 30-day readmission.” The authors say that “this analysis can provide a framework for assessing and identifying disparate impacts of artificial intelligence in health care.”
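At its core, the framework the authors describe comes down to comparing a model’s predictive performance across demographic groups. As a rough illustration only (not the paper’s actual code), here is a minimal Python sketch, assuming a held-out test set with hypothetical columns such as `gender`, `insurance`, a binary outcome label, and a model’s predicted probabilities:

```python
# Illustrative sketch: measure per-group prediction accuracy to surface
# potential disparate impact. Column names and the scoring setup are
# hypothetical placeholders, not taken from the paper itself.
import pandas as pd
from sklearn.metrics import roc_auc_score


def group_performance(df: pd.DataFrame, group_col: str,
                      label_col: str = "label",
                      score_col: str = "predicted_prob") -> pd.DataFrame:
    """Per-group AUC and outcome base rate, plus each group's gap to the best AUC."""
    rows = []
    for group, sub in df.groupby(group_col):
        # AUC is undefined when a group's labels are all one class; skip those.
        if sub[label_col].nunique() < 2:
            continue
        rows.append({
            group_col: group,
            "n": len(sub),
            "auc": roc_auc_score(sub[label_col], sub[score_col]),
            "base_rate": sub[label_col].mean(),
        })
    result = pd.DataFrame(rows).sort_values("auc", ascending=False)
    result["auc_gap_vs_best"] = result["auc"].max() - result["auc"]
    return result


# Example usage with predictions from any ICU mortality or readmission model:
# print(group_performance(test_df, "gender"))
# print(group_performance(test_df, "insurance"))
```

Large gaps in the last column would flag groups for whom a model is systematically less reliable, which is the kind of disparity the case studies measure.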
To further understand his research and AI’s implications for working to end health disparities, I reached out to Dr. Szolovits. For a bit of background, Dr. Szolovits is Professor of Computer Science and Engineering as well as Health Sciences and Technology at the Massachusetts Institute of Technology. He is also an associate faculty member of the MIT Institute for Medical Engineering and Science (IMES) and on the faculty of the Harvard/MIT Health Sciences and Technology program. His research centers on applying AI methods to problems of medical decision making, predictive modeling, decision support, and the design of information systems for healthcare institutions and patients.
I first asked what AI means for patients of low socioeconomic status and/or patients from minority groups. Is AI advanced enough to take these factors into account and arrive at accurate diagnoses, especially for conditions that tend to be more prevalent in certain populations? What are AI's weaknesses when it comes to taking in these patient backgrounds? What are its strengths?
Dr. Szolovits replied,
“I think many of the biases so far come from poor data collection. This is not hard to fix in principle, but terribly difficult in practice. There are many situations in which low SES [socioeconomic status] patients are disadvantaged in healthcare; I just came out of a National Academy of Medicine discussion of specific such problems around COVID-19 outcomes. The best thing would be to abolish prejudice and inequality, but I’m not holding my breath. Until then, an emphasis on more comprehensive data collection across different populations is the best plan. There are evolving AI methods to try to correct for poor data sampling, but these rely on assumptions about how the ‘real’ (comprehensive) data would compare to what is actually collected, and results based on these can only be as good as the assumptions.”
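One simple family of the corrections he alludes to is reweighting: if you can model which patients end up in the collected data relative to a reference population, you can up-weight under-represented patients during training. The sketch below is purely illustrative, uses hypothetical column and variable names, and rests on exactly the assumption he warns about: that the model of who gets sampled is right.

```python
# Illustrative sketch: inverse-probability-style weights to correct for a
# training set that under-samples some patient groups. Validity depends
# entirely on the assumed sampling model, as Dr. Szolovits cautions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression


def inclusion_weights(reference: pd.DataFrame, sample: pd.DataFrame,
                      features: list) -> np.ndarray:
    """Fit P(row is in the collected sample | features) against a reference
    population, and return weights proportional to 1 / P(inclusion) for the
    sampled rows. Only relative weights matter to most estimators."""
    combined = pd.concat([reference[features], sample[features]], ignore_index=True)
    in_sample = np.r_[np.zeros(len(reference)), np.ones(len(sample))]
    propensity_model = LogisticRegression(max_iter=1000).fit(combined, in_sample)
    p_in_sample = propensity_model.predict_proba(sample[features])[:, 1]
    return 1.0 / np.clip(p_in_sample, 1e-3, None)  # clip to avoid extreme weights


# Example usage: weight a downstream model's training data (names hypothetical).
# weights = inclusion_weights(census_like_df, hospital_df, ["age", "sex", "zip_income"])
# clf.fit(X_train, y_train, sample_weight=weights)
```

As the quote notes, the results of such a correction can only be as good as the assumed relationship between the reference population and the data actually collected.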
Well, how can we enhance data collection so that AI algorithms benefit all patients? Dr. Szolovits strongly believes that for this to happen, there must be a collaborative rather than a competitive relationship between doctors and AI.
“I’ve been working in this area for over four decades,” he says, “and our focus has always been on decision support systems, which can act as a dynamic ‘second opinion’ to the treating doctor or patient, but not take over the decisions themselves. This is necessary because our medical colleagues, as human beings first, have a great deal of experience with unusual situations that should influence how data are interpreted, whereas the AI approaches tend to be more limited. Of course people are also imperfect, but the hope is that the combination will work better than either alone.”
Even if doctors were able to enhance AI algorithms to be more representative of patients’ backgrounds, patients themselves still face concerns regarding privacy and trust. In another of Dr. Szolovits’ studies, he and his colleagues created a “mistrust score” that uses coded interpersonal features to predict patient noncompliance, particularly for patients in end-of-life care. The scores indicate a higher level of mistrust among black patients than white patients; black patients are already significantly more likely to mistrust the medical system because of systemic racism, with examples ranging from the Tuskegee syphilis study to the case of Henrietta Lacks. But the hope is that if we make better efforts to learn about these disparities and understand what causes them, we can work to make AI’s benefits accessible to all patients.
Thank you Dr. Szolovits for taking the time to share your expertise with us.
Sources:
Chen, I. Y., Szolovits, P., & Ghassemi, M. (2019). Can AI Help Reduce Disparities in General Medical and Mental Health Care? AMA Journal of Ethics. https://journalofethics.ama-assn.org/article/can-ai-help-reduce-disparities-general-medical-and-mental-health-care/2019-02
Boag, W., Suresh, H., Celi, L. A., Szolovits, P., & Ghassemi, M. (2018, June 30). Modeling Mistrust in End-of-Life Care. ICLR 2020.
https://www.datarevenue.com/en-blog/artificial-intelligence-in-medicine