
Why Medical AI Needs Explainability

Corti
11/3/2023
2 min read

After a 15-minute consultation, Dr. Adams is confident that her patient has influenza. However, her AI model has suggested meningitis. The two previous times the AI model suggested meningitis, Dr. Adams performed a painful cerebrospinal fluid test despite disagreeing with the AI’s suggestion. Neither of those patients had meningitis. What should Dr. Adams do? Should she listen to the AI and risk another unnecessary, painful test, or ignore it?

If the AI explained why it suggested meningitis, Dr. Adams could make a more informed decision.

My name is Joakim Edin, and I’m a PhD student at Corti working on how to explain AI suggestions: specifically, how to identify the words in a text that are important for an AI’s output. In the above example, the text would be an automatically generated transcript of the consultation, and the identified important words could be fever, stiff neck, and rash. If Dr. Adams didn’t hear the patient mention a stiff neck, the explanation could potentially be life-saving. On the other hand, if the identified words were ok, pain, and hurt, Dr. Adams would know that she should rely on her medical training and experience rather than the AI’s output.
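
To make this concrete, here is a minimal, hypothetical sketch of one common way to identify important words, occlusion: remove one word at a time and measure how much the model’s confidence in a diagnosis drops. The predict_meningitis_proba function below is a toy stand-in I made up for illustration; it is not Corti’s model or my actual method.

# A minimal, hypothetical sketch of occlusion-based word importance (not Corti's
# actual method): remove one word at a time and measure how much the model's
# confidence in meningitis drops. predict_meningitis_proba is a toy stand-in
# for a real text classifier over the consultation transcript.

def predict_meningitis_proba(text: str) -> float:
    # Toy model: a handful of keywords drive the predicted probability.
    keyword_weights = {"fever": 0.2, "stiff": 0.3, "neck": 0.2, "rash": 0.2}
    text = text.lower()
    return min(1.0, 0.1 + sum(w for k, w in keyword_weights.items() if k in text))

def word_importance(transcript: str) -> list[tuple[str, float]]:
    words = transcript.split()
    full = predict_meningitis_proba(transcript)
    scores = []
    for i, word in enumerate(words):
        occluded = " ".join(words[:i] + words[i + 1:])  # transcript with one word removed
        scores.append((word, round(full - predict_meningitis_proba(occluded), 3)))
    return sorted(scores, key=lambda s: s[1], reverse=True)

transcript = "Patient reports a high fever, a stiff neck and a spreading rash"
print(word_importance(transcript)[:3])  # top words: 'stiff', 'fever,', 'neck'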

Before implementing methods to explain AI suggestions, one must know how to evaluate the explanations. However, evaluating explanations is difficult: an explanation must accurately reflect the AI’s true reasoning process. If it doesn’t, the explanation can dangerously make incorrect suggestions look plausible.

Some researchers evaluate explanations by presenting them to humans and asking them to rate the informativeness of the explanation. However, humans don’t know the reasoning processes of AI models. If they did, there would be no need for explanations. Because humans don’t know how AI models reason, human evaluation doesn’t measure how accurately an explanation reflects the true reasoning of an AI. Therefore, we need alternative evaluation methods. That is the current focus of my research.
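
One family of candidate alternatives checks faithfulness directly: delete the words an explanation marks as important and measure how much the model’s confidence drops. If the explanation is faithful, the drop should be large, because the model actually relied on those words. The sketch below reuses the hypothetical toy model and occlusion explanation from above; it illustrates the general idea of a deletion-based check, not my actual evaluation protocol.

# A hypothetical sketch of a deletion-style faithfulness check: remove the top-k
# words the explanation marks as important and measure the drop in the model's
# confidence. A faithful explanation should produce a large drop, because the
# model actually relied on those words. Reuses the toy predict_meningitis_proba
# and word_importance defined in the sketch above.

def deletion_faithfulness(transcript: str, top_k: int = 3) -> float:
    words = transcript.split()
    important = {w for w, _ in word_importance(transcript)[:top_k]}
    reduced = " ".join(w for w in words if w not in important)
    return round(predict_meningitis_proba(transcript) - predict_meningitis_proba(reduced), 3)

print(deletion_faithfulness("Patient reports a high fever, a stiff neck and a spreading rash"))  # roughly 0.7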

I’m currently testing new explanation methods for Corti’s use cases. In my next blog post, we will dig deeper into how to explain AI suggestions.


Feel free to contact me at je@corti.ai if you have any questions, suggestions, or would like a chat.