Artificial intelligence (AI) is reshaping healthcare, offering new ways to diagnose diseases, recommend treatments, and improve patient care. But for AI to be truly effective in medicine, it must go beyond making predictions—it must reason like a team of doctors. A new approach to AI development aims to mimic how multidisciplinary medical teams collaborate, ensuring transparency, trust, and compliance with evolving regulations like the European Union’s AI Act.
The Challenge of Trusting AI in Medicine
AI has shown remarkable potential in healthcare, from detecting abnormalities in X-rays to predicting patient risks based on medical records. However, a major challenge remains: trust. Many AI-driven medical tools function as “black boxes,” making decisions without explaining their reasoning in a way that doctors and patients can easily understand. This lack of transparency creates concerns about bias, errors, and regulatory compliance.
To address these issues, AI needs to adopt a more human-like approach to decision-making—one that aligns with how doctors collaborate in multidisciplinary medical teams.
How Doctors Make Decisions Together
When treating complex conditions like cancer, medical teams rely on specialists from different fields—radiologists, oncologists, pathologists, and more. Each expert brings their knowledge to the table, but instead of discussing raw data, they communicate through shared medical concepts. For example, rather than debating the technical details of an X-ray, they discuss tumor size, cancer stage, and molecular markers to decide the best course of treatment.

This method of reasoning—using widely understood medical concepts—helps teams make informed, transparent decisions. If AI can replicate this process, it could provide clearer, more trustworthy recommendations.
Making AI Explainable: A New Approach
To align with human decision-making, AI needs to go beyond simple predictions and provide explanations in understandable terms. Two traditional approaches to explainable AI exist:
- Rule-Based Models – These AI systems follow predefined rules, ensuring transparency but limiting flexibility. In medicine, strict rules may not always capture the complexity of a diagnosis.
- Post-Hoc Explanation Models – These analyze AI decisions after they are made, highlighting key factors like the areas of an X-ray that influenced a diagnosis (a minimal sketch follows this list). However, this approach can still leave doctors uncertain about whether the AI’s reasoning is medically sound.

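To make the post-hoc idea concrete, here is a minimal, hypothetical sketch of a gradient-based saliency map in PyTorch. The tiny model and the random "image" are placeholders rather than a real diagnostic system, and production tools typically rely on more robust attribution methods; the sketch only shows the general mechanics of explaining a prediction after the fact.

```python
import torch
import torch.nn as nn

# Stand-in classifier; in practice this would be a trained imaging model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 2))
model.eval()

image = torch.randn(1, 1, 28, 28, requires_grad=True)  # placeholder "X-ray"
score = model(image)[0, 1]                              # score for the "abnormal" class
score.backward()

# Gradient magnitude per pixel: a rough map of which regions moved the prediction most.
saliency = image.grad.abs().squeeze()
row, col = divmod(saliency.argmax().item(), 28)
print(f"most influential pixel: ({row}, {col})")
```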
A promising alternative is the Concept Bottleneck Model (CBM). These models don’t just predict outcomes; they also explain them using concepts humans understand. For example, instead of simply labeling a skin lesion as cancerous or benign, a CBM would describe key features like asymmetry, color changes, or irregular borders, the same way a dermatologist would.
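As a rough illustration of this idea, the sketch below shows what a concept bottleneck might look like in PyTorch. The concept list, encoder, and layer sizes are invented for the example rather than taken from any specific published model; the point is simply that the diagnosis is computed only from named concepts, so a clinician can read them, question them, and even override one that looks wrong.

```python
import torch
import torch.nn as nn

# Hypothetical dermatology concepts a clinician would recognize.
CONCEPTS = ["asymmetry", "irregular_border", "color_variation"]

class ConceptBottleneckModel(nn.Module):
    """Image features -> human-readable concepts -> diagnosis (illustrative sketch)."""

    def __init__(self, feature_dim: int = 512, n_concepts: int = len(CONCEPTS)):
        super().__init__()
        # Any image encoder could feed this; a linear layer stands in for it.
        self.encoder = nn.Linear(feature_dim, 128)
        # Bottleneck: everything downstream must pass through the concepts.
        self.concept_head = nn.Linear(128, n_concepts)
        # The final classifier sees only the predicted concepts.
        self.label_head = nn.Linear(n_concepts, 2)  # benign vs. malignant

    def forward(self, x):
        h = torch.relu(self.encoder(x))
        concepts = torch.sigmoid(self.concept_head(h))  # each concept score in [0, 1]
        return concepts, self.label_head(concepts)

model = ConceptBottleneckModel()
features = torch.randn(1, 512)                          # placeholder input features
concepts, logits = model(features)
for name, score in zip(CONCEPTS, concepts[0].tolist()):
    print(f"{name}: {score:.2f}")

# A clinician who disagrees with a concept can override it and rerun only the
# final classifier; this is how corrections propagate in a concept bottleneck.
corrected = concepts.clone()
corrected[0, CONCEPTS.index("irregular_border")] = 1.0
revised_logits = model.label_head(corrected)
```

Because the correction happens at the level of named concepts, the revised prediction can be traced back to a statement a doctor actually made, which is the kind of auditable reasoning the next section describes.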
AI That Thinks Like a Team of Doctors
By adopting this concept-based reasoning, AI can offer more transparent and reliable medical support. If an AI model misclassifies a tumor stage, doctors can easily identify and correct the mistake, allowing the AI to learn and improve over time. This level of human-AI collaboration not only increases trust but also supports compliance with strict regulations like the EU AI Act, which mandates explainability for high-risk AI systems.
The Future of AI in Healthcare
The integration of explainable AI in medicine is a crucial step toward safer, more effective healthcare. By designing AI that reasons like a team of doctors, we can create systems that doctors trust, patients understand, and regulators approve. As AI continues to evolve, its role in medical decision-making will not only be about efficiency—it will be about building a future where technology and human expertise work seamlessly together to improve patient care.