Just a week ago, Microsoft announced that its new AI tool makes diagnoses with 85% accuracy, roughly four times the accuracy of experienced doctors. But here’s the question that should keep us awake at night: would you trust a diagnosis that even your own doctor can’t explain?

This is not science fiction. It is the reality of medicine in 2025, where “black box” algorithms are making decisions that can save or cost lives, while healthcare professionals watch without understanding the “why” behind each recommendation.

The dilemma of accuracy without transparency

The paradox is both fascinating and frightening. Today’s medical AI systems consistently outperform humans in diagnostic accuracy. In 2024, melanoma detection algorithms achieved greater than 95% accuracy, outperforming dermatologists with decades of experience. In radiology, cardiology and ophthalmology, AI is redefining accuracy standards.

But there’s a problem: no one understands how they reach these conclusions.

Imagine this situation: you arrive at the emergency room with confusing symptoms. The AI system recommends immediate surgery, but when you ask “why?” your doctor can only reply, “The algorithm says it’s necessary, and it’s usually right.”

Would you have surgery?

When black boxes fail: real cases that changed everything

History is full of reminders about why transparency matters:

2019: A triage algorithm used in U.S. hospitals systematically underestimated how sick African-American patients were, perpetuating racial bias for months without detection.

2021: An AI system for COVID-19 diagnosis reached 97% accuracy in testing, but failed miserably in the real world because it had learned to identify… the X-ray machine model, not the disease.

2023: An ICU sepsis prediction algorithm generated so many false alarms that physicians began to ignore it, resulting in real cases going undetected.

The pattern is clear: without explainability, there is no trust. Without trust, there is no adoption. Without adoption, there is no benefit.

What is Explainable AI and why does it matter now?

Explainable AI (XAI) is not just about making algorithms “talk” – it is about designing systems that can justify their decisions in a human-understandable way.

In medicine, this means:

Diagnostic transparency: “I recommend this test because I detected patterns similar to previous cases of disease X in these specific areas of the image.”

Treatment rationale: “I am suggesting this drug based on your history, your current symptoms and the response of patients with a similar profile.”

Contextualized alerts: “This combination of symptoms is 78% likely to indicate condition Y, based on these specific factors.”

The difference is transformational. A physician can evaluate, question and complement an explained recommendation. With a black box, they can only obey or ignore it.
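To make the idea concrete, here is what “the algorithm justifies its decision” can look like under the hood. This is a minimal sketch using the open-source shap library for feature attribution on a hypothetical tabular risk model; the feature names, synthetic data and thresholds are invented for illustration and do not represent any specific vendor’s method.

```python
# A minimal sketch of feature-attribution explanations for a tabular risk
# model, using the open-source shap library. The model, training data and
# feature names are synthetic stand-ins, not clinical guidance.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

feature_names = ["age", "troponin", "systolic_bp", "heart_rate", "diabetes"]

# Synthetic stand-in data; a real system would train on validated records.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, len(feature_names)))
y_train = (X_train[:, 1] + 0.5 * X_train[:, 0] > 0.8).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Explain a single incoming patient: which factors pushed the risk up or down?
patient = X_train[:1]
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(patient)
if isinstance(sv, list):        # older shap versions: one array per class
    contributions = sv[1][0]
elif sv.ndim == 3:              # newer versions: (samples, features, classes)
    contributions = sv[0, :, 1]
else:
    contributions = sv[0]

risk = model.predict_proba(patient)[0, 1]
print(f"Predicted risk: {risk:.0%}")
for name, value in sorted(zip(feature_names, contributions),
                          key=lambda pair: -abs(pair[1])):
    direction = "raises" if value > 0 else "lowers"
    print(f"  {name}: {direction} the estimated risk by {abs(value):.2f}")
```

The same attribution values that drive this printout can be surfaced in the clinical interface as plain statements like “troponin raised the estimated risk the most” – exactly the kind of claim a physician can evaluate, question or override.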

Regulation to come: FDA leads the way

The FDA is not waiting. In June 2024, it published its “Transparency for Machine Learning-Enabled Medical Devices: Guiding Principles,” stating that:

  • Medical AI devices must provide understandable explanations of their decisions
  • Manufacturers must demonstrate how their systems handle edge cases and unforeseen events
  • Transparency is not optional – it is a regulatory requirement.

Europe goes further. The AI Act requires that high-risk AI systems (including medical applications) be “sufficiently transparent to allow users to interpret the output of the system.”

The message is clear: the era of black boxes in medicine is ending.

Practical implementation framework for hospitals

How can healthcare facilities prepare? Here is the framework I recommend:

1. Audit of current systems

  • Identify which algorithms you use and their level of explainability
  • Evaluate the impact of unexplained decisions on patient outcomes
  • Map the critical points where transparency is essential

2. Selection criteria for new tools

  • Explainability by default: Can the system justify each recommendation?
  • Adjustable granularity: Can you get simple or detailed explanations as needed?
  • Clinical validation: Are the explanations medically consistent?

3. Staff training

  • Train physicians to interpret explainable AI outputs
  • Develop protocols for challenging and validating algorithmic recommendations
  • Establish escalation paths for when explanations are unsatisfactory

4. Continuous monitoring

  • Track the correlation between AI explanations and real outcomes (see the sketch after this list)
  • Identify algorithmic decision patterns that require human review
  • Adjust confidence levels based on how explained decisions perform in practice
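As an illustration of the first point, here is a minimal sketch of the kind of monitoring logic a governance team might run over reviewed cases. The record fields (predicted_risk, confirmed, clinician_agreed_with_explanation) and the 0.8 review threshold are hypothetical choices made for the example, not a standard schema.

```python
# Minimal monitoring sketch: compare explained AI recommendations against
# confirmed outcomes. Field names and the 0.8 threshold are hypothetical.
from dataclasses import dataclass
from statistics import mean

@dataclass
class ReviewedCase:
    predicted_risk: float                     # probability the model reported
    confirmed: bool                           # was the condition actually present?
    clinician_agreed_with_explanation: bool   # did the rationale make clinical sense?

def monitoring_report(cases: list[ReviewedCase]) -> dict:
    """Aggregate simple signals for a periodic AI-governance review."""
    positives = [c for c in cases if c.confirmed]
    return {
        # Calibration gap: average predicted risk vs. observed event rate.
        "mean_predicted_risk": mean(c.predicted_risk for c in cases),
        "observed_event_rate": len(positives) / len(cases),
        # Explanation quality: share of cases where clinicians judged the
        # stated rationale to be medically consistent.
        "explanation_agreement": mean(c.clinician_agreed_with_explanation for c in cases),
        # Confident predictions that turned out wrong, flagged for human review.
        "flagged_for_review": sum(
            1 for c in cases if c.predicted_risk > 0.8 and not c.confirmed
        ),
    }

# Example: a small batch of reviewed cases.
batch = [
    ReviewedCase(0.92, True, True),
    ReviewedCase(0.85, False, False),   # confident miss -> flagged
    ReviewedCase(0.40, False, True),
]
print(monitoring_report(batch))
```

A report like this makes it visible when explanations stop matching reality, which is the signal to recalibrate confidence levels or pull a system back for human review.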

Tangible ROI: beyond precision

The benefits of explainable AI go beyond transparency:

  • Reduced liability: Justified decisions = lower legal risk
  • Better adoption: Physicians rely more on systems they understand
  • Faster diagnostics: Explanations accelerate clinical validation
  • Less burnout: Professionals feel empowered, not replaced

A 2024 study in European hospitals showed that the implementation of explainable AI reduced diagnosis time by 34% while increasing physician satisfaction by 67%.

The future: towards collaborative medicine

Explainable AI is not the end of the road – it’s the beginning of something bigger: collaborative medicine, where humans and algorithms work as informed partners.

Imagine a future where:

  • AI systems not only diagnose, but teach as they do it
  • Physicians can debate with algorithms, improving both in the process
  • Patients receive understandable explanations about their diagnosis and treatment
  • Medicine becomes more precise and more humane

The decision is now

Microsoft’s tool can diagnose better than experienced doctors, but does it matter if no one understands how? Accuracy without explainability is a dead end.

Healthcare facilities that adopt explainable AI now will not only comply with future regulations – they will lead the transformation to smarter, more transparent and reliable medicine.

The question is not whether explainable AI will come to medicine. It is already here.

The question is: will you be ready when your patients ask “why”?