Newsletter

The Most Powerful AI in Medicine Will Be the One That Does Nothing

June 6, 2025

Signals for what’s next – and what matters

Health ⎪ Innovation ⎪ Society

🖊️ Share this newsletter with a friend

Hey Reader,

“AI will supercharge diagnosis and care delivery!” is the dominant narrative in VC boardrooms, innovation labs and university tech transfer lecture halls.

But in reality, the greatest harm and cost in modern healthcare come from doing too much: tests, imaging, procedures, and prescriptions that offer no benefit and often cause harm (iatrogenesis). An estimated $200B per year is spent on overtreatment and low-value care (JAMA, 2019).

This systemic over-medicalization is driven largely by fee-for-service payment, a litigious practice environment, and a dearth of comparative effectiveness research, among other factors.

Yet almost no AI startups are building for restraint. Why?

Because the financial incentives of the system reward doing more, not doing less.

The most powerful role for AI may be to shift the system toward less care, not more. Its real job in healthcare is not to amplify medicine but to restrain it: its best use case may be doing less, intelligently.

Sometimes subtraction is better than addition.

Enter AI: Clinical Brake Pedal

More often than not, we don’t need another AI that tells us to order more labs. Healthcare in America is bloated, expensive, and often dangerous—not because we’re missing disease (although we are), but because we’re doing too much.

  • We test too much.
  • We treat too much.
  • We prescribe too much.

We need an AI that says, with confidence and evidence: “This patient will be fine. Don’t touch them.”

By contrast, most predictive analytics try to find high-risk patients, and understandably so; sepsis risk models are the most common example. That is well and good, but the harm of errors of commission (roughly, Type I errors) should not be ignored in our quest to avoid errors of omission (Type II errors).

Potential point solutions could include (a minimal code sketch follows this list):

  • “Safe to Skip” AI – Predictive tools that tell you when:

    • Imaging isn’t needed
    • Antibiotics won’t help
    • ER admissions can be avoided
    • Follow-up intervals can be safely extended
  • Over-diagnosis Detection Engines

    • Detect likely false positives and incidentalomas
    • Suggest conservative monitoring paths vs aggressive interventions
  • “De-prescription” AI

    • Identify candidates for tapering or discontinuing medications safely
    • Flag polypharmacy risks
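
To make the first bucket concrete, here is a minimal sketch of what a “Safe to Skip” recommender could look like under the hood, assuming a historical dataset of encounters labeled with whether a test actually changed management. The feature names, the 2% threshold, and the calibration choice are illustrative assumptions, not a validated clinical tool.

```python
# Minimal "Safe to Skip" sketch: a calibrated classifier that recommends
# deferral only when the predicted yield of testing falls below a threshold.
# Features, threshold, and training data are hypothetical placeholders.
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.linear_model import LogisticRegression

FEATURES = ["age", "heart_rate", "wbc_count", "prior_negative_imaging", "symptom_days"]
SKIP_THRESHOLD = 0.02  # defer only when predicted yield is under 2% (illustrative)

def train_skip_model(X_train: np.ndarray, y_train: np.ndarray):
    """Fit a calibrated model so its probabilities can be read as actual risk."""
    base = LogisticRegression(max_iter=1000)
    model = CalibratedClassifierCV(base, method="isotonic", cv=5)
    model.fit(X_train, y_train)
    return model

def recommend(model, patient: np.ndarray) -> str:
    """Return a deferral recommendation with the evidence attached."""
    p_yield = model.predict_proba(patient.reshape(1, -1))[0, 1]
    if p_yield < SKIP_THRESHOLD:
        return f"Safe to skip: predicted test yield {p_yield:.1%}"
    return f"Consider testing: predicted test yield {p_yield:.1%}"

# Usage: model = train_skip_model(X, y); print(recommend(model, new_patient))
```

The key design choice is calibration: a tool whose whole job is to say “don’t” has to output probabilities a clinician can take at face value.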

This is AI as the quiet surgeon. The cautious voice. The statistical monk.

Business Models for Subtractive AI

No one gets paid for doing less. Venture capital wants scale and action, not restraint. Hospitals rely on volume-based revenue models.

So this approach requires brave health systems, value-based care environments, or public insurers to back it. But the long-term savings, safety, and ethical alignment are enormous. Some ideas for a path forward:

1. Sell to Value-Based Systems

  • ACOs, capitated groups, and payers win when they spend less
  • Risk-bearing orgs are hungry for safety + cost reduction
  • Market AI as a “Clinical Risk Firewall”

2. Enterprise SaaS Model

  • Charge per physician or per hospital seat license
  • Offer ML “confidence scores” tied to decision trees and explainability (a sketch follows below)
  • Plug into existing EMRs and physician workflows
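
As one hypothetical illustration of confidence scores tied to decision trees: a shallow tree scores the encounter, then prints the rule path behind the score so the physician can audit exactly why the model says to wait. The features and toy data below are invented for the example.

```python
# Explainability sketch: score an encounter with a shallow decision tree and
# surface the decision path. Features and training data are toy placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

FEATURES = ["age", "alarm_symptoms", "prior_workup_negative"]

# y = 1 means intervention changed the outcome, 0 means it did not.
X = np.array([[35, 0, 1], [72, 1, 0], [28, 0, 1], [65, 1, 0], [41, 0, 0], [58, 0, 1]])
y = np.array([0, 1, 0, 1, 0, 0])
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def explain(patient: np.ndarray) -> None:
    """Print the confidence score plus the rules that produced it."""
    row = patient.reshape(1, -1)
    proba = tree.predict_proba(row)[0, 1]
    print(f"Confidence that intervening helps: {proba:.0%}")
    for node in tree.decision_path(row).indices:
        f = tree.tree_.feature[node]
        if f >= 0:  # negative values mark leaf nodes
            print(f"  checked: {FEATURES[f]} <= {tree.tree_.threshold[node]:.1f}")

explain(np.array([44, 0, 1]))  # low-risk toy patient
```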

3. MedMal Reduction Guarantees

  • Bundle AI usage with predictive analytics to reduce litigation risk
  • Partner with malpractice insurers to share savings if adverse events drop

4. Partner with Public Health / Medicaid

  • Population-level “smart restraint” AI tools to reduce ER overuse, unnecessary imaging, etc.
  • Government incentives for high-value care + AI accountability

Reduce Care in Gastroenterology

A license to scope is a license not to scope. Frankly, that’s often the more important decision.

As a GI doctor, I see over-testing all the time, at great cost and harm. The literature on low-value care is rife with examples:

  • Low-risk colonoscopy – Too many average-risk patients are scoped at 5-year intervals, despite guidelines recommending 10 years. I would be the first to admit I am guilty of this myself (though in my defense, for good reason). Imagine an automated surveillance tracker that uses patient records plus polyp histology to flag inappropriate re-scopes (sketched after this list), alongside predictive models estimating patient-specific CRC risk from age, family history, genetics, and lifestyle.

  • PPI overuse in GERD – If any drug tends to be left on a patient’s medication list indefinitely, it’s a PPI. Smart de-prescribing software could be particularly effective here, as in similar scenarios.

  • H. pylori testing and treatment – Some patients are over-tested (e.g., re-tested after confirmed eradication), while others are treated without confirmed infection. A test-and-treatment-timeline AI plus a resistance prediction model based on geography, prior antibiotics, and patient history could help curb the antibiotic resistance now rising from blind triple/quadruple therapy.

  • Elevated liver tests – When AST/ALT bump up, even slightly, non-GIs quickly get flustered and anxious, as if they fear impending acute liver failure (ALF). That may be because, outside of the brain, the liver is the most complex organ in the human body. But mild transaminitis often triggers a cascade (US → CT → MRI → +/- biopsy or elastography), often for benign fatty liver, when lifestyle changes and clinical monitoring would suffice.
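
To show how thin the software layer for the colonoscopy example could be, here is a minimal sketch of the surveillance tracker mentioned above. The interval table is a deliberately simplified illustration loosely patterned on the 2020 US Multi-Society Task Force recommendations; a real tool would encode the full guidance and pull histology straight from the pathology record.

```python
# Sketch of an automated surveillance tracker that flags re-scopes scheduled
# earlier than the guideline interval. Intervals are simplified for illustration.
from datetime import date

# Illustrative intervals in years, keyed by the prior colonoscopy's worst finding.
GUIDELINE_INTERVAL_YEARS = {
    "normal_average_risk": 10,
    "1-2_small_tubular_adenomas": 7,  # USMSTF 2020 allows 7-10 years
    "3-4_small_tubular_adenomas": 3,  # lower bound of the 3-5 year range
    "advanced_adenoma": 3,
}

def flag_early_rescope(finding: str, last_scope: date, proposed: date) -> str:
    interval = GUIDELINE_INTERVAL_YEARS.get(finding)
    if interval is None:
        return "No guideline interval on file; route for manual review."
    years_elapsed = (proposed - last_scope).days / 365.25
    if years_elapsed < interval:
        return (f"FLAG: re-scope proposed at {years_elapsed:.1f} years; "
                f"guideline interval for '{finding}' is {interval} years.")
    return "Proposed interval is guideline-concordant."

# An average-risk patient re-scoped at 5 years gets flagged.
print(flag_early_rescope("normal_average_risk", date(2021, 5, 1), date(2026, 5, 1)))
```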

Objections – and Why They’re Wrong

Understandably, there are some objections to my central thesis. Let’s address them:

Objection 1: “This is dangerous. Doing less care means missing something.”

This is the great emotional fear in the heart of every patient and the mind of every provider. But false security already exists in doing more. Data-driven AI can increase safety by identifying low-risk cohorts more accurately than gut feeling or habit. In colonoscopy, a 20% rate of inappropriate early surveillance is not protective—it’s wasteful.

Objection 2: “Doctors won’t trust AI to tell them to do nothing.”

Doctors already have a built-in bias toward action (and defensive medicine). But when AI is built as a guideline-concordant support tool, not a replacement, it can give physicians permission to stand down—especially when backed by explainable predictions and malpractice-safe defaults.

The frame can be: “You can safely wait. Here’s the data.” The doctor retains autonomy—but AI offers courage with evidence.

Objection 3: “There’s no financial incentive. Doing less means losing money.”

This is true for fee-for-service organizations and will be a major limitation to adoption of “AI for less.” But value-based organizations (Medicare Advantage plans, ACOs, capitated systems) are starving for solutions that reduce cost without harming outcomes, as are public payers, employers, and the self-insured.

Objection 4: “Liability will increase if AI suggests inaction and something goes wrong.”

This is a legitimate concern, if only because the legal environment in healthcare has expanded to ridiculous extremes; litigation risk enshrouds every healthcare organization and incurs enormous unnecessary cost. But remember that over-testing itself creates more incidental findings and decision cascades, each a fresh source of liability.

But AI used to enforce clinical guidelines (e.g., Choosing Wisely, USPSTF) offers a defensible standard of care. Insurers may even reduce premiums for using AI to avoid low-value care appropriately.

Objection 5: “Hospitals and specialists won’t adopt tools that cannibalize their revenue.”

Disruptive technologies rarely succeed by converting incumbents. They win by serving new incentives and underserved buyers. Market to payers, policymakers, and digital-first clinics (e.g., Iora, Oak Street, ChenMed) who have structural incentives to do less. This isn’t a hospital’s AI. It’s a payer’s ally.

Objection 6: “AI is better at doing than not doing stuff. You’re limiting its potential.”

Not true. AI is best at pattern recognition, prediction, and risk stratification—which are the exact tasks needed to determine when care is unnecessary. This is not limiting AI—it’s putting it to its most mature and ethical use.

Objection 7: “There’s no venture-scale outcome here.”

Preventive AI tied to large payers can generate massive cost savings, which are monetizable via:

  • Risk-sharing contracts
  • SaaS licenses to self-insured employers
  • Integration with value-based primary care clinics

If an AI tool saves $300 per patient per year across 1M lives, that’s a $300M annual value pool.

The market isn’t in more care. It’s in less waste.

Objection 8: “This idea isn’t sexy.”

Point taken. I’ve never exactly been associated with sex appeal (just ask my wife).

But neither was Dropbox. Or Stripe. Or Plaid. Quiet tools that improve infrastructure and reduce friction build empires. In a time of economic pressure and physician burnout, a product that improves safety, reduces cost, and honors clinical wisdom will resonate deeply with smart stakeholders. You don’t need “sexy.” You need inevitable, defensible, and essential.

Postscript

Every AI company in healthcare promises the same thing: faster scans, earlier detection, more efficient triage, personalized prescriptions, etc. That’s all well and good, and it can be powerful.

But we don’t need more intelligence to act. We need more intelligence to withhold. In a system addicted to more—more pills, more tests, more scans—the most radical innovation might be learning when to walk away.

Less is code. The future of medicine is subtractive machine learning.

That’s the real Hippocratic AI.

And the only one worthy of our trust.

Tomorrow Can’t Wait,

Rusha Modi MD MPH

🖊️ Share this newsletter with others

🎙 Listen to the Alchemy of Politics

The Tomorrow List by Rusha Modi, MD is where cutting-edge ideas in health, business, and technology converge. Designed for thinkers, innovators, and leaders, we explore the forces shaping the future of medicine, longevity, and human performance—while decoding how they intersect with economics, policy, and culture. Expect sharp insights, deep dives into emerging trends, and unconventional wisdom you won’t find in mainstream discourse. If you’re building, leading, or rethinking the future of health and society, welcome to your next strategic advantage.