A healthcare team built a predictive AI dashboard.

It was genuinely impressive. Clean design. Real-time patient risk scores. The kind of thing that looks great in a demo and even better in a board presentation.

Leadership signed off. IT deployed it. Everyone expected adoption to follow.

It didn't.

Clinicians opened it during the first week. A few explored it out of curiosity. By week three, usage had flatlined. The dashboard sat there, technically running, practically invisible.

The problem wasn't the model. The predictions were accurate. The problem wasn't the data. The pipeline was solid. The problem was that nobody had asked a simple question: how does a clinician actually use this during their day?

THE 90-SECOND PROBLEM

Here's something that doesn't show up in most AI project plans: clinicians don't have time to go looking for information.

A nurse finishing a medication round has maybe 90 seconds before the next task. A physician between patient rooms is mentally loading the next case. A pharmacist reviewing orders is working through a queue.

None of these people are going to stop what they're doing, open a new application, navigate to a dashboard, find their patient, interpret a risk score, and then decide what to do with it. That's five steps too many.

The dashboard was designed for someone with 10 minutes to explore data. The people who needed the information had 90 seconds to act on it.

This is the gap that kills most healthcare AI dashboards. Not bad models. Bad delivery.

PULL VS PUSH

I've started thinking about AI outputs in two categories, and the distinction has changed how I approach every project.

Pull tools are things you have to go check. Dashboards. Portals. Reports you have to log into a system to find. They require the user to remember the tool exists, navigate to it, and spend time interpreting what they see.

Push tools are things that come to you. An alert in the EHR. An automated flag on a patient chart. An inline suggestion that appears where you're already working. They require zero extra steps from the clinician.

Here's what I've observed across multiple deployments:

-> Pull tools get used by leadership and analysts. People whose job is to look at data. They work for that audience
-> Pull tools almost never get sustained adoption from frontline clinicians. The initial curiosity fades within weeks
-> Push tools that are embedded in existing workflows get used daily. Not because they're better technology. Because they don't ask the clinician to change their behavior
-> The best push tools are invisible. The clinician doesn't think "I'm using an AI tool." They think "this alert is useful"

WHAT THE FIX ACTUALLY LOOKED LIKE

The team with the ignored dashboard didn't scrap the whole project. They stripped it down.

They took the predictive model that powered the dashboard and rebuilt the output layer. Instead of a standalone application with visualizations, they created three automated alerts that pushed directly into the EHR:

-> A high-risk flag that appeared on the patient's chart when the model predicted deterioration within 24 hours
-> A medication interaction warning that surfaced during the ordering workflow, not on a separate screen
-> A daily summary pushed to the charge nurse's existing handoff report, no login required
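The push logic behind alerts like the first one is mostly threshold rules layered on top of the model's output. Here is a minimal sketch of that pattern, not the team's actual implementation: the names (`Patient`, `risk_score`, `push_to_chart`) and the 0.8 threshold are my assumptions, and a real deployment would call an EHR integration API (HL7/FHIR or a vendor interface) instead of returning a string. The other two alerts follow the same shape with different triggers.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Patient:
    patient_id: str
    risk_score: float  # model-predicted probability of deterioration within 24 hours

def push_to_chart(patient_id: str, message: str) -> str:
    """Hypothetical EHR hook: in practice this would be an HL7/FHIR or vendor API call."""
    return f"[CHART {patient_id}] {message}"

def deterioration_flag(p: Patient, threshold: float = 0.8) -> Optional[str]:
    """High-risk flag: fires only when the model's score crosses the threshold.

    The clinician never opens anything; the flag appears on the chart
    they are already looking at.
    """
    if p.risk_score >= threshold:
        return push_to_chart(p.patient_id, "High risk of deterioration within 24h")
    return None
```

The point of the sketch is what's absent: no dashboard, no navigation, no new login. The model's output is reduced to a single push decision at the point of care.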

Same model. Same data. Completely different delivery.

Adoption went from near-zero to daily use across the unit within two weeks. Not because the clinicians suddenly became more tech-savvy. Because the information started showing up where they already were.

THE DECISION FRAMEWORK

Before you build any AI output in healthcare, ask one question: does the person who needs this information have time to come find it?

If yes, a dashboard or portal might work. This is usually leadership, quality teams, and population health analysts.

If no, the output needs to go to them. Inside their existing tools. In their existing workflow. With zero extra clicks.

Here's how I think about the four output options:

-> Dashboard: Best for strategic oversight. Weekly reviews. Trend analysis. Audience is leadership and analysts
-> Alert: Best for time-sensitive clinical decisions. Appears in the EHR or at the point of care. Audience is clinicians during active patient care
-> Inline suggestion: Best for decision support during existing workflows. Appears inside the tool they're already using, at the moment they need it. Audience is anyone making a decision in real time
-> Automated action: Best for routine tasks that don't need human review every time. Runs in the background. Audience is the system itself, with human oversight on exceptions

Most healthcare AI projects default to dashboards because they're the easiest to demonstrate. The projects that get adoption start by asking which of these four formats actually fits how the end user works.

CONNECTING THE DOTS

This ties directly back to the vendor question from Issue #3 about designing for where the clinician already works. And to the Phase 2 guidance from Issue #4 about shadowing clinicians before finalizing any interface.

The pattern is the same every time. The teams that succeed don't build the most impressive output. They build the output that fits the workflow with the least friction.

A dashboard nobody opens is a failed project, regardless of how accurate the model behind it is.

A single alert that reaches the right person at the right time is worth more than every visualization on a screen nobody checks.

Build for the 90-second window. Everything else is a demo.

- Guryash

P.S. If you've been part of a project where the output format made or broke adoption, I want to hear about it. What worked? What didn't?

Want more? Follow me on LinkedIn where I share daily insights on healthcare AI implementation: linkedin.com/in/guryashsingh

BEEHIIV BONUS (email version only)

BONUS: Push vs Pull Decision Matrix

Use this framework before designing any AI output in healthcare. For each use case, work through the four questions to determine the right delivery format.

STEP 1: WHO IS THE PRIMARY USER?

-> Executive / leadership team -> Likely pull (dashboard)
-> Quality or analytics team -> Likely pull (dashboard or report)
-> Frontline clinician (physician, nurse, pharmacist) -> Likely push (alert or inline)
-> Administrative staff -> Could be either; depends on Step 2

STEP 2: WHEN DO THEY NEED THE INFORMATION?

-> During a weekly or monthly review -> Pull (dashboard)
-> During active patient care -> Push (alert or inline suggestion)
-> At a specific decision point in a workflow -> Push (inline suggestion)
-> They don't need to see it at all; the action should just happen -> Automated action

STEP 3: HOW MUCH TIME DO THEY HAVE?

-> 10+ minutes (dedicated review time) -> Dashboard works
-> 2-10 minutes (between tasks) -> Alert or summary report
-> Under 2 minutes (active workflow) -> Inline suggestion only
-> Zero (shouldn't require human attention for routine cases) -> Automated action

STEP 4: WHAT'S THE COST OF MISSING IT?

-> Low (informational, no immediate impact) -> Dashboard is fine. If they miss it today, they'll see it tomorrow
-> Medium (affects planning or resource allocation) -> Alert with reasonable priority
-> High (affects patient care decisions) -> Inline suggestion or high-priority alert embedded in clinical workflow
-> Critical (patient safety) -> Automated action with human override for exceptions

DECISION MATRIX:

-> If mostly pull answers: Build a dashboard. Design it for the review cadence of the audience (weekly, monthly, real-time)
-> If mixed: Start with push for the clinical users, add a dashboard layer for leadership reporting
-> If mostly push answers: Do not build a dashboard. Build alerts or inline suggestions embedded in the EHR or clinical workflow
-> If mostly automated: Build the automated workflow with an exception queue and oversight dashboard for edge cases
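The four steps above boil down to a tally: each answer votes for pull, push, or automated, and the majority picks the format. Here is one way to encode that as a small sketch. The answer keys and the "3 of 4 votes = clear majority" rule are my assumptions for illustration, not a standard taxonomy.

```python
from collections import Counter

# Map each possible answer in Steps 1-4 to a delivery category.
# "pull" = dashboard/report, "push" = alert/inline, "auto" = automated action.
# Administrative staff cast no vote (Step 1 defers to Step 2 for them).
ANSWER_CATEGORY = {
    # Step 1: primary user
    "executive": "pull", "analytics": "pull", "clinician": "push", "admin": None,
    # Step 2: when they need the information
    "periodic_review": "pull", "active_care": "push",
    "decision_point": "push", "no_review_needed": "auto",
    # Step 3: how much time they have
    "10min_plus": "pull", "between_tasks": "push", "under_2min": "push", "zero": "auto",
    # Step 4: cost of missing it
    "low": "pull", "medium": "push", "high": "push", "critical": "auto",
}

def recommend_format(answers: list[str]) -> str:
    """Tally one answer per step; a clear majority (3+ of 4) picks the format."""
    votes = Counter(ANSWER_CATEGORY[a] for a in answers
                    if ANSWER_CATEGORY[a] is not None)
    category, count = votes.most_common(1)[0]
    if count < 3:
        return "mixed: push for clinical users, dashboard layer for leadership"
    return {
        "pull": "dashboard",
        "push": "alerts or inline suggestions in the EHR",
        "auto": "automated action with an exception queue",
    }[category]
```

For example, a clinician who needs the output during active care, in under two minutes, with high stakes, gets `"alerts or inline suggestions in the EHR"`; an executive doing periodic review with 10+ minutes and low stakes gets `"dashboard"`.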

COMMON MISTAKES:

-> Building a dashboard when the primary user is a clinician with no review time
-> Building an alert for something that's informational, not actionable (this creates alert fatigue)
-> Building an inline suggestion that interrupts rather than assists the workflow
-> Skipping the automated action option because it feels less impressive than a dashboard

This matrix is exclusive to the email edition of HealthTech Singh. Use it before your next AI output design decision.
