Hey, I’m Guryash.
I’m a Senior Data Engineer at a Canadian health authority. I build AI systems inside a real hospital. Not demos. Not proofs of concept. Production systems that touch real patient data, real clinicians, and real compliance requirements.
I started this newsletter because I kept seeing the same pattern:
Healthcare organizations spend millions on AI initiatives that never make it past the pilot stage. Vendors sell the dream. Internal teams build the prototype. Leadership signs off on the budget. And then… nothing. The project dies quietly in a compliance review, or launches to three users who ignore it, or gets shelved because nobody planned for integration with the EHR.
I’ve seen it happen with seven-figure initiatives. Smart people. Good intentions. Real budget. Dead on arrival.
The problem wasn’t the technology. It was the approach.
After three years of building AI systems that actually survive contact with healthcare reality, I’ve distilled what works into a simple framework. Before any AI project gets green-lit, it needs to pass three questions:
Question 1: Does it solve a workflow problem that clinicians already complain about?
Not a problem that executives think exists. Not a problem that the AI vendor identified. A problem that the people who will actually use the system bring up in every staff meeting. If you can’t point to a specific, recurring complaint from the floor, stop. You’re building a solution in search of a problem.
Question 2: Can it run within existing compliance infrastructure?
HIPAA, PHIPA, PIPEDA, whatever your jurisdiction. The question isn’t “can we make it compliant?” It’s “can it operate within the security architecture we already have?” If the answer requires a new data pipeline, a new consent framework, and an exception from the privacy office, your 6-month project just became an 18-month project.
Question 3: What happens when the model is wrong?
Every AI system will produce incorrect outputs. In healthcare, incorrect outputs can harm patients. If your implementation plan doesn’t include a specific, documented fallback process for when the AI gets it wrong, and a way for clinicians to override it in under 10 seconds, you’re not ready to deploy.
I’ve seen a project fail at Question 1. It solved a reporting problem that mattered to administrators but created extra steps for frontline staff. The staff ignored it. Usage flatlined. Project cancelled.
Three questions. Five minutes. Could have saved millions.
Every week, I’ll share one real insight from inside the healthcare AI trenches. Implementation stories, frameworks, compliance lessons, and the stuff vendors won’t tell you.
No theory. No hype. Just what actually works when you’re deploying AI where the stakes are highest.
— Guryash
P.S. If you’re working on healthcare AI and want to go deeper, reply to this email with what you’re building. I read every response.
Want more? Follow me on LinkedIn where I share daily insights on healthcare AI implementation: linkedin.com/in/guryashsingh