VarenyaZ
Healthcare Case Study

When a Hospital Network Stopped Making Clinical Decisions in the Dark

A large healthcare network was sitting on 12 years of patient data locked across 12 isolated EHR systems. Readmissions were avoidable. Deteriorations were predictable. We built a real-time analytics platform that finally connected the dots—improving patient outcomes by 28% in the first year.

Project evidence
Healthcare & Population Health Management
Confidential Healthcare and Population Health Management client
Anonymized
7 min read

Challenge

In a network of this scale, healthcare data isn't scarce—it’s fragmented. The data wasn't missing; it was simply trapped in 12 systems that had never been asked to speak to each other, creating blind spots at the exact moment clinical decisions were being made.

Solution

We built six interconnected modules on a unified data architecture, translating 12 years of fragmented history into real-time, actionable clinical guidance.

Result

28%

Better patient outcomes

Timeline

20-week delivery

5 delivery phases

Team

6 specialist roles

Cross-functional delivery

Evidence

Anonymized

Project and post-launch operating period

Client Context

Business Context & Telemetry

Our client was a regional healthcare network spanning 8 hospitals and 14 outpatient facilities. While clinically respected, their technology had evolved in silos. Each facility ran its own EHR system—including two legacy platforms whose original vendors no longer existed. Twelve years of patient history was scattered. A patient seen at three different facilities had three separate records that no clinician could view simultaneously without requesting manual extracts.

Client Operating Profile

Scope, visibility, delivery context, and trust signals

9 signals
Executive Perspective

We had a patient readmitted four times in eight months. Each admission was treated as if it were the first. Different doctors, different facilities, same underlying condition that nobody connected because the dots were in four different systems. We had the data to flag her. We just couldn't see it.

CM

Chief Medical Officer

Client

Confidential Healthcare and Population Health Management client

Reach

Regional network spanning urban tertiary centres and rural district facilities

Surfaces

4 platforms

Evidence

Anonymized

Context Telemetry

Client operating details, platform surface area, and validation signals that shaped the work.

01
Client Visibility

Confidential Healthcare and Population Health Management client

Anonymized public case study

02
Company Size

Large integrated healthcare network

03
Team Size

3,800 clinical and administrative staff across 22 sites

04
Geography

Regional network spanning urban tertiary centres and rural district facilities

05
Core Platforms

Clinical Dashboard, Population Health Portal, Alerts App, Executive Reporting Suite

06
Founded

2001

07
Evidence Level

Anonymized

08
Measurement Window

Project and post-launch operating period

09
Metrics Note

Metrics are shown as client-reported or operating-period outcomes; confidential identifiers are removed where required.

The Challenge

Twelve years of clinical data, completely invisible to the clinicians who needed it most.

In a network of this scale, healthcare data isn't scarce—it’s fragmented. The data wasn't missing; it was simply trapped in 12 systems that had never been asked to speak to each other, creating blind spots at the exact moment clinical decisions were being made.

01

Predictable readmissions slipping through the cracks

An audit revealed 34% of 30-day readmissions shared visible risk factors at discharge (elevated markers, missing follow-ups). But pulling data from four different screens during a rushed 10-minute discharge workflow was impossible.

02

Bed management running on whiteboards and phone tag

Finding a transfer bed at the tertiary centre meant admissions teams calling each other for hours. Patients waited in emergency departments for beds that were actually empty, but digitally invisible.

03

Reactive, delayed outbreak responses

Seasonal disease patterns were tracked anecdotally. By the time a spike in respiratory or waterborne illnesses became obvious to the staff, the outbreak had already been developing for 10–14 days.

04

Clinical histories that took days to retrieve

When a physician needed records from a patient’s district-level visit six months prior, the manual request took 2–3 days. Doctors were forced to make immediate medication decisions based solely on the current admission.

05

Executives navigating by the rearview mirror

Monthly performance reports took the informatics team 12 days to manually compile. By the time the COO read them, the operational data was 6 weeks old.

Previous Attempts

They had tried building a centralized SQL data warehouse, but it was too technical for clinicians to use. They later added BI dashboards, but those only answered pre-configured executive questions. If a doctor had a specific clinical query, they still faced a 5-day wait for a manual report.

"The CMO didn't need a sales pitch on big data; she needed proof. She carried the story of the patient readmitted four times as a systemic failure. The question wasn't if analytics mattered, but whether a 22-site organization could actually deploy them without the effort collapsing under its own weight."

The Approach

We started with the decisions, not the dashboards.

Rather than starting with an abstract data architecture, we spent two weeks asking clinicians exactly what decisions they were forced to make with incomplete information. We worked backward from the workflow.

Discovery & Methods

We interviewed 42 stakeholders across 8 departments. The consensus was clear: the data existed, but it was fundamentally disconnected from the point of care. An audit of their 12 EHRs revealed a sobering reality—getting the data clean enough to trust would require serious engineering, not just a slick UI.

Interviews with 42 emergency physicians, ward clinicians, and bed managers
Technical audit of data models and APIs across all 12 EHR systems
Workflow mapping for 8 high-impact clinical decision types
3-day shadowing of bed management and transfer operations
Readmission cohort analysis validating the 34% preventable rate

The data was not the bottleneck. The distance between data and decision was.

A dashboard a clinician has to log into during a chaotic discharge is useless. But a targeted alert that surfaces directly inside their existing workflow, calculating specific risk factors and suggesting an action? That changes behavior. We designed the platform around the decision, not the data.

Design Philosophy

Three non-negotiable rules: 1) Zero workflow disruption—integrate into existing screens. 2) No 'black box' AI—every recommendation must explain its reasoning in clinical terms. 3) Honest data—if data quality is poor, the UI must explicitly show the clinician it's poor so they can contextualize the risk.

Constraints Respected

  • No rip-and-replace: We had to ingest data from all 12 legacy EHRs as they stood.
  • Zero learning curve: The UI had to be intuitive enough for exhausted staff to use without training.
  • Strict compliance: Full adherence to Indian health data regulations and HIPAA-equivalent audit logging.
  • Self-sustaining: The client’s 6-person informatics team had to be able to maintain the system post-launch.
The Solution

A clinical intelligence platform built to close the gap between insight and action.

We built six interconnected modules on a unified data architecture, translating 12 years of fragmented history into real-time, actionable clinical guidance.

Architecture Spec

Unified Health Data Lake

Function

Ingests, normalises, and deduplicates patient data from all 12 EHRs in near real-time. It uses fuzzy matching (MRN, Aadhaar, demographics) to stitch together a single, longitudinal patient record.

Impact

The foundation of the entire system. Treating a chronically ill patient as a 'stranger' because they walked into a different building ends here.
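The record-stitching step described above can be sketched in miniature. This is an illustrative example only; the field names, weights, and threshold are hypothetical, not the client's actual matching logic, which would also handle transliteration, nicknames, and blocking at scale:

```python
from difflib import SequenceMatcher

def match_score(rec_a: dict, rec_b: dict) -> float:
    """Weighted similarity between two patient records from different EHRs."""
    # An exact hit on a strong identifier (MRN or Aadhaar) is decisive.
    for key in ("mrn", "aadhaar"):
        if rec_a.get(key) and rec_a.get(key) == rec_b.get(key):
            return 1.0
    # Otherwise fall back to fuzzy demographic similarity.
    name_sim = SequenceMatcher(
        None, rec_a["name"].lower(), rec_b["name"].lower()
    ).ratio()
    dob_sim = 1.0 if rec_a["dob"] == rec_b["dob"] else 0.0
    sex_sim = 1.0 if rec_a["sex"] == rec_b["sex"] else 0.0
    return 0.5 * name_sim + 0.35 * dob_sim + 0.15 * sex_sim

def same_patient(rec_a: dict, rec_b: dict, threshold: float = 0.85) -> bool:
    """Decide whether two records belong to one longitudinal patient."""
    return match_score(rec_a, rec_b) >= threshold
```

A near-identical name with a matching date of birth clears the threshold even when one system is missing the MRN, which is exactly the situation that used to produce three disconnected records for one patient.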

Implementation Note
Kafka for real-time ingestion, normalising to a FHIR R4 standard. Apache Spark handles large-scale processing of the 12-year historical migration.
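The normalisation target is the standard FHIR R4 Patient resource. A minimal sketch of what one Kafka consumer's mapping step might look like, assuming a hypothetical legacy row layout (`patient_name`, `mrn`, `sex`, `dob`); the real adapters per EHR would be far richer:

```python
def to_fhir_patient(row: dict) -> dict:
    """Map one legacy EHR row onto a minimal FHIR R4 Patient resource."""
    given, _, family = row["patient_name"].partition(" ")
    return {
        "resourceType": "Patient",
        # Keep the source-system MRN so the record stays traceable.
        "identifier": [{"system": "urn:ehr:legacy-mrn", "value": row["mrn"]}],
        "name": [{"family": family, "given": [given]}],
        "gender": {"M": "male", "F": "female"}.get(row["sex"], "unknown"),
        "birthDate": row["dob"],  # assumed already ISO-8601 in this sketch
    }
```

Once every source emits the same resource shape, downstream consumers (risk models, dashboards, alerts) no longer need to know which of the 12 databases a record came from.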
Tech Stack
Apache Kafka & Spark

High-throughput streaming and historical data processing

Apache Druid

Sub-second query response at population scale for executive dashboards

FHIR R4 (HL7)

Standardized clinical data model unifying 12 disparate databases

LightGBM + SHAP

Predictive modeling with built-in, clinician-friendly explainability

React + Next.js

Fast, responsive frontends for dashboards and clinical portals

AWS (EMR, EKS, S3)

HIPAA-compliant, elastic infrastructure for data lakes and microservices

Design Decision

Risk scores explain 'Why'.

A '78% risk score' doesn't tell a doctor what to do. '78% risk because of 2 recent readmissions and no scheduled follow-up' does. Explainability isn't a buzzword; it's the only way to make AI clinically actionable.
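The last step of that pipeline, turning per-feature attributions (such as the ones a SHAP explainer produces) into a clinician-readable sentence, could be sketched like this. Feature names, phrasings, and contribution values here are hypothetical:

```python
# Hypothetical mapping from model features to clinical phrasing.
REASON_TEXT = {
    "readmits_90d": "{v:.0f} readmission(s) in the last 90 days",
    "followup_scheduled": "no scheduled follow-up",  # fires when value is 0
    "hba1c_last": "last HbA1c of {v:.1f}%",
}

def explain(risk: float, contributions: dict) -> str:
    """Render a risk score plus its top drivers as one readable line.

    `contributions` maps feature name -> (value, contribution), i.e. the
    per-feature attributions an explainer like SHAP would produce.
    """
    # Take the two features with the largest absolute contribution.
    top = sorted(contributions.items(), key=lambda kv: -abs(kv[1][1]))[:2]
    reasons = [REASON_TEXT[name].format(v=value) for name, (value, _) in top]
    return f"{risk:.0%} readmission risk: " + "; ".join(reasons)
```

The point of the design decision is in the output format: the clinician sees the drivers, not just the number, so the alert reads as a clinical argument rather than a verdict.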

Design Decision

Visible data quality scores.

If a patient's allergy history is only 60% complete, the UI explicitly states '60% Confidence'. Clinicians distrust platforms that pretend to know everything. Visible uncertainty actually builds trust.
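A confidence score like that can be as simple as field completeness across a patient's source records. A minimal sketch, with illustrative field names and an assumed 80% caution threshold:

```python
def field_confidence(records: list, field: str) -> float:
    """Share of a patient's source records that actually populate `field`."""
    if not records:
        return 0.0
    filled = sum(1 for r in records if r.get(field) not in (None, "", "UNKNOWN"))
    return filled / len(records)

def confidence_label(score: float) -> str:
    """UI label; flags anything below the caution threshold explicitly."""
    label = f"{score:.0%} confidence"
    return label + (" - interpret with caution" if score < 0.8 else "")
```

Surfacing the score instead of hiding incomplete fields is what lets a clinician weigh "60% confidence" allergy data appropriately instead of assuming the record is authoritative.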

Execution

Twenty weeks to launch. And a massive data cleanup that was worth every delayed day.

We planned for messy data, but reality was worse. We intentionally delayed the clinical launch by three weeks just to remediate legacy EHR data. In healthcare, shipping on poor data isn't a bug—it’s a patient safety risk.

Delivery Timeline

Operational Log

1

Discovery & Architecture

Weeks 1–7

Workflow mapping, Kafka pipeline build, and API adapters developed for all 12 systems. Initial profiling revealed severe data inconsistencies in 4 legacy systems.

2

Data Quality Remediation

Weeks 8–11

A heavy three-week sprint dedicated purely to cleaning duplicates, non-standard medications, and missing values. The CMO backed the delay without hesitation to ensure clinical safety.

3

Models & Population Surveillance

Weeks 12–15

Readmission and deterioration models trained and validated against hold-out datasets. Surveillance algorithms calibrated against 18 months of historical outbreak data.

4

Bed Intelligence & EHR Integration

Weeks 16–18

Real-time occupancy network deployed. Clinical governance committee pressure-tested and approved 24 distinct alert types before EHR integration.

5

Full Rollout & Knowledge Transfer

Weeks 19–20

Phased launch across 22 sites. We spent two full weeks pair-working with the internal informatics team to ensure they could independently manage and scale the platform.

Team Topology

Deployed Roster

1 × Engagement Lead
2 × Data Engineers (Kafka, Spark, FHIR)
1 × ML Engineer (Readmission & Surveillance Models)
2 × Backend Engineers (Alerts & Bed Intelligence)
1 × Frontend Developer
1 × Product Designer

Collaboration

Working Rhythm

Our monthly clinical governance reviews were the battlefield where alert logic was forged. Physicians aggressively challenged model outputs, negotiating the exact boundary between what the system should recommend and what should be left to clinical judgment. That friction is what made the system safe.

Course Corrections

Diagnostic Log

Friction Point

Two legacy EHRs were so old the original vendors were out of business. One was a 15-year-old undocumented database built by a sysadmin who had since retired.

Resolution

We located the retired sysadmin and brought him in for two days to help reverse-engineer the schema. We built read-only adapters to protect the fragile environments and safely migrated 80,000 patient records with zero data loss.

Friction Point

The readmission risk model performed poorly on maternity patients, as the clinical predictors for obstetrics differ wildly from general medicine.

Resolution

We built a separate, specialized obstetric model. Because the data pool was smaller, we extended the training window to 36 months and explicitly communicated the model's evolving confidence levels to the maternity ward. Honesty won their buy-in.

Friction Point

Initial alert fatigue. In the first two weeks, doctors were blindly ignoring two specific alert types, dropping acceptance rates below 30%.

Resolution

We didn't blame the users; we blamed the design. We suspended the alerts, narrowed the trigger conditions with the clinical leads to remove false positives, and relaunched. Acceptance rates immediately jumped to 75%.
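"Narrowing the trigger conditions" amounts to tightening the predicate behind an alert. A toy sketch of the before/after shape, with entirely hypothetical vitals thresholds:

```python
from dataclasses import dataclass

@dataclass
class VitalsSnapshot:
    heart_rate: int
    spo2: int
    trend_worsening: bool  # sustained deterioration over recent readings

# Original trigger: any single abnormal vital. Fires constantly, so
# clinicians learn to ignore it (the alert-fatigue failure mode).
def fires_v1(v: VitalsSnapshot) -> bool:
    return v.heart_rate > 100 or v.spo2 < 95

# Narrowed trigger: requires a sustained worsening trend AND two
# strongly abnormal signals, trading recall for clinical relevance.
def fires_v2(v: VitalsSnapshot) -> bool:
    return v.trend_worsening and v.heart_rate > 110 and v.spo2 < 92
```

The relaunched alerts fired far less often, but when they did, they meant something, which is what moved acceptance from under 30% to 75%.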

Measured Impact

Twelve months later: 28% better outcomes, real-time beds, and proactive decisions.

The metrics were incredible, but the cultural shift was profound. Physicians now openly discuss AI risk scores during discharge rounds. Bed managers orchestrate transfers with total visibility. The data finally became part of how the network thinks.

Primary KPI · Verified Metric

28%

Better patient outcomes

composite improvement across readmissions, complications, and recovery

Annual cost savings

₹1.9Cr

driven by optimized bed usage and plummeting readmission rates

Admin time saved

35%

freed informatics staff from manual reporting; freed doctors from hunting for records

Qualitative Objectives Reached

  • The surveillance system caught an atypical pneumonia cluster 9 days before manual tracking would have, allowing the public health team to mobilize resources and alter the outcome for at least 40 patients.
  • The 12-day executive reporting cycle was eliminated. The COO now manages network capacity daily based on real-time realities, not historical anecdotes.
  • Readmissions for high-risk chronic patients dropped from 18.4% to 11.2%. The discharge team attributed this directly to the in-workflow EHR alerts prompting mandatory medication checks.

"Before this platform, I walked into morning bed meetings working off a printout from the night before, while my managers relied on hours-old phone calls. Now, the screen shows me exactly where we are—network-wide, right now. Last Tuesday we spotted a capacity crisis 14 hours before it hit and pre-positioned staff to handle it. That doesn't make the news. But that's what this platform does for us, every week."

Chief Operating Officer

Healthcare Network Client

Key Learnings

Insights Gained

Valuable lessons and strategic insights uncovered through this project that inform our future work and architectural decisions.

01

Data quality is a clinical safety issue, not an IT nuisance.

You can't 'fix it in post' when it comes to healthcare data. A readmission model fed on bad data produces dangerous recommendations. Delaying our launch to scrub 12 years of legacy records is the exact reason clinicians trusted the platform on day one.

02

Alert fatigue is a UX failure.

When doctors ignore alerts, it means the system is noisy and irrelevant. Treating low acceptance rates as a design flaw—and actively tuning the logic in clinical governance meetings—is mandatory for adoption.

03

The distance between data and decision is everything.

A beautiful dashboard hidden behind a login screen will never be used during a chaotic shift. Analytics only matter when they are injected directly into the moment the decision is being made.

Let's Work Together

Your clinical data already knows things your team doesn't. Can it tell them in time?

We build analytics platforms for networks with deeply fragmented systems, legacy tech, and exhausted staff. We know what it takes to go from messy data to clinical decision support that actually gets used. Tell us about your data landscape, and we'll give you an honest view of what's achievable.

"No generic big data pitch. A real conversation about your clinical reality."