When a Wellness App Stopped Tracking Numbers and Started Understanding People
A wellness startup had an app users loved—until week two. The platform tracked fitness, nutrition, and mental health, but treated them as isolated silos. We rebuilt the intelligence layer to see the 'whole person,' and a million users made it part of their daily lives in the first year.
Business Context & Telemetry
Our client had done the hard thing: they built a wellness platform people actually downloaded. It featured three beautifully designed modules for fitness, nutrition, and mental health. The fatal flaw? The modules didn't talk to each other. A user logging three nights of terrible sleep would still receive a push notification for a grueling HIIT workout. The app collected data about the whole person, but it was looking at three separate people. Retention was bleeding, and the founders knew their product was failing its core mission.
Series A startup
38 people, including a medical advisory board of 6 clinicians and nutritionists
India-first consumer launch, with a B2B corporate wellness track
iOS, Android, Web Dashboard, Corporate Admin Portal
2021
“We were tracking sleep, stress, nutrition, and exercise — all in one app. But we were treating them like four separate spreadsheets. The whole point of having all that data together was to understand the person. We weren't doing that.”
Co-founder & CEO
All the right data. None of the right connections.
Wellness is not four separate problems. Sleep affects exercise capacity. Stress dictates nutritional choices. An app that treats these as independent modules isn't actually doing wellness; it's just running four parallel trackers in a single interface.
Blind recommendations
The fitness module recommended workouts based purely on a stated goal (e.g., 'weight loss'), completely ignoring the user's current physical reality. Recommending a 45-minute sprint session to a user who just logged 5 hours of sleep and high stress isn't coaching—it's hazardous.
The 'Day 14' cliff edge
Analytics showed a massive user drop-off right at the two-week mark. The novelty of tracking wore off because the app felt like a void. Users were painstakingly logging their lives, but the app wasn't changing its behavior in response.
Mental health as an afterthought
The daily mood check-in felt like homework. If a user logged a week of severe stress, the app didn't adjust their meal plans, didn't offer a supportive prompt, and didn't ease their workout load. It was a data input with zero output.
Corporate wellness without proof
The B2B track had signed 40 corporate clients but was struggling to retain them. HR teams had no aggregate reporting to prove to their leadership that the wellness benefit was actually making employees healthier.
Passive wearable integration
The app connected to Apple Health and Google Fit to pull steps and heart rates, but only used them to draw pretty charts. The data never actively influenced the app's coaching decisions.
Before coming to us, the team tried fixing retention with gamification—streaks, badges, and step-count leaderboards. It caused a brief spike before returning to baseline. Their honest retrospective? They had treated a *meaning* problem as a *motivation* problem. Badges don't fix an app that feels indifferent to your actual well-being.
"The founders built this product because they hated the fragmented approach of the health-tech industry. Yet, their execution had accidentally reproduced that exact fragmentation inside their own app. Closing the gap between their vision and their reality was what kept the CEO awake at night."
People don't want to be optimized. They want to be understood.
Before discussing AI architectures, we spent three weeks interviewing users to understand what their health actually felt like. The goal wasn't to build a better tracker; it was to build a tool that paid attention.
Discovery & Methods
We recruited 24 users—loyalists, churned users, and non-users—and asked them to describe their last two weeks of health. We mapped the app's engagement logs against behavioral science literature. The findings redefined the product: users who stayed felt the app was 'responsive,' while users who churned felt the app was 'indifferent.'
An app that records isn't enough. It has to listen.
Users don't churn because an app lacks features; they churn because logging a terrible week yields the exact same user experience as logging a great one. The intelligence layer we needed to build wasn't about mathematically perfect recommendations. It was about making users feel genuinely seen.
Design Philosophy
Cross-pillar or nothing. If a feature couldn't synthesize fitness, sleep, and stress simultaneously, we wouldn't build it. Furthermore, we adopted the clinical principle: *first, do no harm*. Pushing a stressed, sleep-deprived user to hit a calorie deficit isn't optimization; it's damage. The medical board didn't just review our AI—they co-designed its constraints.
Constraints Respected
- Clinical Safety: Every automated recommendation had to be explainable in clinical terms (no 'black box' AI).
- Strict Privacy: Corporate analytics required rigorous anonymization; individual employee data could never be exposed.
- HIPAA Compliance: Data sovereignty and privacy were architected as first-class requirements.
- Offline Capability: The app had to function meaningfully on mid-range Android devices with spotty connectivity.
A holistic intelligence layer that sees the whole person — and adapts to them daily.
We built five interconnected capabilities that wove the fragmented modules into a single, highly responsive health companion.
Unified Health Intelligence Profile
A real-time, continuous model of the user’s health state. It synthesizes fitness logs, meals, mood check-ins, sleep cycles, and wearable data into a single, weighted composite.
This is the bedrock of the app. It ended the era of 'blind recommendations' by finally giving the platform a 360-degree view of whether a user was thriving or struggling on any given day.
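As a rough illustration of the idea, the pillars can be collapsed into a single readiness score. This is a minimal sketch only: the field names and weights below are illustrative assumptions, not the production model, whose weighting was set and vetted by the medical advisory board.

```python
from dataclasses import dataclass

@dataclass
class PillarScores:
    """Hypothetical per-pillar scores, each normalised to 0..1."""
    fitness: float
    nutrition: float
    sleep: float
    stress: float  # 1.0 = fully recovered / low stress

# Illustrative weights only -- the shipped weighting was clinician-designed.
WEIGHTS = {"fitness": 0.25, "nutrition": 0.25, "sleep": 0.30, "stress": 0.20}

def composite_health_state(p: PillarScores) -> float:
    """Collapse the four pillars into one weighted readiness score."""
    raw = (WEIGHTS["fitness"] * p.fitness
           + WEIGHTS["nutrition"] * p.nutrition
           + WEIGHTS["sleep"] * p.sleep
           + WEIGHTS["stress"] * p.stress)
    return round(raw, 3)
```

A single number like this is what lets every downstream module ask one question: is this user thriving or struggling today?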
Multi-modal health state model combining tabular metrics with time-series wearable patterns. Weighting methodology was rigorously vetted by the medical advisory board.
Unified mobile experience with offline caching; performant web portals for B2B admins
Modular AI microservices for cross-pillar recommendations and state modeling
High-throughput real-time event streaming for 1M+ active users
Efficient querying of massive time-series wearable metrics
Sub-second caching to handle the massive 6:00 AM daily plan generation spike
HIPAA-compliant, auto-scaling infrastructure for nightly inference jobs
The AI must explain itself.
“User research proved that people follow recommendations they understand. 'Light yoga, 20 minutes' is a generic command. 'Light yoga today because your sleep score was low and your body needs recovery' is a conversation. Explanations increased follow-through by 34%.”
Mental health logs have immediate, visible consequences.
“We moved the mood check-in from a buried menu into the primary morning flow. More importantly, logging high stress instantly triggered a visible adjustment to the day's fitness plan. Once users saw the app was actually 'listening,' completion rates for mental health check-ins jumped from 23% to 71%.”
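The "visible consequence" mechanic can be sketched in a few lines. The thresholds and field names here are illustrative assumptions, not the clinical rules the advisory board actually authored.

```python
def adjust_daily_plan(plan: dict, stress_level: int, sleep_hours: float) -> dict:
    """Downgrade workout intensity when the morning check-in reports
    high stress or short sleep. Thresholds are illustrative only."""
    adjusted = dict(plan)
    if stress_level >= 4 or sleep_hours < 6:
        adjusted["workout"] = "light yoga, 20 min"
        adjusted["note"] = "Eased today's plan after your check-in."
    return adjusted
```

The point is not the specific rule but the immediacy: the user logs stress at 7:00 and sees a gentler plan at 7:01.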
Fifteen weeks to launch. Co-designed with clinicians, not just engineers.
In wellness AI, getting it wrong can cause physical harm. We structured the build so that the medical advisory board was actively designing the logic, rather than just rubber-stamping it at the end.
Delivery Timeline
Operational Log
Clinical Framework & Research
Weeks 1–3: We ran full-day workshops with the medical advisory board to establish clinical guardrails, categorizing every potential recommendation by the health states where it should and shouldn't surface.
Unified Profile & Data Architecture
Weeks 4–6: Built the multi-modal health state model and overhauled the wearable ingestion pipelines. The HIPAA-compliant architecture was locked and legally signed off.
Recommendation Engine & Daily Plan
Weeks 7–10: Trained the cross-pillar model and built the nightly plan generation pipeline. The critical 'plain English' explanation layer was authored in tandem with clinicians.
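One common way to build such an explanation layer is a clinician-authored template table keyed by the signals that drove the recommendation. The templates and keys below are hypothetical stand-ins for the copy the clinicians actually wrote.

```python
# Hypothetical reason templates; the shipped copy was clinician-authored.
REASON_TEMPLATES = {
    "low_sleep": "because your sleep score was low and your body needs recovery",
    "high_stress": "because your stress has been elevated this week",
}

def explain(recommendation: str, reasons: list) -> str:
    """Attach a plain-English 'why' to a recommendation so it reads as
    a conversation ('light yoga today because...') rather than a command."""
    if not reasons:
        return recommendation
    clauses = " and ".join(REASON_TEMPLATES[r] for r in reasons)
    return f"{recommendation} today {clauses}"
```

Because every template maps back to a named clinical signal, each explanation is auditable: no 'black box' sentences ever reach the user.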
Corporate Dashboards & Wearables
Weeks 11–13: Deployed the privacy-preserving B2B analytics portal and finalized deep integrations with 50+ wearable platforms to feed the recommendation engine.
Beta Launch & Rollout
Weeks 14–15: Soft-launched to 2,000 users. We monitored usage daily, using a clinical review queue to catch and refine any edge-case recommendations before the full 1M+ user rollout.
Team Topology
Deployed Roster
Collaboration
Working Rhythm
The medical advisory board was our most important technical collaborator. We ran fortnightly review sessions where a sports physician and a registered dietitian personally evaluated recommendation logic. It added time to the build, but it resulted in a medically sound product the founders could confidently scale.
Course Corrections
Diagnostic Log
Mental health boundary conditions. If a user logged persistent, severe depressive signals, the app needed to respond without illegally crossing into medical diagnosis or therapy.
We engineered a 'care signal' trigger. Instead of offering AI therapy, the app surfaced a warm, non-alarmist prompt validating the user's difficult week and offering direct signposting to professional mental health resources. It acknowledged and referred, keeping the app strictly within consumer wellness boundaries.
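The trigger itself can be as simple as a persistence check over recent logs. This is a minimal sketch with illustrative thresholds; the real boundary conditions were defined with the medical advisory board, and the prompt copy was clinician-written.

```python
from typing import Optional

def care_signal(stress_logs: list, threshold: int = 4, days: int = 7) -> Optional[str]:
    """Surface a supportive, non-diagnostic prompt only when severe stress
    persists across a full window -- acknowledging and referring out,
    never diagnosing. Threshold and window are illustrative."""
    recent = stress_logs[-days:]
    if len(recent) == days and all(s >= threshold for s in recent):
        return ("It looks like this week has been hard. "
                "If you'd like support, here are professional resources.")
    return None  # no prompt for transient or incomplete signals
```

Requiring the full window keeps the prompt rare and meaningful, and the hard-coded referral text keeps the app firmly on the consumer-wellness side of the line.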
Noisy training data. A user skipping a workout could mean the AI's recommendation was wrong, or it could just mean the user had a busy day at work. We couldn't train the model on assumptions.
We injected a simple 'Did you do this?' feedback loop into the evening UX. Explicit feedback (rather than inferred behavior) drastically improved the accuracy of the recommendation model, and users gladly participated because they knew it made their future plans better.
Corporate privacy vs. HR utility. HR teams wanted granular data, but small departments (e.g., a 5-person marketing team) risked exposing individual employee health metrics.
We enforced a strict tiered disclosure model. Department-level metrics only unlocked if the group had 20+ active users. Instead of complaining, HR teams actually used this as an incentive, actively promoting the app internally to hit the numbers required to unlock their team's dashboard.
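The core of the tiered disclosure model is a hard minimum-group-size gate before any aggregate is computed. A minimal sketch, with a hypothetical metric field; the production portal applied this across every report, not just averages.

```python
MIN_GROUP_SIZE = 20  # department metrics stay locked below this headcount

def department_metrics(active_users: list):
    """Return aggregate-only metrics, and only when the group is large
    enough that no individual's data can be inferred. The
    'wellness_score' field is an illustrative stand-in."""
    if len(active_users) < MIN_GROUP_SIZE:
        return None  # dashboard stays locked for small teams
    avg = sum(u["wellness_score"] for u in active_users) / len(active_users)
    return {"headcount": len(active_users), "avg_wellness_score": round(avg, 1)}
```

Because the gate is structural rather than a permissions setting, no HR request or configuration change can expose a five-person team's data.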
One million active users. And the clinical data to prove it was actually working.
Retention spiked immediately, but the real victory came months later when the health outcome metrics rolled in. The data validated the core thesis: holistic, attentive AI produces meaningfully better human health than any single-pillar app ever could.
30%
Better health outcomes
composite improvement across tracked fitness, nutrition, and mental metrics
60%
User retention
crushing the 18% industry average for standalone wellness apps
1M+
Active users
rapid adoption across consumer and B2B corporate segments
Qualitative Objectives Reached
- Users who previously churned at Day 14 returned and stayed. In re-engagement surveys, the most common praise was a variation of: 'It actually noticed.'
- The B2B sales cycle dropped from 11 weeks to 4 weeks. Armed with the new analytics dashboard, existing clients began heavily referring the platform to peer organizations.
- Three members of the medical advisory board co-authored a peer-reviewed research paper on the platform's cross-pillar AI methodology, citing its clinical significance in driving behavioral change.
"I've used probably fifteen wellness apps over the years. They all track everything, show you charts, and after two weeks you stop opening them. This one was different. It told me to take it easy on a Thursday because my sleep had been bad, and I realized it was the first time an app had actually paid attention to me. That sounds like a small thing. For me, it's the reason I'm still using it eight months later."
Re-engaged individual subscriber, 8 months post-relaunch
Insights Gained
Valuable lessons and strategic insights uncovered through this project that inform our future work and architectural decisions.
In health tech, AI errors cause harm, not just bad UX.
A bad e-commerce recommendation is annoying. A bad wellness recommendation—like pushing a sleep-deprived user into high-intensity training—causes physical harm. Hard-coding clinical guardrails into your AI isn't project overhead; it is the fundamental requirement to operate in this space.
Explanation builds more trust than accuracy.
We assumed perfect recommendations would drive retention. In reality, *explaining* the recommendation drove retention. When the AI told users why it was suggesting a change, follow-through skyrocketed. In highly personal domains, transparency beats a 'black box' every time.
Holistic data is useless without holistic processing.
The client already had the data. Their failure was processing fitness, sleep, and stress in isolation. The massive leap in user engagement didn't come from a new data source; it came from finally synthesizing the data they already owned. Synthesis is the ultimate moat.
Capabilities & Archive
Building a health or wellness product and finding that the data you're collecting isn't translating into the outcomes you promised? That gap is usually a synthesis problem, not a data problem.
Services Leveraged
Health data that doesn't change behavior isn't an asset. It's a liability.
We build wellness AI that clinicians trust and users actually keep using. If your platform's engagement or health outcomes are falling short of your vision, tell us about it. We'll give you an honest read on what's missing.
"No wellness buzzwords. A real conversation about your product and your users."
