
When a Wellness App Stopped Tracking Numbers and Started Understanding People

A wellness startup had an app users loved—until week two. The platform tracked fitness, nutrition, and mental health, but treated them as isolated silos. We rebuilt the intelligence layer to see the 'whole person,' and a million users made it part of their daily lives in the first year.

Wellness Tech · AI Personalisation · Health & Fitness · Mental Health · Corporate Wellness
Core_Architecture
30%
Better health outcomes
60%
Higher 90-day retention
1M+
Active users in year one
Client Dossier

Business Context & Telemetry

Our client had done the hard thing: they built a wellness platform people actually downloaded. It featured three beautifully designed modules for fitness, nutrition, and mental health. The fatal flaw? The modules didn't talk to each other. A user logging three nights of terrible sleep would still receive a push notification for a grueling HIIT workout. The app collected data about the whole person, but it was looking at three separate people. Retention was bleeding, and the founders knew their product was failing its core mission.

[Company Size]

Series A startup

[Team Size]

38 people, including a medical advisory board of 6 clinicians and nutritionists

[Geography]

India-first consumer launch, with a B2B corporate wellness track

[Core Platforms]

iOS, Android, Web Dashboard, Corporate Admin Portal

[Founded]

2021

Executive Perspective

We were tracking sleep, stress, nutrition, and exercise — all in one app. But we were treating them like four separate spreadsheets. The whole point of having all that data together was to understand the person. We weren't doing that.


Co-founder & CEO

The Challenge

All the right data. None of the right connections.

Wellness is not four separate problems. Sleep affects exercise capacity. Stress dictates nutritional choices. An app that treats these as independent modules isn't actually doing wellness; it's just running four parallel trackers in a single interface.

01

Blind recommendations

The fitness module recommended workouts based purely on a stated goal (e.g., 'weight loss'), completely ignoring the user's current physical reality. Recommending a 45-minute sprint session to a user who just logged 5 hours of sleep and high stress isn't coaching—it's hazardous.

02

The 'Day 14' cliff edge

Analytics showed a massive user drop-off right at the two-week mark. The novelty of tracking wore off because the app felt like a void. Users were painstakingly logging their lives, but the app wasn't changing its behavior in response.

03

Mental health as an afterthought

The daily mood check-in felt like homework. If a user logged a week of severe stress, the app didn't adjust their meal plans, didn't offer a supportive prompt, and didn't ease their workout load. It was a data input with zero output.

04

Corporate wellness without proof

The B2B track had signed 40 corporate clients but was struggling to retain them. HR teams had no aggregate reporting to prove to their leadership that the wellness benefit was actually making employees healthier.

05

Passive wearable integration

The app connected to Apple Health and Google Fit to pull steps and heart rates, but only used them to draw pretty charts. The data never actively influenced the app's coaching decisions.

Previous Attempts

Before coming to us, the team tried fixing retention with gamification—streaks, badges, and step-count leaderboards. It caused a brief spike before returning to baseline. Their honest retrospective? They had treated a *meaning* problem as a *motivation* problem. Badges don't fix an app that feels indifferent to your actual well-being.

"The founders built this product because they hated the fragmented approach of the health-tech industry. Yet, their execution had accidentally reproduced that exact fragmentation inside their own app. Closing the gap between their vision and their reality was what kept the CEO awake at night."

The Real Cost
The Approach

People don't want to be optimised. They want to be understood.

Before discussing AI architectures, we spent three weeks interviewing users to understand what their health actually felt like. The goal wasn't to build a better tracker; it was to build a tool that paid attention.

Discovery & Methods

We recruited 24 users—loyalists, churned users, and non-users—and asked them to describe their last two weeks of health. We mapped the app's engagement logs against behavioral science literature. The findings redefined the product: users who stayed felt the app was 'responsive,' while users who churned felt the app was 'indifferent.'

In-depth interviews with 24 users across engaged, churned, and non-user segments
Behavioral science literature review on habit formation and app efficacy
Clinical guardrail workshops with the medical advisory board
Corporate HR interviews across 8 existing B2B clients
Audit of underutilized data sets (wearables, sleep, mood logs)

An app that records isn't enough. It has to listen.

Users don't churn because an app lacks features; they churn because logging a terrible week yields the exact same user experience as logging a great one. The intelligence layer we needed to build wasn't about mathematically perfect recommendations. It was about making users feel genuinely seen.

Design Philosophy

Cross-pillar or nothing. If a feature couldn't synthesize fitness, sleep, and stress simultaneously, we wouldn't build it. Furthermore, we adopted the clinical principle: *first, do no harm*. Pushing a stressed, sleep-deprived user to hit a calorie deficit isn't optimization; it's damage. The medical board didn't just review our AI—they co-designed its constraints.

Constraints Respected

  • Clinical Safety: Every automated recommendation had to be explainable in clinical terms (no 'black box' AI).
  • Strict Privacy: Corporate analytics required rigorous anonymization; individual employee data could never be exposed.
  • HIPAA Compliance: Data sovereignty and privacy were architected as first-class requirements.
  • Offline Capability: The app had to function meaningfully on mid-range Android devices with spotty connectivity.
The Solution

A holistic intelligence layer that sees the whole person — and adapts to them daily.

We built five interconnected capabilities that wove the fragmented modules into a single, highly responsive health companion.

Architecture Spec

Unified Health Intelligence Profile

Function

A real-time, continuous model of the user’s health state. It synthesizes fitness logs, meals, mood check-ins, sleep cycles, and wearable data into a single, weighted composite.

Impact

This is the bedrock of the app. It ended the era of 'blind recommendations' by finally giving the platform a 360-degree view of whether a user was thriving or struggling on any given day.

Implementation Note
Multi-modal health state model combining tabular metrics with time-series wearable patterns. Weighting methodology was rigorously vetted by the medical advisory board.
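The vetted weighting methodology isn't published here, but the shape of the idea (collapsing per-pillar scores into one composite, plus a "struggling day" flag) can be sketched roughly as below. Pillar names, weights, and thresholds are illustrative placeholders, not the board-approved values:

```python
from dataclasses import dataclass

# Illustrative pillar weights -- the real weighting was set by the
# medical advisory board, not these placeholder values.
PILLAR_WEIGHTS = {"sleep": 0.30, "stress": 0.25, "fitness": 0.25, "nutrition": 0.20}

@dataclass
class HealthState:
    """Normalized 0-100 scores for each wellness pillar on a given day."""
    sleep: float
    stress: float      # higher = calmer (scale already inverted)
    fitness: float
    nutrition: float

    def composite(self) -> float:
        """Single weighted score summarizing the whole person today."""
        scores = {"sleep": self.sleep, "stress": self.stress,
                  "fitness": self.fitness, "nutrition": self.nutrition}
        return sum(PILLAR_WEIGHTS[p] * s for p, s in scores.items())

    def is_strained(self, threshold: float = 40.0) -> bool:
        """Flag a struggling day if any single pillar drops below threshold --
        one very bad pillar shouldn't be averaged away by three good ones."""
        return min(self.sleep, self.stress, self.fitness, self.nutrition) < threshold
```

The min-based strain check reflects the "first, do no harm" constraint: a user with great fitness numbers but five hours of sleep still reads as struggling.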
Tech Stack
React Native & Next.js

Unified mobile experience with offline caching; performant web portals for B2B admins

Python (FastAPI) & TensorFlow

Modular AI microservices for cross-pillar recommendations and state modeling

Apache Kafka

High-throughput real-time event streaming for 1M+ active users

PostgreSQL + TimescaleDB

Efficient querying of massive time-series wearable metrics

Redis

Sub-second caching to handle the massive 6:00 AM daily plan generation spike

AWS (EKS, S3, Lambda)

HIPAA-compliant, auto-scaling infrastructure for nightly inference jobs

Design Decision

The AI must explain itself.

User research proved that people follow recommendations they understand. 'Light yoga, 20 minutes' is a generic command. 'Light yoga today because your sleep score was low and your body needs recovery' is a conversation. Explanations increased follow-through by 34%.
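Mechanically, the explanation layer amounts to attaching a "because" clause derived from the weakest signal. A minimal sketch, assuming 0-100 pillar scores and illustrative template copy (the production explanations were authored with clinicians):

```python
def explain(recommendation: str, signals: dict) -> str:
    """Attach a plain-English 'because' clause to a recommendation.

    `signals` maps pillar names to normalized 0-100 scores. The reason
    templates here are illustrative, not the clinician-authored copy.
    """
    REASONS = {
        "sleep": "your sleep score was low and your body needs recovery",
        "stress": "your stress has been high and a gentler day helps",
    }
    # Surface the first relevant low signal as the explanation.
    low = [p for p in ("sleep", "stress") if signals.get(p, 100) < 50]
    if not low:
        return recommendation  # nothing to explain; plan stands as-is
    return f"{recommendation} today because {REASONS[low[0]]}"
```

Calling `explain("Light yoga, 20 minutes", {"sleep": 35, "stress": 70})` turns the generic command into the conversational form quoted above.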

Design Decision

Mental health logs have immediate, visible consequences.

We moved the mood check-in from a buried menu into the primary morning flow. More importantly, logging high stress instantly triggered a visible adjustment to the day's fitness plan. Once users saw the app was actually 'listening,' completion rates for mental health check-ins jumped from 23% to 71%.

Execution

Fifteen weeks to launch. Co-designed with clinicians, not just engineers.

In wellness AI, getting it wrong can cause physical harm. We structured the build so that the medical advisory board was actively designing the logic, rather than just rubber-stamping it at the end.

Delivery Timeline

Operational Log

1

Clinical Framework & Research

Weeks 1–3

We ran full-day workshops with the medical advisory board to establish clinical guardrails, categorizing every potential recommendation by the health states where it should and shouldn't surface.

2

Unified Profile & Data Architecture

Weeks 4–6

Built the multi-modal health state model and overhauled the wearable ingestion pipelines. The HIPAA-compliant architecture was locked and legally signed off.

3

Recommendation Engine & Daily Plan

Weeks 7–10

Trained the cross-pillar model and built the nightly plan generation pipeline. The critical 'plain English' explanation layer was authored in tandem with clinicians.

4

Corporate Dashboards & Wearables

Weeks 11–13

Deployed the privacy-preserving B2B analytics portal and finalized deep integrations with 50+ wearable platforms to feed the recommendation engine.

5

Beta Launch & Rollout

Weeks 14–15

Soft-launched to 2,000 users. We monitored usage daily, using a clinical review queue to catch and refine any edge-case recommendations before the full 1M+ user rollout.

Team Topology

Deployed Roster

1 × Engagement Lead
2 × ML Engineers (Health Modeling & Wearable Integration)
2 × Backend Engineers (HIPAA Pipelines & Corporate Analytics)
1 × Mobile Developer (React Native)
1 × Frontend Developer (Next.js)
1 × Product Designer

Collaboration

Working Rhythm

The medical advisory board was our most important technical collaborator. We ran fortnightly review sessions where a sports physician and a registered dietitian personally evaluated recommendation logic. It added time to the build, but it resulted in a medically sound product the founders could confidently scale.

Course Corrections

Diagnostic Log

Friction Point

Mental health boundary conditions. If a user logged persistent, severe depressive signals, the app needed to respond without illegally crossing into medical diagnosis or therapy.

Resolution

We engineered a 'care signal' trigger. Instead of offering AI therapy, the app surfaced a warm, non-alarmist prompt validating the user's difficult week and offering direct signposting to professional mental health resources. It acknowledged and referred, keeping the app strictly within consumer wellness boundaries.
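The trigger itself is deliberately simple: persistence, not a single bad day, is what fires the signpost. A minimal sketch, with an assumed 1-10 stress scale and illustrative thresholds (the real boundaries were set with the medical board):

```python
def needs_care_signal(stress_logs: list[int], severe: int = 8, days: int = 5) -> bool:
    """True only if the last `days` check-ins were all at or above `severe`
    on a 1-10 stress scale. One hard day never triggers it; a sustained
    pattern surfaces the signpost to professional resources.
    Thresholds here are illustrative placeholders."""
    recent = stress_logs[-days:]
    return len(recent) == days and all(s >= severe for s in recent)
```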

Friction Point

Noisy training data. A user skipping a workout could mean the AI's recommendation was wrong, or it could just mean the user had a busy day at work. We couldn't train the model on assumptions.

Resolution

We injected a simple 'Did you do this?' feedback loop into the evening UX. Explicit feedback (rather than inferred behavior) drastically improved the accuracy of the recommendation model, and users gladly participated because they knew it made their future plans better.
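The key design choice is that an unanswered prompt produces no training example at all. A sketch of that labeling rule, with hypothetical field names:

```python
from typing import Optional

def training_label(recommended: str, completed: Optional[bool]) -> Optional[dict]:
    """Turn an evening 'Did you do this?' answer into a training example.

    Returns None when the user didn't answer: inferred non-completion is
    too noisy (a skipped workout may just mean a busy day at work), so
    only explicit feedback reaches the model.
    """
    if completed is None:
        return None
    return {"recommendation": recommended, "followed": completed}
```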

Friction Point

Corporate privacy vs. HR utility. HR teams wanted granular data, but small departments (e.g., a 5-person marketing team) risked exposing individual employee health metrics.

Resolution

We enforced a strict tiered disclosure model. Department-level metrics only unlocked if the group had 20+ active users. Instead of complaining, HR teams actually used this as an incentive, actively promoting the app internally to hit the numbers required to unlock their team's dashboard.
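The gate reduces to a minimum-cohort check before any aggregate leaves the system. A sketch of the disclosure rule, with illustrative field names (only the 20-user floor comes from the case study):

```python
MIN_COHORT = 20  # department dashboards unlock only at 20+ active users

def department_metrics(active_scores: list[float]) -> dict:
    """Return aggregate-only metrics, or a locked placeholder when the
    cohort is too small to anonymize safely. Exposing 'needed' turns the
    privacy floor into the adoption incentive described above."""
    if len(active_scores) < MIN_COHORT:
        return {"locked": True, "needed": MIN_COHORT - len(active_scores)}
    return {
        "locked": False,
        "active_users": len(active_scores),
        "avg_wellness_score": round(sum(active_scores) / len(active_scores), 1),
    }
```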

Measured Impact

One million active users. And the clinical data to prove it was actually working.

Retention spiked immediately, but the real victory came months later when the health outcome metrics rolled in. The data validated the core thesis: holistic, attentive AI produces meaningfully better human health than any single-pillar app ever could.

Primary KPI | Verified Metric

30%

Better health outcomes

composite improvement across tracked fitness, nutrition, and mental metrics

60%

90-day retention rate

crushing the 18% industry average for standalone wellness apps

1M+

Active users in year one

rapid adoption across consumer and B2B corporate segments

Qualitative Objectives Reached

  • Users who previously churned at Day 14 returned and stayed. In re-engagement surveys, the most common praise was a variation of: 'It actually noticed.'
  • The B2B sales cycle dropped from 11 weeks to 4 weeks. Armed with the new analytics dashboard, existing clients began heavily referring the platform to peer organizations.
  • Three members of the medical advisory board co-authored a peer-reviewed research paper on the platform's cross-pillar AI methodology, citing its clinical significance in driving behavioral change.

"I've used probably fifteen wellness apps over the years. They all track everything, show you charts, and after two weeks you stop opening them. This one was different. It told me to take it easy on a Thursday because my sleep had been bad, and I realized it was the first time an app had actually paid attention to me. That sounds like a small thing. For me, it's the reason I'm still using it eight months later."

Early user, individual subscriber

Re-engaged user, 8 months post-relaunch

Key Learnings

Insights Gained

Valuable lessons and strategic insights uncovered through this project that inform our future work and architectural decisions.

01

In health tech, AI errors cause harm, not just bad UX.

A bad e-commerce recommendation is annoying. A bad wellness recommendation—like pushing a sleep-deprived user into high-intensity training—causes physical harm. Hard-coding clinical guardrails into your AI isn't project overhead; it is the fundamental requirement to operate in this space.

02

Explanation builds more trust than accuracy.

We assumed perfect recommendations would drive retention. In reality, *explaining* the recommendation drove retention. When the AI told users why it was suggesting a change, follow-through skyrocketed. In highly personal domains, transparency beats a 'black box' every time.

03

Holistic data is useless without holistic processing.

The client already had the data. Their failure was processing fitness, sleep, and stress in isolation. The massive leap in user engagement didn't come from a new data source; it came from finally synthesizing the data they already owned. Synthesis is the ultimate moat.

Exploration

Capabilities & Archive

Building a health or wellness product and finding that the data you're collecting isn't translating into the outcomes you promised? That gap is usually a synthesis problem, not a data problem.

Let's Work Together

Health data that doesn't change behavior isn't an asset. It's a liability.

We build wellness AI that clinicians trust and users actually keep using. If your platform's engagement or health outcomes are falling short of your vision, tell us about it. We'll give you an honest read on what's missing.

"No wellness buzzwords. A real conversation about your product and your users."