
When a SaaS Company Stopped Making Product Decisions Based on What Felt Right

A growth-stage SaaS company had four years of rich user data, but nobody was using it. Why? Because extracting a single insight required a data analyst, a bespoke SQL query, and a two-day wait. We built a natural-language analytics platform that gave anyone instant answers. Manual reporting dropped 60% in the first quarter.

SaaS Analytics · Business Intelligence · Data Engineering · Product Analytics · Real-Time Insights
85%

Analytics adoption rate

active users querying data independently — up from 12%

60%

Less manual reporting

analyst hours freed from repetitive data extraction tasks

25%

Better business metrics

improvement in engagement and retention driven by data-informed decisions

The Client

Who We Worked With

Our client was a Series A B2B SaaS company with 180 enterprise customers and 22,000 end users. The engineering founders had instrumented the product beautifully from day one. By year four, they were sitting on a goldmine of user behavior data. The problem was access. Data lived in a fragmented PostgreSQL and warehouse setup. Answering a simple business question required writing SQL—meaning every query bottlenecked at the two-person data team. In the absence of accessible data, massive product decisions were being made on intuition, anecdote, and the loudest customer complaint from the previous week.

"We have four years of data about everything our users do. We use about 2% of it. The other 98% sits there answering no one's questions because everyone who could answer them is busy building the product."

Head of Product

Company

Series A B2B SaaS company

Team

65 people, including a bottlenecked 2-person data team

Reach

India-headquartered, serving India and Southeast Asia

Platforms

Internal Analytics Dashboard · Customer-Facing Analytics · Slack Integration · Executive Reporting
The Problem

A data-rich company making data-poor decisions—not for lack of data, but for lack of access.

The analytics bottleneck in growth-stage SaaS is almost always the same: data collection is flawless, but distribution is broken. The data team becomes a request queue. The product team guesses while they wait. The gap between data collected and data used is exactly where growth dies.

Pain Point 01

The two-person bottleneck

The data team received 23 requests a week but only had the capacity for 14. Product managers waited 3–5 days for sprint metrics. Customer success managers asked for churn risk reports and received them after the client had already cancelled.

Pain Point 02

Dashboards frozen in 2021

The company had 14 legacy dashboards built for highly specific, outdated purposes. Sales was looking at pipeline metrics that no longer reflected their actual sales motion. The dashboards answered the questions of the past, not the present.

Pain Point 03

Manual, stale health scoring

The CS team spent two days every month building a massive customer health spreadsheet. By the time it was finished, it was two weeks out of date. Intervening on an 'at-risk' account was structurally impossible because the visibility lag was too high.

Pain Point 04

The 'Squeaky Wheel' product roadmap

Without accessible usage data, product prioritization defaulted to support tickets and vocal sales prospects. The company was building niche features for vocal minorities while completely ignoring the silent majorities struggling with core workflows.

Pain Point 05

Brutal board reporting

Compiling quarterly board reports required 12 hours of manual exports, VLOOKUPs, and reconciliation across six conflicting systems. The resulting report was error-prone, static, and immediately obsolete.

What They'd Tried

They bought a heavy, drag-and-drop BI tool. It was powerful, but non-technical staff didn't understand the underlying data models and accidentally joined tables incorrectly. After two major product decisions were made based on wildly miscalculated metrics, confidence in the tool collapsed. They reverted to the data team's manual queue.

What Was at Stake

The Head of Product joined from a mature organization where rapid, data-informed experimentation was the norm. She believed in the product, but found an infrastructure that made her job nearly impossible. Fixing this wasn't an operational 'nice-to-have'; it was a rescue mission for the company's product culture.

Our Approach

We started by looking for the questions people had given up asking.

Before discussing architectures, we spent two weeks interviewing teams. We didn't just ask what reports they needed; we asked what questions they had simply stopped asking because getting the answer was too painful.

Key Insight

The bottleneck wasn't data volume. It was the friction of curiosity.

The 23 requests the data team received every week weren't the only questions the company had. They were just the ones important enough to justify the friction of filing a ticket. If we could reduce the cost of asking a question to zero seconds and zero SQL, we would unlock a massive shadow-backlog of strategic curiosity.

Discovery

We ran structured interviews across product, sales, and CS. We audited the data team's 6-week backlog and analyzed the 14 existing dashboards (six were completely unused; two had known calculation errors). The findings were undeniable: the infrastructure wasn't the root cause. The friction of asking a question was.

  • Structured interviews across 28 stakeholders to find 'abandoned' questions
  • Analysis of the data team's 6-week request backlog
  • Data quality and schema audit across the fragmented warehouse
  • Observation of the manual CS health-scoring workflow
  • Teardown of the 12-hour quarterly board reporting process
Design Philosophy

The 60-Second Rule: Anyone in the company, regardless of technical skill, must be able to get a reliable answer in under a minute. Crucially, a faster way to get *wrong* answers is not progress. Therefore, every single number in the platform had to have a canonical, centrally governed definition.

Constraints Respected
  • Build on existing infrastructure: We had to sit on top of the messy event collection they already had.
  • Zero engineering maintenance: The 2-person data team had to be able to maintain it post-launch.
  • Embedded analytics support: The architecture had to natively support a future customer-facing analytics feature.
  • Strict data privacy: Aggregation and anonymization rules had to be built-in to respect enterprise contracts.
The Solution

An analytics platform that made querying data as easy as asking a colleague.

We deployed six interconnected modules—from a governed metrics layer to a natural language query interface—transforming a tangled database into a self-serve intelligence engine.

Module 01

Unified Metrics Layer

A single, governed dbt repository defining every core metric (MRR, DAU, Churn). If the definition of 'Churn' changes, it updates globally. Every dashboard and query references this exact layer.

This cured the 'conflicting board report' problem. When Sales and Product ask for MRR, they get the exact same number. It is the invisible foundation that makes the entire platform trustworthy.
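The core idea of the metrics layer can be sketched in a few lines: every consumer resolves a metric through one governed registry rather than writing its own SQL. This is an illustrative Python sketch only—the actual project used version-controlled dbt models, and the table, column, and metric names below are hypothetical.

```python
# Illustrative sketch of a governed metrics registry. The real layer was
# a dbt repository; the 'subscriptions' schema here is a hypothetical stand-in.
MRR_SQL = """
    SELECT date_trunc('month', billed_at) AS month,
           SUM(amount_usd)                AS mrr
    FROM subscriptions
    WHERE status = 'active'
    GROUP BY 1
"""

CANONICAL_METRICS = {
    "mrr": {"sql": MRR_SQL, "owner": "finance", "validated": "2024-01-15"},
}

def get_metric_sql(name: str) -> str:
    """Every dashboard, NLQ query, and report resolves metrics here,
    so 'MRR' always means the same calculation everywhere."""
    if name not in CANONICAL_METRICS:
        raise KeyError(f"'{name}' is not a governed metric")
    return CANONICAL_METRICS[name]["sql"]
```

The point of the indirection is that changing a definition in one place propagates everywhere, and an ungoverned metric name fails loudly instead of silently producing a second version of the truth.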
Module 02

Natural Language Query (NLQ)

A conversational interface where users ask questions in plain English (e.g., 'How has 30-day retention changed since the March release?') and receive visual charts in under 3 seconds.

It completely bypassed the SQL bottleneck. Product managers stopped waiting 3 days for analyses and started running them in 60 seconds. This feature alone drove the 85% adoption rate.
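The NLQ flow is conceptually simple: schema context plus the user's question goes to the model, and the generated SQL comes back alongside the chart. The sketch below stubs out the model call—the production system used a fine-tuned GPT-4 endpoint, and the schema string and function names here are assumptions for illustration.

```python
# Minimal sketch of the NLQ pipeline with a stubbed model.
# SCHEMA_CONTEXT is a hypothetical stand-in for the real schema description.
SCHEMA_CONTEXT = "tables: events(user_id, event, ts), accounts(id, plan)"

def build_prompt(question: str) -> str:
    """Assemble the model input: schema context, the question, and a
    constraint to stay inside the governed metric definitions."""
    return (
        f"Given the schema:\n{SCHEMA_CONTEXT}\n"
        f"Write a SQL query answering: {question}\n"
        "Use only canonical metric definitions."
    )

def answer(question: str, model) -> str:
    """Return the generated SQL; the UI renders it as a chart AND
    exposes this SQL in an expandable tab for verification."""
    return model(build_prompt(question))

# A fake model standing in for the fine-tuned endpoint:
fake_model = lambda prompt: "SELECT COUNT(*) FROM events"
print(answer("How many events this week?", fake_model))
```

Returning the SQL rather than hiding it is what makes the later "NLQ shows its math" design decision possible.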
Module 03

Automated Anomaly Detection

Continuously monitors core metrics against historical baselines. If a feature's usage drops uncharacteristically on a Tuesday, it pings the Product Slack channel by Wednesday morning.

It shifted the company from a reactive posture to a proactive one. They stopped discovering catastrophic drops during Friday reviews and started intervening immediately.
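A minimal version of baseline-deviation monitoring can be expressed as a z-score check: compare today's value against the mean and spread of its recent history. This is a simplified sketch, not the client's calibrated detector, and the three-sigma threshold is an assumed default.

```python
from statistics import mean, stdev

def is_anomalous(history, today, z_threshold=3.0):
    """Flag today's value if it deviates more than z_threshold standard
    deviations from the historical baseline. 'history' is a list of the
    metric's recent daily values."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu          # flat history: any change is notable
    return abs(today - mu) / sigma > z_threshold

usage = [100, 102, 98, 101, 99, 100, 103, 97, 100, 100]
is_anomalous(usage, 50)   # a crash in usage -> True
is_anomalous(usage, 101)  # normal fluctuation -> False
```

A production detector would also handle seasonality (Tuesdays vs. Sundays) and trend, but the alerting principle—compare against a baseline, ping Slack on deviation—is the same.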
Module 04

Customer Health Intelligence

Replaced the stale monthly spreadsheet with a live health score for all 180 accounts, driven by login frequency, support tickets, and feature breadth.

CS managers could finally intervene *before* a client churned. Because the data was real-time, time was spent saving accounts rather than identifying who was already lost.
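The shape of such a health score is a weighted composite of the signals named above. The weights and caps in this sketch are illustrative assumptions, not the client's calibrated model.

```python
def health_score(logins_per_week, open_tickets, features_used, total_features):
    """Composite account health in [0, 100] from login frequency,
    support-ticket load, and feature breadth. Weights (0.4/0.3/0.3)
    and caps are illustrative, not the production calibration."""
    login_component  = min(logins_per_week / 5, 1.0)       # saturates at 5/week
    ticket_component = max(1.0 - open_tickets / 10, 0.0)   # more tickets, lower score
    breadth_component = features_used / total_features
    return round(100 * (0.4 * login_component +
                        0.3 * ticket_component +
                        0.3 * breadth_component))

health_score(5, 0, 10, 10)  # healthy, broad usage -> 100
health_score(0, 10, 0, 10)  # dormant, ticket-heavy -> 0
```

Because the inputs stream in live rather than being assembled monthly by hand, the score stays current enough to act on.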
Module 05

Self-Serve Dashboard Builder

A drag-and-drop interface restricted exclusively to canonical metrics. Users can build and share custom dashboards without writing a single line of SQL.

It freed the data team to do deep strategic work rather than acting as dashboard monkeys. It also ensured that non-technical users couldn't accidentally construct 'bad math.'
Module 06

Embedded Customer Analytics

A white-labeled analytics tab injected into the SaaS product itself, allowing the client's customers to see their own team's usage and ROI.

It turned an internal tool into a revenue-generating product feature. The sales team used this transparency to close two major Enterprise deals within months of launch.

Tech Stack

dbt (Data Build Tool)

The source of truth: version-controlled, tested SQL models for all metrics

ClickHouse

Lightning-fast columnar storage for sub-second analytical queries at scale

GPT-4 (Fine-Tuned)

Natural Language to SQL engine, trained specifically on the company's schema

Apache Kafka & Spark

Real-time event ingestion and heavy batch processing for historical backfills

React + D3.js

Highly customizable, responsive frontend visualization layer

Node.js (GraphQL)

Unified API layer serving both internal dashboards and the customer-facing embedded app

Design Decision

Tooltips on every single number.

Because users had been burned by bad data in the past, trust was low. We added a tooltip to every metric showing its exact definition, calculation logic, and last-validation date. In week one, users hovered over them 4,000 times. Once they verified the system wasn't lying, they trusted it implicitly.

Design Decision

NLQ shows its math.

When you ask a plain English question, the UI returns the chart—but also features an expandable tab showing the exact SQL it generated. Non-technical users ignore it; technical users inspect it to verify accuracy. Transparency builds confidence.

Execution

Fourteen weeks to launch. And a three-week metrics workshop that changed the company.

A shiny natural-language interface wrapped around bad metrics produces confident garbage. We spent the first three weeks painfully defining the business's core math before writing a single line of UI code.

Timeline
01

Discovery & Infrastructure Audit

Weeks 1–2

Mapped out the 6-week backlog, audited the messy event schema, and finalized a prioritized list of capabilities.

02

dbt Metrics Layer & Backfill

Weeks 3–5

Defined and tested 42 canonical metrics in dbt. Provisioned ClickHouse and successfully backfilled 3 years of historical event data.

03

NLQ & Dashboard Builder

Weeks 6–9

Fine-tuned GPT-4 on 500 annotated query examples to build the text-to-SQL engine, and shipped the self-serve dashboard builder on top of the canonical metrics layer. Calibrated anomaly detection algorithms against 12 months of historical data.

04

Health Model & Embedded Analytics

Weeks 10–12

Deployed the real-time CS health model. Built the multi-tenant sandbox for the customer-facing embedded analytics layer, clearing strict legal privacy boundaries.

05

Rollout & Handoff

Weeks 13–14

Role-specific onboarding for Product, CS, and Sales. Pair-worked with the 2-person internal data team to ensure they could own and maintain the dbt layer indefinitely.

Team Involved
  • 1 × Engagement Lead
  • 1 × Data Engineer (dbt, ClickHouse, Kafka)
  • 1 × ML Engineer (NLQ, Anomaly Detection, Health Model)
  • 1 × Backend Engineer (GraphQL API, Multi-tenant Architecture)
  • 1 × Frontend Developer
  • 1 × Product Designer
How We Collaborated

The defining moment of the project wasn't code; it was a 3-week workshop with the Head of Product, VP of CS, and CFO. Forcing leadership to agree on what 'Active User' and 'Churn' *actually* meant mathematically was uncomfortable. But locking those definitions into dbt code resolved years of inter-departmental arguments.

Overcoming Friction

Challenges & How We Solved Them

The Challenge

Messy legacy tracking. Four years of organic growth meant the same user action was logged under three different event names, depending on who wrote the code.

How We Handled It

Instead of rewriting the core application code, we built an event normalization layer in dbt. It mapped all 23 legacy naming inconsistencies into clean, canonical event streams, insulating the new analytics platform from the sins of the past.
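The mapping idea behind that normalization layer fits in a few lines: legacy event names are aliased to canonical ones before anything downstream sees them. The real layer lived in dbt SQL; the alias table below is a hypothetical Python illustration.

```python
# Sketch of event normalization: three legacy spellings of the same
# user action collapse into one canonical event name.
# These aliases are hypothetical examples, not the client's actual events.
EVENT_ALIASES = {
    "btn_export_clicked": "export_started",
    "exportButtonClick":  "export_started",
    "user.export.click":  "export_started",
}

def normalize(event_name: str) -> str:
    """Map a raw event name to its canonical form; names already
    canonical (or unknown) pass through unchanged."""
    return EVENT_ALIASES.get(event_name, event_name)
```

Keeping the mapping in the transformation layer meant the application code never had to be touched, and new inconsistencies could be absorbed by adding one alias.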

The Challenge

The NLQ engine failed on complex comparative queries (e.g., 'Show me accounts where adoption dropped but NPS stayed high').

How We Handled It

We ran a targeted annotation sprint, manually mapping 150 complex SQL queries and feeding them back into the fine-tuning dataset. We also added a fallback: if the model's confidence was low, the UI prompted 'Let me verify this with our data team' rather than hallucinating a wrong answer.
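The fallback logic is worth making concrete: below a confidence threshold, the system escalates to humans rather than returning a plausible-looking wrong answer. This sketch assumes the model reports a confidence score alongside its SQL; the 0.7 cutoff is an illustrative assumption.

```python
def answer_or_escalate(question, model, threshold=0.7):
    """Route low-confidence text-to-SQL generations to the data team
    instead of showing the user a guess. 'model' returns (sql, confidence)."""
    sql, confidence = model(question)
    if confidence < threshold:
        return {"status": "escalated",
                "message": "Let me verify this with our data team"}
    return {"status": "ok", "sql": sql}

confident = lambda q: ("SELECT 1", 0.95)
unsure    = lambda q: ("SELECT 1", 0.20)
answer_or_escalate("simple question", confident)   # -> ok, SQL returned
answer_or_escalate("complex comparative", unsure)  # -> escalated
```

Admitting uncertainty is part of the same trust strategy as the visible SQL tab: a self-serve tool that sometimes says "I don't know" is more trustworthy than one that never does.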

The Challenge

The initial Customer Health model was too sensitive, flagging 40% of accounts as 'Amber/At-Risk.' The 7-person CS team was immediately overwhelmed.

How We Handled It

We sat down with the VP of CS and had her manually rate the top 50 flagged accounts based on her intuition. We used her human labels to recalibrate the model's thresholds, dropping the immediate alerts from 72 to a highly actionable 18. We added a 'Watch List' tier for the rest.
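Recalibrating against expert labels can be as simple as sweeping candidate cutoffs and keeping the one that best agrees with the human ratings. This is a deliberately minimal sketch of that idea—the real recalibration used the VP's 50 hand-rated accounts and tiered thresholds, and the numbers here are hypothetical.

```python
def best_threshold(scores_and_labels, candidates):
    """Pick the health-score cutoff that best matches expert labels.
    scores_and_labels: (health_score, label) pairs, label 1 = at risk.
    An account is flagged when its score falls BELOW the cutoff."""
    def accuracy(cutoff):
        hits = sum((score < cutoff) == bool(label)
                   for score, label in scores_and_labels)
        return hits / len(scores_and_labels)
    return max(candidates, key=accuracy)

# Hypothetical expert ratings: low scores genuinely at risk, high scores fine.
rated = [(20, 1), (30, 1), (80, 0), (90, 0)]
best_threshold(rated, [10, 50, 95])  # -> 50: flags both at-risk, neither healthy
```

A too-high cutoff reproduces the original problem (40% of accounts flagged); the sweep finds the level where alerts match what an experienced human would actually act on.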

Results

Three months later, decisions stopped being arguments over whose spreadsheet was right.

Operational efficiency spiked instantly, but the business metrics took a few months to mature. When they did, they proved that a company making hundreds of small, data-informed decisions will radically outperform a company running on gut instinct.

85%

Analytics adoption rate

active users querying data independently — up from 12%

60%

Less manual reporting

analyst hours freed from tedious CSV pulls and basic queries

25%

Better business metrics

composite improvement across product engagement, retention, and expansion

−85%

Board reporting time

from 12 grueling hours down to 4 hours of strategic review

< 48 hrs

Health alert response

time from health score drop to CS intervention (down from 2+ weeks)

64%

At-risk save rate

of alerted accounts successfully retained or expanded within 90 days

Product sprints shifted drastically. Instead of building features for the squeaky wheels, the product team began prioritizing workflows with high discovery but low completion rates. These data-backed features showed significantly higher engagement than intuition-backed ones.

The embedded customer analytics became a massive sales asset. The VP of Sales specifically credited the transparency of the embedded dashboards for closing two major Enterprise deals against steep competition.

The CEO's relationship with board meetings completely changed. She noted: 'We used to spend the first hour of every board meeting arguing over what the numbers were. Now we spend that hour discussing what the numbers mean.'

"The thing I didn't expect was how much the quality of our conversations changed. When Product, CS, and Sales are all looking at the exact same numbers, defined the exact same way, in real time—the arguments stop being about whose data is right, and start being about what we should do about it. For a company our size, that alignment is everything."

Head of Product

B2B SaaS Client

What We Learned

Learnings That Outlasted the Project

Metric definition is the work. The platform just makes it accessible.

An AI interface layered over poorly defined metrics produces confident garbage. The painful weeks spent forcing leadership to agree on the exact mathematical definition of MRR were the most valuable work of the project. We refuse to build NLQ interfaces until the dbt metrics layer is flawless.

Self-serve analytics is a trust problem, not a training problem.

The company had been burned by bad BI tools in the past. To drive 85% adoption, we didn't just need a slick UI; we needed visible trustworthiness. Tooltips defining calculation logic, visible underlying SQL, and automated pipeline tests rebuilt their trust through radical transparency.

The questions people stop asking are your most important signal.

The most valuable data points aren't found in a data team's Jira backlog. They are the strategic questions that PMs and execs simply gave up asking because the friction was too high. Unlocking those 'abandoned' questions is the true ROI of a modern analytics platform.

Keep Exploring

Related Work & Services

Running a SaaS business where your data team is a bottleneck and your product decisions are running on instinct? The data to make better decisions is already there. What's missing is the infrastructure to unlock it.

Your product data knows things about your users that your team doesn't. Can you afford to keep guessing?

We build analytics platforms for SaaS companies where the data exists, but the access doesn't. Tell us about your current data infrastructure and the decisions you wish were better informed. We'll give you an honest view of what's buildable.

No generic BI vendor pitches. A real conversation about your data and decisions.