
When a 3,000-Person Company Stopped Losing 90 Minutes a Day to Looking Things Up

A fast-growing tech firm had six years of knowledge scattered across 50+ different tools. Employees were re-creating work that already existed simply because they couldn't find the original. We built a semantic search platform that understood intent, not just keywords—dropping retrieval time by 80%.

Enterprise Search · Knowledge Management · Semantic Search · AI Discovery · Productivity
Core_Architecture
Enterprise Search
Knowledge Management
Semantic Search
AI Discovery
80%
Faster time-to-information
85%
Search satisfaction rate
90%
Employee adoption rate
Client Dossier

Business Context & Telemetry

Our client had grown from 200 to 3,000 employees in six years. While their revenue thrived, their knowledge infrastructure crumbled. Institutional wisdom was trapped across Google Drive, Slack, Notion, and three different SharePoint instances from past acquisitions. Everyone technically had access to everything, but in practice, nobody could find anything. New hires were paralyzed, and senior staff were spending hours every week acting as 'human search engines' for their colleagues.

[Company Size]

3,000-person technology services company

[Team Size]

3,000 employees across 6 practice areas, served by a 15-person KM/IT team

[Geography]

Distributed workforce across 8 offices in 4 countries

[Core Platforms]

Web App, Chrome Extension, Slack Integration, Mobile App

[Founded]

2017

Executive Perspective

We have a document that answers almost every question a new hire will ask in their first three months. We know it exists. But we can't reliably find it ourselves. That is the problem, in miniature.

Chief People Officer

The Challenge

Six years of wisdom—organized well enough to store, but not well enough to use.

Enterprise search is a 'hidden' tax. Nobody submits a ticket saying 'I lost 25 minutes looking for a pricing guide today,' but when you multiply that moment by 3,000 people, the cost to productivity and morale is staggering. The knowledge wasn't missing; it was just speaking a language the search bar didn't understand.

01

The 'Exact Word' penalty

Keyword search punished users for not knowing the exact title of a file. A search for 'how to handle scope creep' yielded nothing because the document was titled 'Change Control Framework.' The burden was on the human to be a mind-reader.

02

Post-Acquisition information silos

Each acquired company brought its own legacy tools. These became 'digital sediment'—visible to everyone but effectively inaccessible to anyone outside the original team. Work was being duplicated daily because teams didn't know the solution already existed.

03

The 'Expert Bottleneck'

Tenured employees became informal human directories. They were constantly interrupted with basic questions because they were 'the only ones who knew where the file was.' This was efficient for the asker, but exhausting for the expert.

04

A 3-month onboarding fog

New hires took 12 weeks to become independent. Until then, they were entirely dependent on colleagues to navigate a knowledge base that felt like a maze. This frustration was a top reason cited for early-stage turnover.

05

Decisions made in a vacuum

When senior leads needed to make calls on pricing or architecture, the relevant precedents existed—but were unfindable. Decisions were made on gut instinct, ignoring documented institutional learnings and repeating past mistakes.

Previous Attempts

They tried standardizing on Confluence, which helped organize *new* work but didn't solve the search problem across other tools. They later tried a 'federated search' Chrome extension, but it just returned 'more noise from more places'—ranking a random Slack chat above a finalized policy document.

The Chief People Officer saw 'can't find anything' as the #3 frustration in every employee engagement survey. The search problem was destroying the 'flow' of work, turning simple tasks into bureaucratic scavenger hunts. The project was finally greenlit when they calculated the sheer number of lost salary-hours spent on failed searches.
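The greenlight math can be sketched from the figures in this case study (3,000 employees, roughly 10 searches a day, a 40% first-try success rate, about 9 minutes lost per failed search). The loaded hourly rate and workdays per year are illustrative assumptions, not client data.

```python
# Back-of-the-envelope cost of failed search, using figures from the
# case study plus two ASSUMED inputs (hourly rate, workdays).
EMPLOYEES = 3000
SEARCHES_PER_DAY = 10          # from the search-diary study
FIRST_TRY_FAILURE = 0.60       # 40% first-try success rate
MINUTES_LOST_PER_FAILURE = 9   # from the diary study
HOURLY_RATE = 75               # assumption, not from the case study
WORKDAYS_PER_YEAR = 230        # assumption

def annual_search_cost() -> float:
    failed_searches_per_day = EMPLOYEES * SEARCHES_PER_DAY * FIRST_TRY_FAILURE
    hours_lost_per_day = failed_searches_per_day * MINUTES_LOST_PER_FAILURE / 60
    return hours_lost_per_day * HOURLY_RATE * WORKDAYS_PER_YEAR

print(f"${annual_search_cost():,.0f} per year")
```

Even with conservative inputs, the figure lands in the tens of millions, which is the kind of number that turns a 'hidden tax' into a funded project.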

The Real Cost
The Approach

We started by listening to 100,000 failed searches.

Before choosing a model, we analyzed 6 months of search logs. We wanted to see the gap between what people typed and what they actually needed. We found a 'semantic gap': people were asking questions, but the machines were just looking for strings.

Discovery & Methods

We interviewed 62 employees and ran a 5-day 'search diary' study. We discovered the average employee searched 10 times a day, succeeding only 40% of the time on the first try. The real cost wasn't just the 9 minutes lost per search—it was the decision to stop looking and start guessing.

Analysis of 100,000 historical search queries across 3 legacy tools
5-day 'Search Diary' study with 20 participants across departments
Interviews with 15 new hires on 'time-to-independence'
Technical audit of 50+ disconnected data sources and APIs
Tenure-weighted knowledge dependency mapping

The system was syntactic; the people were semantic.

People search for *concepts* ('How do we handle scope creep?'). Keyword engines look for *strings*. The gap wasn't a content problem; it was a translation problem. We needed to build a system that understood what people *meant*, allowing them to find the 'Change Control' doc by asking about 'Scope Creep'.

Design Philosophy

Return answers, not just links. If the policy exists, show the relevant paragraph in the search bar. Furthermore, search must be 'role-aware'—a developer searching for 'environments' should see AWS docs, while a recruiter seeing the same query should see office culture docs.
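The role-aware behavior described above can be sketched as a re-ranking step: the same query is scored differently depending on the searcher's role. The affinity table, weights, and document names below are illustrative assumptions, not the shipped configuration.

```python
# Hypothetical role-aware re-ranking: boost documents whose topic
# matches the searcher's role. All values here are illustrative.
ROLE_AFFINITY = {
    "developer": {"infrastructure": 1.5, "hr": 0.8},
    "recruiter": {"infrastructure": 0.8, "hr": 1.5},
}

def rerank(results, role):
    """results: list of (doc_title, topic, base_relevance_score)."""
    boosts = ROLE_AFFINITY.get(role, {})
    scored = [(title, base * boosts.get(topic, 1.0))
              for title, topic, base in results]
    return sorted(scored, key=lambda r: r[1], reverse=True)

hits = [("AWS Environments Guide", "infrastructure", 0.70),
        ("Office Culture Handbook", "hr", 0.68)]
# For the query 'environments', a developer sees the AWS doc first,
# while a recruiter sees the culture handbook first.
```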

Constraints Respected

  • Privacy First: Users must never see a document they aren't authorized to view in the source system.
  • Zero Source Friction: We had to index 50+ tools without requiring any team to change how they currently store files.
  • Ubiquitous Access: Search had to be accessible via Slack, Chrome, and Mobile—meeting employees where they already work.
  • Low Maintenance: The 15-person KM team had to be able to tune the system without needing a data scientist.
The Solution

A search platform that speaks the company's internal language.

We built a unified intelligence layer that sits above the tool sprawl, indexing 2.3 million documents and understanding the specific jargon of the client's business.

Architecture Spec

Semantic Intent Engine

Function

Uses vector embeddings to match the *meaning* of a query against the *meaning* of a document. It handles synonyms and conceptual links invisibly.

Impact

It fundamentally shifts the user's relationship with the search bar. The 40% success rate jumped to 85% because the system finally 'gets' the question, even if the user doesn't know the right jargon.

Implementation Note
Bi-encoder architecture with a Qdrant vector database. Hybrid retrieval combines semantic scores with BM25 keyword matching for product codes and proper nouns.
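The hybrid retrieval in the implementation note can be sketched as a weighted fusion of two scores. Here a cosine over toy vectors stands in for the bi-encoder, a simple term-overlap ratio stands in for BM25, and the fusion weight `alpha` is an assumed value.

```python
import math

def cosine(a, b):
    # Semantic similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def keyword_score(query, doc_text):
    # Stand-in for BM25: fraction of query terms present verbatim.
    terms = query.lower().split()
    return sum(t in doc_text.lower() for t in terms) / len(terms)

def hybrid_score(query, query_vec, doc, alpha=0.7):
    # alpha weights semantic vs keyword evidence (assumed value).
    return (alpha * cosine(query_vec, doc["vec"])
            + (1 - alpha) * keyword_score(query, doc["text"]))

docs = [
    {"text": "Change Control Framework for project scope adjustments",
     "vec": [0.9, 0.1]},
    {"text": "Office seating chart", "vec": [0.1, 0.9]},
]
query = "how do we handle scope creep"
query_vec = [0.85, 0.15]  # pretend bi-encoder output for the query
best = max(docs, key=lambda d: hybrid_score(query, query_vec, d))
```

The keyword component keeps exact matches like product codes and proper nouns precise, while the semantic component lets 'scope creep' land on 'Change Control' despite near-zero lexical overlap.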
Tech Stack
Qdrant & Elasticsearch

Vector-driven semantic search combined with legacy keyword precision

Fine-Tuned LLMs (GPT-4 / Transformers)

Context-aware answer extraction and semantic query expansion

Neo4j

Mapping the complex 'Knowledge Graph' of documents, topics, and human experts

Apache Kafka

Real-time streaming of user interaction signals for the learning engine

React & Chrome Extension API

Ubiquitous search bar accessible inside every browser tab

AWS (EKS & S3)

Scalable, HIPAA-compliant infrastructure to handle millions of document chunks

Design Decision

Ubiquitous 'Search Anywhere' Chrome Extension.

The biggest barrier to search is navigation friction. By putting the search bar inside every tab, employees started searching 3.4× more frequently. It became a reflex, not a destination.

Design Decision

The 'Source & Date' stamp for AI answers.

AI answers alone aren't trusted in business. We paired every answer with a clear link to the source doc and its 'last updated' date. This transparency eliminated the fear of acting on outdated policy.

Execution

Fourteen weeks to launch. We fine-tuned the AI on the company's specific jargon.

A search engine that doesn't know your product names or acronyms is useless. We built a 'terminology fine-tuning' phase into the first month to ensure the AI spoke the same language as the employees.

Delivery Timeline

Operational Log

1

Audit & Permission Mapping

Weeks 1–2

Analyzed 100k queries and mapped the complex access controls across 50+ sources to ensure zero data leakage across team boundaries.

2

Connector Build & Initial Index

Weeks 3–6

Connected all 50 sources. Processed 2.3 million documents into vector chunks. The IT team validated the security of the federated indexing layer.

3

Semantic Fine-Tuning

Weeks 7–9

Fine-tuned the transformer model on the company's internal jargon and 30,000 document pairs to ensure 'Scope Creep' correctly mapped to 'Change Control'.
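One hedged way such document pairs can be mined is from historical click logs: a result that was clicked and dwelt on is treated as a positive (query, document) pair for fine-tuning. The log schema and dwell threshold below are assumptions for illustration.

```python
# Hypothetical construction of (query, document) fine-tuning pairs
# from click logs: a result with 30+ seconds of dwell time is
# treated as a positive pair. Field names are assumptions.
def build_training_pairs(click_log, min_dwell_seconds=30):
    pairs = []
    for event in click_log:
        if event["dwell_seconds"] >= min_dwell_seconds:
            pairs.append((event["query"], event["doc_title"]))
    return pairs

log = [
    {"query": "scope creep", "doc_title": "Change Control Framework",
     "dwell_seconds": 95},
    {"query": "scope creep", "doc_title": "Old Meeting Notes",
     "dwell_seconds": 4},
]
# -> one positive pair mapping the everyday jargon to the canonical doc
```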

4

Soft Launch & Feedback Loop

Weeks 10–12

Launched to 50 power users. Used their interaction data to re-calibrate the ranking engine and 'Answer' confidence scores.

5

Full Rollout & Onboarding

Weeks 13–14

Company-wide rollout. Held 30-minute training sessions focusing on the Slack and Chrome integrations to drive immediate habitual use.

Team Topology

Deployed Roster

1 × Engagement Lead
2 × ML Engineers (Semantic Search & RAG)
2 × Backend Engineers (Connectors & Security)
2 × Frontend Developers (Chrome & Slack integrations)
1 × Product Designer

Collaboration

Working Rhythm

The 15-person KM team were our lead architects. They knew every 'hidden' folder and legacy quirk. By running weekly workshops, we used their domain expertise to tune the AI, turning their years of frustration into a clear technical roadmap.

Course Corrections

Diagnostic Log

Friction Point

The three SharePoint instances had incompatible permission models, risking accidental exposure of sensitive files to the wrong departments.

Resolution

We built a 'Unified Permission Resolver' that checked the live IDP at query-time. If a file's permissions were ambiguous, we defaulted to 'Deny', only unlocking search results as the IT team manually cleaned the legacy groups.
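The default-deny behavior can be sketched as a query-time gate. The live IDP lookup is stubbed out here, and the function and field names are hypothetical.

```python
# Sketch of a query-time permission gate with default-deny.
# `acl` stands in for the result of a live IDP lookup; all names
# here are hypothetical.
def can_view(user_groups, doc_acl):
    """doc_acl is a set of allowed groups, or None when the
    legacy permissions could not be resolved unambiguously."""
    if doc_acl is None:
        return False          # ambiguous -> deny, never guess
    return bool(user_groups & doc_acl)

def filter_results(results, user_groups):
    return [doc for doc in results if can_view(user_groups, doc["acl"])]

results = [
    {"title": "Q3 Pricing Guide", "acl": {"sales", "leadership"}},
    {"title": "Legacy HR Archive", "acl": None},  # unresolved legacy groups
]
visible = filter_results(results, {"sales"})
```

Treating `None` as a hard deny is the key design choice: unresolved legacy permissions hide a document until IT cleans the groups, rather than risking exposure.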

Friction Point

Conflicting data. Old policy docs were appearing alongside new ones, causing the AI to give 'hallucinated' answers that were technically outdated.

Resolution

We added a 'Freshness Signal' to the RAG layer. If the system detected two documents with conflicting info, it would refuse to give a direct answer and instead flag both for the KM team to consolidate.
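The conflict gate can be sketched as follows: when top-ranked sources disagree, the system withholds a direct answer and flags both documents instead. Document fields and the comparison logic are illustrative assumptions.

```python
# Sketch of the conflict gate: when ranked sources make different
# claims, return no direct answer and flag all of them for the KM
# team. Fields are illustrative assumptions.
def answer_or_flag(passages):
    """passages: list of {'doc': str, 'claim': str, 'updated': str},
    already ranked by relevance."""
    claims = {p["claim"] for p in passages}
    if len(claims) > 1:
        return {"answer": None,
                "flag_for_review": sorted(p["doc"] for p in passages)}
    newest = max(passages, key=lambda p: p["updated"])
    return {"answer": newest["claim"], "source": newest["doc"]}

hits = [
    {"doc": "Expense Policy v2", "claim": "Limit is $75/day",
     "updated": "2023-02-01"},
    {"doc": "Expense Policy v1", "claim": "Limit is $50/day",
     "updated": "2019-06-12"},
]
# Conflicting claims -> no answer; both docs flagged for consolidation.
```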

Friction Point

The Data Engineering team felt the search was too 'concept-heavy' and missed their specific technical function signatures or error codes.

Resolution

We built a secondary 'Technical Pathway' that identifies code-like queries and routes them through a specialized embedding model trained on the company's GitHub repositories.
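The routing step can be sketched as a lightweight heuristic classifier in front of the two embedding pathways. The regex patterns below are illustrative assumptions, not the production classifier.

```python
import re

# Heuristic router: code-like queries go to a technical embedding
# pathway, everything else to the general semantic one. Patterns
# are illustrative assumptions.
CODE_PATTERNS = [
    re.compile(r"\w+\(.*\)"),            # function signature, e.g. foo(bar)
    re.compile(r"\b[A-Z]{2,}[-_]\d+"),   # error codes like ERR-1042
    re.compile(r"[a-z]+_[a-z]+_[a-z]+"), # long snake_case identifiers
]

def route(query):
    if any(p.search(query) for p in CODE_PATTERNS):
        return "technical"
    return "semantic"
```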

Measured Impact

Sixty days later: Search became the company's most-used internal tool.

The metrics were huge, but the culture shift was better. The Slack feedback channel filled with employees finding documents they thought were lost forever. The 'we know it exists but can't find it' problem was solved.

Primary KPI / Verified Metric

Faster time-to-information

80%

average reduction in time spent on failed or manual searches

Search satisfaction score

85%

up from 35% with the prior keyword-based tool

Employee adoption rate

90%

active users within two months—zero 'mandated' training required

Qualitative Objectives Reached

  • The KM team identified 23 major 'Knowledge Gaps' in month one. They used this data to run their first ever 'evidence-based documentation sprint', closing gaps that employees had been struggling with for years.
  • The 'Expert Bottleneck' eased. Senior engineers reported significantly fewer 'Where is the file?' interruptions, allowing them to focus on high-value architecture work.
  • Onboarding surveys showed a massive jump in sentiment. New hires went from rating 'ease of finding info' at 2.8/5 to 4.3/5 within one month of using the new platform.

"I've been here six years. I know where things are because I was here when they were created. Last month, a new engineer who'd been here six weeks found a process doc from an old acquisition that I didn't even know existed. That is what good search is supposed to do—it should make years of context available to someone who only has weeks."

Principal Engineer, 6 years tenure

Technology Services Company Client

Key Learnings

Insights Gained

Valuable lessons and strategic insights uncovered through this project that inform our future work and architectural decisions.

01

Search logs are your most honest signal.

Analyzing what people search for tells you exactly what the organization is failing to teach them. We treat search logs as a strategic map for content creation, not just a list of technical queries.

02

Permissions are the hardest part of enterprise AI.

Building a search engine is easy. Building a search engine that respects 50 different legacy security models without leaking data is the real engineering challenge. We now treat permission mapping as its own mandatory workstream.

03

Search isn't a destination; it's a reflex.

If employees have to go to a specific URL to search, they won't do it. Integrating search into Chrome and Slack turned it from a 'task' into a 'reflex,' which was the single biggest driver of our 90% adoption rate.

Exploration

Capabilities & Archive

Running an organization where people spend more time looking for information than using it? The search queries your employees are making right now are the map to that problem. We can help you read it.

Let's Work Together

Your organization's wisdom exists. The question is whether anyone can find it.

We've built enterprise search for teams where the tools were incompatible and the knowledge was lost in the noise. The turnaround is faster than you expect because the content is usually already there—it just needs the intelligence to surface. Tell us about your tool sprawl, and we'll give you an honest view of what's recoverable.

"No generic AI vendor pitches. A real conversation about your knowledge problem."