Founder Story: The Tenant's Voice
0-to-1: Building & Scaling a RAG System for Social Good
The Context & Problem
Like many renters, I’d had negative experiences with landlords, but lacked the confidence to challenge unfair situations. That changed when a landlord tried to deduct £2,625 from my girlfriend's tenancy deposit. This time, we decided to contest it.
I turned to Google's NotebookLM, uploading official UK tenancy law documents to guide our response. Armed with a clear understanding of "fair wear and tear" and "betterment," we drafted a formal rebuttal. The result was a success: the Tenancy Deposit Scheme (TDS) returned £2,350 of the £2,625 claimed. This victory proved that with the right information, tenants can effectively defend their rights.
The Flaw in General-Purpose AI
I wanted to share what I'd learned, but quickly hit two roadblocks. First, people were hesitant to trust a general-purpose tool like ChatGPT for specific legal matters. Second, I found that even NotebookLM could pull information from incorrect sources. For something this important, **accuracy and trust are non-negotiable**.
This is why I built The Tenant's Voice. It was created to be a dedicated, reliable platform for tenants.
£2,350
Successfully Recovered
The personal victory that sparked the idea and validated the user need.
The Approach & Solution
The goal was never just to provide answers, but to differentiate from ChatGPT by empowering users to take action easily. The product had to help tenants "think less and do more."
Guiding Principles
Trust & Accuracy
Only use verified, official sources.
Encourage Action
Make the next steps clear and simple.
Mobile First
Design for users in real-world situations.
The Hands-On Build: A Production-Grade RAG
I made the conscious, senior-level product trade-off of sacrificing some speed of output for guaranteed accuracy. I built a production-grade RAG (Retrieval-Augmented Generation) system from the ground up, using a modern, scalable tech stack. The core of the system is a vector database built using only legitimate sources (gov.uk, Shelter, Citizens Advice, TDS).
To ensure accuracy, content was chunked using a recursive character text splitter, and I ingested the "last modified" date for each document. This critical step prevents the model from sharing outdated advice (for example, citing a law from 1985 that was superseded in 2015), ensuring every piece of guidance is grounded in the most current, reliable facts.
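The ingestion step described above can be sketched as follows. This is a minimal illustration, not the production code: the names (`splitRecursive`, `toChunks`, `Chunk`) are hypothetical, and a real splitter would also handle overlap between chunks.

```typescript
// Sketch: recursive character splitting with "last modified" metadata
// attached to every chunk, so retrieval can surface (or filter) stale law.

interface Chunk {
  text: string;
  source: string;       // e.g. "gov.uk"
  lastModified: string; // ISO date ingested alongside the document
}

// Try the coarsest separator first; fall back to finer ones until
// every piece fits within maxLen characters.
function splitRecursive(text: string, maxLen: number, seps = ["\n\n", "\n", " "]): string[] {
  if (text.length <= maxLen) return [text];
  const [sep, ...rest] = seps;
  if (sep === undefined) {
    // No separators left: hard-split by length.
    const out: string[] = [];
    for (let i = 0; i < text.length; i += maxLen) out.push(text.slice(i, i + maxLen));
    return out;
  }
  const parts = text.split(sep).filter((p) => p.length > 0);
  const chunks: string[] = [];
  let current = "";
  for (const part of parts) {
    const candidate = current ? current + sep + part : part;
    if (candidate.length <= maxLen) {
      current = candidate;
    } else {
      if (current) chunks.push(current);
      // A single part may still be too long: recurse with finer separators.
      chunks.push(...(part.length > maxLen ? splitRecursive(part, maxLen, rest) : [part]));
      current = "";
    }
  }
  if (current) chunks.push(current);
  return chunks;
}

function toChunks(doc: { text: string; source: string; lastModified: string }, maxLen: number): Chunk[] {
  return splitRecursive(doc.text, maxLen).map((text) => ({
    text,
    source: doc.source,
    lastModified: doc.lastModified,
  }));
}
```

Because every chunk carries its own `lastModified` date, the answer layer can prefer the most recently updated guidance when two sources conflict.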
RAG Architecture
Official sources (gov.uk, Shelter) → Embedding (Vectorization) → Vector Database (Vector Storage) → LLM (Synthesis & Response)
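The query-time half of this architecture can be sketched as below. The real system uses Gemini embeddings and a managed vector store; here the vectors are plain arrays and the function names (`retrieve`, `buildPrompt`) are illustrative, so only the ranking and grounding logic is shown.

```typescript
// Sketch: rank stored chunks by cosine similarity to the query embedding,
// then inject the top matches into the chat model's prompt so answers are
// grounded in official material rather than the model's memory.

interface StoredChunk {
  text: string;
  source: string;
  embedding: number[];
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function retrieve(queryEmbedding: number[], store: StoredChunk[], k: number): StoredChunk[] {
  return [...store]
    .sort((x, y) => cosine(queryEmbedding, y.embedding) - cosine(queryEmbedding, x.embedding))
    .slice(0, k);
}

function buildPrompt(question: string, retrieved: StoredChunk[]): string {
  const context = retrieved.map((c) => `[${c.source}] ${c.text}`).join("\n");
  return `Answer using ONLY the sources below.\n${context}\n\nQuestion: ${question}`;
}
```

Labelling each retrieved chunk with its source in the prompt is what lets the model cite gov.uk or Shelter directly in its answers.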
Technical Deep Dive & Optimizations
As the product manager, I tracked, diagnosed, and solved three core issues that moved our product from unstable to user-centric. This log breaks down how I identified each problem, analyzed the user impact, and implemented the solution.
Part 1: Solving the 36k-Byte Crash (Stability)
The Problem: Crashing on Long Conversations
I identified that the core function was crashing with a 400 Bad Request error. I realized that my most engaged users—those with long, detailed conversations—were the most likely to experience a total app failure, breaking trust and halting their journey.
What Was Wrong
I dove into the logs and saw that the text-embedding-004 model enforces a 36,000-byte input limit. My code was sending the entire chat history for vectorization, which was inefficient for short chats and fatal for long ones.
The Fix: A Dual-History Approach
I identified two different needs: RAG only needs recent context to find relevant documents, while the AI needs full context to understand the user's journey. I implemented a solution by creating two history variables:
- recentHistoryText: A small, truncated history sent to the embedding model for efficient document retrieval.
- fullHistoryText: The complete history sent to the final chat model (gemini-2.5-flash) to maintain conversational context.
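The dual-history split can be sketched as below, assuming messages are stored as an array of strings. The variable names mirror the ones described above; the helper `recentHistory` is illustrative, and a real version would keep headroom in the budget for the new question itself.

```typescript
// Sketch: keep the full history for the chat model, but walk backwards
// from the newest message to build a byte-budgeted slice for embedding.

const EMBEDDING_BYTE_LIMIT = 36_000; // text-embedding-004 input ceiling

const byteLength = (s: string): number => new TextEncoder().encode(s).length;

// Keep as much recent context as fits inside the byte budget,
// newest messages first.
function recentHistory(messages: string[], budget: number): string {
  const kept: string[] = [];
  let used = 0;
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = byteLength(messages[i]) + 1; // +1 for the joining newline
    if (used + cost > budget) break;
    kept.unshift(messages[i]);
    used += cost;
  }
  return kept.join("\n");
}

const messages = ["msg 1", "msg 2", "msg 3"];
// Sent to text-embedding-004 for document retrieval:
const recentHistoryText = recentHistory(messages, EMBEDDING_BYTE_LIMIT);
// Sent to gemini-2.5-flash for the actual reply:
const fullHistoryText = messages.join("\n");
```

Measuring in bytes (via `TextEncoder`) rather than characters matters here, because non-ASCII characters encode to multiple UTF-8 bytes and would otherwise slip past the limit.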
Part 2: Fixing Accidental Submissions (Usability)
The Problem: User Frustration from Quickfire Questions
I noticed that users who were asked for details (e.g., "how long has the mould been present?") would try to type a multi-line answer. When they hit Enter for a new line, the UI submitted their partial, incomplete thought, confusing the AI and forcing it to ask the same questions again.
What Was Wrong
The UI was fighting the user's intent. A single-line <input> box that submitted on Enter was preventing users from providing the detailed, multi-line answers the AI needed.
The Fix: Aligning the UI with User Intent
My solution was to re-align the UI to match the user's natural workflow. I implemented this by:
- Replacing the single-line <input> with a multi-line, auto-resizing <textarea>.
- Changing the submit event from "Enter" to "Ctrl+Enter" (or "Cmd+Enter").
- Updating the placeholder text to teach this new, more deliberate interaction.
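The submit logic can be sketched as a pure function, so the intent is testable without a browser. The names (`shouldSubmit`, `KeyPress`) are illustrative, not the production code: plain Enter now inserts a new line, and only Ctrl+Enter (or Cmd+Enter on macOS) submits.

```typescript
// Sketch: the decision of whether a keypress should submit the form.

interface KeyPress {
  key: string;
  ctrlKey: boolean;
  metaKey: boolean; // Cmd key on macOS
}

function shouldSubmit(e: KeyPress): boolean {
  return e.key === "Enter" && (e.ctrlKey || e.metaKey);
}

// In the page, this would be wired to the textarea roughly like:
//   textarea.addEventListener("keydown", (e) => {
//     if (shouldSubmit(e)) { e.preventDefault(); form.requestSubmit(); }
//   });
//   textarea.placeholder = "Type your answer (Ctrl+Enter to send)";
```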
Part 3: Key Learning: Balancing Technical Limits & User Experience
The Challenge: Solving for Both System and User
This project highlighted a core product management tension: how do you balance non-negotiable technical constraints with intuitive, user-centric design? Two critical issues brought this into sharp focus.
What Was Wrong: A Two-Front Problem
The initial build faced failures on two fronts:
- System Failure: A technical limitation (the 36k-byte context window for the embedding model) was causing the app to crash for our most engaged users, directly punishing them for using the product correctly.
- User Failure: A usability flaw (an input field that submitted on "Enter") fought against natural user behavior, leading to incomplete submissions, frustrating loops, and poor data quality for the AI.
The Solution: A Holistic Approach
Fixing these required a holistic view. For stability, I engineered a dual-history system to manage context efficiently without sacrificing conversational quality. For usability, I redesigned the input field to align with user expectations, enabling detailed, multi-line responses.
Key Learning: True User-Centricity is Full-Stack
This experience underscored that a successful product must solve for both technical stability and user intuition. Optimizing one at the expense of the other leads to failure. A truly user-centric product manager must understand the entire stack—from API limitations to user psychology—and design solutions that respect both.
The Results & Impact
The tool quickly found its audience. By sharing the exact advice I pulled from the tool, I became a 'star contributor' on several large tenant and landlord Facebook groups. This grassroots adoption is a clear sign of Product-Market Fit.
50+ Daily Active Users
Achieved steady daily usage through organic, community-led growth with zero marketing spend.
Community Recognition
Became a trusted voice in the target community, validating the tool's accuracy and usefulness.
Future Roadmap
The current tool is just the beginning. I have a clear, three-pronged vision for the future, including a high-value multimodal feature and potential monetization paths.
- The Contract Analyzer (Multimodal): Allow users to upload tenancy agreements. The tool would use AI to highlight sketchy or unenforceable clauses, providing a critical service that would justify a monetization model to cover processing costs.
- Solicitor Referral Network: Use the tool as a referral point to recommend solicitors for complex cases, creating a revenue stream via referral fees.
- Charity & Council Integration: Partner with councils, support groups, and charities to integrate the tool into their websites, improving the user experience and helping tenants plan their next steps more effectively.