Case Study: Google

Rescuing Millions in Revenue from a "Black Box"

How I ideated, built, and drove the full product lifecycle of an AI pilot that navigated Google's most complex compliance and safety challenges.

$Xm+

Initial Revenue Unlocked

2024

'Risk Taker' Award Winner

78%

Automatic Reapproval Rate

99%

Accuracy on False Positives

The Problem

The Client: A global marketplace

The Business Threat: They were at immediate risk of being banned from Google Ads due to cascading Merchant Center warnings. This was a multi-million dollar problem.

The User Threat: The client and their good sellers were completely blind. They had no idea why ads were disapproved. Was it the image? The text? A rigid, legacy system was catching false positives, blocking legitimate products and revenue.

The Emotional Strain: Not everything that can be counted counts. Not everything that counts can be counted, and in this case, the emotional strain caused by unfair disapprovals was damaging the overall relationship between Google and the client, even without a clear measure of the revenue loss.

Real Examples of False Positives

Perfectly legitimate products were being blocked due to the system's inability to understand nuance. Here are a few examples of items that could be incorrectly flagged, causing confusion for sellers and lost revenue.


Firework Chandelier

Potential Flag: Title contains "shooter" (e.g., "crystal shooter chandelier") or mentions "explosive" in the description.


Ink Cartridge 952XL

Potential Flag: 'Cartridge' could be misinterpreted as related to ammunition or weapons.


Hanging Egg Chair

Potential Flag: The terms "swing" or "hanging" in the description could trigger multiple egregious-content policies.


"Tough Cookie"

Potential Flag: The book's description could be misread as referencing a controversial theme, or as relating to web-tracking ("cookie") policies.

The Approach & Strategy

Discovery: I led the discovery process to find the 'why.' This wasn't just a client complaining; it was a system-level failure. I diagnosed the root cause: a conflict between a rigid system and nuanced, real-world products.

Strategy: This couldn't be solved with a better rule. It required nuance. I proposed a Lean, MVP-driven strategy to prove an AI-powered solution could outperform the existing system.

Stakeholders: I was the bridge between the client, two internal Product teams (who loved it), Directors within gTech, and the Trust & Safety team (who were the ultimate gatekeepers).

The Solution: An "Explainable AI" MVP

An AI Tool for Compliance & Explainability

Google Sheet (Input) → Google Cloud AI API (Engineered Prompt) → AI Output (Title, Desc, Why, Confidence) → Google Sheet (Output) → Merchant Center
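The flow above can be sketched in Python. Everything here is illustrative: the real engineered prompt condensed Google's full ad policies, and the actual Cloud AI call and Sheets I/O are omitted. `POLICY_SUMMARY`, `build_prompt`, and `parse_response` are hypothetical names, not the pilot's actual code.

```python
from dataclasses import dataclass

# Hypothetical stand-in for the condensed policy text; the real prompt
# distilled Google's ad policies, which are not reproduced here.
POLICY_SUMMARY = (
    "Titles and descriptions must not reference weapons, explosives, or "
    "other restricted content. Ordinary household goods are allowed."
)

def build_prompt(title: str, description: str) -> str:
    """Assemble the engineered prompt sent to the Cloud AI model."""
    return (
        f"Policies:\n{POLICY_SUMMARY}\n\n"
        f"Product title: {title}\n"
        f"Product description: {description}\n\n"
        "Rewrite the title and description to be policy-compliant, "
        "explain WHY the original was likely flagged, and give a "
        "confidence score (0-100) that the product is legitimate.\n"
        "Answer in four lines: TITLE: / DESC: / WHY: / CONFIDENCE:"
    )

@dataclass
class AiResult:
    """The four structured fields written back to the output Sheet."""
    title: str
    description: str
    why: str
    confidence: int

def parse_response(text: str) -> AiResult:
    """Parse the model's four-line structured reply."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(":")
        fields[key.strip().upper()] = value.strip()
    return AiResult(
        title=fields["TITLE"],
        description=fields["DESC"],
        why=fields["WHY"],
        confidence=int(fields["CONFIDENCE"]),
    )
```

Keeping the model's reply in a fixed line format is what makes a plain Google Sheet viable as both input and output layer: each parsed field maps to one column.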

Core Features:

  • AI-Powered Rewrites: I engineered a prompt that condensed Google's complex ad policies to rewrite non-compliant titles and descriptions.
  • Explainability Nodes: The tool didn't just fix the problem; it explained it. It told the user why an ad was banned, turning the black box into a feedback loop.
  • Safety & Confidence Rating: To solve the 'bad seller' problem, the AI provided a confidence score to help distinguish genuine products from nefarious ones.
  • Rapid MVP: I built this in a Google Sheet to deliver value and test the hypothesis in days, not months.
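The confidence rating only pays off if it changes what happens next. A minimal sketch of how such a score could gate each rewrite; the thresholds (85, 50) and bucket names are assumptions for illustration, not the pilot's actual values.

```python
def route_rewrite(confidence: int, auto_threshold: int = 85,
                  review_threshold: int = 50) -> str:
    """Decide what to do with an AI rewrite based on its confidence score.

    High confidence -> resubmit to Merchant Center automatically.
    Mid confidence  -> queue for manual QA (the pilot's single-merchant,
                       English-only scope made this feasible).
    Low confidence  -> hold: likely a genuinely non-compliant product.
    """
    if confidence >= auto_threshold:
        return "auto_resubmit"
    if confidence >= review_threshold:
        return "manual_review"
    return "hold"
```

The low-confidence "hold" bucket is what separates rescuing false positives from helping bad actors slip through.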

Built-in Guardrails:

  • Focused Scope for Quality Assurance: The pilot was limited to a single merchant and one language (English) to allow for accurate manual QA.
  • Risk Mitigation: The tool did not test on products disapproved for reasons considered highly offensive or harmful, mitigating the chance of a dangerous ad going live.

The Results & Impact

Success Metrics

  • 78% automatic reapproval rate in Merchant Center for products with amended titles or descriptions.
  • 99% of the products the tool got reapproved were confirmed false positives: legitimate products that had originally been unreasonably disapproved.
  • <1% of products with amended titles and descriptions were approved against Google's advertising guidelines, demonstrating high safety and compliance.

The Win & The Lesson

The Win (The Pilot): The pilot was an unqualified technical and business success. We tested it in one market and proved it could solve the problem, unlocking the $Xm+ revenue stream and delighting the client.

The "Failure" (The PM Lesson): The project was ultimately shelved by Trust & Safety until they had done their own research into the false-positive rate and how it compared with their own solutions. This is where I won the 2024 'Risk Taker' Award.

The Reframe: My tool worked. It exposed a deep, internal policy conflict: Commercial Enablement vs. Platform Safety. T&S feared that bad actors could use the tool to learn how to bypass the system. While the product teams saw a revenue-saver, T&S saw a potential weapon.

My PM Stance: My takeaway wasn't technical; it was strategic. For AI to succeed, Trust & Safety cannot be a final gate; they must be a founding partner. The next step I was pushing for was to work with them on quantifying the overall impact.

Future Roadmap

  • Rebuild the approval system with an AI-first approach for titles and descriptions in Merchant Center.
  • Build a 'Seller Reputation Score' using the 'Confidence Rating' to proactively flag bad actors while fast-tracking good ones.