
Operations Strategy: SaaS Support Analysis

Root cause analysis of a SaaS support operation revealing a systemic failure chain — from sales pitch to customer crisis


Tech Stack

Data Analysis · Root Cause Analysis · Weighted Scoring Models · Operations Strategy

The Problem

A hospitality SaaS platform was seeing high support volume and couldn’t figure out why. Hundreds of inbound tickets over a single month, a dozen support and billing agents, and a gut feeling that something was wrong — but no clear diagnosis.

The highest-volume category was performance and speed complaints. The instinct was to focus there.

The Diagnosis

Performance wasn’t the real problem. Overbooking was.

The company’s AI prediction algorithm — marketed as a competitive advantage — was aggressively reopening tables based on no-show likelihood. When original diners showed up, restaurants had more guests than seats during dinner service. Active emergencies, not frustrations.

Overbooking had fewer tickets than performance. But the damage per ticket was catastrophic: restaurants in crisis during their busiest hours, trust in the product destroyed.

Nearly all callers were repeat callers. The same customers, calling about the same unresolved problems, because the system wasn’t learning from its own failures.

The Systemic Chain

The overbooking problem wasn’t a bug — it was a compounding failure across every handoff:

  1. Sales pitches the AI feature as a competitive differentiator
  2. Onboarding skips configuration — never asks about walk-in traffic or no-show tolerance
  3. Algorithm predicts aggressively with default settings
  4. Restaurant gets overbooked during dinner service
  5. Support is overwhelmed — roughly one in three crisis calls goes unanswered
  6. Agents improvise without a playbook
  7. No feedback loop — nobody tags tickets by sales pitch to close the circle

Information was lost at every stage. That was the connecting thread.

The Agent Analysis

I built a weighted scoring model to evaluate every agent, prioritizing customer satisfaction and qualitative evidence (what they actually did in calls and emails) over internal process metrics like QA scores and handle time.
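A minimal sketch of that weighting idea, with illustrative metric names and weights (the original rubric isn't reproduced here): customer-facing signals dominate the score, and internal process metrics contribute only marginally.

```python
# Illustrative weighted scoring model: customer satisfaction and
# qualitative evidence outweigh internal process metrics.
# Metric names, ranges, and weights are placeholder assumptions.

WEIGHTS = {
    "csat": 0.40,          # customer satisfaction survey (1-5)
    "qualitative": 0.35,   # reviewer score for call/email handling (1-5)
    "qa_score": 0.15,      # internal QA compliance (0-100)
    "handle_time": 0.10,   # minutes per ticket (lower is better)
}

def normalize(value, lo, hi, invert=False):
    """Scale a raw metric to 0-1; invert for lower-is-better metrics."""
    scaled = (value - lo) / (hi - lo)
    return 1.0 - scaled if invert else scaled

def agent_score(m):
    return (
        WEIGHTS["csat"] * normalize(m["csat"], 1, 5)
        + WEIGHTS["qualitative"] * normalize(m["qualitative"], 1, 5)
        + WEIGHTS["qa_score"] * normalize(m["qa_score"], 0, 100)
        + WEIGHTS["handle_time"] * normalize(m["handle_time"], 0, 30, invert=True)
    )

agents = {
    "A": {"csat": 4.8, "qualitative": 5, "qa_score": 82, "handle_time": 12},
    "B": {"csat": 2.1, "qualitative": 2, "qa_score": 97, "handle_time": 8},
}
ranked = sorted(agents, key=lambda a: agent_score(agents[a]), reverse=True)
print(ranked)  # agent B is the "QA paradox": high QA score, low overall score
```

With weights like these, an agent who tops the internal QA rubric but leaves customers unhappy (agent B) still ranks below an agent who resolves crises well, which is exactly the pattern the aggregate metrics were hiding.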

The model surfaced patterns that aggregate metrics would have hidden:

  • Top performer: Turned a restaurant mid-crisis into a positive outcome in minutes. Diagnosed the algorithm issue, offered creative compensation, adjusted settings, and scheduled follow-up. That’s the playbook.
  • The QA paradox: The agent with the highest internal quality score had the lowest customer satisfaction. Technically correct, consistently leaving people unhappy. The rubric was measuring process compliance, not outcomes.
  • The call-email gap: One agent handled an overbooking call expertly — diagnosed the issue, offered credit, resolved it. Same agent, different channel: a customer emailed needing help, and the agent told them to call the main line, sending them straight back into the broken system.

The Recommendations

Three tiers, starting with what could be done this week:

Tier 1: This Week (zero engineering)

  • Standardized onboarding configuration: three questions before launch (walk-in traffic, no-show tolerance, aggressive vs. conservative fill)
  • Emergency response playbook modeled on the top performer’s approach: Acknowledge, Diagnose, Compensate creatively, Adjust settings, Follow up in writing

Tier 2: Month 1

  • Support-to-sales feedback loop: tag every overbooking ticket by which sales pitch the customer received
  • Quantify the correlation between pitch and support load
  • Staggered agent scheduling to reduce crisis call abandonment
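The tagging step above can be sketched in a few lines. The ticket fields and pitch labels here are hypothetical; the point is that once every overbooking ticket carries a sales-pitch tag, quantifying the pitch-to-support-load relationship is a one-line aggregation.

```python
from collections import Counter

# Hypothetical ticket records; field names and pitch labels are
# assumptions for illustration, not the company's actual taxonomy.
tickets = [
    {"category": "overbooking", "sales_pitch": "ai_fill_aggressive"},
    {"category": "overbooking", "sales_pitch": "ai_fill_aggressive"},
    {"category": "performance", "sales_pitch": "core_platform"},
    {"category": "overbooking", "sales_pitch": "core_platform"},
]

# Count overbooking tickets per pitch variant to surface the correlation.
overbooking_by_pitch = Counter(
    t["sales_pitch"] for t in tickets if t["category"] == "overbooking"
)
print(overbooking_by_pitch.most_common())
```

If the aggressive-AI pitch dominates the overbooking ticket counts, that's the number to bring back to sales.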

Tier 3: Month 2

  • Customer risk score: repeat contacts + abandoned calls + overbooking = churn prediction model
  • QA rubric redesign: add “Was the problem actually resolved?” and “Is this customer more or less likely to stay?”
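The Tier 3 risk score combines the three signals above into one churn-risk number. A minimal sketch, assuming placeholder weights and caps that would need to be fit against real churn data:

```python
# Hedged sketch of the customer churn-risk score:
# repeat contacts + abandoned calls + overbooking incidents.
# Weights and per-signal caps are placeholder assumptions.

def churn_risk(repeat_contacts, abandoned_calls, overbooking_incidents):
    score = (
        1.0 * min(repeat_contacts, 5)          # unresolved repeat contacts
        + 2.0 * min(abandoned_calls, 5)        # abandonment weighted higher
        + 3.0 * min(overbooking_incidents, 5)  # dinner-service crises weighted highest
    )
    return score / 30.0  # normalize to 0-1 given the caps above

# A restaurant with repeated unresolved crises scores near the top.
print(round(churn_risk(4, 3, 5), 2))
```

The weighting reflects the diagnosis: an overbooking crisis during dinner service does far more damage per incident than an ordinary repeat contact, so it should move the churn prediction further.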

What This Demonstrates

This is operations strategy work. Not code — diagnosis. Finding the actual root cause when the obvious answer (performance complaints are highest volume) isn’t the right one. Building a scoring model that reveals patterns invisible in aggregate data. Tracing a failure chain across five departments to find where information gets lost.

The recommendations start with what’s free and immediate. The expensive stuff comes last, after you’ve validated the diagnosis.

Need something similar for your team?

Let's Talk