About This Data

How to read the dashboard and where the data falls short

What This Project Does

Call transcripts from Avoma (or manual export) are processed through Claude to extract structured insights — pain points, feature requests, objections, competitive mentions, interest levels, and company segmentation. The results are displayed in this dashboard for filtering and exploration.
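To make the pipeline concrete, the structured record for one call might look roughly like the sketch below. Every field name and value here is an illustrative assumption, not the actual schema.

```python
# Hypothetical sketch of one extracted call record.
# Field names and values are illustrative, not the real schema.
call_record = {
    "company": {"size": "startup", "stage": "seed", "industry": "fintech"},
    "interest_level": 4,             # inferred by Claude, 1-5 scale
    "funnel_stage": "evaluation",
    "themes": ["traffic-visibility"],
    "pain_points": [
        {"text": "No visibility into outbound traffic", "confidence": "high"},
    ],
    "feature_requests": [],
    "objections": [
        {"text": "Concerned about deployment overhead", "confidence": "medium"},
    ],
    "competitive_mentions": [
        {"vendor": "SomeVendor", "sentiment": "neutral"},
    ],
}

def is_hot(record):
    """Example dashboard filter: high-interest calls in evaluation."""
    return record["interest_level"] >= 4 and record["funnel_stage"] == "evaluation"
```

A dashboard filter like `is_hot` is the kind of query the rest of this page asks you to treat with caution: it operates on inferred fields, not ground truth.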

How to Interpret the Data

Interest Level (1–5)

An inferred measure of how interested the prospect seemed during the call. 1 = no interest, 5 = very high interest. This is Claude's best guess based on tone, questions asked, and stated intent — not a self-reported score.

Themes

Recurring topics extracted from individual calls, then grouped across calls during analysis. A theme appearing on multiple calls suggests a pattern, but the grouping is fuzzy — different calls may express similar ideas in different terms.

Pain Points, Feature Requests, Objections, Value Signals

Each insight is extracted with a confidence level (high/medium/low) and optionally categorized. Categories are predefined (e.g., "traffic-visibility", "ai-governance", "security-trust") to enable aggregation, but an individual insight can be miscategorized, and one conversation's signal can be split across several categories.
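One way to use the confidence levels when aggregating is to down-weight shakier extractions rather than count every insight equally. The sketch below assumes hypothetical insight items and arbitrary illustrative weights; neither reflects the dashboard's actual aggregation logic.

```python
# Hypothetical insight items, shaped as described above: each carries
# a confidence level and a predefined category.
insights = [
    {"type": "pain_point", "category": "traffic-visibility", "confidence": "high"},
    {"type": "pain_point", "category": "ai-governance", "confidence": "low"},
    {"type": "objection", "category": "security-trust", "confidence": "medium"},
]

# Arbitrary illustrative weights -- not the dashboard's real values.
WEIGHTS = {"high": 1.0, "medium": 0.5, "low": 0.25}

def weighted_count(items, category):
    """Sum confidence weights for all insights in one category."""
    return sum(WEIGHTS[i["confidence"]] for i in items if i["category"] == category)
```

With this scheme, a low-confidence extraction contributes a quarter of a high-confidence one, which softens (but does not eliminate) the miscategorization risk noted above.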

Competitive Mentions

References to other products or vendors, captured with sentiment (positive/neutral/negative toward the competitor). Only captures what was explicitly discussed — silence about a competitor doesn't mean they aren't in the picture.

Company Segments

Companies are classified by size (startup/SMB/mid-market/enterprise), stage (seed through public), and industry. These are inferred from conversation context and may be wrong, especially when the prospect didn't state these details directly.
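Because these labels are inferred, keeping them to a fixed vocabulary at least makes them comparable across calls. The sketch below assumes hypothetical vocabulary lists (the exact stage names between "seed" and "public" are guesses) and a validation helper that is not part of the actual pipeline.

```python
# Hypothetical controlled vocabularies for segment classification.
# The intermediate stage names are assumptions, not the real list.
SIZES = ["startup", "smb", "mid-market", "enterprise"]
STAGES = ["seed", "series-a", "series-b", "growth", "public"]

def validate_segment(size, stage):
    """Reject labels outside the controlled vocabulary, so inferred
    (possibly wrong) values at least stay consistent across calls."""
    return size in SIZES and stage in STAGES
```

Validation catches vocabulary drift, not misclassification: a call can pass this check and still carry the wrong label.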

Funnel Stage

Where the prospect sits in our sales process (awareness, evaluation, trial, negotiation, etc.). Assigned based on conversation content and call type. A single call is a snapshot — the actual relationship may be more nuanced.

Important Limitations

Read the dashboard as directional signal, not ground truth. Several layers of distortion sit between reality and what you see here.

Conversations are approximations

These results represent our best attempt to understand the potential customer based on a conversation. What someone says in a call may not reflect their actual position:

  • We may have explained our product poorly. If the prospect didn't understand what Qpoint does, their reactions tell us more about our pitch than about their needs.
  • Stated interest may not equal real interest. People are polite. Someone saying "this is interesting" or "let's schedule a follow-up" doesn't necessarily mean they'll buy.
  • Different mental models lead to misperception. Participants may hold fundamentally different cognitive maps of the problem space, leading them (or us) to misjudge value, risk, or relevance.

AI extraction adds an interpretation layer

Claude reads transcripts and produces structured output. This introduces its own potential for misclassification, missed nuance, or over-inference. The extraction is only as good as the transcript quality and the prompt guiding it.

Groupings are directional, not definitive

Theme and segment groupings are our attempt at organizing signal. They aren't exhaustive or fully accurate. Some insights don't fit neatly into a category. Some categories overlap. These will evolve as we refine the extraction and learn more.

Small sample size

At seed stage, we have a small number of calls. Patterns may shift significantly as more data comes in. A single call can skew an entire theme or segment. Don't over-index on any one data point.

Selection bias

The people we talk to are not a random sample. They're filtered by who agreed to a call, how they found us, who referred them, and which outreach channels worked. The problems and sentiments represented here are the problems and sentiments of people who talk to us, which may differ from the broader market.