
Before You Blame AI, Look at What It's Running On

Most AI sales investments underdeliver for the same reason: incomplete data. Here's how to spot the real problem before you sign anything.

Written by Spence Lee
Published on March 25, 2026
Key Takeaways
  • The AI isn't the problem. The data it's running on probably is, and nobody says that out loud until after the contract is signed.
  • The demo is not showing you your data. It's showing you what your data could look like if reps logged everything. They don't.
  • Before you evaluate vendors, audit yourself. The answers to three internal questions will tell you more than any demo will.
  • There is a version of this story that plays out the same way at dozens of companies every year, and everyone involved is acting completely reasonably the whole time.

    The forecast call happens every week. The pipeline looks fine on paper. Deals keep slipping anyway, and when the CRO asks why, the answers take too long and feel shaky when they arrive. Someone decides it's time to look at AI. This makes sense. The category exists precisely for this moment. Vendors promise visibility, forecasting accuracy, deal intelligence. The demos look sharp.

    Contracts get signed. Tools get deployed. Six months later, the forecast is still unreliable, and now there's a line item on the budget for technology nobody fully trusts.

    The thing that's almost never said out loud during the evaluation: the AI was never the problem. The data underneath it was. And by the time anyone figures that out, they've already moved on to blaming the implementation, or the vendor's customer success team, or the reps who didn't adopt it fast enough.

    If you're building the internal case for an AI investment right now, this is what's worth understanding before you sit through another presentation.

    Everyone Agrees AI Matters. Nobody Agrees on What That Means.

    Ask ten vendors to explain the role of AI in sales and you'll get ten versions of the same answer: visibility, intelligence, automation, forecast accuracy. These aren't false claims. They're just claims that describe a category rather than a product, which is a useful distinction to hold onto when you're evaluating one.

    How useful AI actually is in a sales environment depends almost entirely on the quality of the data underneath it. AI finds patterns in what already exists. If what already exists is a CRM where reps logged half their activity sometime on Friday because there's a standing reminder in their calendar, AI will find patterns in that. The outputs will look precise. They just won't reflect what's actually happening in your pipeline.

    This is worth sitting with for a moment, because it runs counter to how most people think about technology. The conventional framing is: better software produces better outcomes. But the input problem breaks that logic completely. You can have genuinely sophisticated AI reasoning on genuinely incomplete information and get outputs that are wrong in ways that are hard to detect, because the system has no way to flag what it doesn't know.

    Before you evaluate any vendor, ask yourself: what percentage of rep activity in your CRM was automatically captured versus manually entered? If you don't know, that's already your answer.
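
    If you want to put a rough number on that before any vendor conversation, the audit can be a few lines of Python over an activity export. Everything below is a sketch: the file, the column name, and the source values are hypothetical stand-ins for whatever your CRM actually records.

    import csv
    from collections import Counter

    # Hypothetical export: one row per logged activity (task, event, email),
    # with a column recording how the record was created. Rename the column
    # and the source values to match what your CRM actually stores.
    AUTO_SOURCES = {"email_sync", "calendar_sync", "call_capture", "api"}

    counts = Counter()
    with open("activity_export.csv", newline="") as f:
        for row in csv.DictReader(f):
            source = (row.get("created_source") or "").strip().lower()
            counts["auto" if source in AUTO_SOURCES else "manual"] += 1

    total = sum(counts.values()) or 1
    print(f"auto-captured: {counts['auto'] / total:.0%}")
    print(f"manually logged: {counts['manual'] / total:.0%}")

    If no field records how an activity was created, that gap is itself the answer: nothing downstream can distinguish captured reality from Friday-afternoon reconstruction.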

    The Demo Is Doing a Specific Job, and It's Not Showing You Your Data

    Most AI evaluations get shaped by the demo, which is understandable, because demos are designed to be persuasive and they're usually good at it. A polished interface, a deal risk score, a forecast that explains its own logic. It looks like exactly what your team has been missing.

    Almost every demo runs on vendor-curated data. Clean, complete, perfectly matched to accounts and opportunities. It's a controlled environment that happens to look indistinguishable from a real one, and that distinction matters a lot when your actual CRM data has spent years being partially maintained by people who had other things to do.

    A few things that routinely get skipped in the evaluation process:

    • Where does the data come from? Is activity captured automatically from email, calendar, and calls, or does the system depend on reps logging what they did? These are categorically different things.
    • How does the system handle ambiguous data? A meeting with someone at a prospect company could be pre-sales, post-sales, or entirely unrelated to the opportunity in question. Ask specifically how the system decides, because the answer reveals a lot about the architecture. (A sketch of why this is hard follows this list.)
    • What does the AI specifically do? Summarizing a call transcript and surfacing a deal risk signal grounded in behavioral data are genuinely different capabilities. Both get marketed under the same word.
    • Where does it live in the workflow? Technology that asks reps to open a separate tab from the tools they already live in tends to get opened once, evaluated positively, and then quietly ignored.
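
    On the ambiguity point specifically, it helps to see why the problem is architectural rather than cosmetic. The sketch below is a deliberately naive matcher, a toy illustration and not any vendor's actual logic: it ties a meeting to an opportunity by attendee email domain, and it falls apart the moment an account has more than one open deal.

    from datetime import date, timedelta

    # Toy matcher: associate a meeting with an open opportunity by the
    # attendees' email domain. Fine with one open deal per account;
    # guesswork as soon as there are two.
    def match_meeting(attendee_domain, opportunities):
        candidates = [o for o in opportunities
                      if o["domain"] == attendee_domain and o["status"] == "open"]
        if not candidates:
            return None  # post-sales? support? personal? the system can't tell
        if len(candidates) == 1:
            return candidates[0]["name"]
        # Tie-break on recent activity. This is a guess, and that's the point:
        # ask vendors what their system does at exactly this branch.
        return max(candidates, key=lambda o: o["last_activity"])["name"]

    opps = [
        {"name": "Acme - Expansion", "domain": "acme.com", "status": "open",
         "last_activity": date.today() - timedelta(days=3)},
        {"name": "Acme - Renewal", "domain": "acme.com", "status": "open",
         "last_activity": date.today() - timedelta(days=40)},
    ]
    print(match_meeting("acme.com", opps))  # "Acme - Expansion", by guesswork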

    A useful test: remove the vendor's name and re-read their pitch. If a competitor could have written the exact same sentences, you're looking at a category claim. Those aren't worthless, but they're not what you're paying for.

    Two Architectures, One Category Name

    When AI sales enablement is working the way it's supposed to, it doesn't feel like a tool you added. It feels like your CRM started reflecting reality.

    Getting there requires understanding that the AI sales enablement category contains two meaningfully different architectures that share a marketing vocabulary, which makes them easy to conflate in a demo and genuinely consequential to confuse in practice.

    AI on top of existing CRM data

    The system takes what reps entered into Salesforce and applies intelligence to it. Smarter deal scoring. Forecast modeling. Natural language queries. The AI is real, and the interface is usually good. The inputs are whatever reps had time to log.

    Capture first, intelligence second

    The system automatically captures activity from email, calendar, and calls without any rep action required. That data gets matched to the right accounts, contacts, and deals. The intelligence layer then works from a complete picture of what actually happened.
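
    In the abstract, that pipeline has a recognizable shape. The sketch below is illustrative only, not any product's actual API; the point is the order of operations: capture happens before anyone is asked to do anything.

    from dataclasses import dataclass

    @dataclass
    class CapturedEvent:
        kind: str                # "email" | "meeting" | "call"
        participants: list[str]  # addresses pulled from the source system

    # Step 1: events arrive from email/calendar/call integrations, no rep action.
    # Step 2: each event is associated with an account by participant domain
    # (real systems use more signals; this is the minimal version).
    def match_account(event, accounts_by_domain):
        for address in event.participants:
            domain = address.split("@")[-1]
            if domain in accounts_by_domain:
                return accounts_by_domain[domain]
        return None  # unmatched events still exist; they're just visible

    # Step 3: the intelligence layer summarizes everything that happened,
    # not just what a rep remembered to log.
    def engagement_summary(events):
        return {"emails": sum(e.kind == "email" for e in events),
                "meetings": sum(e.kind == "meeting" for e in events)}

    # The week from the vendor question below: 20 emails, four meetings,
    # nothing logged by hand. A capture-first system still sees all of it.
    week = ([CapturedEvent("email", ["cto@acme.com"])] * 20
            + [CapturedEvent("meeting", ["cto@acme.com"])] * 4)
    print(engagement_summary(week))  # {'emails': 20, 'meetings': 4}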

    The second approach is harder to build, which is why not everyone has built it. The practical difference shows up not in the demo but in the deals you catch in week two versus the ones you find out about in week eight, after the window to do anything about them has mostly closed.

    One question cuts through most of the positioning: ask any vendor, "If a rep sends 20 emails and has four meetings in a week and logs nothing in Salesforce, what does your system capture?" That answer is the architecture.

    What Coaching Actually Looks Like When the Data Is Real

    There's a version of the AI sales training conversation that gets skipped in most vendor pitches, probably because it requires admitting something unflattering about how most sales organizations currently operate.

    Most coaching conversations about deals are built on rep narratives. A manager asks what happened. The rep explains. The coaching that follows is only as accurate as that explanation, which is to say: it's pretty good when the rep's account is accurate and not very useful when it isn't. The rep usually believes their own version, which complicates things further.

    When complete activity data exists, the manager already knows which stakeholders were engaged, when engagement dropped off, and what a comparable deal that closed looked like at the same stage. The conversation moves past reconstructing what happened into deciding what to do next. That's a different kind of meeting.

    The same logic scales. The patterns that separate deals that close from deals that slip are in the data: who got engaged and when, whether the economic buyer was ever actually in the room, whether a concrete next step existed after every meeting. AI can surface those patterns, but only when the underlying data is complete enough to surface them from. A model trained on partial information will find patterns in the partial information, and those patterns will not generalize to your actual pipeline.

    The organizations doing the best AI sales training right now are not necessarily running more sessions or buying better content. They've built environments where the evidence exists, so there's something real to coach from.

    Five Questions Worth Answering Before the Demo

    The most useful thing you can do before evaluating vendors is get clear on these internally. They're not gotcha questions for the demo. They're diagnostic questions about your own environment, and the answers will shape every conversation that follows.

    • What percentage of rep activity is captured automatically vs. logged manually? When that number tips past 50% manual, no AI layer produces a reliable forecast. This is a hard constraint, not a preference.
    • When a deal slips, how do you find out? If the answer is the pipeline review or a rep update, the signal is consistently arriving too late to change the outcome. That's decision latency, and it's the actual problem you're trying to solve. (A toy calculation follows this list.)
    • How much time does RevOps spend reconciling data each week? Quantify it. That number is what the foundation problem costs in labor before you get to any missed revenue.
    • What do you already have that overlaps with what vendors are pitching? Call recording is free in most conferencing platforms. A vendor whose pitch leans heavily on this is worth pressing, because you shouldn't be paying for something you already have.
    • What would success look like in 90 days? If you can't name a metric that would move in that window, the ROI case is speculative. That's fine to know going in, less fine to discover after the contract is signed.
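
    Decision latency in particular is worth measuring rather than estimating. A toy calculation, with invented deals and dates: for each deal that slipped last quarter, how many days passed between the last captured engagement and the moment the slip showed up in the CRM?

    from datetime import date

    # Invented examples. For each slipped deal, measure the gap between the
    # last real engagement and the day the slip was recorded.
    slipped = [
        {"deal": "Acme - Expansion", "last_touch": date(2026, 1, 12),
         "slip_logged": date(2026, 2, 20)},
        {"deal": "Globex - New Logo", "last_touch": date(2026, 2, 2),
         "slip_logged": date(2026, 2, 24)},
    ]
    for d in slipped:
        days = (d["slip_logged"] - d["last_touch"]).days
        print(f'{d["deal"]}: surfaced {days} days after engagement stopped')

    If that number routinely exceeds the length of a pipeline review cycle, the review isn't catching risk. It's documenting it.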

    Going deeper on vendor evaluation? The AI Revenue Intelligence Buyer's Guide covers the nine questions that actually separate vendors, how to structure a proof of concept, and the red flags most teams miss before signing. Worth reading before you get to the shortlist stage. Download the Buyer's Guide ->

    Why the Foundation Question Is Actually a Strategy Question

    There's a version of the future of B2B sales that gets talked about a lot right now, usually in terms of AI models getting smarter and more capable. That's probably true. It's also, for most practical purposes, the less interesting variable.

    The more interesting variable is what the AI is running on. A smarter model applied to partial data produces more sophisticated noise. The same model applied to complete, automatically captured behavioral data produces something different: signals that actually reflect what's happening in your pipeline, because they're drawn from what actually happened in your pipeline.

    The gap between organizations that can answer basic pipeline questions in seconds and those that spend hours assembling the same answers keeps widening. The teams on the right side of that gap aren't there because they have better AI. They built a foundation clean enough for AI to work on, and then they let that compound.

    The compounding is worth taking seriously. Clean activity data produces accurate signals. Accurate signals produce better decisions. Better decisions produce outcomes that inform what the system learns next. Six months in, the AI running on a complete foundation is meaningfully better than it was at go-live. The AI running on a partial CRM has gotten better at producing confident noise.

    Every quarter spent building the intelligence layer on top of an incomplete foundation is a quarter that improvement is accumulating somewhere else. That's not a technology consideration. It's a competitive one.

    Building the Internal Case

    If you're the person who identified this problem, you're probably not the person who signs the contract. That's a common position to be in, and it has a specific set of challenges: you can see the issue clearly, but you have to translate it for people who are looking at a lot of things at once.

    Three things that tend to move that conversation:

    • Name the cost of staying put. Doing nothing has a price that doesn't always show up in a budget line but is absolutely real: forecast misses, late-stage losses, hours of manual reporting, coaching conversations that rely on guesswork. Find the dollar figures before walking into the room.
    • Reframe how the problem gets described. It almost always gets called a rep behavior problem (reps don't log) or a systems problem (Salesforce isn't configured right). Both diagnoses lead to solutions that don't fix it. Framing it as a data infrastructure problem is more accurate, and it also points toward the actual category of fix.
    • Propose a scoped proof of concept. One team. One use case. Thirty to sixty days. Success criteria in writing before it starts. A scoped POC is a much easier yes than a platform commitment, and it generates the kind of evidence that makes the broader conversation go faster.

    What You're Actually Evaluating

    Here's the thing about AI in sales that doesn't come up enough: the technology is usually fine. The vendors are often telling the truth about what their product can do. The demos work because they're built to work.

    What they can't show you in a demo is what the product does when it's running on your data, in your environment, with your reps' logging habits and your CRM's particular history of partial maintenance. That's the actual evaluation. And the only way to run it is to understand your own data situation before you start asking vendors about theirs.

    If something feels off in your revenue team right now, the instinct is probably right. The question is where the break actually is. Most teams look at execution and miss what's underneath it. The rep logging behavior is a symptom. The system that depends on rep logging to produce a complete picture is the problem.

    Start there. Once that's solid, the AI becomes a lot more interesting.

    See what complete activity data looks like ->