Solving LLM Hallucinations in Conversational, Customer-Facing Use Cases

Or: Why “Can we turn off generation” might be the smartest question in generative AI

Not long ago, I found myself in a meeting with technical leaders from a large enterprise. We were discussing Parlant as a solution for building fluent yet tightly controlled conversational agents. The conversation was going well—until someone asked a question that completely caught me off guard:

“Can we use Parlant while turning off the generation part?”

At first, I honestly thought it was a misunderstanding. A generative AI agent… without the generation? It sounded paradoxical.

But I paused. And the more I considered it, the more the question started to make sense.

The High Stakes of Customer-Facing AI

These teams weren’t playing around with demos. Their AI agents were destined for production—interfacing directly with millions of users per month. In that kind of environment, even a 0.01% error rate isn’t acceptable. One in ten thousand bad interactions is one too many when the outcome could be compliance failures, legal risk, or brand damage.

At this scale, “pretty good” isn’t good enough. And while LLMs have come a long way, their free-form generation still introduces uncertainty—hallucinations, unintended tone, and factual drift.

So no, the question wasn’t absurd. It was actually pivotal.

A Shift in Perspective

Later that night, I kept thinking about it. The question made more sense than I had initially realized, because these organizations weren’t lacking resources or expertise.

In fact, they had full-time Conversation Designers on staff: professionals trained in designing agentic behaviors, crafting interactions, and writing responses that align with brand voice and legal requirements, and that get customers to actually engage with the AI, which turns out to be no easy task in practice!

So they weren’t asking to turn off generation out of fear. They were asking because they wanted, and were equipped, to take control themselves.

That’s when it hit me: we’ve been misframing what “generative AI agents” actually are.

They’re not necessarily about open-ended token-by-token generation. They’re about being adaptive: responding to inputs in context, with intelligence. Whether those responses come directly, token-by-token, from an LLM, or from a curated response bank, doesn’t actually matter. What matters is whether they’re appropriate: compliant, contextual, clear, and useful.

The Hidden Key to the Hallucination Problem

Everyone is looking for a fix for hallucinations. Here’s a radical thought: we think it’s already here.

Conversation Designers.

With conversation designers on your team, as many enterprises already have, you’re not just mitigating output hallucinations; you’re actually primed to eliminate them completely.

They also bring clarity to the customer interaction. Intentionality. An engaging voice. And they create more effective interactions than foundation LLMs can, because LLMs on their own still don’t sound quite right in customer-facing scenarios.

So instead of trying to retrofit generative systems with band-aids, I realized: why not bake this into Parlant from the ground up? After all, Parlant is all about design authority and control. It’s about giving the right people the tools to shape how AI behaves in the world. This was a perfect match, especially for these enterprise use cases, which had so much to gain from adaptive conversations, if only they could trust them with real customers.

From Insight to Product: Utterance Matching

That was the breakthrough moment that led us to build Utterance Templates into Parlant.

Utterance Templates let designers provide fluid, context-aware templates for agent responses: responses that feel natural but are fully vetted, versioned, and governed. It’s a structured way to maintain LLM-like adaptability while keeping a grip on what’s actually said.
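For a sense of what “fully vetted, versioned, and governed” can look like in practice, here is a rough, hypothetical sketch of a single utterance entry. The field names (template, version, approved_by, tags) are illustrative assumptions rather than Parlant’s actual data model; only the Jinja2 template string reflects the format described below.

```python
# Hypothetical shape of one entry in a governed utterance store.
# Field names are illustrative assumptions, not Parlant's real schema;
# the template string itself is ordinary Jinja2, as described below.
utterance_entry = {
    "template": "Your order {{ order_id }} is scheduled to arrive on {{ eta }}.",
    "version": "1.3.0",                    # bumped on every approved wording change
    "approved_by": "conversation-design",  # the team that vetted this wording
    "tags": ["order-status"],              # helps scope when this response applies
}
```

The point is that every customer-facing sentence traces back to a reviewed, versioned artifact rather than free-form generation.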

Under the hood, utterance templates work in a three-stage process:

  1. The agent drafts a fluid message based on its current situational awareness (interaction, guidelines, tool results, etc.)
  2. Based on that draft message, the engine matches the closest utterance template found in your utterance store
  3. The engine renders the matched utterance template (which is in Jinja2 format), using tool-provided variable substitutions where applicable
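
To make these three stages concrete, here is a minimal sketch in Python. It is not Parlant’s implementation: the UTTERANCE_STORE list, the match_and_render helper, and the use of difflib’s SequenceMatcher as a stand-in for a proper semantic match are all assumptions for illustration. Only the final Jinja2 rendering step with tool-provided variables mirrors the process described above.

```python
# A minimal, hypothetical sketch of the three-stage flow described above.
# difflib stands in for whatever semantic matching a real engine would use.
from difflib import SequenceMatcher
from jinja2 import Template

# The curated, vetted utterance store (Jinja2 templates).
UTTERANCE_STORE = [
    "Your current balance is {{ balance }}.",
    "I'm sorry, I can't help with that request.",
    "Your order {{ order_id }} is scheduled to arrive on {{ eta }}.",
]

def match_and_render(draft_message: str, tool_variables: dict) -> str:
    """Stages 2 and 3: match the draft to the closest approved template, then render it."""
    best_template = max(
        UTTERANCE_STORE,
        key=lambda t: SequenceMatcher(None, draft_message.lower(), t.lower()).ratio(),
    )
    # Render the matched Jinja2 template with tool-provided variable substitutions.
    return Template(best_template).render(**tool_variables)

# Stage 1 happens upstream: the agent drafts a fluid message from its context.
draft = "Your order #A12 should get to you by Friday."
print(match_and_render(draft, {"order_id": "#A12", "eta": "Friday"}))
# Should print: Your order #A12 is scheduled to arrive on Friday.
```

The customer only ever sees the rendered, approved template, while the drafted message serves purely as a matching signal.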

We immediately knew this would work perfectly with Parlant’s hybrid model: one that gives software developers the tools to build reliable agents, while letting business and interaction experts define how those agents behave. And the team at that particular enterprise saw right away that it would work, too.

Conclusion: Empower the Right People

The future of conversational AI isn’t about removing people from the loop. It’s about empowering the right people to shape and continuously improve what AI says and how it says it.

With Parlant, the answer can be: the people who know your brand, your customers, and your responsibilities best.

And so the only thing that turned out to be absurd was my initial response. Turning off, or at least heavily controlling, generation in customer-facing interactions wasn’t absurd at all. It may well be exactly how it should be. At least in our view!


Disclaimer: The views and opinions expressed in this guest article are those of the author and do not necessarily reflect the official policy or position of Marktechpost.
