The word "chatbot" has become a catch-all that covers everything from a FAQ widget to a fully autonomous business agent. Here's how to tell them apart.
The terminology problem in AI is real. "Chatbot" gets applied to everything from a decision-tree FAQ widget on a checkout page to a system that autonomously qualifies leads, books meetings, updates a CRM, and drafts personalised follow-up emails. These are not the same thing. Understanding the difference matters when you're deciding what to build.
A traditional chatbot is a rules-based or retrieval-based system. It responds to inputs based on a defined set of patterns, intents, or pre-written answers. The more sophisticated versions use intent classification — they can recognise that "I want to return something" and "can I send this back?" mean the same thing — but they're still working from a fixed knowledge base and a fixed set of possible responses.
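To make the pattern concrete, here is a minimal sketch of a rules-based chatbot with keyword intent matching. The intents, patterns, and canned answers are illustrative, not drawn from any real product:

```python
import re

# Each intent maps a set of patterns to one pre-written answer.
# Both phrasings of a return request resolve to the same intent.
INTENTS = {
    "returns": {
        "patterns": [r"\breturn\b", r"\bsend (this|it) back\b", r"\brefund\b"],
        "answer": "You can return items within 30 days. Start at /returns.",
    },
    "shipping": {
        "patterns": [r"\bshipping\b", r"\bdeliver", r"\barrive\b"],
        "answer": "Standard shipping takes 3-5 business days.",
    },
}

FALLBACK = "Sorry, I didn't understand. Try asking about returns or shipping."

def reply(message: str) -> str:
    """Return the canned answer for the first matching intent."""
    text = message.lower()
    for intent in INTENTS.values():
        if any(re.search(p, text) for p in intent["patterns"]):
            return intent["answer"]
    # Brittle by design: anything outside the patterns falls through.
    return FALLBACK
```

Here `reply("can I send this back?")` and `reply("I want to return something")` both hit the returns intent, while `reply("my parcel is lost")` falls through to the fallback, which is exactly the brittleness described above.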
Chatbots are good at a narrow range of tasks: answering FAQs, guiding users through structured flows, capturing contact details. They're predictable, relatively cheap to build, and easy to audit. They're also brittle — they fail when a user's input doesn't match an expected pattern, and they have no ability to take action in the world.
An AI agent — in the way we use the term — is a system that can perceive context, reason about it, and take actions. The key word is actions. An agent doesn't just answer questions. It can call APIs, update databases, send emails, query external systems, and make decisions based on the results.
An agent handling a customer support query might: look up the customer's order history, check the current status with a courier API, determine whether a refund is within policy, issue the refund via the payment platform, update the CRM, and send a confirmation — all within a single conversation, without human involvement.
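That flow can be sketched as a sequence of tool calls. Everything below is a stub with hypothetical names — a real agent would call the actual order, courier, payment, CRM, and email APIs — but the shape of the work is the point: the agent reads, decides against policy, then writes to external systems:

```python
from dataclasses import dataclass

@dataclass
class Order:
    id: str
    customer_email: str
    amount: float
    days_since_delivery: int

def lookup_order(order_id: str) -> Order:
    # Stub for an order-history lookup.
    return Order(order_id, "customer@example.com", 49.99, 10)

def courier_status(order_id: str) -> str:
    # Stub for a courier tracking API.
    return "delivered"

def within_refund_policy(order: Order, status: str) -> bool:
    # Illustrative policy: delivered, and within a 30-day window.
    return status == "delivered" and order.days_since_delivery <= 30

def issue_refund(order: Order) -> str:
    # Stub for a payment-platform refund call.
    return f"refund-{order.id}"

def update_crm(order: Order, refund_ref: str) -> None:
    pass  # stub: record the refund outcome on the CRM record

def send_confirmation(email: str, refund_ref: str) -> None:
    pass  # stub: send the confirmation email

def handle_refund_request(order_id: str) -> str:
    """Read, decide, then *write* to external systems."""
    order = lookup_order(order_id)
    status = courier_status(order.id)
    if not within_refund_policy(order, status):
        return "escalate_to_human"  # outside policy: hand off, don't act
    ref = issue_refund(order)
    update_crm(order, ref)
    send_confirmation(order.customer_email, ref)
    return ref
```

Note the escalation branch: an agent that can act also needs an explicit path for cases it shouldn't act on.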
That's a fundamentally different class of capability. It's not answering questions. It's doing work.
There's a spectrum between these two poles. A lot of "AI customer support" products sit somewhere in the middle — they use large language models for natural conversation but are constrained to a narrow action set. That's fine for many use cases. Not every customer interaction needs a fully autonomous agent.
The question to ask when evaluating what you need is: how much of this work currently requires a human to take action in external systems? If the answer is "all of it" — the conversation is just information, and a human does everything else — you need an agent. If the answer is "none of it" — the user just needs information — a well-configured chatbot might be sufficient.
The market is full of "AI agent" products that are, in practice, sophisticated chatbots with a nice UI. Before committing to any solution, ask two questions: what systems can it write to, not just read from? And what happens when it encounters a case it hasn't seen before?
The answers tell you whether you're looking at a retrieval system dressed up in agent language, or something that can actually take work off your team's plate.