The difference between an AI assistant and an AI agent

Dr Claude Delorme, Head of Research, moccet

An AI assistant produces text in response to user requests. An AI agent takes actions on behalf of a user to achieve goals. The two categories have different architectures, different failure modes, and different requirements.

The distinction was formalised in December 2025, when Anthropic, OpenAI, and Block co-founded the Agentic AI Foundation under the Linux Foundation. moccet uses agents internally as workers, coordinated by an orchestrator that maintains a continuous model of the user.

This essay explains where the line sits, what crosses it, and why the distinction matters for anyone trying to evaluate the products that are about to flood the market.

What is an AI assistant?

An AI assistant is a chat-based system that responds to user prompts. The user types a request. The system reads the request along with whatever conversation history is available and produces a response. The response is text, occasionally accompanied by code, images, or the output of a tool call. The user reads the response, evaluates whether it is useful, and either uses it, refines it, or discards it. The interaction ends when the user closes the chat or moves to a new task.

ChatGPT, Claude, Gemini, and Microsoft Copilot in their default conversational modes are AI assistants. The chat is the product. The thinking happens inside the system. The doing happens in the user's hands.

An AI assistant is bounded by the conversation. Whatever the system produces stays in the chat until the user copies it out or acts on it. Continuity between sessions is supplied either by the user repeating themselves or by a memory feature that retrieves stored facts and inserts them into the next prompt. The architecture has been the dominant pattern in consumer AI since November 2022, and the products built around it are excellent at the work the architecture is suited for.

What is an AI agent?

An AI agent is a system that takes actions in the world, not only in a conversation. Anthropic's February 2026 research paper Measuring AI agent autonomy in practice gave a workably operational definition. An agent, the researchers wrote, is "an AI system equipped with tools that allow it to take actions, like running code, calling external APIs, and sending messages to other agents."

The tools are the boundary marker. A system without tools that affect the outside world is an assistant. A system with tools that do affect the outside world is an agent.

An agent operates outside the conversation. The agent calls APIs, reads files, writes to other systems, sends messages, books reservations, runs code. The thinking and the doing happen inside the same loop. The user is involved at the boundaries, typically at the start when the goal is set, and at certain checkpoints along the way.

Having tools is the categorical step. Whether an agent operates autonomously between human checkpoints, whether it pursues complex multi-step goals, and whether it composes with other agents are gradations along the agent axis once that boundary is crossed.
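The boundary can be made concrete in code. A minimal sketch, with a hypothetical model interface and tool names (nothing here is a real product's API): an assistant's whole loop is one call that returns text, while an agent's loop executes the model's proposed tool calls and feeds the results back in until the goal is done.

```python
# Minimal sketch of the assistant/agent boundary described above.
# The model interface and tool names are hypothetical placeholders.
from dataclasses import dataclass
from typing import Callable


@dataclass
class ToolCall:
    name: str
    args: dict


def assistant_turn(model: Callable[[str], str], prompt: str) -> str:
    # An assistant's entire loop: text in, text out. The user does the doing.
    return model(prompt)


def agent_loop(model, tools: dict[str, Callable], goal: str, max_steps: int = 10):
    # An agent's loop: the model proposes actions, the system executes them,
    # and the results feed back in until the model declares the goal done.
    transcript = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        step = model("\n".join(transcript))       # returns a ToolCall or "DONE: ..."
        if isinstance(step, str) and step.startswith("DONE"):
            return step
        result = tools[step.name](**step.args)    # the side effect: the boundary marker
        transcript.append(f"{step.name}({step.args}) -> {result}")
    return "STOPPED: step budget exhausted"
```

The tool call in the middle of the loop is the line that separates the two categories: delete it and the function collapses back into an assistant.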

moccet — AI built for you

Why does the distinction matter for engineering?

The architectural implications follow from the difference between producing text and producing effects, and the implications are larger than they first appear.

A system that produces text needs to be smart. The output is the product, so the output must be good. Engineering effort is concentrated in the underlying model and in the prompts that elicit good behaviour. If the text is wrong, the user reads it, recognises that it is wrong, and either corrects it or discards it. The system does not have to be reliable in the strong sense. The user supplies the reliability by serving as a check on the output before anything irreversible happens.

A system that produces effects needs to be smart and reliable. Reliability is a different property from intelligence, with a different engineering signature. A brilliant agent that succeeds 80 percent of the time and fails catastrophically 20 percent of the time is worse than a less brilliant agent that succeeds 95 percent of the time and fails gracefully when it does. Catastrophic failures in an agent are not bad text. Catastrophic failures are emails sent to the wrong people, meetings booked at the wrong times, payments dispatched to the wrong accounts.
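The arithmetic behind that comparison is worth making explicit. A minimal sketch with illustrative payoffs (a success is worth 1 unit, a catastrophic failure costs 20, a graceful failure costs nothing; the numbers are assumptions chosen for the example, not measurements):

```python
# Illustrative expected-value arithmetic for the reliability claim above.
# Payoff numbers (success = +1, catastrophic failure = -20, graceful
# failure = 0) are assumptions chosen for the sketch.

def expected_value(p_success: float, fail_cost: float,
                   success_value: float = 1.0) -> float:
    return p_success * success_value - (1 - p_success) * fail_cost

brilliant = expected_value(0.80, fail_cost=20.0)   # fails catastrophically
modest = expected_value(0.95, fail_cost=0.0)       # fails gracefully

assert brilliant < 0 < modest   # the smarter agent is the net-negative one
```

Under these payoffs the 80-percent agent is net negative and the 95-percent agent is net positive, which is why reliability engineering, not raw capability, dominates agent design.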

The discipline that produces reliability has a recognisable shape. Anthropic's internal data on Claude Code, published in February 2026, found that the agent paused for clarification more than twice as often as humans interrupted it on the most complex tasks. Stopping to ask, rather than forging ahead with confident-sounding wrong action, is what makes an agent usable in production. The same paper noted that most agent actions on Anthropic's public API were low-risk and reversible. Limiting the blast radius of any single action is the deployment pattern that works at scale.

This is the engineering that distinguishes a real agent from a chatbot with buttons. The buttons are not the work. The work is in the orchestration around the model that decides when to act, what to confirm, what to refuse, what to route to other workers, and what to surface to the user. The intelligence sits in the model. The trustworthiness sits in the system around the model. A fuller account of what it means for an AI to take action is its own essay.
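That orchestration layer can be sketched as a policy gate in front of tool execution. The risk tiers, action names, and thresholds below are illustrative assumptions, not a description of any shipping system:

```python
# Illustrative policy gate in front of tool execution. The risk tiers,
# action names, and confidence threshold are assumptions for the sketch.
from enum import Enum


class Risk(Enum):
    REVERSIBLE = 1      # e.g. drafting, reading, searching
    CONFIRM = 2         # e.g. sending a routine email
    IRREVERSIBLE = 3    # e.g. payments, deletions


RISK_TABLE = {
    "draft_reply": Risk.REVERSIBLE,
    "send_email": Risk.CONFIRM,
    "send_payment": Risk.IRREVERSIBLE,
}


def gate(action: str, confidence: float, confirmed: bool = False) -> str:
    """Decide whether to execute, pause for the user, or refuse an action."""
    risk = RISK_TABLE.get(action, Risk.IRREVERSIBLE)  # unknown tool: most cautious tier
    if confidence < 0.5:
        return "ask_user"            # stop and clarify rather than guess
    if risk is Risk.REVERSIBLE:
        return "execute"             # small blast radius: act freely
    if risk is Risk.CONFIRM:
        return "execute" if confirmed else "ask_user"
    return "refuse"                  # irreversible actions stay with the human
```

The gate encodes the two deployment patterns named above in a few lines: low confidence routes to a clarifying question, and blast radius, not intelligence, decides how much autonomy an action gets.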

Where does personal intelligence fit?

A useful third category complicates the picture. A bounded agent has a beginning, a middle, and an end. A research agent reads twenty papers and produces a summary. A coding agent opens a pull request. A travel agent books a trip. The agent's life span is the goal's life span. Once the goal is achieved, the agent is done.

A personal intelligence is not bounded in this sense. A personal intelligence runs continuously across the unbounded surface of a life. The system uses bounded agents internally, as workers, but the system as a whole does not terminate. The orchestrator inside a personal intelligence makes continuous decisions about what is worth the user's attention this hour as opposed to last hour. The shape of the work is closer to operating an institution than completing a task. moccet is being built around this continuous orchestrator pattern, with specialised workers for communication, scheduling, focus, health, work, and relationships sharing a continuously updated model of the user.
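The lifespan difference can be sketched directly. The worker interface and user-model fields below are hypothetical; the point is only that workers terminate with their goals while the orchestrator and the shared user model persist across ticks:

```python
# Sketch of bounded-agent vs personal-intelligence lifespans. The worker
# interface and user-model fields are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class UserModel:
    # Continuously updated shared state, not per-goal context.
    facts: dict = field(default_factory=dict)


class BoundedWorker:
    """Lives exactly as long as one goal, then is done."""
    def __init__(self, domain: str):
        self.domain = domain

    def run(self, goal: str, user: UserModel) -> str:
        user.facts[f"last_{self.domain}_goal"] = goal  # writes back to the shared model
        return f"{self.domain}: done '{goal}'"


class Orchestrator:
    """Does not terminate: each tick, decide what deserves attention now."""
    def __init__(self, domains):
        self.user = UserModel()
        self.workers = {d: BoundedWorker(d) for d in domains}

    def tick(self, events: dict[str, str]) -> list[str]:
        # Route each incoming event to the matching worker. Worker runs begin
        # and end; the orchestrator and the user model persist between ticks.
        return [self.workers[d].run(goal, self.user) for d, goal in events.items()]
```

Each worker's `run` has a beginning, a middle, and an end; the orchestrator's `tick` is called again every hour of every day, which is the structural difference between a bounded agent and a personal intelligence.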

The three-category structure of assistant, bounded agent, and personal intelligence maps to three distinct kinds of value for the user, and the choice among them depends on what the user actually has to get done.

A user whose work is solving discrete problems with a great mind on the other end of a chat needs an assistant. ChatGPT, Claude, and Gemini in their conversational modes are the best examples in computing's history of this kind of system. Every knowledge worker should have one.

A user with a specific bounded job they want done end-to-end needs a bounded agent. Coding agents like Claude Code, research agents like Perplexity Deep Research, and customer-service agents inside enterprise software are reaching production maturity in 2026. These systems handle their bounded jobs better than any human-only workflow could.

A user whose problem is running a life that has outgrown what a calendar and to-do list can hold needs a personal intelligence. An assistant cannot do the work because an assistant waits to be summoned. A bounded agent cannot do the work because the work has no ending.


How can a user tell which category a product fits?

The marketing convergence is going to make the choice harder over the next two years rather than easier. Every chat product will claim to be an agent. Every bounded agent will claim to be a personal intelligence. The words will lose their information value, and the user will be left having to evaluate the architecture rather than the brand. The good news is that architecture is observable. Three diagnostic questions sort the categories.

What can the system do? An assistant produces text. A bounded agent calls a finite list of tools to achieve a stated goal. A personal intelligence runs continuously and acts across multiple connected sources. A product page that does not name the answer is hiding it.

When does the system act without being asked? An assistant never does. A bounded agent acts within its goal but stops when the goal is complete. A personal intelligence acts continuously, with confirmation, across the user's life. The user should know which of these is true before handing over their data.

What does the system know about the user across sessions? An assistant knows what is in this chat. A bounded agent knows what is in this goal. A personal intelligence maintains a continuous, structured model of the user that updates from connected sources. The depth of cross-session memory is the clearest signal of which category a product actually fits, and a fuller account is in the essay on memory in AI.

The line between an assistant and an agent is the willingness to act in the world. The line between an agent and a personal intelligence is whether the system runs continuously across the user's life or terminates when a particular goal is met. The categories are stacked rather than parallel. Every personal intelligence contains agents. Every agent contains assistant-like components. They are not interchangeable. Choosing among them well is now a genuine consumer skill, and it is one the next generation of AI products will reward.

Try moccet

moccet is a personal intelligence built around a continuous model of one person’s life. Connect the apps you already use and let moccet pay attention to your week. Setup takes under five minutes.


Common questions

What is the difference between an AI assistant and an AI agent?

An AI assistant is a chat-based system that produces text in response to user requests. An AI agent is a system equipped with tools that allow it to take actions in the world, like running code, calling external APIs, and sending messages. The distinction was formalised in the founding charter of the Agentic AI Foundation in December 2025.

Is ChatGPT an assistant or an agent?

ChatGPT in its default conversational mode is an AI assistant. ChatGPT's recent additions, including Operator and Workspace Agents, give it limited agent capabilities, but the underlying architecture remains a chat. A system whose centre is the conversation is an assistant. A system whose centre is action with tools is an agent.

Why does an agent need more than intelligence?

A system that produces text needs to be smart. A system that produces effects in the world needs to be smart and reliable. Reliability is a different property from intelligence and requires sandboxing, confirmation steps, observability, and the discipline to refuse to act when uncertain. None of these is a feature of the underlying language model.

What is a personal intelligence?

A personal intelligence is a system that runs continuously across the unbounded surface of a person's life, maintaining a structured model of the user and acting on it with confirmation. A personal intelligence uses bounded agents internally as workers, coordinated by an orchestrator. The category is distinct from both assistants and bounded agents.

How can a user tell which category a product fits?

Three diagnostic questions sort the categories. What can the system do? When does the system act without being asked? What does the system know about the user across sessions? Assistants only produce text and only when prompted. Bounded agents call tools to achieve a stated goal. Personal intelligences run continuously and maintain a structured model of the user.