The week as a system, how to think about AI for your life

Dr Claude Delorme, Head of Research, moccet

A tool helps with a task. The interesting unit of human life is not the task. The interesting unit is the week, with its rhythms, dependencies, commitments, and slack.

The week is the right unit for thinking about AI for your life. The dominant metaphor for AI has been the tool, and a tool helps with a task, but the task is not the interesting unit of human life; the week is. The shift from thinking about AI as a tool to thinking about it as life infrastructure changes how users evaluate AI products, what they expect from them, and what they pay for. moccet is being built around the week as the unit of work.

This essay explains why the week is the right frame, what changes when a user makes the shift, and the diagnostic questions that distinguish a tool from infrastructure.

Why is the week the right unit for AI, not the task?

A tool is something you use to do something. The hammer drives the nail. The keyboard types the email. The chatbot writes the report. The relationship is transactional. You pick the tool up, you use it, you put it down. Whatever the tool helped you accomplish in the moment of use, it has done. Whatever it might have helped you accomplish if you had picked it up at a different moment, it has not.

A week is a working system with inputs, outputs, dependencies, and feedback loops. A week has rhythm. A week has commitments and slack. A week has good mornings and bad afternoons. A week has things that should happen and things that should not, and the difficulty of running a life is mostly the difficulty of telling them apart. The week is what most knowledge workers are actually trying to manage, and the week is what no current AI tool helps them manage in any complete sense.

A tool helps with a task. A tool does not see the week. The chatbot drafts the email but does not know whether the email should go today or wait. The calendar tool optimises the calendar but does not know that the meeting it is optimising is a meeting that should not have been booked at all. The note-taking app captures the note but does not know that the note will be relevant next Thursday. Each tool is good at its slice. None of them is responsible for the system.

The user is responsible for the system. The user looks across the tools, decides what should happen, sequences the work, holds the rhythm of the week, and remembers the things that matter. The cognitive work that has not gotten easier with AI is not the work each tool does. The cognitive work is the integration across the tools. The Boston Consulting Group study published in Harvard Business Review in March 2026 found that productivity peaks at three simultaneous AI tools and declines past that, because integration costs exceed per-tool benefits. The integration is the bottleneck. The integration is the user. A fuller account is in the essay on why AI tools have not made you more productive.

What changes when AI becomes life infrastructure rather than a tool?

The shift from AI as tool to AI as life infrastructure changes how the user experiences the technology, evaluates products, expects to interact with them, and pays for them. Each shift is a consequence of the unit changing from task to week.

A user who thinks of AI as a tool reaches for it when they have a task. They open ChatGPT, they prompt, they get an answer, they leave. The interaction is bounded by the task. Between tasks, the AI does nothing. The user remains the centre of the work, and the tool is an accessory.

A user who thinks of AI as life infrastructure does not reach for it. The AI is on, the way the lights are on. The AI is paying attention continuously. The user looks at it occasionally, when something has been surfaced for their attention. Most of what the AI does, the user never sees, because most of the work of running a life is routine and the system handles routine quietly. moccet is being built to operate this way, with continuous selective attention as the design centre rather than a feature.

The two experiences feel different because they are different categories of relationship between the person and the technology.

moccet — AI built for you

How should users evaluate a personal intelligence?

A tool is evaluated by the quality of its outputs. A chatbot is evaluated by how well it writes. An infrastructure is evaluated by what happens in the user's life when they have it versus when they don't. The right evaluation question for a personal intelligence is not how well does it draft an email, but how much of my week does it carry that I used to carry myself.

The question is harder to answer. The question also reveals what the product is actually for.

Four diagnostic questions sort the categories.

What does the system know about my week? A personal intelligence worth using has a structured model of the user, updated continuously from connected sources. The user can verify what the system knows by asking it. The answer should be a coherent picture, not a list of stored facts. The depth of this answer is the most diagnostic test for whether the product has been built around a real model or around a Rolodex with marketing copy.

What does the system do during my week? A personal intelligence worth using takes action, with confirmation. The actions should be visible in a log the user can read. The user should be able to see what the system did, when, and why. The log is the receipt for the relationship. Without it, trust cannot be built across the small daily actions that compound into deep delegation.

What does the system surface to me? A personal intelligence worth using is selective. The signals it brings to the user are the few that need the user. The rest, the system handles. The user experiences the system as quieter than the systems they currently use, not louder. If the user is being notified more after adopting the product than before, the product has failed the basic ambient discipline that distinguishes infrastructure from a louder version of the existing tools.

What does the system cost me? A personal intelligence worth using is a piece of infrastructure. The financial cost is a subscription, often higher than a chat product, because the engineering and the operational complexity are larger. The other cost is the trust the user is placing in the system. The privacy architecture, the audit logs, and the right to disconnect are also part of the price. The user should be willing to pay it. The product should be willing to earn it.

What should users expect from a personal intelligence?

A tool is expected to be smart. An infrastructure is expected to be trustworthy. The two are different.

A smart tool that occasionally fails is fine. The user notices the failure, corrects it, moves on. An infrastructure that occasionally fails is a serious problem. The failure is not a bad sentence. The failure is a meeting that did not move, an email that did not go, a Tuesday morning that was not protected. The cost of failure scales with the level of trust, and personal intelligence lives at a level of trust where the engineering must clear a different bar than chat does.

The bar shows up in three places. First, in reliability. The system has to be right most of the time, and graceful when it is not. Second, in the privacy architecture that makes deep access safe. Third, in the discipline of selectivity, which is the architectural answer to the question of what reaches the user.

A user who has chosen a personal intelligence is not buying a smarter chatbot. The user is buying a different kind of relationship with software, one that is closer to a long-term service than to a tool. The closest analogues are not other AI products. The closest analogues are the people users hire to handle parts of their life. An assistant. A chief of staff. A coach. A long-term advisor. The personal intelligence is in the same lineage, made available at a cost that approaches the marginal cost of compute, to anyone willing to delegate.

What does the next decade of AI for your life look like?

The week as a system is not a metaphor stretched thin. The week is a closer description of what is actually happening in a knowledge worker's life than the task model the AI industry has been building around for the past three years. The task model is appropriate for tools. The task model is inappropriate for infrastructure, and infrastructure is what the work of running a life requires.

The companies that understand this distinction are building personal intelligences. The companies that do not are building better chatbots. Both are useful. Only one of them changes the shape of the user's week. Whoever picks the right shape for their life will spend the next decade in a different relationship with software than the one most people have now, and the difference, at the end of the decade, will look in retrospect like the difference between living in a house with electricity and living in one without.

Try moccet

moccet is a personal intelligence built around a continuous model of one person’s life. Connect the apps you already use and let moccet pay attention to your week. Setup takes under five minutes.

Common questions.

Why is the week the right unit for AI, not the task?
A tool helps with a task. A week is a working system with rhythm, dependencies, commitments, and slack, and the difficulty of running a life is mostly the difficulty of telling what should happen apart from what should not. The cognitive work that has not gotten easier with AI is the integration across tools, and the integration happens at the level of the week, not the task.

What is the difference between an AI tool and life infrastructure?
A tool is something the user reaches for when they have a task. Between tasks, the tool does nothing. Life infrastructure is on the way the lights are on, paying attention continuously and handling most of the work of running a life quietly. Only the few things that genuinely require the user's attention reach them. The two are different categories of relationship between the person and the technology.

How should users evaluate a personal intelligence?
Four diagnostic questions sort the products. What does the system know about my week? What does the system do during my week? What does the system surface to me? What does the system cost me? A personal intelligence worth using has a structured model of the user, takes action with confirmation, surfaces only the few things that need attention, and earns the trust the user is paying with.

What happens when a personal intelligence fails?
A smart tool that occasionally fails is fine because the user notices and corrects. An infrastructure that fails has different consequences. The failure is a meeting that did not move, an email that did not go, a morning that was not protected. The cost of failure scales with the level of trust, and personal intelligence lives at a level of trust where the engineering must clear a different bar than chat does.

What are the closest analogues to a personal intelligence?
The closest analogues to a personal intelligence are not other AI products. They are the people users hire to handle parts of their life. An executive assistant, a chief of staff, a coach, a long-term advisor. A personal intelligence is in the same lineage, made available at a cost that approaches the marginal cost of compute, to anyone willing to delegate.