Claude Projects and the Rise of Persistent AI Workspaces

6 December 2025

Claude · AI Tools · Productivity


One of the most significant shifts in how we work with AI tools isn't about model capability — it's about memory. For years, every AI conversation started from scratch. You'd open a chat, explain your context, upload your files, set your tone, and begin. Then close the window. Next time: repeat everything.

That pattern is changing. The major AI platforms have all introduced some version of a persistent workspace — a way to give the model standing context so work can continue across sessions without constant re-establishment. It's a seemingly small change that has a large impact on how useful AI actually is in practice.


The Problem It Solves

Working without persistent context is the AI equivalent of a colleague who forgets everything between meetings. You spend the first ten minutes of every session re-briefing them — who you are, what the project is, what was already decided, how you like to work. The actual work doesn't start until you've done all that.

For simple, one-off tasks this doesn't matter much. But for anything ongoing — a product being built, a client account being managed, a document being developed over time — the overhead compounds quickly. Every session carries a setup tax.

Persistent workspaces are the solution to that tax.


Claude Projects

Claude Projects, launched in mid-2024, is Anthropic's approach to this problem. A Project is a self-contained workspace with its own knowledge base, custom instructions, and conversation history. Files uploaded to a Project remain accessible across every conversation within it. Instructions set at the Project level shape every response without needing to be repeated.

In practical terms, this means you can create a Project for, say, a specific client, upload their brand guidelines and relevant documents, set instructions for tone and output style, and then work within that context indefinitely. New conversations within the Project start already briefed.
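
The mechanics above can be sketched in a few lines. This is a hypothetical model, not Anthropic's implementation: the `Project` class, its field names, and the client example are all invented for illustration. The point is that the standing context is built once and reused by every new conversation.

```python
from dataclasses import dataclass, field


@dataclass
class Project:
    """Hypothetical sketch of a persistent project workspace."""
    name: str
    instructions: str  # project-level custom instructions
    knowledge: dict[str, str] = field(default_factory=dict)  # filename -> content

    def upload(self, filename: str, content: str) -> None:
        """Add a file to the project's standing knowledge base."""
        self.knowledge[filename] = content

    def session_context(self) -> str:
        """Build the context every new conversation starts with."""
        parts = [f"Project: {self.name}", f"Instructions: {self.instructions}"]
        for fname, text in self.knowledge.items():
            parts.append(f"--- {fname} ---\n{text}")
        return "\n\n".join(parts)


# Set up once: client brief, tone, reference material.
client = Project(
    name="Acme Co",
    instructions="Use British English; match the brand voice in the guidelines.",
)
client.upload("brand-guidelines.md", "Tone: plain, confident, no jargon.")

# Every new chat is prepended with the same standing context,
# so no per-session re-briefing is needed.
print(client.session_context())
```

The "setup tax" from earlier is paid exactly once, at project creation; each subsequent session inherits the result.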

For paid plans, Projects also support RAG (Retrieval Augmented Generation): once the knowledge base grows beyond what fits in the context window, the most relevant material is retrieved automatically for each conversation — maintaining response quality as the volume of material grows. On Team and Enterprise plans, Projects can be shared across an organisation, with role-based access controls distinguishing between those who can view and those who can edit.
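
To make the retrieval step concrete, here is a deliberately minimal sketch of the idea behind RAG: split the knowledge base into chunks, score each chunk against the query, and pass only the best matches to the model. Real systems use embedding vectors rather than the naive word-overlap scoring shown here; the function names and the example documents are invented for illustration.

```python
def chunk(text: str, size: int = 200) -> list[str]:
    """Split a document into chunks of roughly `size` words."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]


def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by word overlap with the query; keep the top k.
    A stand-in for embedding-based similarity search."""
    q = set(query.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q & set(c.lower().split())),
        reverse=True,
    )
    return scored[:k]


docs = [
    "Refund policy: customers may return goods within 30 days.",
    "Brand voice: plain, confident, no jargon.",
    "Shipping: orders dispatch within two working days.",
]

# Only the most relevant material goes into the context window.
top = retrieve("what is the refund policy", docs, k=1)
print(top[0])
```

This is why a Project can hold far more material than the model's context window: each conversation only ever sees the slice of the knowledge base relevant to it.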

More recently, Anthropic has extended this further with Claude Memory — a persistent layer that carries knowledge, preferences, and project context across sessions using a file-based architecture. Rather than relying on complex vector databases, memory is stored in simple markdown files (CLAUDE.md), which are transparent, editable, and version-controllable. It's a deliberate design choice that keeps the user in control of what's remembered.
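
What "file-based memory" means in practice can be shown in a few lines. This is a sketch of the general pattern, not Anthropic's code; the helper functions and directory layout are assumptions. The appeal is that memory is just a markdown file: you can open it, edit it, and commit it to version control.

```python
import tempfile
from pathlib import Path


def read_memory(project_dir: Path) -> str:
    """Return the project's memory file contents, if any."""
    f = project_dir / "CLAUDE.md"
    return f.read_text() if f.exists() else ""


def remember(project_dir: Path, note: str) -> None:
    """Append a note to the memory file. Plain markdown, so it is
    human-readable, hand-editable, and diffs cleanly under git."""
    f = project_dir / "CLAUDE.md"
    with f.open("a") as fh:
        fh.write(f"- {note}\n")


# Simulate a project directory and accumulate memory across sessions.
project = Path(tempfile.mkdtemp())
remember(project, "Client prefers British English")
remember(project, "Quarterly report due each March")

# The memory file is prepended to each new session's context.
print(read_memory(project))
```

Contrast this with an opaque vector store: to audit or correct what the model "remembers", you simply edit the file.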

This project-based approach is directly connected to the context rot problem discussed elsewhere in this journal. By keeping each Project isolated, there's no bleed between contexts — a client's confidential material stays in their Project, and the noise from one workstream doesn't pollute another.


ChatGPT Projects

OpenAI introduced Projects to ChatGPT in late 2024, rolling it out progressively across its free and paid tiers through 2025. The concept is similar: a dedicated space where you can group related conversations, upload files, and set persistent instructions that apply to every chat within that Project.

Where ChatGPT's implementation differs is in its memory model. Rather than a file-based approach, ChatGPT uses project-level memory that draws on conversation history within the Project to surface relevant context. The model may recall previous discussions when answering new questions — though this isn't a visible, auditable list of stored facts in the same way Claude's approach is.

Shared Projects became available on Team and Enterprise plans in late 2025, with collaboration controls that allow multiple users to contribute chats, files, and edits. For organisations already embedded in OpenAI's ecosystem — using Custom GPTs or the broader ChatGPT business stack — Projects slot naturally into that environment.

One notable addition is the ability to connect Projects to external tools including Google Drive, Slack, SharePoint, and GitHub, turning the Project into a genuine organisational hub rather than just a chat container.


Microsoft 365 Copilot

Microsoft's approach takes a different shape — partly because Copilot lives inside an existing ecosystem rather than being a standalone product.

Copilot Pages is the closest equivalent to a persistent project workspace. It allows Copilot Chat responses to be turned into editable, shareable canvases — durable documents that persist, can be refined over time, and shared with colleagues for collaboration. Pages integrate with Microsoft Loop and are stored in OneDrive, making them part of the broader Microsoft 365 governance and compliance framework.

Copilot Notebooks (arriving in OneNote in late 2025) takes this further — allowing users to gather project materials including notes, files, images, and recordings into a single context that grounds Copilot's responses. It's designed for the kind of long-running project work where context accumulates organically over time.

For enterprise users, Copilot also connects deeply into the M365 graph — meaning it can draw on emails, calendar context, Teams conversations, SharePoint documents, and more as background context. This gives it a form of ambient awareness that standalone tools don't have: it already knows what you've been working on, who you've been speaking to, and what's in your documents.

The trade-off is that this depth of integration comes with complexity. Copilot lives inside Microsoft's licensing and governance model, and the experience varies significantly depending on which plan, which apps, and which features an organisation has enabled.


How They Compare

|                           | Claude Projects            | ChatGPT Projects                        | Microsoft Copilot                        |
|---------------------------|----------------------------|-----------------------------------------|------------------------------------------|
| Persistent knowledge base | Yes — file upload + RAG    | Yes — file upload + conversation memory | Yes — via Pages, Notebooks, M365 graph   |
| Custom instructions       | Yes — per-Project          | Yes — per-Project                       | Yes — via Copilot agents and prompts     |
| Cross-session memory      | Yes — CLAUDE.md file-based | Yes — project-level memory              | Yes — Work IQ memory layer               |
| Team sharing              | Yes — Team/Enterprise      | Yes — Team/Enterprise                   | Yes — deeply integrated                  |
| External integrations     | Growing connector support  | Google Drive, Slack, GitHub, Outlook    | Entire M365 ecosystem                    |
| Context isolation         | Project-level silos        | Project-level silos                     | Organisation-wide, permission-controlled |

What This Means in Practice

The emergence of persistent workspaces marks a maturation of AI tooling from commodity chat to structured work environment. The models themselves haven't changed — what's changed is the architecture around them, making it practical to use AI as a genuine ongoing collaborator rather than a one-off query tool.

For anyone building workflows around AI, this is the shift worth paying attention to. The question is no longer just "which model is best?" but "how do I structure my context so the model stays useful over time?"

The answer looks different depending on your setup. For individuals working in Claude, the combination of Projects and disciplined context management is a powerful foundation. For teams inside the Microsoft stack, Copilot's integration with M365 tools can leverage context that already exists. For teams that need a flexible, model-agnostic approach, ChatGPT Projects with external connectors covers a lot of ground.

The common thread across all three is the same insight: context is the work. The better you manage what the model knows, the better it performs. Persistent workspaces are the infrastructure that makes that practical at scale.


Posted by Envision8 · envision8.com