YouTube Deep Summary

All My AI Apps Are Connected to One MIND — With Open Memory

AI LABS • 2025-05-15 • 10:36 minutes • YouTube

🤖 AI-Generated Summary:

Unlocking Seamless Collaboration: How Open Memory Bridges AI Tool Context Gaps

If you've ever juggled multiple AI clients like Claude Desktop and Cursor for brainstorming or project planning, you've probably run into a frustrating roadblock: lack of shared context. Imagine working on a project, making changes in one AI tool, then switching to another only to find it has no idea about the updates you made. This disconnect happens because most AI clients operate in isolation without a shared memory. But what if all your AI tools could talk to the same memory, syncing knowledge seamlessly across platforms?

Enter Open Memory, an exciting new tool building on the success of Mem0, designed to create a unified memory space accessible by all your MCP (Model Context Protocol) clients. In this post, we'll dive into what Open Memory is, how to set it up, and why it can transform your workflow by connecting your AI assistants like never before.


The Problem: No Shared Memory Across AI Tools

Many users enjoy brainstorming inside Claude Desktop because it writes clearly and provides solid plans. However, when you try to switch between different AI tools for the same project, none of them share memory or context. For example, if you create a project plan in Claude and then ask Cursor for help based on that plan, Cursor won't know what changes you made earlier. This lack of shared awareness limits productivity and causes repetitive work.


What Is Open Memory?

Open Memory is a memory layer tool for AI agents that acts like a shared memory chip for all your MCP clients. Instead of isolated memories per tool, Open Memory creates one continuous memory space accessible by all connected clients. It currently supports local usage via Docker containers and is designed for cloud deployment, meaning you can optionally store your memories in the cloud without installing anything extra.

This idea originated from Mem0, a popular AI memory layer that impressed many users by dramatically enhancing agent capabilities. Open Memory builds on that foundation with easier setup and broader client support.


How to Set Up Open Memory Locally

  1. Clone the Mem Repository
    Open Memory resides inside the Mem0 (mem) repository on GitHub, so you need to clone the entire repo:
    git clone <mem_zero_repository_link>
    Then navigate into the mem/open_memory folder to access Open Memory.

  2. Prepare Your Environment
    Ensure Docker is installed and running on your system; Open Memory uses Docker containers to manage its dependencies. Then run the following commands inside the open_memory directory:

    • make build: builds and downloads the required Docker containers (run once).
    • make up: starts the containers (run every time you want to use the server).

  3. Configure API Keys
    In the api folder, rename the example environment file (.env.example) to .env and paste your OpenAI API key into it to enable language model interactions.

  4. Start the MCP Server and UI
    After running make up, the server runs locally and the Open Memory UI is served at localhost:3000. Open that address in your browser to access the dashboard. (A consolidated command sketch follows this list.)
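For reference, here is a minimal end-to-end sketch of the steps above as shell commands. The repository link stays a placeholder, and the folder and file names (mem/open_memory, api/.env.example) simply follow the wording of this guide; double-check them against the repo's README, since the layout may differ between releases.

```bash
# 1. Clone the repo that contains Open Memory (placeholder URL from the steps above)
git clone <mem_zero_repository_link>
cd mem/open_memory   # folder name per this guide; confirm against the README

# 2. Build and download the Docker containers (one-time step; Docker must be running)
make build

# 3. Copy the example environment file and add your OpenAI API key
cp api/.env.example api/.env
echo "OPENAI_API_KEY=sk-..." >> api/.env   # or edit the file by hand

# 4. Start the containers, then open http://localhost:3000 for the UI
make up
```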


Connecting MCP Clients Like Claude and Cursor

Open Memory supports multiple MCP clients such as Claude Desktop and Cursor. You can register the Open Memory MCP server with these tools either manually or by using the pre-built commands shown in the UI, which configure the connection automatically.

Once connected, both clients link to the same MCP server running locally, enabling them to read from and write to the shared memory.
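If you prefer the manual route for Cursor rather than the pre-built command, one common pattern is to add an entry to the project's .cursor/mcp.json pointing at the local server. This is only a sketch under the assumption that Cursor reads MCP servers from that file; the server name and URL here are placeholders, and the real endpoint (including any user ID it embeds) should be copied from the Open Memory dashboard.

```bash
# Hypothetical manual configuration for Cursor: register the local Open Memory
# MCP server in the project's .cursor/mcp.json. Replace the URL with the one
# shown in the Open Memory dashboard for your client and user.
mkdir -p .cursor
cat > .cursor/mcp.json <<'EOF'
{
  "mcpServers": {
    "openmemory": {
      "url": "http://localhost:<port>/<path-from-dashboard>"
    }
  }
}
EOF
```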


Key Features of Open Memory

  • Personalized Interactions: Save user preferences in memory for tailored AI responses.
  • Full Memory Control: Define retention policies, pause memories, and edit stored data.
  • Multi-client Support: Connect multiple AI clients to the same memory server.
  • Cloud and Local Options: Use Open Memory locally with Docker or opt for cloud storage (sign-up required).

Real-world Use Case: Building a Time Tracking App

Here's how Open Memory shines in practice:

  • Using Claude Desktop, a project plan for a time tracking app was brainstormed and refined.
  • The plan was added to Open Memory, where OpenAI’s API automatically broke down the full plan into smaller, manageable tasks.
  • Cursor was then prompted to build the app by pulling details from the shared memory β€” including tech stack choices like Next.js, React, and TypeScript.
  • As progress was made, Cursor saved updates back into Open Memory, chunking notes and code snippets for easy retrieval.
  • When a new chat session started and context size became an issue, Cursor successfully retrieved relevant memories from Open Memory to continue development smoothly.
  • UI bugs were fixed with memory's help, although attempts to store complex structured data (like directory trees) highlighted current limitations β€” Open Memory works best with plain text data.

The shared memory approach improved continuity and reduced redundant explanations or context-setting.


Current Limitations and Future Improvements

One notable challenge is memory segregation across projects. When building multiple projects with similar names or overlapping tech stacks, Open Memory currently doesn't clearly separate memories. This can cause memories from different projects to merge, leading to confusion and errors.

For example, when switching between a to-do list app and a time tracker, both projects' tech stacks were retrieved together, confusing the AI clients. Adding explicit project-based memory separation or namespaces would greatly enhance usability for multi-project workflows.


Final Thoughts

Open Memory is a powerful step forward for anyone using multiple AI clients in tandem. It creates a unified memory space that helps agents work together by sharing knowledge effortlessly. While it's still early days and some features like multi-project separation need refinement, the foundation is solid and promising.

If you frequently switch between AI tools like Claude and Cursor, Open Memory could be a game changer for your productivity and project continuity.


Join the Community and Stay Updated

If you want to explore Open Memory yourself, check out the GitHub repository (linked in this post) and follow the setup instructions. The tool continues to evolve, and your feedback could help shape its future.

Also, if you found this overview helpful, consider subscribing to channels and communities sharing AI development tutorials and updates. There's always more to learn and exciting tools to discover!


Ready to unify your AI clients with shared memory? Give Open Memory a try and unlock seamless collaboration across your favorite tools today.


πŸ“ Transcript (310 entries):

Many of you brainstorm inside Claude Desktop because it is pretty good. It writes clearly and gives you a solid plan. What I usually do is take that plan and go back and forth between different tools. But then this problem comes up. They do not have any shared context. For example, if you are working on a single project and make a change in one place, then go to another tool and ask something that depends on that change. It just does not know. There is no awareness of what happened elsewhere. No shared memory. But what if I told you that all these clients, especially MCP clients, could have one shared memory block? That is what I am going to show you today. You've probably heard about Mem0, which was a memory layer for AI agents, and it turned out to be really impressive. It was featured on many channels, and a lot of people praised how powerful it made their agents. Now, they've released a pretty cool tool called Open Memory. It basically gives you a single memory, which you can think of as a memory chip that works across all your MCP clients. It connects all your memory clients together into one continuous memory space. Right now you can use it locally, and it's also designed for cloud use, which means you will not need to install anything. Your memories will be stored on the cloud if you choose, although both options are fully supported. Okay. So this is the Open Memory GitHub folder, and you can see that Open Memory is actually inside mem because mem is the main repository. In order to get Open Memory, we're going to have to clone the entire mem repository. What you're going to do is go back to the Mem0 repository, get the link, copy it, then open your terminal and type git clone followed by the GitHub repository link. Once that's done, you'll go inside that repository, and inside the mem folder, you're going to find the open memory folder. You'll then navigate into that, and all further commands will happen from there. If you scroll down, you'll see that to quick start, you need to run these commands, which are basically makefiles that set up the dependencies. You'll need to run the UI and the MCP server. First things first, Docker needs to be up and running on your system because it downloads and sets up Docker containers along with the dependencies. To do this, just run the make build command, which installs those containers. After that, run make up, which starts the containers. Keep in mind that you only need to run make build once to build the containers. Later, when you want to use it again, just run make up. Also, whenever you want to use the MCP server, Docker must be up and running on your system. Until you get access to the cloud, you'll need to keep Docker running to use it locally. Now, in the other tab, you can see that the MCP server is currently running. If I go back, you can see that the Open Memory MCP server is now up and running. Let me show you. It's currently running on localhost:3000, which is what we want. That's the address where the UI runs. So, if you want to track the UI, navigate to that address. One more thing I forgot to mention. You need to open this directory in Cursor. Once you open it, it'll look something like this. And the file structure will appear like this. Inside the file structure, go to the API folder. In there, you'll find a .env.example file. You need to paste your OpenAI API key into this file. Copy it, rename it to .env by removing the word example from the file name, and then paste your actual API key into it.
Once that's done, you'll be able to use the make up command. They've listed this step as a prerequisite because it's required for LLM interactions, which is why they ask for the OpenAI API key. Okay, you can see that now that the app is open again, we need to install the MCP for different tools. We have the MCP link, which you have to manually configure in the settings. The MCP configuration lets you accept it manually, or what I really like is the set of pre-built commands they provide. For example, if I write a command, you'll see the one they've given. When you run it, it automatically adds the MCP to the Claude client for you. The same applies to Cursor. I'll just set it for Cline or whatever you want to use, and it handles it for you. Let me show you. You can see that I installed both of these MCPs. Here I installed it for Claude, and down here I installed it for Cursor. Now if I go back, for example, you can see that in Cursor it's already up and running. And if you check Claude too, you'll see that Claude connected to the MCP server name I assigned. It's also present, and both are connected to this server running locally on the system. Let's look at what the Open Memory MCP has to offer. On the website, they've listed a lot of features, and we can see them right here. For example, you can personalize your interactions with your preferences saved in memory. Then there are supported clients besides the ones shown. Others can be added too. You also get full memory control, including the ability to define retention and even pause memories if you want. As I told you, if you want to use it with the cloud platform, you should go ahead and sign up for the waitlist. They've listed a bunch of use cases, too. Beyond that, it's mostly standard information. If you're enjoying the video, I'd really appreciate it if you could subscribe to the channel. We're aiming to reach 25,000 subscribers by the end of this month, and your support genuinely helps. We share videos like this three times a week, so there is always something new and useful for you to explore. This is an example of how you can actually use the MCP server. What I did was open Claude Desktop and asked it to brainstorm an idea for a time tracking app. First, it gave me its own plan. Then, I added my follow-up points, things I thought should be implemented. After it integrated my changes into the original plan, I asked it to add the plan to memory as time track plan. I didn't know exactly how that worked at first, but it turns out you can't add full plans directly to memory. What actually happens is, let me just open the MCP for you. It takes the whole text as input. And remember, we input the OpenAI key earlier. That's used to break the input down into smaller tasks automatically. For example, I opened this plan here. And although other tasks had also been added, I think about 10 tasks were extracted from this single plan. You can see it's broken into different plans and categories. Now, you might be thinking all these plans are scattered. But don't worry, I'll explain later how they've actually grouped the prompts. I didn't notice it at first either, but eventually I saw that they were being grouped together. I actually figured out the method they used, and I'll explain that part soon. Moving on to Cursor. I gave it a prompt saying I want to build a time track app and asked if it could pull the details from memory. It then used the MCP tools to list and search the memory. This part is really useful. It searches relevant information.
So when it queried about the time track app, it retrieved memories related to that. From there, it pulled details about Next.js, React, TypeScript, and the rest of the stack we'd be using. It started building. After it finished, I asked it to save its progress to memory, and it did. It added those progress notes, broke everything into chunks, and stored that too. So now all those updates were saved in memory. Let me actually show you the app. This is the app that was created. There were a few small changes, and I'll also show how memory helped with that. After a while, Cursor gave me an error when starting a new chat because of context size. Once I did start a new chat, I asked it to retrieve memories related to the app's progress. It called the MCP tool again and retrieved all the relevant data, like where it was running and what it had done so far. Then I had to give it a screenshot because the contrast in some React elements was bad and the text wasn't visible. I asked it to fix the UI a bit. While it was doing that, it kept calling itself again and again trying to locate the source directory, but it didn't know there was a front-end folder. So I thought I'd try giving the full directory structure in memory. After it fixed the issue I asked about, it listed the whole directory structure in text form and returned that to me. But that part didn't work. It ended up returning an empty result. So I think only plain text gets saved properly. That kind of structural info wasn't understood or saved by the LLM. I also got some other UI fixes done, and eventually the app was finished. It's working completely now. Let me just add an entry. Let's say we're working on something. And here's the date. March 3rd, 2015. You can see it was added. There are still a few UI issues, but nothing major. They can be fixed easily. Now what I want to show you is how it actually retrieves the memories. Before that, you can also see the source app for each memory, like some were created by Cursor, others by Claude. If we open up a memory, you can see the access log. You can change the status or even edit the memory itself. For example, I could paste the directory structure here too. I'll try that out and see if it works. But the main thing I want to show is this. This memory is linked to all the other memories created in the same session. So if the MCP client requests one memory labeled time, it also fetches related memories. That's how they're grouped. I figured this out while checking the search calls. When it searched for time tracking app, it also pulled in context about other related functions. So that's where all these came from. Right now it's actually really good. Memory is consistent across sessions. My local MCP clients can access it, and even the MCP agents built with mcp-use can access it. That's also pretty good. You can see right here that I wanted to build a to-do list app. At first, I used the same tech stack just to test how it would perform compared to another project. But then I thought, why not change it up and properly test if it can differentiate between different projects. So I told it that we'd be using the MERN stack, and then I asked Claude Desktop to push that to the MCP server. After that, I went into Cursor and asked it which tech stack we'd be using for the project, while also telling it to only use the MCP and not check the directory. It made the MCP call, but it got totally confused. It said we'd be using both the MERN stack and Next.js.
Basically, it pulled in the stack from the previous project and this new one. Both got uploaded, and it retrieved them together. Now, this is a problem that could really hurt when you're building multiple projects with this memory layer. For example, even if you're not using the same tech stack, let's say you build a to-do app, then later want to build another one or any project with the same name, there's no clear way to separate those memories. At some point, one memory will cross into another, and that can break your whole project. That's one thing I feel should be added to this system. But overall, it's a really strong start, and I really love the direction it's going in. It works great if you're doing single projects or projects with very different names. It also depends on how Cursor executes queries and sends them. Usually, you just write something like tech stack for this specific app, but in my case, the prompt was vague because I just wanted to test it. Other than that, it's a pretty solid tool and super useful. The concept is really impressive, and with just a bit more improvement, it could go a long way. That brings us to the end of this video. If you'd like to support the channel and help us keep making tutorials like this, you can do so by using the Super Thanks button below. As always, thank you for watching, and I'll see you in the next one.