[00:00] (0.16s)
Many of you brainstorm inside Claude
[00:02] (2.00s)
Desktop because it is pretty good. It
[00:03] (3.92s)
writes clearly and gives you a solid
[00:06] (6.00s)
plan. What I usually do is take that
[00:08] (8.00s)
plan and go back and forth between
[00:09] (9.84s)
different tools. But then this problem
[00:11] (11.68s)
comes up. They do not have any shared
[00:13] (13.44s)
context. For example, if you are working
[00:15] (15.44s)
on a single project and make a change in
[00:17] (17.60s)
one place, then go to another tool and
[00:19] (19.68s)
ask something that depends on that
[00:21] (21.36s)
change, it just does not know. There is
[00:23] (23.28s)
no awareness of what happened elsewhere.
[00:25] (25.28s)
No shared memory. But what if I told you
[00:27] (27.36s)
that all these clients, especially MCP
[00:30] (30.08s)
clients, could have one shared memory
[00:32] (32.16s)
block? That is what I am going to show
[00:34] (34.00s)
you today. You've probably heard about
[00:35] (35.52s)
Mem0, which is a memory layer for AI
[00:37] (37.92s)
agents, and it turned out to be really
[00:39] (39.84s)
impressive. It was featured on many
[00:41] (41.60s)
channels, and a lot of people praised
[00:43] (43.52s)
how powerful it made their agents. Now,
[00:45] (45.68s)
they've released a pretty cool tool
[00:47] (47.20s)
called Open Memory. It basically gives
[00:49] (49.28s)
you a single memory, which you can think
[00:51] (51.20s)
of as a memory chip that works across
[00:53] (53.28s)
all your MCP clients. It connects all
[00:55] (55.68s)
your MCP clients together into one
[00:58] (58.00s)
continuous memory space. Right now you
[01:00] (60.00s)
can use it locally and it's also
[01:01] (61.68s)
designed for cloud use which means you
[01:03] (63.68s)
will not need to install anything. Your
[01:05] (65.60s)
memories will be stored on the cloud if
[01:07] (67.44s)
you choose although both options are
[01:09] (69.36s)
fully supported. Okay. So this is the
[01:11] (71.52s)
open memory GitHub folder and you can
[01:13] (73.84s)
see that open memory is actually inside
[01:15] (75.92s)
mem0, because mem0 is the main repository.
[01:18] (78.96s)
In order to get open memory we're going
[01:20] (80.56s)
to have to clone the entire mem0
[01:22] (82.56s)
repository. What you're going to do is
[01:24] (84.32s)
go back to the mem0 repository, get
[01:26] (86.72s)
the link, copy it, then open your
[01:28] (88.80s)
terminal and type git clone followed by
[01:30] (90.96s)
the GitHub repository link. Once that's
[01:33] (93.36s)
done, you'll go inside that repository
[01:35] (95.68s)
and inside the mem0 folder, you're going
[01:38] (98.08s)
to find the openmemory folder. You'll
[01:40] (100.24s)
then navigate into that and all further
[01:42] (102.48s)
commands will happen from there. If you
[01:44] (104.40s)
scroll down, you'll see that for the quick
[01:46] (106.56s)
start, you need to run these commands,
[01:48] (108.56s)
which are basically Makefile targets that set
[01:50] (110.64s)
up the dependencies. You'll need to run
[01:52] (112.56s)
the UI and the MCP server. First things
[01:55] (115.36s)
first, Docker needs to be up and running
[01:57] (117.52s)
on your system because it downloads and
[01:59] (119.60s)
sets up Docker containers along with the
[02:01] (121.92s)
dependencies. To do this, just run the
[02:04] (124.24s)
make build command, which builds those
[02:06] (126.40s)
containers. After that, run make up, which
[02:09] (129.04s)
starts the containers. Keep in mind that
[02:11] (131.04s)
you only need to run make build once to
[02:13] (133.12s)
build the containers. Later, when you
[02:15] (135.20s)
want to use it again, just run make up.
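Putting the steps above together, the local setup looks something like this. The repository URL and folder name match the mem0 GitHub page described here; the make targets are the quickstart commands the video refers to.

```shell
# Clone the main mem0 repository (OpenMemory lives inside it)
git clone https://github.com/mem0ai/mem0.git

# All further commands run from the openmemory folder
cd mem0/openmemory

# One-time step: build the Docker containers and dependencies
# (Docker must already be running on your system)
make build

# Every later session: just start the containers again
make up
```

Remember that `make build` is only needed once; afterwards `make up` is enough to bring the stack back up.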
[02:17] (137.60s)
Also, whenever you want to use the MCP
[02:20] (140.16s)
server, Docker must be up and running on
[02:22] (142.48s)
your system. Until you get access to the
[02:24] (144.56s)
cloud version, you'll need to keep Docker
[02:26] (146.24s)
running to use it locally. Now, in the
[02:28] (148.24s)
other tab, you can see that the MCP
[02:30] (150.08s)
server is currently running. If I go
[02:31] (151.92s)
back, you can see that the Open Memory
[02:33] (153.68s)
MCP server is now up and running. Let me
[02:36] (156.08s)
show you. It's currently running on
[02:37] (157.52s)
localhost:3000, which is what we want.
[02:40] (160.00s)
That's the address where the UI runs.
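A quick way to confirm the dashboard is reachable from the terminal (port 3000 as shown in this walkthrough; yours may differ if you changed the compose setup):

```shell
# Should print 200 once the UI container is up
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000
```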
[02:42] (162.00s)
So, if you want to open the UI,
[02:43] (163.68s)
navigate to that address. One more thing
[02:45] (165.76s)
I forgot to mention. You need to open
[02:47] (167.60s)
this directory in Cursor. Once you open
[02:49] (169.76s)
it, it'll look something like this. And
[02:51] (171.60s)
the file structure will appear like
[02:53] (173.12s)
this. Inside the file structure, go to
[02:55] (175.36s)
the API folder. In there, you'll find a
[02:58] (178.80s)
.env.example file. You need to paste your
[03:00] (180.64s)
OpenAI API key into this file. Copy it,
[03:03] (183.36s)
rename it to .env by removing the word
[03:06] (186.32s)
"example" from the file name, and then
[03:08] (188.16s)
paste your actual API key into it. Once
[03:10] (190.80s)
that's done, you'll be able to use the
[03:12] (192.56s)
make up command. They've listed this step
[03:14] (194.64s)
as a prerequisite because it's required
[03:16] (196.72s)
for LLM interactions, which is why they
[03:19] (199.12s)
ask for the OpenAI API key. Okay, you can
[03:21] (201.92s)
see that now that the app is open again,
[03:24] (204.24s)
we need to install the MCP for different
[03:26] (206.56s)
tools. We have the MCP link, which you
[03:28] (208.88s)
have to manually configure in the
[03:30] (210.56s)
settings. You can add the configuration
[03:33] (213.36s)
manually, or, what I really like,
[03:35] (215.60s)
use the set of pre-built commands they
[03:37] (217.68s)
provide. For example, if I write a
[03:39] (219.52s)
command, you'll see the one they've
[03:41] (221.04s)
given. When you run it, it automatically
[03:43] (223.20s)
adds the MCP to the Claude client for
[03:45] (225.60s)
you. The same applies to Cursor. I'll
[03:47] (227.76s)
just set it for Cline or whatever you
[03:49] (229.60s)
want to use and it handles it for you.
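For the manual route, the entry in your client's MCP settings is a small JSON fragment. The server name, port, and URL path below are assumptions based on the per-client SSE link the local dashboard generates; copy the exact URL from your own dashboard rather than this sketch.

```json
{
  "mcpServers": {
    "openmemory": {
      "url": "http://localhost:8765/mcp/cursor/sse/<your-user>"
    }
  }
}
```

The pre-built commands do the same thing for you automatically, which is why they are the easier option.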
[03:51] (231.60s)
Let me show you. You can see that I
[03:53] (233.20s)
installed both of these MCPs. Here I
[03:55] (235.52s)
installed it for Claude and down here I
[03:57] (237.60s)
installed it for Cursor. Now if I go
[03:59] (239.68s)
back for example, you can see that in
[04:01] (241.52s)
Cursor, it's already up and running. And
[04:03] (243.52s)
if you check Claude too, you'll see that
[04:05] (245.36s)
Claude connected to the MCP server name
[04:07] (247.52s)
I assigned. It's also present, and both
[04:09] (249.68s)
are connected to this server running
[04:11] (251.28s)
locally on the system. Let's look at
[04:13] (253.12s)
what the open memory MCP has to offer.
[04:15] (255.60s)
On the website, they've listed a lot of
[04:17] (257.60s)
features and we can see them right here.
[04:19] (259.60s)
For example, you can personalize your
[04:21] (261.60s)
interactions with your preferences saved
[04:23] (263.52s)
in memory. Then there are supported
[04:25] (265.28s)
clients besides the ones shown. Others
[04:27] (267.60s)
can be added too. You also get full
[04:29] (269.44s)
memory control including the ability to
[04:31] (271.60s)
define retention and even pause memories
[04:33] (273.92s)
if you want. As I told you, if you want
[04:35] (275.68s)
to use it with the cloud platform, you
[04:37] (277.68s)
should go ahead and sign up for the
[04:38] (278.96s)
waitlist. They've listed a bunch of
[04:40] (280.40s)
use cases, too. Beyond that, it's mostly
[04:42] (282.88s)
standard information. If you're enjoying
[04:44] (284.96s)
the video, I'd really appreciate it if
[04:46] (286.96s)
you could subscribe to the channel.
[04:48] (288.40s)
We're aiming to reach 25,000 subscribers
[04:51] (291.12s)
by the end of this month, and your
[04:52] (292.80s)
support genuinely helps. We share videos
[04:55] (295.04s)
like this three times a week, so there
[04:56] (296.96s)
is always something new and useful for
[04:58] (298.80s)
you to explore. This is an example of
[05:01] (301.04s)
how you can actually use the MCP server.
[05:03] (303.36s)
What I did was open Claude Desktop and
[05:05] (305.68s)
asked it to brainstorm an idea for a
[05:07] (307.84s)
time tracking app. First, it gave me its
[05:09] (309.92s)
own plan. Then, I added my follow-up
[05:11] (311.84s)
points, things I thought should be
[05:13] (313.52s)
implemented. After it integrated my
[05:15] (315.52s)
changes into the original plan, I asked
[05:17] (317.76s)
it to add the plan to memory as time
[05:20] (320.00s)
track plan. I didn't know exactly how
[05:21] (321.76s)
that worked at first, but it turns out
[05:23] (323.36s)
you can't add full plans directly to
[05:25] (325.84s)
memory. What actually happens is, let me
[05:28] (328.00s)
just open the MCP for you. It takes the
[05:30] (330.40s)
whole text as input. And remember, we
[05:32] (332.32s)
input the OpenAI key earlier. That's
[05:34] (334.56s)
used to break the input down into
[05:36] (336.32s)
smaller tasks automatically. For
[05:38] (338.24s)
example, I opened this plan here. And
[05:40] (340.40s)
although other tasks had also been
[05:42] (342.24s)
added, I think about 10 tasks were
[05:44] (344.32s)
extracted from this single plan. You can
[05:46] (346.40s)
see it's broken into different plans and
[05:48] (348.40s)
categories. Now, you might be thinking
[05:50] (350.24s)
all these plans are scattered. But don't
[05:52] (352.08s)
worry, I'll explain later how they've
[05:54] (354.08s)
actually grouped the prompts. I didn't
[05:55] (355.76s)
notice it at first either, but
[05:57] (357.36s)
eventually I saw that they were being
[05:59] (359.12s)
grouped together. I actually figured out
[06:01] (361.12s)
the method they used and I'll explain
[06:02] (362.88s)
that part soon. Moving on to Cursor, I
[06:05] (365.12s)
gave it a prompt saying I want to build
[06:07] (367.20s)
a time track app and asked if it could
[06:09] (369.36s)
pull the details from memory. It then
[06:11] (371.44s)
used the MCP tools to list and search the
[06:14] (374.00s)
memory. This part is really useful. It
[06:16] (376.08s)
searches relevant information. So when
[06:18] (378.08s)
it queried about the time track app, it
[06:20] (380.24s)
retrieved memories related to that. From
[06:22] (382.32s)
there, it pulled details about Next.js,
[06:25] (385.20s)
React, TypeScript, and the rest of the
[06:27] (387.28s)
stack we'd be using. It started
[06:29] (389.04s)
building. After it finished, I asked it
[06:31] (391.12s)
to save its progress to memory and it
[06:33] (393.20s)
did. It added those progress notes,
[06:35] (395.36s)
broke everything into chunks and stored
[06:37] (397.36s)
that too. So now all those updates were
[06:39] (399.52s)
saved in memory. Let me actually show
[06:41] (401.28s)
you the app. This is the app that was
[06:43] (403.20s)
created. There were a few small changes
[06:45] (405.20s)
and I'll also show how memory helped
[06:46] (406.96s)
with that. After a while, Cursor gave me
[06:49] (409.20s)
an error telling me to start a new chat
[06:50] (410.96s)
because of context size. Once I did
[06:53] (413.04s)
start a new chat, I asked it to retrieve
[06:55] (415.04s)
memories related to the app's progress.
[06:57] (417.12s)
It called the MCP tool again and
[06:59] (419.20s)
retrieved all the relevant data like
[07:01] (421.04s)
where it was running and what it had
[07:02] (422.72s)
done so far. Then I had to give it a
[07:04] (424.64s)
screenshot because the contrast in some
[07:06] (426.56s)
React elements was bad and the text
[07:08] (428.56s)
wasn't visible. I asked it to fix the UI
[07:10] (430.72s)
a bit. While it was doing that, it kept
[07:12] (432.64s)
calling itself again and again trying to
[07:14] (434.64s)
locate the source directory, but it
[07:16] (436.40s)
didn't know there was a front-end
[07:17] (437.76s)
folder. So I thought I'd try giving the
[07:19] (439.84s)
full directory structure in memory.
[07:21] (441.92s)
After it fixed the issue I asked about,
[07:24] (444.08s)
it listed the whole directory structure
[07:26] (446.00s)
in text form and returned that to me.
[07:28] (448.24s)
But that part didn't work. It ended up
[07:29] (449.92s)
returning an empty result. So I think
[07:31] (451.76s)
only plain text gets saved properly.
[07:33] (453.84s)
That kind of structural info wasn't
[07:35] (455.68s)
understood or saved by the LLM. I also
[07:38] (458.16s)
got some other UI fixes done and
[07:40] (460.32s)
eventually the app was finished. It's
[07:42] (462.08s)
working completely now. Let me just add
[07:43] (463.84s)
an entry. Let's say we're working on
[07:45] (465.76s)
something. And here's the date. March
[07:47] (467.52s)
3rd, 2015. You can see it was added.
[07:50] (470.32s)
There are still a few UI issues but
[07:52] (472.24s)
nothing major. They can be fixed easily.
[07:54] (474.64s)
Now what I want to show you is how it
[07:56] (476.80s)
actually retrieves the memories. Before
[07:58] (478.80s)
that you can also see the source app for
[08:01] (481.12s)
each memory, like some were created by
[08:03] (483.12s)
Cursor, others by Claude. If we open up a
[08:05] (485.60s)
memory, you can see the access log. You
[08:07] (487.84s)
can change the status or even edit the
[08:10] (490.08s)
memory itself. For example, I could
[08:12] (492.16s)
paste the directory structure here too.
[08:14] (494.16s)
I'll try that out and see if it works.
[08:16] (496.08s)
But the main thing I want to show is
[08:17] (497.60s)
this. This memory is linked to all the
[08:19] (499.92s)
other memories created in the same
[08:21] (501.76s)
session. So if the MCP client requests
[08:24] (504.08s)
one memory labeled time, it also fetches
[08:26] (506.72s)
related memories. That's how they're
[08:28] (508.24s)
grouped. I figured this out while
[08:29] (509.76s)
checking the search calls. When it
[08:31] (511.44s)
searched for time tracking app, it also
[08:33] (513.44s)
pulled in context about other related
[08:35] (515.52s)
functions. So that's where all these
[08:37] (517.20s)
came from. Right now it's actually
[08:38] (518.80s)
really good. Memory is consistent across
[08:41] (521.04s)
sessions. My local MCP clients can
[08:43] (523.12s)
access it, and even MCP agents built
[08:45] (525.44s)
with mcp-use can access it. That's also
[08:47] (527.60s)
pretty good. You can see right here that
[08:49] (529.36s)
I wanted to build a to-do list app. At
[08:51] (531.52s)
first, I used the same tech stack just
[08:53] (533.36s)
to test how it would perform compared to
[08:55] (535.52s)
another project. But then I thought, why
[08:57] (537.52s)
not change it up and properly test if it
[08:59] (539.92s)
can differentiate between different
[09:01] (541.60s)
projects. So I told it that we'd be
[09:03] (543.52s)
using the MERN stack and then I asked
[09:05] (545.52s)
Claude Desktop to push that to the MCP
[09:07] (547.92s)
server. After that, I went into Cursor
[09:10] (550.48s)
and asked it which tech stack we'd be
[09:12] (552.56s)
using for the project while also telling
[09:14] (554.56s)
it to only use the MCP and not check the
[09:17] (557.28s)
directory. It made the MCP call, but it
[09:19] (559.60s)
got totally confused. It said we'd be
[09:21] (561.60s)
using both the MERN stack and Next.js.
[09:24] (564.08s)
Basically, it pulled in the stack from
[09:26] (566.08s)
the previous project and this new one.
[09:28] (568.00s)
Both got uploaded and it retrieved them
[09:30] (570.08s)
together. Now, this is a problem that
[09:32] (572.00s)
could really hurt when you're building
[09:33] (573.60s)
multiple projects with this memory
[09:35] (575.52s)
layer. For example, even if you're not
[09:37] (577.36s)
using the same tech stack, let's say you
[09:39] (579.36s)
build a to-do app, then later want to
[09:41] (581.60s)
build another one or any project with
[09:43] (583.68s)
the same name, there's no clear way to
[09:45] (585.76s)
separate those memories. At some point,
[09:48] (588.16s)
one memory will cross into another and
[09:50] (590.24s)
that can break your whole project.
[09:51] (591.92s)
That's one thing I feel should be added
[09:53] (593.60s)
to this system. But overall, it's a
[09:55] (595.52s)
really strong start, and I really love
[09:57] (597.36s)
the direction it's going in. It works
[09:59] (599.28s)
great if you're doing single projects or
[10:01] (601.36s)
projects with very different names. It
[10:03] (603.36s)
also depends on how Cursor executes
[10:05] (605.52s)
queries and sends them. Usually, you
[10:07] (607.44s)
just write something like tech stack for
[10:09] (609.36s)
this specific app, but in my case, the
[10:11] (611.36s)
prompt was vague because I just wanted
[10:13] (613.28s)
to test it. Other than that, it's a
[10:15] (615.36s)
pretty solid tool and super useful. The
[10:17] (617.60s)
concept is really impressive and with
[10:19] (619.44s)
just a bit more improvement, it could go
[10:21] (621.28s)
a long way. That brings us to the end of
[10:23] (623.28s)
this video. If you'd like to support the
[10:25] (625.12s)
channel and help us keep making
[10:26] (626.64s)
tutorials like this, you can do so by
[10:28] (628.64s)
using the super thanks button below. As
[10:30] (630.64s)
always, thank you for watching and I'll
[10:32] (632.40s)
see you in the next one.