Hey everyone, I am currently working inside Cursor and I want to talk about a problem I have been running into. I use the Gemini 2.5 model and, like most language models, it has a cutoff date.
This means it does not have knowledge of newer tools or libraries unless you manually explain them or paste in the code yourself. Cursor tries to work around this limitation by allowing you to link documentation. For example, when I am working with the mcp-use library, I can simply add the GitHub README and Cursor is able to read the content. The issue is that it loads the entire documentation and attempts to process everything at once, which creates problems with context and can lead to confusion.

To address this, I tried using Context7 MCP. It is a solid platform that hosts updated documentation and uses retrieval-augmented generation to serve only the relevant information. This helped in some cases, but the results were inconsistent. Even when I explicitly told Cursor to use Context7 MCP, it often ignored that instruction and searched the web instead. At times, it pulled in unrelated content that made things more difficult, especially when working with overlapping tools such as mcp-use. These issues became more noticeable when I was dealing with tools or frameworks hosted on GitHub, because many repositories and resources are interconnected, and Cursor ends up retrieving context that does not align with what I am actually working on.

That is where GitMCP becomes useful. It transforms any GitHub repository into its own dedicated MCP server with focused documentation. The setup process takes just a few seconds and provides Cursor with exactly the context it needs. In this video, I'm going to show how, using GitMCP, Cursor implemented the new A2A protocol correctly without introducing any errors. It offers a straightforward way to improve Cursor's accuracy and makes the experience of working in coding editors much smoother.

This is the GitHub repository for the GitMCP tool, and it contains a lot of useful information. There is a great example that demonstrates how they built a Three.js project both with and without GitMCP, so let's take a look at that. You can clearly see the level of detail that resulted from using GitMCP compared to not using it. The key difference lies in the context the AI model receives and how specific that context is. When the context is highly specific, the model is able to generate much better graphics, as demonstrated here.

That is not even the most impressive part. Other tools can feel overwhelming to set up. As I mentioned with Context7 MCP, there are times when it completely ignores the MCP server and begins searching the web instead. Here, the setup is so minimal and lightweight that there is no friction involved: you simply configure it and begin coding. It is also completely free. You can self-host the MCP servers if you prefer, although I do not believe that is necessary. It connects with any integrated development environment, including Claude Desktop, Windsurf, VS Code, Cline, or Cursor. They have also demonstrated how to use it with a specific repository or even a GitHub Pages site. This means that if the documentation is hosted on a GitHub Pages site, as LangGraph's is, you can feed it to the agent. However, if the documentation is not hosted there, you will not be able to use it with this MCP at this time.

Now I will show you how quick the setup process is and how fast you can add the MCP server and begin working with it. This is the repository for the Google A2A protocol, a relatively new standard designed for communication between agents built on any framework. We also have a video that covers this topic in detail, and you can find the link in the description below. If you want to provide this GitHub repository to Cursor so that it can begin building with it, especially since this is a new protocol, you will need to supply the right context. The process is very straightforward: you simply replace github.com with gitmcp.io in the URL and press enter.
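That one-step URL swap is the whole trick, and it can be expressed as a tiny helper. This is just an illustration of the rename, not part of GitMCP itself; the repository in the example is the Google A2A repo used later in the video:

```python
def to_gitmcp_url(repo_url: str) -> str:
    """Turn a GitHub repository URL into its GitMCP equivalent
    by swapping the host: github.com/owner/repo -> gitmcp.io/owner/repo."""
    return repo_url.replace("github.com", "gitmcp.io", 1)

print(to_gitmcp_url("https://github.com/google/A2A"))
# https://gitmcp.io/google/A2A
```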
That action gives you an active MCP server based on the repository. There is no need to create any rules manually, because it is already configured for all major coding environments; you just need to copy the MCP rule and paste it into Cursor. Once that is done, Cursor will have access to the documentation, and the MCP server will deliver accurate and relevant instructions for using this protocol to build agents.

You also have the option to interact with the documentation directly. When the page opens, you will see a clean chat interface powered by language models. These models are available for free, and while they are not the most advanced, they perform well enough to answer questions and assist with using the MCP tool. You can ask any question related to the Google A2A documentation and the system will help you.

I have a pretty cool example to show you. I implemented the A2A protocol between three separate agents, and I did not read a single line of the A2A documentation beforehand. After completing the implementation, I went through the documentation to confirm that everything had been done correctly. I began by prompting it to explain the A2A protocol and how it works, and it immediately fetched the documentation. One of the things I really appreciate is that when I mention the term A2A protocol, it automatically retrieves information from the A2A MCP, because that is the name under which it is saved. When I used Context7 MCP, it would occasionally hallucinate depending on how the prompt was written unless I specifically instructed it to use the MCP, which added a layer of friction to the overall process. Once it retrieved the documentation, it provided a complete explanation of how the protocol works and what it is intended to do.

After that, I gave it the actual task. I asked it to create three agents: one main agent that I would interact with directly, one that discusses only animals, and another that discusses only plants. Although this is a fairly simple setup, it demonstrates an important concept: you can connect smaller language models to RAG databases that are focused on specific domains. There is no need to rely on one large model with extensive training data; instead, you can break the problem down into focused areas of knowledge or tools. I then instructed it that the main agent should communicate with the second and third agents using the A2A protocol, and that was the key requirement. It began building the agents and continued to call the MCP tool throughout the process, which was helpful because it consistently referenced the correct documentation. It generated all of the necessary files and separated the logic for each agent. I also asked it to include a README, and it created that along with a requirements file. I followed the steps outlined in the README, and the agents are now live and successfully connected.

If you're enjoying the video, I'd really appreciate it if you could subscribe to the channel. We're aiming to reach 25,000 subscribers by the end of this month, and your support genuinely helps. We share videos like this three times a week, so there is always something new and useful for you to explore.
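To make the three-agent idea concrete, here is a minimal sketch of the routing decision the main agent makes. It is purely illustrative: the agent ports, the keyword-based classifier, and the function names are my assumptions, and a real A2A setup exchanges tasks over HTTP between servers that each publish an agent card.

```python
# Hypothetical sketch of a main agent choosing a domain agent.
# In a real A2A deployment each agent runs as its own HTTP server,
# and the main agent sends the selected one a task via the protocol.

AGENTS = {
    "animal": "http://localhost:10001",  # assumed port for the animal agent
    "plant": "http://localhost:10002",   # assumed port for the plant agent
}

def classify(question: str) -> str:
    """Naive stand-in for the main agent's routing decision."""
    q = question.lower()
    plant_words = ("plant", "grow", "soil", "flower", "leaf")
    return "plant" if any(w in q for w in plant_words) else "animal"

def route(question: str) -> str:
    """Return the URL of the domain agent that should receive the task."""
    return AGENTS[classify(question)]

print(route("What would a lion eat?"))        # animal agent URL
print(route("What do plants need to grow?"))  # plant agent URL
```

In the video, the model behind the main agent makes this decision itself rather than using hard-coded keywords, but the delegation pattern is the same.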
I have started the agents in the terminal, and this is the main agent. You can see that it is running and is connected to both the plant agent and the animal agent. I asked it a question, specifically what a lion would eat, and it correctly identified that the question was related to the animal agent. It sent the request to the appropriate agent and received the correct answer in response. Here is the plant agent, which is running on its own server and has not received any requests yet. And here is the animal agent, running on another server, where you can see that it received the lion-related question and responded with the appropriate answer. This demonstrates that the request was routed correctly. Now I will ask another question, this time about the most important requirements for plants to grow. After sending the question, the system routed the request and we received the answer. If we go back, you will see that the animal agent did not receive anything for this query, but the plant agent did.
This clearly confirms that the system is functioning as expected. Once I confirmed that everything was working properly, I reviewed the agent structure to ensure that the implementation was correct. That was the only manual verification I needed to do in order to be certain for the purposes of this video. What stood out to me the most was the retrieval process: while it was building the agents, it was continuously retrieving relevant information. That is the aspect I found most impressive and what I believe makes this MCP server so effective.

Before ending the video, I wanted to share something cool that I also found pretty funny. I actually used the GitMCP chat to learn more about the GitMCP tool itself. I placed its GitHub link into the prompt box and started chatting with it to see how it would respond. The main question I had was whether the tool uses vector databases and implements RAG. It turns out that it does not follow that approach. Instead, it uses the llms.txt files from each GitHub Pages site along with the README files, and it navigates the codebase based on that content. It does support textual search, but it does not apply retrieval-augmented generation in the traditional way.

What impressed me the most was how it created the A2A agents without introducing any errors and implemented the protocol correctly, even generating the agent cards. If you are not sure what I am referring to, you can check out my A2A video for a more detailed explanation. It is definitely an excellent tool, and I would recommend giving it a try. I am not suggesting that you stop using Context7 MCP, but this was something I came across and found genuinely interesting. After testing it, I really liked the results. You are welcome to try it yourself and see how well it works for your use case.
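If you do try it and prefer to wire the server into your editor by hand instead of copying the generated rule, the entry is just a URL in your MCP config. The file name and the exact repository path below are assumptions on my part, following GitMCP's gitmcp.io/owner/repo pattern; check the page it generates for your repository for the authoritative snippet:

```json
{
  "mcpServers": {
    "a2a-docs": {
      "url": "https://gitmcp.io/google/A2A"
    }
  }
}
```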
That brings us to the end of this video. If you'd like to support the channel and help us keep making tutorials like this, you can do so by using the Super Thanks button below. As always, thank you for watching, and I'll see you in the next one.