[00:00] (0.16s)
n8n is a crazy powerful automation
[00:02] (2.48s)
platform. It's got everything. MCPs, AI
[00:05] (5.12s)
agents, and insane integrations. YouTube
[00:07] (7.44s)
is exploding with tutorials about it,
[00:09] (9.44s)
and it totally deserves the hype. You
[00:11] (11.28s)
can do absolutely anything with it. So
[00:13] (13.20s)
many tools are already baked right in.
[00:15] (15.04s)
It's basically like Zapier on pure
[00:17] (17.04s)
steroids. But here's the catch. There's
[00:18] (18.88s)
way too much to learn. Hundreds of
[00:20] (20.64s)
different pieces called nodes. And sure,
[00:22] (22.56s)
the drag and drop thing makes it easier,
[00:24] (24.32s)
but honestly, building stuff in code is
[00:26] (26.48s)
still way faster because you can just
[00:28] (28.40s)
ask AI to write it for you. You're
[00:30] (30.24s)
probably thinking, there's got to be a
[00:31] (31.76s)
better way. Well, forget all that.
[00:33] (33.28s)
There's now a way to just tell Claude or
[00:35] (35.36s)
any AI agent exactly what you want, and
[00:37] (37.44s)
it will build the entire workflow for
[00:39] (39.28s)
you. You literally won't have to touch a
[00:41] (41.12s)
single thing. It just does everything.
[00:42] (42.64s)
And I'm about to show you how. The tool
[00:44] (44.96s)
that makes this possible is just an MCP.
[00:47] (47.44s)
And here's why it changes everything and
[00:49] (49.28s)
why you might not really need to learn
[00:50] (50.96s)
anything anymore. MCPs like this are
[00:53] (53.28s)
going to take over entire applications.
[00:55] (55.36s)
To show you the difference, I tried the
[00:57] (57.12s)
Blender MCP. It was decent, but
[00:59] (59.44s)
everything it built felt incomplete,
[01:01] (61.44s)
kind of like AI slop. But why? It's
[01:03] (63.76s)
because it didn't really know how things
[01:05] (65.52s)
worked, just vague descriptions about
[01:07] (67.44s)
the tools that it had. But the n8n
[01:09] (69.52s)
MCP is completely different. This thing
[01:11] (71.52s)
has access to the full documentation,
[01:13] (73.60s)
real documentation. It understands 90%
[01:16] (76.16s)
of the official docs and has dedicated
[01:18] (78.24s)
tools that grab that documentation
[01:20] (80.24s)
before doing anything. So it never
[01:21] (81.92s)
guesses, it actually knows. Here's how
[01:24] (84.00s)
it's structured. First, the core tools.
[01:26] (86.24s)
These gather all the information first.
[01:28] (88.40s)
They research, pull the right docs, and
[01:30] (90.32s)
prep everything. Then the advanced
[01:32] (92.24s)
tools. These actually build the
[01:34] (94.00s)
workflow. They turn all that research
[01:36] (96.08s)
into real structure. Finally, the
[01:38] (98.32s)
management tools. These take your
[01:40] (100.16s)
completed workflow and deploy it
[01:42] (102.00s)
straight into your workspace. You don't
[01:43] (103.92s)
have to touch anything at all, plus some
[01:45] (105.92s)
back-end tools that keep everything
[01:47] (107.52s)
running. But those first three are the
[01:49] (109.52s)
real game changers. Once you see how
[01:51] (111.44s)
they work together, you'll understand
[01:53] (113.12s)
why this is so powerful.
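To make that structure concrete, here's a rough sketch of how those three groups might map onto an agent's tool calls. The names below are illustrative placeholders, not the MCP's actual tool names, so check the project's own documentation for the real ones:

```json
{
  "core_tools":       ["search_nodes", "get_node_documentation"],
  "advanced_tools":   ["validate_node_config", "validate_workflow"],
  "management_tools": ["create_workflow", "update_workflow", "deploy_workflow"]
}
```

The key point is the ordering: research first, build second, deploy last.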
[01:55] (115.60s)
Now, since this is an MCP, it works with both Claude and
[01:58] (118.00s)
Cursor. So, it's really up to you,
[01:59] (119.84s)
whichever one you prefer or already have
[02:02] (122.08s)
set up. That said, they do recommend
[02:04] (124.08s)
using it with Claude. And I think that's
[02:05] (125.76s)
mainly because of its artifacts feature,
[02:07] (127.76s)
which gives it way more flexibility
[02:09] (129.52s)
during execution. But here's something
[02:11] (131.36s)
even more interesting. They've built in
[02:13] (133.04s)
a system that makes sure the MCP follows
[02:15] (135.36s)
the correct order when calling tools so
[02:17] (137.60s)
it doesn't mix anything up or call the
[02:19] (139.36s)
wrong thing at the wrong time. And they
[02:20] (140.96s)
do this through the Claude project
[02:22] (142.48s)
setup. When you create a new Claude
[02:24] (144.56s)
project, you just drop these
[02:26] (146.08s)
configurations in there and it gives the
[02:28] (148.00s)
MCP a full rule book to follow. Now, if
[02:30] (150.64s)
you're not on the pro plan for Claude
[02:32] (152.48s)
and you're using Cursor instead, no
[02:34] (154.40s)
problem. You can just add those same
[02:36] (156.24s)
rules into your Cursor rules file and
[02:38] (158.40s)
it's going to work just fine. What this
[02:40] (160.24s)
does is set up a clear workflow
[02:42] (162.24s)
structure that the agent sticks with. It
[02:44] (164.48s)
prevents the kind of hallucination or
[02:46] (166.64s)
broken output you sometimes get from
[02:48] (168.72s)
LLMs when they don't have proper
[02:50] (170.64s)
guidance. So with this setup in place,
[02:52] (172.88s)
you're much more likely to get stable
[02:54] (174.72s)
working workflows, not something
[02:56] (176.40s)
half-finished or made up. Now, the way n8n
[02:59] (179.68s)
actually works is that it gives you this
[03:01] (181.60s)
visual builder. You can add different
[03:03] (183.52s)
nodes to build your automations, kind of
[03:05] (185.76s)
like connecting blocks in a flowchart.
[03:07] (187.60s)
And for those who don't know, each of
[03:09] (189.36s)
these nodes represents a different task
[03:11] (191.60s)
or function in your workflow. But behind
[03:13] (193.76s)
the scenes, it's all just a JSON file.
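For reference, here's a minimal sketch of roughly what that exported JSON looks like. The node types and parameter values here are just illustrative, so treat a real export from your own instance as the authoritative reference:

```json
{
  "name": "Example workflow",
  "nodes": [
    {
      "name": "Manual Trigger",
      "type": "n8n-nodes-base.manualTrigger",
      "typeVersion": 1,
      "position": [0, 0],
      "parameters": {}
    },
    {
      "name": "HTTP Request",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 4,
      "position": [220, 0],
      "parameters": { "method": "GET", "url": "https://example.com/api" }
    }
  ],
  "connections": {
    "Manual Trigger": {
      "main": [[{ "node": "HTTP Request", "type": "main", "index": 0 }]]
    }
  }
}
```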
[03:15] (195.92s)
That file contains every detail. What
[03:17] (197.84s)
nodes are used, how they're connected,
[03:19] (199.68s)
the parameters, everything. And the cool
[03:21] (201.52s)
thing is, if you already have that JSON
[03:23] (203.52s)
pre-built, you can just import it right
[03:25] (205.44s)
into n8n and have your workflow show up
[03:27] (207.92s)
instantly in the builder. Now, at this
[03:29] (209.60s)
point, you might be thinking, wait, why
[03:31] (211.52s)
not just ask ChatGPT or Claude to
[03:34] (214.08s)
generate that JSON file for me? Well,
[03:36] (216.08s)
here's the issue. If you try that, what
[03:38] (218.08s)
you'll usually get is a broken mess. The
[03:40] (220.32s)
nodes often don't connect properly or
[03:42] (222.48s)
the structure doesn't make sense and it
[03:44] (224.40s)
definitely won't run. That's because
[03:45] (225.92s)
those models don't have the context
[03:47] (227.68s)
needed to build actual working
[03:49] (229.36s)
workflows. And this is exactly where the
[03:51] (231.52s)
MCP comes in and completely outperforms.
[03:54] (234.32s)
As I mentioned earlier, this MCP follows
[03:56] (236.80s)
a proper workflow of its own. First
[03:59] (239.20s)
retrieving context, then building
[04:00] (240.96s)
intelligently based on that context. It
[04:03] (243.12s)
doesn't just guess or make up structure.
[04:05] (245.12s)
It knows what's valid, what's
[04:06] (246.72s)
compatible, and what actually works
[04:08] (248.56s)
inside n8n. Here's how it works. First,
[04:11] (251.44s)
it searches for the relevant nodes,
[04:13] (253.60s)
lists available options, and figures out
[04:15] (255.92s)
which ones to use. It applies internal
[04:18] (258.32s)
rules and logic to guide that process.
[04:20] (260.72s)
And based on all that, it starts
[04:22] (262.56s)
assembling the JSON file fully formed
[04:25] (265.04s)
and ready to run. Now, this is also
[04:26] (266.88s)
where Claude's artifacts feature really
[04:28] (268.80s)
shines. In Claude, the JSON is
[04:30] (270.80s)
built directly inside the chat context.
[04:33] (273.04s)
And you can see it takes shape piece by
[04:34] (274.88s)
piece. It's just more powerful that way.
[04:36] (276.88s)
But even outside of Claude, the MCP
[04:39] (279.12s)
still does all of this behind the
[04:40] (280.64s)
scenes. Once it has what it needs, it
[04:42] (282.88s)
begins constructing the workflow,
[04:44] (284.64s)
updates it incrementally, and pushes it
[04:46] (286.96s)
directly into your n8n builder where you
[04:49] (289.52s)
can see it live and editable just like
[04:51] (291.52s)
that. Let me show you how this works
[04:53] (293.44s)
with a real example. I wanted to create
[04:55] (295.44s)
a deep search agent, something that
[04:57] (297.36s)
could pull research from multiple
[04:59] (299.04s)
sources and take its time processing
[05:01] (301.12s)
everything. So, I told it the flow I
[05:03] (303.12s)
wanted. I asked a question and if
[05:05] (305.12s)
needed, the agent follows up with
[05:06] (306.88s)
clarifying questions before giving me a
[05:09] (309.12s)
detailed final answer. And it started
[05:11] (311.36s)
activating its tools. It looked up
[05:13] (313.36s)
templates, searched for the right nodes,
[05:15] (315.60s)
and because it understands the context
[05:17] (317.44s)
of each node thanks to the built-in
[05:19] (319.28s)
documentation, it picked the exact ones
[05:21] (321.52s)
we needed. Then, it built the workflow.
[05:23] (323.52s)
And here's the cool part. It validated
[05:25] (325.44s)
the workflow using a validator tool.
[05:27] (327.52s)
That validator checked the logic,
[05:29] (329.28s)
referenced the docs, and caught any
[05:31] (331.04s)
issues before they even happened.
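To give you an idea of what catching issues early looks like in practice, a validation step typically reports something along these lines. This exact shape is hypothetical, not the tool's real output format:

```json
{
  "valid": false,
  "errors": [
    { "node": "HTTP Request", "message": "Required parameter 'url' is missing" }
  ],
  "warnings": [
    { "node": "AI Agent", "message": "No credentials selected for the chat model" }
  ]
}
```

The agent can then fix whatever gets flagged and re-validate before anything is pushed to your workspace.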
[05:33] (333.04s)
Now, it wanted me to use the SerpAPI key for
[05:35] (335.52s)
Google search. I had some trouble on my
[05:37] (337.60s)
account setting it up. So, I told it to
[05:39] (339.60s)
swap it out. And it did. It replaced SerpAPI
[05:42] (342.00s)
with DuckDuckGo search, Wikipedia
[05:44] (344.16s)
search, and Reddit search. So, it
[05:46] (346.08s)
created the JSON structure and uploaded
[05:48] (348.32s)
it directly to my workspace. I cleaned
[05:50] (350.48s)
up the layout a bit since AI-created
[05:52] (352.64s)
workflows usually end up messy, but that
[05:54] (354.96s)
was quick. Then I provided my OpenAI API
[05:58] (358.08s)
key and tested it with a question. Is
[06:00] (360.40s)
n8n better than other automation tools?
[06:03] (363.12s)
And if yes, why? I hit enter and it
[06:05] (365.36s)
executed successfully. It pulled in
[06:07] (367.44s)
insights from different sources,
[06:09] (369.20s)
including a discussion on Hacker News.
[06:11] (371.28s)
Now, the sources could have been better
[06:12] (372.88s)
for this specific question. So, here's
[06:14] (374.96s)
what I did to improve that. Oh, and if
[06:17] (377.52s)
you're enjoying the content we're
[06:18] (378.96s)
making, I'd really appreciate it if you
[06:20] (380.80s)
hit that subscribe button. We're also
[06:22] (382.64s)
testing out channel memberships. We launched
[06:24] (384.72s)
the first tier as a test, and 85 people
[06:27] (387.20s)
have joined so far. The support's been
[06:29] (389.04s)
incredible, so we're thinking about
[06:30] (390.64s)
launching additional tiers. Right now,
[06:32] (392.80s)
members get priority replies to your
[06:34] (394.72s)
comments, which is perfect if you need feedback
[06:36] (396.48s)
or have questions. So, I went ahead and
[06:39] (399.04s)
asked it to implement the Brave Search
[06:40] (400.96s)
node using the Brave Search API. I
[06:43] (403.36s)
thought this would be necessary to give
[06:44] (404.96s)
it a better tool for web search. Now,
[06:47] (407.28s)
this free API gives you about 2,000
[06:49] (409.68s)
requests per month and it's limited to
[06:51] (411.76s)
one request per second. And just like
[06:53] (413.60s)
that, it went ahead and implemented it
[06:55] (415.52s)
for me. Let me show you the final
[06:57] (417.20s)
result. Okay, so this is what it came up
[06:59] (419.52s)
with. It removed the other nodes like
[07:01] (421.44s)
Wikipedia and Reddit because I told it
[07:03] (423.52s)
to, since this would have been enough for
[07:05] (425.36s)
our test. Now, let me just send a
[07:07] (427.12s)
greeting and you can see that it replies
[07:09] (429.04s)
back. Now I'm going to go ahead and ask
[07:10] (430.80s)
it to tell me what the reviews have been
[07:12] (432.64s)
for the latest Jurassic World movie. And
[07:14] (434.80s)
we're going to run it. You can see that
[07:16] (436.24s)
it's running and it has given me the
[07:17] (437.92s)
output about how the reviews were for
[07:19] (439.84s)
the movie. Moving on to the
[07:21] (441.60s)
installation. It's actually pretty
[07:23] (443.20s)
simple. There's only one requirement.
[07:24] (444.96s)
You need to have Docker running on your
[07:26] (446.64s)
system. That's because the tool works as
[07:28] (448.56s)
a Docker container and it needs Docker
[07:30] (450.64s)
to stay active in the background. If you
[07:32] (452.72s)
only need the basic configuration where
[07:34] (454.80s)
you just read documentation and manually
[07:36] (456.96s)
build workflows, that's all you need
[07:38] (458.80s)
with this. You're good to go. But if you
[07:40] (460.72s)
want the full experience where the tool
[07:42] (462.96s)
manages everything for you, edits files,
[07:45] (465.60s)
validates workflows, and basically takes
[07:48] (468.08s)
care of everything, you'll need to set
[07:49] (469.76s)
up the full integration. The only extra
[07:52] (472.00s)
things you'll need for that are your n8n API
[07:54] (474.08s)
URL and API key. Since this is
[07:56] (476.48s)
essentially an MCP server setup, all you
[07:58] (478.88s)
need to do is copy the configuration
[08:00] (480.72s)
string for whichever tool you're using.
[08:02] (482.80s)
If you're using Claude, you just go to
[08:04] (484.40s)
your settings, head into the developer
[08:06] (486.32s)
options, and click edit config. That'll
[08:08] (488.72s)
open up the config file. You just paste
[08:10] (490.80s)
your details in and you're set.
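As a rough example, the entry you paste in follows the standard MCP server format. The package name and environment variable names below are assumptions for illustration, so copy the exact config string from the MCP's own README; the same JSON shape also works in Cursor's MCP settings:

```json
{
  "mcpServers": {
    "n8n-mcp": {
      "command": "npx",
      "args": ["n8n-mcp"],
      "env": {
        "N8N_API_URL": "https://your-instance.app.n8n.cloud",
        "N8N_API_KEY": "your-n8n-api-key"
      }
    }
  }
}
```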
[08:12] (492.64s)
If you're using Cursor, it's slightly
[08:14] (494.16s)
different. Just open settings, go to
[08:16] (496.16s)
tool integrations, and hit add MCP. Then
[08:19] (499.04s)
paste the config string there. That
[08:21] (501.28s)
works perfectly fine if you just want to
[08:23] (503.12s)
run some simple workflows in the cloud
[08:25] (505.12s)
without hosting anything yourself. But
[08:26] (506.96s)
if you want to run it locally on your
[08:28] (508.72s)
own system, you'll need to either use
[08:30] (510.56s)
Docker or the npx command. So if any of
[08:33] (513.28s)
you watching know the exact URL that
[08:35] (515.68s)
should be pasted, which makes the local
[08:37] (517.68s)
version of n8n work with Claude, or if
[08:40] (520.48s)
you figure it out, please drop it in the
[08:42] (522.32s)
comments. It'll help me and a ton of
[08:44] (524.32s)
other people trying to set this up. As
[08:46] (526.24s)
for the API key, it works the same for
[08:48] (528.32s)
both local and online versions. Like I
[08:50] (530.48s)
mentioned earlier, the only difference
[08:52] (532.08s)
is the address part. In the online
[08:53] (533.92s)
version, the blurred part in the link
[08:55] (535.76s)
you see, that's your unique ID, and the
[08:57] (537.76s)
rest of the link is standard. That full
[08:59] (539.60s)
address is what you'll paste into the
[09:01] (541.28s)
config. To get the API key itself, just
[09:03] (543.76s)
go into your settings, look for the API
[09:05] (545.92s)
section, and you'll see an option to
[09:07] (547.84s)
create a new key. It's the same flow for
[09:09] (549.92s)
both versions. Just generate it, copy
[09:12] (552.00s)
it, and paste it into your integration
[09:14] (554.00s)
settings. That brings us to the end of
[09:16] (556.08s)
this video. If you'd like to support the
[09:17] (557.92s)
channel and help us keep making videos
[09:19] (559.60s)
like this, you can do so by using the
[09:21] (561.60s)
super thanks button below. As always,
[09:23] (563.68s)
thank you for watching and I'll see you
[09:25] (565.28s)
in the next one.