[00:00] (0.00s)
Remember how we used to click through
[00:01] (1.52s)
menus and drag sliders? That era is
[00:03] (3.84s)
closing fast. Very soon, every
[00:05] (5.76s)
application will be driven by a single
[00:07] (7.84s)
conversation with an AI agent. And yes,
[00:10] (10.88s)
this also includes the applications that
[00:12] (12.96s)
you build. There'll be no buttons, no
[00:15] (15.12s)
dashboards, just you talking. FastAPI
[00:18] (18.16s)
MCP server is the switch that makes that
[00:20] (20.40s)
future real today. This new MCP server
[00:23] (23.04s)
pulls every API that makes your app tick
[00:25] (25.28s)
and exposes them through an MCP layer. It
[00:27] (27.44s)
turns each one into a ready-made tool an
[00:29] (29.68s)
LLM can call on command. I wired it up
[00:32] (32.16s)
and watched my FastAPI app fully
[00:34] (34.40s)
controlled by an AI agent. I'll walk you
[00:36] (36.56s)
through every step. Any project or app
[00:38] (38.56s)
that you build can plug into this
[00:40] (40.16s)
server. Each function becomes a tool not
[00:42] (42.72s)
only for Cursor or other MCP clients,
[00:45] (45.68s)
but I'll also show you how you can
[00:47] (47.36s)
connect this to an AI agent that can
[00:49] (49.60s)
control the MCP server and, through it, the
[00:52] (52.00s)
app. Here is how it works and how you
[00:53] (53.92s)
can do it right now. To integrate the
[00:55] (55.76s)
FastAPI MCP server, you first need a
[00:58] (58.48s)
FastAPI app. That means you'll need a
[01:00] (60.80s)
front-end app and a backend powered by
[01:02] (62.96s)
FastAPI. If you want to create an app
[01:05] (65.44s)
and get MCP running, the app can be
[01:07] (67.92s)
anything you want. In this example,
[01:10] (70.08s)
we'll use a to-do list app and automate
[01:12] (72.24s)
it using the FastAPI MCP server. We're
[01:14] (74.96s)
choosing a to-do list app because of how
[01:16] (76.72s)
easy they are to build so that we can
[01:18] (78.64s)
get to the important part of the video.
[01:20] (80.56s)
Start by creating a new folder and
[01:22] (82.24s)
opening it in Cursor. Then create a new
[01:24] (84.24s)
environment. When the prompt box shows
[01:26] (86.00s)
up, choose this and select the
[01:27] (87.60s)
interpreter. This sets up your Python
[01:29] (89.68s)
virtual environment. After that, build
[01:32] (92.00s)
the front end. I've created a Next.js
[01:34] (94.60s)
app. I gave Cursor the prompt that we
[01:37] (97.36s)
have a Next.js app and asked it to make
[01:39] (99.60s)
the to-do list application. I specified
[01:41] (101.76s)
that it should only create the front end
[01:44] (104.08s)
and expose the APIs to be used later
[01:46] (106.48s)
with FastAPI. That's how a full web app
[01:48] (108.96s)
comes together. You have the front end,
[01:50] (110.96s)
the backend logic and several
[01:52] (112.56s)
microservices that support the rest.
[01:54] (114.56s)
Based on the prompt, Cursor created the
[01:56] (116.72s)
Next.js app. You can see it right here.
[01:59] (119.28s)
This is what it created. After that, I
[02:01] (121.12s)
added the FastAPI backend. I told
[02:03] (123.52s)
Cursor to make a FastAPI app in the
[02:05] (125.60s)
root. It generated the main, models, and
[02:08] (128.08s)
to-dos files with all the endpoints used
[02:10] (130.64s)
to control the app. The app became fully
[02:12] (132.88s)
functional. Now we could add tasks and
[02:14] (134.72s)
delete them as well. Both front end and
[02:16] (136.80s)
back end were working together.
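For reference, the generated backend boils down to a small FastAPI app. This is only a sketch with hypothetical file and field names (the code Cursor generated in the video may differ), using an in-memory store for simplicity:

```python
# main.py -- minimal sketch of a to-do backend (names are illustrative,
# not the exact code from the video). Run with: uvicorn main:app --reload
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Todo(BaseModel):
    id: int
    title: str
    done: bool = False

todos: dict[int, Todo] = {}  # in-memory store for the demo

@app.get("/todos")
def list_todos() -> list[Todo]:
    return list(todos.values())

@app.post("/todos")
def create_todo(todo: Todo) -> Todo:
    todos[todo.id] = todo
    return todo

@app.delete("/todos/{todo_id}")
def delete_todo(todo_id: int) -> dict:
    if todo_id not in todos:
        raise HTTPException(status_code=404, detail="Todo not found")
    del todos[todo_id]
    return {"deleted": todo_id}
```

Because the Next.js front end runs on a different port, a real setup would also need FastAPI's CORSMiddleware so the browser can call these endpoints.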
[02:24] (144.08s)
Now that your front end and back end are
[02:26] (146.00s)
running, you'll want to control the app
[02:27] (147.92s)
using MCP. To set up the FastAPI MCP
[02:31] (151.20s)
server, go to the GitHub repo. I'll link
[02:33] (153.76s)
this in the description as well. First,
[02:35] (155.92s)
install the FastAPI MCP package in your
[02:38] (158.72s)
virtual environment. You can do this
[02:40] (160.32s)
using either uv or pip. I used pip here.
[02:43] (163.12s)
Just copy the install command. Open a
[02:45] (165.36s)
new terminal in Cursor and make sure the
[02:47] (167.60s)
virtual environment is active. You'll
[02:49] (169.36s)
know it's active if the environment name
[02:51] (171.28s)
shows in the terminal prompt. Now paste the
[02:53] (173.60s)
command and install the FastAPI MCP
[02:56] (176.16s)
library. Once installed, you need to
[02:58] (178.16s)
implement it in your code. In the GitHub
[03:00] (180.32s)
repo, you'll find a basic usage example.
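At the time of writing, that basic usage is only a few lines; treat the following as a sketch and check the README for the current API (the package installs with pip install fastapi-mcp):

```python
# Basic usage, roughly as shown in the fastapi-mcp README (verify against the
# repo, since the API may have changed). Install with: pip install fastapi-mcp
from fastapi import FastAPI
from fastapi_mcp import FastApiMCP

app = FastAPI()

mcp = FastApiMCP(app)  # wrap the existing FastAPI app
mcp.mount()            # serve the MCP endpoint, by default under /mcp
```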
[03:02] (182.96s)
You don't have to write the code
[03:04] (184.08s)
yourself anymore. That was the old way.
[03:06] (186.40s)
Just copy the example, go back to
[03:08] (188.40s)
Cursor, paste the code, and ask Cursor
[03:11] (191.12s)
to add it to the main file. You'll see
[03:13] (193.04s)
that it adds the FastAPI MCP server,
[03:15] (195.84s)
which means the initial setup is
[03:17] (197.44s)
complete. There are a couple more steps
[03:19] (199.52s)
you'll need that aren't covered in the
[03:21] (201.20s)
basic example, but they're essential
[03:23] (203.20s)
when building more complex apps. You
[03:25] (205.36s)
know, these MCP servers come with tool
[03:27] (207.44s)
names, right? They include specific
[03:29] (209.20s)
tools the model uses to control the
[03:31] (211.20s)
application. In the case of FastAPI
[03:33] (213.28s)
MCP, you get tools that the LLM can
[03:35] (215.92s)
access to interact with your app. These
[03:37] (217.84s)
API tools have names and there's a
[03:40] (220.08s)
method for setting them that isn't shown
[03:41] (221.68s)
in the basic example we pasted earlier.
[03:43] (223.84s)
There's full documentation in the GitHub
[03:45] (225.68s)
repo where it explains that you can name
[03:48] (228.00s)
tools using the operation_id parameter while
[03:50] (230.72s)
defining endpoints in FastAPI. Just
[03:53] (233.76s)
set the operation_id to whatever name
[03:55] (235.76s)
you want the tool to have and the server
[03:57] (237.92s)
will use that. If you forget to name
[03:59] (239.92s)
them or choose not to, the names are
[04:02] (242.12s)
autogenerated. But if you want more
[04:04] (244.00s)
control, it's better to name them
[04:05] (245.84s)
yourself. Just like before, copy the
[04:08] (248.32s)
code example and ask the AI agent to add
[04:11] (251.20s)
an operation_id to every endpoint based
[04:14] (254.16s)
on its purpose. Once that's done, each
[04:16] (256.24s)
tool will have a clear name and you can
[04:18] (258.00s)
call the one you want directly.
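As a rough illustration (the endpoint and tool names here are hypothetical), naming a tool is just a matter of passing operation_id when the route is declared:

```python
# Name the MCP tools via FastAPI's operation_id (names are illustrative;
# use whatever fits your app).
@app.get("/todos", operation_id="list_todos")
def list_todos() -> list[Todo]:
    return list(todos.values())

@app.post("/todos", operation_id="create_todo")
def create_todo(todo: Todo) -> Todo:
    todos[todo.id] = todo
    return todo
```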
[04:20] (260.16s)
Now, while setting up and exploring the MCP
[04:22] (262.32s)
server, I ran into a problem. Even after
[04:24] (264.56s)
naming all the tools, none of them were
[04:26] (266.64s)
showing up. At first, I thought the
[04:28] (268.32s)
server was broken. But when I checked
[04:30] (270.00s)
the documentation, I found the fix. You
[04:32] (272.32s)
have to set up the MCP server after all
[04:34] (274.48s)
the endpoints are declared. If you
[04:36] (276.16s)
don't, then you need to call a specific
[04:38] (278.16s)
function at the end of the file to
[04:39] (279.76s)
re-register everything. That's what makes
[04:41] (281.68s)
the tools appear. Go to your main file
[04:44] (284.08s)
and add that line at the very bottom.
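As a hedged sketch of what that looks like in main.py (the refresh method name is taken from the fastapi-mcp docs, so verify it against the repo):

```python
# main.py (end of file): if the MCP server was created before some endpoints
# were declared, refresh it so every route gets registered as a tool.
mcp = FastApiMCP(app)
mcp.mount()

# ... endpoints declared after mounting ...

mcp.setup_server()  # re-register all routes as MCP tools (per the docs)
```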
[04:46] (286.32s)
That should fix the issue and all your
[04:48] (288.16s)
tools will start showing up. Now, the
[04:50] (290.32s)
question is, how do you use the MCP
[04:52] (292.56s)
server you just set up with any MCP
[04:54] (294.72s)
client? These clients include Cursor's
[04:56] (296.96s)
agent, Windsurf, or even Claude Desktop.
[04:59] (299.60s)
If you go back to the repo, you'll see
[05:01] (301.36s)
it clearly states that the MCP server
[05:03] (303.52s)
will be available at a specific URL,
[05:05] (305.76s)
whether Cursor sets it up or you run the
[05:07] (307.76s)
back end yourself. Remember that the
[05:09] (309.60s)
front end and back end are separate. To
[05:11] (311.60s)
run the back end, use the given command.
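In my setup that command is the usual FastAPI dev server, something like uvicorn main:app --reload; equivalently, you can start it from a small Python script (the module name and port here are assumptions):

```python
# run.py -- start the FastAPI backend programmatically (equivalent to the
# `uvicorn main:app --reload` CLI command; module and port are assumptions).
import uvicorn

if __name__ == "__main__":
    uvicorn.run("main:app", host="127.0.0.1", port=8000, reload=True)
```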
[05:14] (314.08s)
It will start the server and usually
[05:15] (315.84s)
provide a local URL. You need to copy
[05:18] (318.32s)
this URL. Go into Cursor, open your
[05:20] (320.76s)
mcp.json file and add a new global MCP
[05:24] (324.08s)
server. You can use the provided format.
[05:26] (326.40s)
Paste the URL and add /mcp at the end.
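For reference, a Cursor mcp.json entry for this setup looks roughly like the following; the server name and port are placeholders:

```json
{
  "mcpServers": {
    "todo-fastapi-mcp": {
      "url": "http://127.0.0.1:8000/mcp"
    }
  }
}
```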
[05:29] (329.20s)
This makes the MCP server available for
[05:31] (331.36s)
your agent. You can follow the same
[05:32] (332.96s)
steps in Claude and Windsurf as well.
[05:35] (335.04s)
Once that's done, your app can be
[05:36] (336.88s)
controlled by the agent. I modified the
[05:38] (338.88s)
front-end app to refresh every second so
[05:41] (341.20s)
we can see changes live as they happen.
[05:43] (343.60s)
I already removed the previous tasks
[05:45] (345.52s)
from the app using the MCP server so
[05:47] (347.92s)
that we can test it out. I'll ask it to
[05:49] (349.76s)
create a task to make a new YouTube
[05:53] (353.32s)
video. You can see it called the MCP
[05:56] (356.00s)
tool and the task was created. Now,
[05:58] (358.16s)
let's say I want to build a new
[05:59] (359.44s)
front-end project. I'll break it down
[06:01] (361.20s)
and add each part as a
[06:04] (364.20s)
task. I gave it the prompt and now you
[06:07] (367.28s)
can see the tasks being added step by
[06:09] (369.52s)
step. This shows how you can give MCP
[06:12] (372.00s)
access to any application you build.
[06:14] (374.08s)
Even if you're creating an AI-driven app,
[06:16] (376.48s)
you don't need to make things more
[06:18] (378.00s)
complicated to enable full AI control.
[06:20] (380.80s)
Just expose the endpoints through an MCP
[06:23] (383.52s)
server and the app can be controlled by
[06:25] (385.28s)
any MCP client. Now you might be
[06:27] (387.84s)
thinking this only works inside Cursor
[06:30] (390.00s)
or Claude Desktop. But how do you go
[06:32] (392.00s)
beyond that and build fully agentic AI
[06:34] (394.16s)
applications using this MCP server? The
[06:37] (397.20s)
answer to that question is the mcp-use
[06:39] (399.20s)
framework. If you're enjoying the video,
[06:41] (401.12s)
please consider subscribing. We're
[06:42] (402.96s)
trying to hit 25,000 subs by the end of
[06:45] (405.44s)
this month and your support means a lot.
[06:47] (407.52s)
Using this with the Cursor agent or even
[06:49] (409.52s)
the Windsurf agent isn't always
[06:51] (411.28s)
practical, but the real value shows up
[06:53] (413.52s)
when you want to build full AI agentic
[06:55] (415.76s)
applications. For example, imagine
[06:57] (417.92s)
having your to-do list app controlled
[06:59] (419.68s)
entirely by an AI agent. Just ignore the
[07:02] (422.32s)
small front-end issue below. It's a
[07:04] (424.24s)
minor bug. If you want an AI agent to
[07:06] (426.56s)
control your app, you just talk to it
[07:08] (428.88s)
and it does everything for you. Imagine
[07:11] (431.04s)
apps like Instagram, Facebook, WhatsApp
[07:13] (433.44s)
or anything you build yourself being
[07:15] (435.20s)
controlled the same way. This is where
[07:16] (436.72s)
the framework comes in. It lets you
[07:18] (438.64s)
integrate MCP servers directly with AI
[07:21] (441.28s)
agents. You write code and build agents
[07:23] (443.44s)
that can access these servers and their
[07:25] (445.36s)
tools. Then they can interact with them
[07:27] (447.28s)
automatically. I've already explained
[07:28] (448.88s)
this in a previous video, so I won't go
[07:30] (450.80s)
into too much detail here. You can check
[07:32] (452.80s)
that video out. I'll be linking it
[07:34] (454.40s)
above. This is the code I wrote for
[07:36] (456.00s)
connecting to the FastAPI MCP server.
[07:38] (458.40s)
You'll see there's an agent file and a
[07:40] (460.28s)
config.json. The config holds the MCP
[07:42] (462.96s)
server just like before. Now, what this
[07:44] (464.72s)
framework does is let you talk to your
[07:46] (466.56s)
MCP server through an AI agent. It gives
[07:49] (469.36s)
you a full client library to build
[07:51] (471.28s)
applications that interact with the
[07:53] (473.04s)
server. For example, here's our agent.
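My agent script follows the standard mcp-use pattern; here's a rough sketch rather than the exact code from the video (the model choice, config path, and parameters are assumptions, so check the mcp-use docs):

```python
# agent.py -- sketch of an mcp-use agent driving the FastAPI MCP server.
# The LLM, config path, and parameters are assumptions; see the mcp-use docs.
import asyncio
from langchain_openai import ChatOpenAI
from mcp_use import MCPAgent, MCPClient

async def main() -> None:
    # config.json points at the MCP server URL, e.g. http://127.0.0.1:8000/mcp
    client = MCPClient.from_config_file("config.json")
    llm = ChatOpenAI(model="gpt-4o")
    agent = MCPAgent(llm=llm, client=client, max_steps=20)

    while True:
        query = input("You: ")
        if query.lower() in {"exit", "quit"}:
            break
        result = await agent.run(query)  # the agent picks and calls MCP tools
        print("Agent:", result)

if __name__ == "__main__":
    asyncio.run(main())
```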
[07:55] (475.20s)
Let's open the terminal and run
[07:57] (477.32s)
it. Now, you can see that it's live and
[08:00] (480.08s)
waiting for input. I'll ask it to add
[08:02] (482.64s)
four random tasks. It processes the
[08:04] (484.96s)
request and as you can see, those tasks
[08:07] (487.44s)
are added to the list. When we check our
[08:09] (489.36s)
to-do app, all four tasks are there.
[08:11] (491.60s)
This is the power of using AI agents.
[08:13] (493.84s)
You can literally automate any
[08:15] (495.36s)
application you build. That's it for
[08:17] (497.04s)
this video. If you want us to keep
[08:18] (498.88s)
making these videos, please consider
[08:20] (500.72s)
donating using the link below. Thanks as
[08:22] (502.96s)
always for watching and I'll see you in
[08:24] (504.88s)
the next video.