YouTube Deep Summary


I Made MCP AI Agents That Automate Every App I Build.

AI LABS • 2025-05-01 • 8:30 minutes • YouTube

🤖 AI-Generated Summary:

The Future of Apps: Conversational AI Control with FastAPI MCP Server

Remember the days of clicking through endless menus and dragging sliders to interact with software? That era is rapidly fading away. The future of applications is conversational: a single dialogue with an AI agent controls everything. This revolution doesn't just apply to big tech apps; it extends to every application you build, transforming traditional interfaces into seamless, conversation-driven experiences.

In this blog post, we'll explore how the FastAPI MCP server is making this future a reality today. We'll walk through how to integrate it with your app, enabling AI agents to fully control your frontend and backend through conversation.


What is FastAPI MCP Server?

The FastAPI MCP server acts as a bridge between your app's API and AI agents. It pulls in your app's API endpoints and exposes them over an MCP (Model Context Protocol) layer, turning each endpoint into a callable tool that large language models (LLMs) can invoke on command.

This means no more buttons or dashboards: just natural-language interaction to control your app.


Building a Conversational To-Do List App: Step-by-Step

To illustrate this, let's build a simple to-do list app controlled entirely by an AI agent using the FastAPI MCP server.

1. Setup Your Project Environment

  • Create a new project folder and open it in Cursor (the AI coding environment).
  • Create a Python virtual environment by selecting the interpreter when prompted.
  • This sets the stage for building both frontend and backend components.

2. Build the Frontend with Next.js

  • Use Cursor's AI prompts to generate a Next.js frontend for the to-do list.
  • Specify that Cursor should build only the frontend and expose the API calls; the backend logic comes later.

3. Add the FastAPI Backend

  • Prompt Cursor to create FastAPI backend files, including models and endpoints for managing to-dos.
  • Now you have a fully functional to-do list app with both frontend and backend working together.

Integrating FastAPI MCP Server

4. Install the FastAPI MCP Package

  • Visit the FastAPI MCP GitHub repo (link in description).
  • Inside your virtual environment, install the package using pip:

```bash
pip install fastapi-mcp
```

5. Add MCP Server to Your FastAPI App

  • Copy the basic example code from the GitHub repo.
  • Paste it into your main FastAPI file.
  • This enables your API endpoints to be exposed as MCP tools that LLMs can call.

6. Naming Your API Tools

  • Use the operation_id parameter in your FastAPI endpoint decorators to assign meaningful names to each tool.
  • This helps AI agents know exactly which tool to call for specific actions.
  • If you skip this, tool names are autogenerated but less descriptive.

7. Register Your Tools Properly

  • Ensure the MCP server setup happens after all endpoints are declared.
  • If endpoints are declared afterward, call the server's re-registration method (named setup_server() in the fastapi-mcp docs; verify against your version) at the end of your main file to make the tools visible.

Connecting MCP Server to AI Clients

8. Link Your MCP Server with AI Agents

  • Run your FastAPI backend; it will provide a local URL (e.g., http://localhost:8000).
  • Add this URL as a global MCP server in your MCP client (Cursor, Claude, Windsurf, etc.) by editing its mcp.json file.
  • Append /mcp to the URL to point to the MCP server endpoint.
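In Cursor, the resulting mcp.json entry looks roughly like this (the server name "todo-app" is arbitrary, and the exact schema can differ between clients, so check your client's docs):

```json
{
  "mcpServers": {
    "todo-app": {
      "url": "http://localhost:8000/mcp"
    }
  }
}
```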

9. Control Your App via AI Agent

  • With everything connected, you can now control your app by simply talking to the AI agent.
  • For example, instruct the agent to add a task, and watch it appear instantly in your to-do app.
  • You can also break down complex tasks into steps, letting the AI agent manage the workflow.

Beyond Cursor: Building Fully Agentic AI Applications

While MCP clients like Cursor and Windsurf are great for demos and development, the real power lies in building fully autonomous AI applications.

The mcp-use framework enables you to:

  • Integrate MCP servers directly with AI agents.
  • Write code that lets agents interact with MCP tools automatically.
  • Build apps where AI agents handle everything, from UI to backend logic, just through conversation.
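A minimal agent along those lines might look like the sketch below. The class and method names follow the mcp-use project's examples and may have changed; the model choice and prompt are placeholders, and the imports are deferred so the sketch reads even without the packages installed (`pip install mcp-use langchain-openai`):

```python
# agent.py — sketch of driving the to-do MCP server from code via mcp-use.
import asyncio

# Same server entry as the client config, pointed at the local FastAPI app.
CONFIG = {"mcpServers": {"todo": {"url": "http://localhost:8000/mcp"}}}

async def main():
    # Deferred third-party imports (API names assumed from mcp-use examples).
    from langchain_openai import ChatOpenAI
    from mcp_use import MCPAgent, MCPClient

    client = MCPClient.from_dict(CONFIG)
    agent = MCPAgent(llm=ChatOpenAI(model="gpt-4o"), client=client)

    # The agent picks the matching MCP tool (e.g. create_todo) for each request.
    result = await agent.run("Add four random tasks to my to-do list")
    print(result)

# Run with: asyncio.run(main())
```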

Imagine social media apps like Instagram or WhatsApp controlled entirely by AI agents responding to your natural language commands.


Final Thoughts

The FastAPI MCP server marks a significant shift in how we build and interact with applications. By exposing your API endpoints as tools accessible to AI agents, you unlock the potential for fully conversational, AI-driven app control without complicated UI development.

Whether you're building simple to-do apps or complex AI-powered platforms, integrating MCP servers can make your applications smarter and more intuitive.


Try It Yourself!

  • Check out the FastAPI MCP GitHub repo for installation and detailed documentation.
  • Experiment with creating your own conversational apps.
  • Explore the mcp-use framework to build fully autonomous AI agents.

If you enjoyed this post and want to see more content on cutting-edge AI app development, consider subscribing to our channel and supporting us with a donation.


Thanks for reading! Feel free to ask questions or share your experiences using FastAPI MCP server in the comments below.


This blog post was inspired by a detailed walkthrough on building AI-controlled apps using FastAPI MCP server and AI agents.


๐Ÿ“ Transcript (231 entries):

Remember how we used to click through menus and drag sliders? That era is closing fast. Very soon, every application will be driven by a single conversation with an AI agent. And yes, this also includes the applications that you build. There'll be no buttons, no dashboards, just you talking. FastAPI MCP server is the switch that makes that future real today. This new MCP server pulls every API that makes your app tick and places them on an MCP layer. It turns each one into a ready-made tool an LLM can call on command. I wired it up and watched my FastAPI app fully controlled by an AI agent. I'll walk you through every step. Any project or app that you build can plug into this server. Each function becomes a tool not only for Cursor or other MCP clients, but I'll also show you how you can connect this to an AI agent that can control the MCP and thus control the app. Here is how it works and how you can do it right now. To integrate the FastAPI MCP server, you first need a FastAPI app. That means you'll need a front-end app and a backend powered by FastAPI. If you want to create an app and get MCP running, the app can be anything you want. In this example, we'll use a to-do list app and automate it using the FastAPI MCP server. We're choosing a to-do list app because of how easy they are to build so that we can get to the important part of the video. Start by creating a new folder and opening it in Cursor. Then create a new environment. When the prompt box shows up, choose this and select the interpreter. This sets up your Python virtual environment. After that, build the front end. I've created a Next.js app. I gave Cursor the prompt that we have a Next.js app and asked it to make the to-do list application. I specified that it should only create the front end and expose the APIs to be used later with FastAPI. That's how a full web app comes together. You have the front end, the backend logic and several microservices that support the rest.
Based on the prompt, Cursor created the Next.js app. You can see it right here. This is what it created. After that, I added the FastAPI backend. I told Cursor to make a FastAPI app in the root. It generated the main, models, and to-dos files with all the endpoints used to control the app. The app became fully functional. Now we could add tasks and delete them as well. Both front end and back end were working together. Now that your front end and back end are running, you'll want to control the app using MCP. To set up the FastAPI MCP server, go to the GitHub repo. I'll link this in the description as well. First, install the FastAPI MCP package in your virtual environment. You can do this using either uv or pip. I used pip here. Just copy the install command. Open a new terminal in Cursor and make sure the virtual environment is active. You'll know it's active if the environment name shows on the terminal. Now paste the command and install the FastAPI MCP library. Once installed, you need to implement it in your code. In the GitHub repo, you'll find a basic usage example. You don't have to write the code yourself anymore. That was the old way. Just copy the example, go back to Cursor, paste the code, and ask Cursor to add it to the main file. You'll see that it adds the FastAPI MCP server, which means the initial setup is complete. There are a couple more steps you'll need that aren't covered in the basic example, but they're essential when building more complex apps. You know, these MCP servers come with tool names, right? They include specific tools the model uses to control the application. In the case of FastAPI MCP, you get tools that the LLM can access to interact with your app. These API tools have names and there's a method for setting them that isn't shown in the basic example we pasted earlier. There's full documentation in the GitHub repo where it explains that you can name tools using the operation ID tag while defining endpoints in FastAPI MCP.
Just set the operation ID to whatever name you want the tool to have and the server will use that. If you forget to name them or choose not to, the names are autogenerated. But if you want more control, it's better to name them yourself. Just like before, copy the code example and ask the AI agent to add an operation ID to every endpoint based on its purpose. Once that's done, each tool will have a clear name and you can call the one you want directly. Now, while setting up and exploring the MCP server, I ran into a problem. Even after naming all the tools, none of them were showing up. At first, I thought the server was broken. But when I checked the documentation, I found the fix. You have to set up the MCP server after all the endpoints are declared. If you don't, then you need to call a specific function at the end of the file to re-register everything. That's what makes the tools appear. Go to your main file and add that line at the very bottom. That should fix the issue and all your tools will start showing up. Now, the question is, how do you use the MCP server you just set up with any MCP client? These clients include Cursor's agent, Windsurf, or even Claude Desktop. If you go back to the repo, you'll see it clearly states that the MCP server will be available at a specific URL. Whether Cursor sets it up or you run the back end yourself. Remember that the front end and back end are separate. To run the back end, use the given command. It will start the server and usually provide a local URL. You need to copy this URL. Go into Cursor, open your mcp.json file and add a new global MCP server. You can use the provided format. Paste the URL and add /mcp at the end. This makes the MCP server available for your agent. You can follow the same steps in Claude and Windsurf as well. Once that's done, your app can be controlled by the agent. I modified the front-end app to refresh every second so we can see changes live as they happen.
I already removed the previous tasks from the app using the MCP server so that we can test it out. I'll ask it to create a task to make a new YouTube video. You can see it called the MCP tool and the task was created. Now, let's say I want to build a new front-end project. I'll break it down and add each part as a task. I gave it the prompt and now you can see the tasks being added step by step. This shows how you can give MCP access to any application you build. Even if you're creating an AI-driven app, you don't need to make things more complicated to enable full AI control. Just expose the endpoints through an MCP server and the app can be controlled by any MCP client. Now you might be thinking this only works inside Cursor or Claude Desktop. But how do you go beyond that and build fully agentic AI applications using this MCP server? The answer to that question is the mcp-use framework. If you're enjoying the video, please consider subscribing. We're trying to hit 25,000 subs by the end of this month and your support means a lot. Using this with the Cursor agent or even the Windsurf agent isn't always practical, but the real value shows up when you want to build full AI agentic applications. For example, imagine having your to-do list app controlled entirely by an AI agent. Just ignore the small front-end issue below. It's a minor bug. If you want an AI agent to control your app, you just talk to it and it does everything for you. Imagine apps like Instagram, Facebook, WhatsApp or anything you build yourself being controlled the same way. This is where the framework comes in. It lets you integrate MCP servers directly with AI agents. You write code and build agents that can access these servers and their tools. Then they can interact with them automatically. I've already explained this in a previous video, so I won't go into too much detail here. You can check that video out. I'll be linking it above.
This is the code I wrote for connecting to the FastAPI MCP server. You'll see there's an agent file and a config.json. The config holds the MCP server just like before. Now, what this framework does is let you talk to your MCP server through an AI agent. It gives you a full client library to build applications that interact with the server. For example, here's our agent. Let's open the terminal and run it. Now, you can see that it's live and waiting for input. I'll ask it to add four random tasks. It processes the request and as you can see, those tasks are added to the list. When we check our to-do app, all four tasks are there. This is the power of using AI agents. You can literally automate any application you build. That's it for this video. If you want us to keep making these videos, please consider donating using the link below. Thanks as always for watching and I'll see you in the next video.