[00:00] (0.16s)
You probably enjoy building AI agents
[00:02] (2.64s)
just like I do. Whenever an idea comes
[00:04] (4.72s)
to mind, I dive straight into building.
[00:06] (6.72s)
And for most of the agents I've created,
[00:08] (8.72s)
I need to run them in the terminal,
[00:10] (10.32s)
which is where everything usually
[00:11] (11.76s)
happens. Using these AI agents in the
[00:13] (13.84s)
terminal isn't exactly enjoyable. So
[00:16] (16.00s)
today, I want to share something that
[00:17] (17.52s)
can make the experience much better and
[00:19] (19.36s)
more user-friendly. This tool is not
[00:21] (21.28s)
entirely new, but I recently started
[00:23] (23.28s)
using it myself and thought it was worth
[00:25] (25.36s)
sharing. It's called Agent UI and it's a
[00:27] (27.76s)
modern open-source chat interface built
[00:30] (30.40s)
specifically for AI agents. Everything
[00:32] (32.40s)
in it is already designed to be reusable
[00:34] (34.72s)
and production ready. And since it's
[00:36] (36.40s)
open-source, it gives you complete
[00:38] (38.16s)
flexibility to adapt it however you
[00:40] (40.16s)
want. Even if you're planning to turn
[00:41] (41.76s)
your agent into a product, Agent UI
[00:43] (43.92s)
still fits perfectly. You can apply your
[00:45] (45.84s)
own branding, change the interface to
[00:47] (47.84s)
match your style, and still benefit from
[00:49] (49.84s)
a polished and professional user
[00:51] (51.68s)
experience. In this video, I'm going to
[00:53] (53.52s)
show you how to install Agent UI,
[00:55] (55.52s)
explain what's required to get it
[00:57] (57.04s)
running, and walk you through how to
[00:58] (58.64s)
start using it in your own project.
[01:00] (60.64s)
Before we get into the main part of the
[01:02] (62.48s)
video, I want to take a moment to give
[01:04] (64.08s)
you a quick tour of this beautifully
[01:06] (66.16s)
designed interface so you can get a feel
[01:08] (68.32s)
for how everything works. What you're
[01:10] (70.32s)
looking at right now is the endpoint
[01:12] (72.08s)
where the Agno playground is currently
[01:13] (73.92s)
running. And this playground functions
[01:15] (75.52s)
as a middle layer that connects your AI
[01:17] (77.76s)
agents to the front-end interface,
[01:19] (79.68s)
allowing them to interact smoothly. In
[01:21] (81.68s)
order for things to work properly, you
[01:23] (83.52s)
need to have this playground running in
[01:25] (85.28s)
the background. And once it's up, you
[01:26] (86.96s)
can define and manage multiple AI agents
[01:29] (89.52s)
inside it without any additional
[01:31] (91.28s)
configuration. At the moment, there are
[01:33] (93.36s)
two AI agents running. And I've simply
[01:35] (95.68s)
used the default script provided by the
[01:37] (97.76s)
framework to give you a general idea of
[01:39] (99.84s)
how the system looks and behaves during
[01:41] (101.92s)
regular usage. You can also clearly see
[01:44] (104.16s)
which model is being used by each agent.
[01:46] (106.56s)
And one great feature of the playground
[01:48] (108.56s)
is that it includes a memory layer,
[01:50] (110.56s)
which means it can store session data
[01:52] (112.64s)
and maintain context across different
[01:54] (114.88s)
interactions. Let me quickly refresh the
[01:57] (117.28s)
interface so you can see what happens
[01:59] (119.52s)
when a new chat session begins from
[02:01] (121.44s)
scratch. As soon as it reloads, you'll
[02:03] (123.60s)
notice that a fresh conversation is
[02:05] (125.44s)
created. And right now, I'm currently
[02:07] (127.52s)
inside the web agent environment where
[02:09] (129.84s)
the session memory is also working
[02:11] (131.76s)
exactly as intended. Now I'm going to
[02:13] (133.60s)
test the setup by asking a question
[02:15] (135.68s)
related to the latest AI model released
[02:18] (138.08s)
by Google just to see how well the
[02:20] (140.08s)
system performs in a real scenario. As
[02:22] (142.40s)
you can see, the tool calls are being
[02:24] (144.00s)
made successfully and the response
[02:25] (145.92s)
indicates that Gemini 2.5 Pro is
[02:28] (148.88s)
currently the most recent model while
[02:30] (150.96s)
also providing a list of related
[02:32] (152.80s)
articles to offer additional context.
[02:35] (155.04s)
This entire user interface can be used
[02:37] (157.20s)
to run your own AI agents with full
[02:39] (159.36s)
compatibility regardless of which
[02:41] (161.36s)
framework or architecture you choose to
[02:43] (163.28s)
build on. Now, let's walk through the
[02:45] (165.36s)
installation process for Agent UI. There
[02:47] (167.92s)
are two methods listed for setting it
[02:49] (169.60s)
up, but I recommend using the automatic
[02:51] (171.76s)
option since it's faster and more
[02:53] (173.68s)
convenient. This method uses npx to
[02:56] (176.08s)
download and generate the agent UI
[02:57] (177.92s)
template, which is essentially built on
[02:59] (179.68s)
top of a Next.js project. To begin, copy
[03:02] (182.24s)
the command provided in the
[03:03] (183.64s)
documentation. Open your terminal.
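For reference, the scaffold command in the docs is typically the following (verify against the current Agent UI README, since the exact form may change):

```shell
# Scaffold the Agent UI template (a Next.js app) into a new folder.
# Command as documented at the time of writing -- check the README if it fails.
npx create-agent-ui@latest

# Then start the dev server inside the generated folder.
cd agent-ui
npm run dev
```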
[03:05] (185.92s)
Navigate to the directory where you want
[03:07] (187.92s)
to install Agent UI and paste the
[03:10] (190.48s)
command there. The tool will
[03:12] (192.00s)
automatically handle the installation
[03:13] (193.84s)
process. Once it finishes, you'll see
[03:16] (196.00s)
that Agent UI has been set up correctly
[03:18] (198.80s)
just like I have it running here. You'll
[03:20] (200.80s)
also need to ensure that the Agno
[03:22] (202.64s)
playground is running in the background.
[03:24] (204.64s)
When you open the agent UI directory,
[03:26] (206.72s)
you'll notice that it contains all the
[03:28] (208.64s)
standard files from a typical Next.js
[03:31] (211.04s)
template. Before using Agent UI, you
[03:33] (213.44s)
must have the Agno playground installed
[03:35] (215.28s)
and running. As I mentioned earlier,
[03:37] (217.12s)
this playground acts as a bridge between
[03:39] (219.04s)
your AI agents and the front-end
[03:40] (220.88s)
interface. Head to the official guide
[03:42] (222.88s)
and you'll find a backend example
[03:44] (224.72s)
that shows how to get it up and running.
[03:46] (226.80s)
If you're building your agents using the
[03:48] (228.56s)
Agno framework, which is a reliable and
[03:51] (231.20s)
lightweight solution, you'll benefit
[03:53] (233.20s)
from features like support for MCP. Agno
[03:56] (236.08s)
gives you the ability to connect MCP
[03:58] (238.24s)
servers directly to your agents which is
[04:00] (240.56s)
incredibly useful for more advanced
[04:02] (242.56s)
applications. The example script
[04:04] (244.56s)
provided in the repository is ready to
[04:06] (246.88s)
use. So you can simply copy it and paste
[04:09] (249.36s)
it into Cursor and it should work
[04:11] (251.20s)
immediately without additional setup.
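For reference, that example script looks roughly like this. This is a sketch based on the Agno docs, not the author's exact file: import paths and class names can vary between Agno versions, and it assumes an OPENAI_API_KEY is set in your environment.

```python
# Sketch of the playground bootstrap script from the Agno docs.
# Names may differ slightly depending on your Agno version.
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.tools.duckduckgo import DuckDuckGoTools
from agno.playground import Playground, serve_playground_app

web_agent = Agent(
    name="Web Agent",
    model=OpenAIChat(id="gpt-4o"),
    tools=[DuckDuckGoTools()],  # lets the agent make the web tool calls shown above
    markdown=True,
)

# The Playground app is the middle layer that Agent UI connects to.
app = Playground(agents=[web_agent]).get_app()

if __name__ == "__main__":
    # Serves the playground API in the background for the UI to talk to.
    serve_playground_app("playground:app", reload=True)
```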
[04:13] (253.20s)
However, if you prefer not to use Agno
[04:15] (255.20s)
and instead want to integrate a
[04:16] (256.72s)
different framework, you'll need to make
[04:18] (258.24s)
a few changes. Start by opening the
[04:20] (260.40s)
Agent UI folder. Then go to the API
[04:22] (262.96s)
directory and look for the playground
[04:24] (264.64s)
TypeScript file. This is the file you'll
[04:26] (266.80s)
need to modify in order to connect your
[04:28] (268.64s)
own agent backend. You can use Cursor to
[04:31] (271.04s)
help with the editing process, but make
[04:32] (272.96s)
sure the changes allow your custom
[04:34] (274.64s)
agents to connect properly. Whether
[04:36] (276.48s)
you've built your agents using
[04:37] (277.92s)
Langchain, the OpenAI SDK, or any other
[04:41] (281.28s)
tool, this is where the integration
[04:43] (283.28s)
happens. In my case, I'll be using a
[04:45] (285.84s)
FastAPI backend to handle the logic
[04:48] (288.32s)
behind my custom agent. All you need is
[04:50] (290.40s)
a FastAPI backend that exposes your
[04:52] (292.88s)
agent endpoints and then you can connect
[04:54] (294.96s)
those endpoints through the playground
[04:56] (296.64s)
file. Once that's done, your custom
[04:58] (298.88s)
agents will be ready to use inside the
[05:00] (300.88s)
agent UI interface, giving you full
[05:03] (303.36s)
control over the user experience. If
[05:05] (305.92s)
you're enjoying the video, I'd really
[05:07] (307.60s)
appreciate it if you could subscribe to
[05:09] (309.60s)
the channel. We're aiming to reach
[05:11] (311.44s)
25,000 subscribers by the end of this
[05:13] (313.84s)
month, and your support genuinely helps.
[05:16] (316.24s)
We share videos like this three times a
[05:18] (318.32s)
week, so there is always something new
[05:20] (320.16s)
and useful for you to explore. Now,
[05:22] (322.08s)
let's talk about how you can use your
[05:23] (323.68s)
own custom agent instead of relying on
[05:25] (325.68s)
the Agno framework. In my setup, I
[05:27] (327.68s)
decided to implement my own backend
[05:29] (329.60s)
entirely from scratch rather than using
[05:31] (331.60s)
the Agno playground. I also built my
[05:33] (333.76s)
agents using the OpenAI SDK, which gave
[05:36] (336.48s)
me more flexibility and control over how
[05:38] (338.56s)
they behave. These agents are connected
[05:40] (340.48s)
through A2A, also known as the
[05:43] (343.12s)
agent-to-agent protocol, which allows agents
[05:45] (345.20s)
built on different frameworks to
[05:46] (346.88s)
communicate with each other seamlessly.
[05:48] (348.80s)
If you're curious to learn more about
[05:50] (350.40s)
A2A, we have a complete video covering
[05:52] (352.88s)
it in detail. So, be sure to check that
[05:54] (354.88s)
out. In this particular setup, I have a
[05:57] (357.28s)
main agent that communicates with two
[05:59] (359.20s)
other specialized agents. One is
[06:00] (360.96s)
responsible for handling plant-related
[06:02] (362.88s)
queries, and the other handles
[06:05] (365.04s)
animal-related ones. Each of these agents runs
[06:06] (366.96s)
on its own separate endpoint. When I
[06:08] (368.88s)
send a request to the main agent, it
[06:10] (370.72s)
examines the message and forwards it to
[06:13] (373.04s)
the correct agent based on the type of
[06:15] (375.12s)
query it receives. To make all of this
[06:17] (377.28s)
work, I completely removed the Agno
[06:19] (379.60s)
playground and replaced it with my own
[06:21] (381.92s)
FastAPI backend. Now everything is
[06:24] (384.24s)
managed through FastAPI and the
[06:26] (386.24s)
integration process was fairly
[06:28] (388.16s)
straightforward. I only had to update
[06:30] (390.16s)
about three or four files and adjust a
[06:32] (392.24s)
few core functions to make the system
[06:34] (394.00s)
run smoothly. If you look closely at the
[06:36] (396.08s)
interface, you'll notice that the model
[06:37] (397.84s)
information from Agno has been removed
[06:39] (399.92s)
and the UI now reflects that FastAPI is
[06:42] (402.72s)
being used instead. The main agent and
[06:44] (404.72s)
its associated port are still listed and
[06:47] (407.20s)
I'm able to start a new chat session
[06:49] (409.12s)
without any issues. At this point,
[06:50] (410.88s)
memory integration hasn't been added
[06:52] (412.64s)
yet, so there are no saved sessions.
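The dispatch described above, where the main agent examines each message and forwards it to the right specialized agent, can be sketched roughly like this. This is a hypothetical illustration: in the real setup an OpenAI SDK agent does the classification and forwards the message over A2A, while plain keyword matching stands in for the model here so the control flow is easy to follow. The endpoint ports are made up.

```python
# Hypothetical sketch of the main agent's routing: classify a message as
# plant-related, animal-related, or "other", then pick the agent endpoint.
PLANT_WORDS = {"plant", "plants", "flower", "tree", "soil", "photosynthesis"}
ANIMAL_WORDS = {"animal", "animals", "lion", "lions", "tiger", "bird"}

# Illustrative ports -- each specialized agent runs on its own endpoint.
AGENT_ENDPOINTS = {
    "plant": "http://localhost:8001",
    "animal": "http://localhost:8002",
}

def classify(message: str) -> str:
    """Label a message as 'plant', 'animal', or 'other'."""
    words = set(message.lower().replace("?", " ").split())
    if words & PLANT_WORDS:
        return "plant"
    if words & ANIMAL_WORDS:
        return "animal"
    return "other"  # handled directly by the main agent

def route(message: str) -> str:
    """Return the endpoint that should handle the message ('main' for other)."""
    return AGENT_ENDPOINTS.get(classify(message), "main")
```

With this sketch, a plant question resolves to the plant agent's endpoint and a greeting stays with the main agent, mirroring the behavior shown in the demo.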
[06:54] (414.88s)
However, when I sent a general message,
[06:56] (416.96s)
the system responded with a greeting and
[06:59] (419.20s)
labeled the message as "other", which
[07:01] (421.12s)
means it was handled directly by the
[07:02] (422.96s)
main agent instead of being routed to
[07:05] (425.28s)
one of the specialized agents. When I
[07:07] (427.36s)
asked, "What do plants need to survive?"
[07:09] (429.60s)
The system correctly recognized that the
[07:11] (431.52s)
query was plant-related. It routed the
[07:13] (433.92s)
message to the plant agent, which
[07:15] (435.60s)
responded with a clear and concise
[07:17] (437.52s)
answer. Then I asked, "Where do lions
[07:19] (439.60s)
live?" And once again, the system
[07:21] (441.28s)
classified it correctly as an
[07:23] (443.44s)
animal-related query. The message was sent to
[07:25] (445.28s)
the animal agent which provided a direct
[07:27] (447.52s)
and informative response. All of this is
[07:30] (450.00s)
currently running on the GPT-4o mini
[07:32] (452.48s)
model which I configured through the
[07:33] (453.92s)
system prompt to give short, focused
[07:36] (456.08s)
replies. The interface looks polished
[07:38] (458.00s)
and professional. The formatting is
[07:39] (459.76s)
clean. The icons can be customized and
[07:42] (462.00s)
the smooth animations throughout the UI
[07:44] (464.40s)
really help enhance the overall user
[07:46] (466.48s)
experience. In conclusion, I highly
[07:48] (468.96s)
recommend installing it and starting to
[07:50] (470.96s)
build your agents using Agent UI. I
[07:53] (473.52s)
personally think it looks amazing and is
[07:55] (475.36s)
highly configurable to suit your needs.
[07:57] (477.52s)
It saves you a lot of time because you
[07:59] (479.20s)
don't have to build a custom agent from
[08:00] (480.96s)
scratch with all the components and
[08:02] (482.72s)
animations integrated. If you want to
[08:04] (484.80s)
add your own features, you can easily do
[08:06] (486.72s)
that as well since it is open source.
[08:08] (488.48s)
So, go ahead and give it a try. That
[08:10] (490.40s)
brings us to the end of this video. If
[08:12] (492.24s)
you'd like to support the channel and
[08:13] (493.76s)
help us keep making tutorials like this,
[08:16] (496.00s)
you can do so by using the super thanks
[08:18] (498.00s)
button below. As always, thank you for
[08:20] (500.08s)
watching, and I'll see you in the next one.