[00:00] When Manus AI launched, one thing we began to realize was that these interconnected AI agents can now automate almost all the tasks we perform on our computers. Human-computer interaction felt like it was disappearing, and in some cases that's absolutely true. The use cases they've shown are impressive: humans don't want to do something, so they just ask an AI agent to handle it and it gets done. These agents have access to all sorts of tools, like a browser and local file access. Think about it: what do you really do on your computer? You browse, process information, and then write it down somewhere or apply it somewhere else. That's exactly what these AI agents are doing. People are asking them to perform real tasks like building tax-policy visualization tools or writing code. You can even ask them to do your taxes if they're given the right data.

[00:46] The basic plan is $19 per month. It gives you 1,900 total credits a month and 300 refresh credits per day. Credit usage adds up fast.
[00:55] Making a simple chart can cost up to 200 credits, and that's most of your daily quota gone in one go. Developing a web app costs 900 credits, something you can't even do on the basic plan. So if all you want is an AI agent to perform tasks on your computer, Manus becomes pretty costly very quickly. That's how I came across this open-source alternative called AgenticSeek. It basically does what Manus does, but it's completely local. If you've got a powerful enough computer, you can run it entirely on your machine without paying anything. No subscriptions, no credit limits. This is the power of open source, and it's super exciting.

[01:28] In this video, I'll show you how AgenticSeek works. I'll walk you through real examples of the agent in action, and if you want to try it yourself, I'll show you exactly how to set it up. Don't worry if you don't have a powerful computer: you can still use AgenticSeek with an API connection, which is still far more affordable than Manus.

[01:46] First, let me show you a demo they've shared to give you a glimpse of how the agent works. Here you can clearly see they ask their own agent to search for the AgenticSeek project they're currently working on. The agent first needs to figure out what skills the project requires. Then they provide it with a zip file of candidate CVs, and the agent's task is to find the candidates that best match the project requirements. The workflow is broken down across multiple specialized agents. One of them is the planner agent. First, it decides to go to the GitHub repository to identify the AgenticSeek project and determine the required skills. It reads the README and the project description to extract that data. For some reason, it also fetches a comparison between AgenticSeek and Manus AI. I'm not entirely sure why it does that, but it's part of its planning flow. After gathering the necessary information, the planner agent proceeds to extract the contents of the candidate CV zip file. Once extracted, it navigates into the folder and starts reading the candidate files one by one.
[02:46] You can see it going through different profiles. After processing the files, it compares the candidates' skills against those required by the AgenticSeek project. It eventually concludes that the best matches are Aisha Khan, Longch, and a few others, and it even ranks them based on fit. Normally this would require a human to search for the project, gather the skill requirements, and feed everything into a tool like ChatGPT, but this agent automates the entire process from start to finish with no manual input and no extra prompts. And the best part: you don't need to pay for anything, unlike Manus AI. If you have a powerful enough computer, it all runs locally, fully private, and completely free. That's what makes it so incredible.

[03:26] I've even got it set up here myself, and this is the prompt I gave it: it needs to search online for popular sci-fi movies from 2024 and pick three that I should watch tonight. It did exactly that. The planner agent came online and broke my request into smaller tasks, and since I asked it to save the results in a movie-night text file, it did that as well. Over here, the browser view showed how it was browsing and searching through the web. If we come back, it actually saved the file where the agent folder was placed, in my developer folder, and automatically created the movies.txt file there. These are the movies it gave me. So it's pretty awesome and works really well.

[04:05] I also did a timing analysis of the request we just made to search for movies and save them in a text file. I recorded the time, and it took 4 minutes to complete the entire task. I felt three sources weren't enough, so I asked it to search 10 sources instead. It did that, searched all 10, and took about 8 minutes before giving me a report.
[04:24] Overall, it's a pretty flexible agent. The timing, in my opinion, is really good, considering it's going out to all these sites and gathering the data I asked for. If you're enjoying the video, I'd really appreciate it if you could subscribe to the channel. We're also testing out memberships to support the channel. We've only launched the first tier so far; it offers priority comment replies for now, but subscribing would really help us see how many of you are interested and want to support what we're doing.

[04:48] If you found this worthwhile and want to install it, here's what you're going to do. Come to the main website; I'll have the link in the description below. From there, go to the GitHub page, where you'll find all the installation commands I'm about to show you. Most of them are already there; you just have to copy and paste them. Follow along with what I do and you'll have the agent installed and running. First, clone the repository.
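As a sketch, the first setup steps look something like this. The exact repository URL, environment-file name, and virtual-environment name are the ones listed on the GitHub page; treat the values below as illustrative:

```
# Clone the repository and move into it
git clone https://github.com/Fosowl/agenticSeek.git
cd agenticSeek

# Rename the example environment file to the real one
mv .env.example .env

# Create and activate a Python virtual environment
python3 -m venv agentic_seek_env
source agentic_seek_env/bin/activate
```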
[05:11] Just copy the first command and paste it into your terminal. This clones the repository, navigates into the directory, and renames the example .env file to your actual .env file. Next, you're going to create a virtual environment for Python: inside that folder, paste the command to set up your Python environment. The best thing about this agent is that it supports installation on Windows, macOS, and Linux, so you're not limited by your operating system. There are two install scripts that set up all the dependencies; choose the one that matches your OS. For me, it's macOS, so I'm going to paste that in, and it'll install all my dependencies. You might get an error when you run the install script, just like I did. The error will ask you to install Python version 3.10.
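On macOS, one common way to get that exact version is through Homebrew; this is a sketch of that route, not necessarily the command shown in the repo:

```
# Install Python 3.10 with Homebrew (macOS)
brew install python@3.10

# Verify the interpreter is now available
python3.10 --version
```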
[05:55] You'll need to install that specific version for everything to work. The method differs depending on your operating system, but it shouldn't be too difficult; for macOS, it's a single command. Once that's done, you should be good to go. After Python 3.10 is installed and you've run the installation script, everything should be set up and you'll be ready to run the agent.

[06:12] Let me guide you through the configuration before running the agent, because it matters quite a bit depending on how you plan to use it. Go into your terminal and open the directory in any code editor you like; I'll open it in Cursor. Inside the codebase, you'll find a file called config.ini. This file contains the configuration settings the agent uses while running, and you'll need to change a few things depending on how you plan to use it: you can either run the agent locally or use it with an external API. If you want to run an LLM locally, head over to the GitHub repository; it has all the details listed, starting with the setup for a locally running LLM. To get good performance with this agent, you need large models running locally; without those, it's really not going to be useful. Running 14B models through Ollama, such as DeepSeek or Qwen variants, won't help much here because the performance won't be great. You need at least a 32B model for it to work well.
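If you do go the local route, fetching a model with Ollama looks like this; the model tag below is just an example, so pick whatever fits your hardware:

```
# Download a ~32B model with Ollama (example tag)
ollama pull qwen2.5:32b

# Quick sanity check that the model responds
ollama run qwen2.5:32b "Say hello in one sentence."
```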
[07:05] That requires something like an RTX 4090. Even the 14B models I mentioned need at least an RTX 3060, which is cheaper than the 4090 but still pretty expensive. If setting this up locally isn't an option for you, then move on to the API setup. Right now, it supports APIs from OpenAI, DeepSeek, Hugging Face, Together AI, and Google; these are the providers it currently works with. I hoped Claude would work with it, but support for Claude isn't available yet.

[07:31] Now go ahead and change the values in the config. First, set is_local to false; it's true by default. Then define the provider name: choose any provider from the list. I chose OpenAI, so I set that as mine. Next, define the provider model. I recommend using GPT-4o instead of GPT-4o mini; I tested both, and GPT-4o performed a lot better. If you have access to other providers, the performance may be even better. I didn't have credits for DeepSeek or the others, so I just used GPT-4o; it worked well, and stronger models would only improve things further. These are the main settings you'll need to update. Then open your .env file and paste in your API key, whether it's from OpenAI, DeepSeek, or whichever provider you're using. Once that's done, your configuration will be ready for basic use.

[08:22] There are also some optional settings. One of them is for a headless browser, which means the browser window won't actually open while the agent runs; it still does everything, just quietly in the background. Leave this set to true. You can also enable speak and listen modes, which let the agent talk back and listen to your voice, so you can have a real conversation with it. Just set both of those options to true and they'll start working.
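Put together, the relevant part of config.ini might look like this for an API setup. The key names here follow the file you'll find in the repo, but double-check them against your own copy:

```ini
is_local = False          ; use an external API instead of a local LLM
provider_name = openai    ; or deepseek, huggingface, together, google
provider_model = gpt-4o   ; gpt-4o performed noticeably better than the mini model
headless_browser = True   ; keep the browser running quietly in the background
speak = False             ; set to True for voice output
listen = False            ; set to True for voice input
```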
[08:42] Before you start the agent, there are a few things you need to know. You'll need Docker up and running for the agent to start, because it's containerized: it fetches containers and sets them up automatically. First, start Docker. Once it's running, you can start the services. Next, come back to the GitHub repository and run the services command. It's different for macOS and Windows; the macOS command also works on Linux, so just run that one if you're on Linux. If you don't have your Python environment activated and you've opened a new terminal, make sure to run the environment activation command first. This will start a few backend services as well as the front end. The structure they've built is a bit confusing to me, because usually the back end and front end are set up separately, but here the setup is different.

[09:25] After running that, you're going to start the back end, which is handled by api.py. Let me show you. This is my back end, and these are the services that are running; they're just Docker containers. I ran the command and now they're all up. Another thing to keep in mind: you need to run the front end and back end in separate terminal windows. So this is running here, and the Python API command is running in another terminal. Something else they got wrong is that it won't start if you just use python or python3; you have to explicitly write python3.10 api.py for it to work.
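In practice, the startup sequence looks roughly like this. The script and file names are the ones shown on the repo page at the time of writing, so verify them against your checkout:

```
# Terminal 1: start the Docker services and the front end
# (macOS/Linux script; Windows has its own)
sudo ./start_services.sh

# Terminal 2: activate the environment, then start the back end
source agentic_seek_env/bin/activate
python3.10 api.py

# Then open http://localhost:3000 in your browser
```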
[09:56] Paste that in and it'll start the back end. Then go to localhost:3000, and you'll find your agent there, ready to use. That brings us to the end of this video. If you'd like to support the channel and help us keep making tutorials like this, you can do so by using the Super Thanks button below. As always, thank you for watching, and I'll see you in the next one.