[00:00] (0.08s)
Yesterday, Claude Code changed the game
[00:02] (2.08s)
again. They released agents that live in
[00:04] (4.96s)
your project and seem to be able to read
[00:06] (6.72s)
your mind and do exactly what you need,
[00:09] (9.12s)
even if you don't exactly ask for them.
[00:11] (11.68s)
Today, I'll show you how to build agents
[00:13] (13.20s)
in Claude Code that really have a chance
[00:16] (16.08s)
to change your use of the tool. But
[00:19] (19.04s)
there are a few things to be aware of.
[00:21] (21.28s)
Okay, what exactly are agents? Think of
[00:23] (23.76s)
agents as little prompts that live in a
[00:27] (27.36s)
special folder waiting for you to call
[00:29] (29.68s)
them up or mention something that they
[00:32] (32.16s)
do. This is going to feel a little bit
[00:33] (33.92s)
like MCP and a little bit like agents
[00:36] (36.72s)
and a little bit like slash commands.
[00:38] (38.88s)
There are a lot of moving parts here under
[00:41] (41.20s)
something that seems so simple. So, I
[00:43] (43.76s)
really want to show you how to use them
[00:45] (45.28s)
today. First, let's look at what
[00:46] (46.96s)
Anthropic announced very simply. They
[00:49] (49.68s)
have a page for subagents inside of
[00:51] (51.92s)
their documentation. What I will
[00:54] (54.16s)
point out here, the only part that's
[00:55] (55.84s)
really meaningful to us at this point is
[00:58] (58.08s)
what are the key benefits of writing
[01:00] (60.32s)
these agents. Now, they call them sub
[01:02] (62.48s)
agents, but when you use the command,
[01:04] (64.48s)
there's a slash command to find them,
[01:06] (66.08s)
and that slash command is slash agents.
[01:08] (68.96s)
But the two things that are really
[01:10] (70.56s)
meaningful here are that they have their
[01:12] (72.48s)
own context, which means everything
[01:14] (74.64s)
inside of that agent area, that's why
[01:17] (77.28s)
they're sub agents, is that they have
[01:19] (79.52s)
their own subcontext area where they might
[01:22] (82.64s)
do hundreds of steps, and all of the context
[01:24] (84.80s)
that they build up to do all of their
[01:26] (86.40s)
moving parts stays within that agent. So
[01:28] (88.88s)
when it's complete, the result of what it's
[01:30] (90.72s)
done passes back out, and everything else
[01:32] (92.96s)
is consumed and goes away within that
[01:35] (95.04s)
sub agent. Now, what's important about
[01:36] (96.72s)
that is it doesn't pollute your main
[01:39] (99.12s)
context. If you're having a chat and you
[01:41] (101.04s)
want something to occur, please write a
[01:43] (103.12s)
file or go research the best account
[01:46] (106.00s)
that I need to use for this sample. That
[01:48] (108.72s)
might need to go read a database or look
[01:51] (111.12s)
at the web or whatever it might need to
[01:52] (112.88s)
do. All of the extra work that it's
[01:54] (114.72s)
doing in there, you don't need in your
[01:55] (115.92s)
main chat context. You're trying to work
[01:58] (118.00s)
through a problem, whatever that problem
[01:59] (119.52s)
may be. And so all of these little
[02:01] (121.28s)
subtasks that go off and do their own
[02:03] (123.04s)
thing need to kind of manage their own
[02:05] (125.60s)
memory space, if you will. Now, you're
[02:07] (127.92s)
sending in some information that it can
[02:09] (129.84s)
consume and work with, and it's sending
[02:11] (131.84s)
its response back out. And we'll take a
[02:13] (133.68s)
look at how I determined that in a
[02:15] (135.28s)
moment. But that's the one really major
[02:17] (137.76s)
thing that these things are actually
[02:19] (139.04s)
doing for us is kind of encapsulating
[02:21] (141.60s)
the space of the context that they're
[02:24] (144.32s)
part of. The other thing that really is
[02:26] (146.08s)
truly important here is they have
[02:27] (147.92s)
flexible permissions. So this is a way
[02:29] (149.84s)
to say what can this agent do? We will
[02:32] (152.32s)
look very clearly at this so that
[02:34] (154.16s)
you understand it, but it's basically
[02:35] (155.68s)
saying can this agent write files? Can
[02:37] (157.92s)
it read files? Can it use this MCP or
[02:40] (160.80s)
that MCP? All of those kinds of things
[02:43] (163.04s)
can be described inside of the
[02:44] (164.72s)
definition of the agent itself. And that
[02:46] (166.56s)
way you can greatly limit or even
[02:49] (169.04s)
enhance what any given subtask or
[02:51] (171.84s)
sub-agent is doing for you and the tools
[02:54] (174.00s)
that it has to do it.
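To make that concrete, here is a rough sketch of what the permissions side looks like in an agent file's YAML frontmatter. The fields (name, description, tools) follow the subagent format Anthropic documents; the specific agent and the exact tool names listed are just an illustration, not something from the video.

```markdown
---
name: repo-reader
description: Use this agent when you need a read-only survey of the codebase. It must never modify files.
tools: Read, Grep, Glob
---

You are a read-only research assistant. Inspect files and report what you
find, but never write, edit, or execute anything.
```

Leaving the tools line out should give the agent the broader default tool access; listing tools explicitly is how you narrow down what it can touch.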
[02:55] (175.68s)
All right. So with that, let's dive in and take a look at
[02:57] (177.52s)
how you first interrogate the agents
[02:59] (179.76s)
that exist and then build our own. Okay,
[03:02] (182.00s)
let's build a dice rolling agent. It's
[03:04] (184.32s)
simple enough to understand and maybe
[03:06] (186.16s)
complex enough to really show off how to
[03:08] (188.08s)
build your own agent. For anybody that
[03:09] (189.92s)
doesn't know, I am running Claude Code
[03:12] (192.32s)
inside of Cursor. Really, that just
[03:14] (194.40s)
basically means you are seeing a terminal
[03:17] (197.36s)
environment here and Cursor on the
[03:19] (199.84s)
outside. And that really gives us the
[03:21] (201.44s)
ability to look at files. I always
[03:23] (203.68s)
really advise running Claude Code in
[03:26] (206.00s)
something where you can interrogate the
[03:27] (207.60s)
project that it's working
[03:29] (209.20s)
with. You can collapse the side and just
[03:31] (211.60s)
get back to Claude Code itself if you
[03:33] (213.28s)
want. But enough of that just so that
[03:35] (215.04s)
you understand the environment that
[03:36] (216.32s)
we're looking at here. If you recall, it
[03:38] (218.16s)
told us we could use the agents command,
[03:40] (220.64s)
the slash command of agents that you can
[03:42] (222.48s)
see here. And there are no agents that
[03:44] (224.56s)
it found currently in this project. So
[03:46] (226.88s)
we can create our own agent. I will show
[03:49] (229.60s)
you that where this is going to go is in
[03:51] (231.60s)
our .claude folder. There's
[03:54] (234.16s)
an agents folder inside of that
[03:55] (235.68s)
.claude folder, and it's currently empty.
[03:57] (237.52s)
That's really what we're about to build.
[03:59] (239.36s)
We're going to let the system itself
[04:01] (241.44s)
build our agent. I think this is the
[04:03] (243.12s)
best way to get started and they really
[04:04] (244.64s)
advise that you always get started with
[04:06] (246.24s)
this because the file that it creates
[04:08] (248.08s)
has a little bit of a format. We'll look
[04:09] (249.76s)
at that. Here we go. So, you can change
[04:12] (252.32s)
where you want your agent. Do you want
[04:13] (253.92s)
your agent in this project or do you
[04:16] (256.24s)
want it somewhere else? It says personal. I might think
[04:18] (258.32s)
of this more as global because it's
[04:20] (260.24s)
putting it in my top-level .claude
[04:22] (262.88s)
folder, which is in my home. So if I put
[04:25] (265.84s)
it there, if I chose personal, it will
[04:28] (268.08s)
be available to every instance of Claude
[04:30] (270.80s)
that I pull up from this home folder
[04:32] (272.64s)
from this account. And that's really
[04:34] (274.24s)
useful. I'm very excited about that. I
[04:36] (276.64s)
don't have any installed now as you
[04:38] (278.32s)
could see, but that's really where I
[04:40] (280.16s)
think I'm going to put several of the
[04:41] (281.52s)
things that happen across many of my
[04:43] (283.36s)
projects. So that I think is a real
[04:45] (285.84s)
power here. At the same time, what you
[04:47] (287.76s)
might put inside of a project is also
[04:49] (289.76s)
really powerful because projects really
[04:51] (291.60s)
have their own concerns very often. And
[04:53] (293.76s)
this might really be a way to let things
[04:55] (295.68s)
always occur a certain way without every
[04:58] (298.24s)
user having to know how to ask for it.
[05:00] (300.00s)
But look, I'm getting ahead of myself.
[05:01] (301.60s)
Let's build an agent. All right, we are
[05:03] (303.28s)
going to generate with Claude and tell
[05:05] (305.04s)
it that I want an agent that rolls dice.
[05:07] (307.44s)
Okay, so once it's gone through and of
[05:09] (309.52s)
course this is doing a full LLM thing.
[05:11] (311.28s)
We'll see this in a second when we see
[05:12] (312.72s)
the completed file. So it really is
[05:15] (315.20s)
using probably Sonnet, I don't really
[05:17] (317.28s)
know, to describe the best version of an
[05:19] (319.68s)
agent that it can. But we're at the
[05:21] (321.52s)
point of needing to tell it what tools
[05:23] (323.60s)
it has the ability to use. So if we turn
[05:26] (326.16s)
off all tools, it has no ability. That's
[05:28] (328.88s)
not entirely true because it's an agent.
[05:30] (330.80s)
It still has access to an LLM. And
[05:33] (333.20s)
really, frankly, that underlying LLM can
[05:35] (335.44s)
also do things. So, "no tools" isn't
[05:37] (337.92s)
quite literal. Don't think of it as this isn't
[05:39] (339.52s)
going to do anything; it's just not using
[05:40] (340.96s)
any of your actionable tools that are
[05:43] (343.44s)
extraneous to the typical Claude
[05:45] (345.28s)
environment, if you will, or the typical
[05:47] (347.12s)
LLM interactions that you might have.
[05:49] (349.12s)
You can allow it just the read-only tools
[05:51] (351.36s)
or read-only and edit tools. You can do
[05:53] (353.52s)
anything you want here. But if I show
[05:54] (354.80s)
the individual tools and say let's take
[05:57] (357.36s)
a look with the read-only tools turned
[05:59] (359.68s)
on, you can see that it has some of the
[06:02] (362.24s)
different read mechanisms that you might
[06:04] (364.08s)
have at shell level and also it's got
[06:06] (366.64s)
its web search and notebook read
[06:08] (368.88s)
mechanism here. Let's try this. This one
[06:11] (371.28s)
we can just... it's too big. Let me hide
[06:14] (374.00s)
that so that we can see. I'm going to
[06:15] (375.44s)
turn off all tools for this guy because it
[06:17] (377.20s)
really doesn't seem to make sense that
[06:18] (378.56s)
he needs any tools. So let's give him a
[06:20] (380.72s)
shot. He's just rolling some dice. And
[06:22] (382.32s)
now it says up here to continue. And now
[06:25] (385.12s)
you also get to give these things a
[06:26] (386.72s)
color. I think this is great. I'm glad
[06:28] (388.40s)
they've moved into this. I'm going to
[06:29] (389.84s)
make our dice roller blue. And that just
[06:32] (392.00s)
means when you see it running, you'll
[06:33] (393.68s)
see it very clearly called out in the
[06:35] (395.60s)
stack here in the conversation. Okay.
[06:37] (397.44s)
And so this is the prompt that it ended
[06:39] (399.60s)
up creating. You can't see it all here.
[06:41] (401.44s)
So we'll look at it as a file. So it's
[06:43] (403.52s)
saying press S or enter to save. Now
[06:46] (406.08s)
we've saved it and we can see it here.
[06:47] (407.92s)
Okay. So, let's exit out of this and
[06:50] (410.40s)
then come up and take a look at the file
[06:51] (411.84s)
that it created. The prompt is just down
[06:53] (413.84s)
here so that you can just come in and
[06:55] (415.36s)
write anything you want in the prompt.
[06:56] (416.80s)
Up at the top is where some of the magic
[06:58] (418.56s)
is happening. They describe this in the
[07:00] (420.24s)
documentation if you need to know more,
[07:01] (421.84s)
but really what's going on is there's a
[07:03] (423.52s)
special way for it to call out its name
[07:05] (425.84s)
and also description. Now, the
[07:07] (427.28s)
description one is probably the most
[07:08] (428.96s)
important. Tools are important as well,
[07:10] (430.48s)
but really description is the one that's
[07:12] (432.32s)
kind of the most critical. And you can
[07:14] (434.08s)
see that this has a very special format
[07:15] (435.92s)
to it with a little bit of extra markup
[07:17] (437.68s)
inside of it. That's why they're
[07:19] (439.04s)
advising or recommending that you use
[07:20] (440.88s)
their tool to get it started and then
[07:22] (442.72s)
come back and make changes. But this
[07:24] (444.40s)
description here: "Use this agent when
[07:26] (446.24s)
you need to simulate dice rolls for
[07:27] (447.92s)
games, probability calculations, random
[07:29] (449.92s)
number..." So this is a very important
[07:32] (452.00s)
description. Anybody that has used
[07:33] (453.92s)
MCPs, built MCPs, and even in some
[07:37] (457.04s)
cases agent tools, things like that,
[07:38] (458.72s)
depending upon what platform you're
[07:40] (460.16s)
using, what framework you're using, the
[07:41] (461.76s)
description is something that this agent
[07:43] (463.92s)
system, that Claude code is using to
[07:46] (466.24s)
interrogate and really go discover these
[07:48] (468.96s)
tools. And now, I can't overstate how
[07:50] (470.88s)
important that is. It's very difficult
[07:52] (472.16s)
to get that across in a very simple
[07:53] (473.84s)
statement like that. If you know what I
[07:55] (475.12s)
just said, you probably just went, "Oh,
[07:57] (477.28s)
what? Really?" These are not things that
[07:59] (479.60s)
the system necessarily knows about. It's
[08:02] (482.00s)
like all of the other MCPs and MCP tools
[08:04] (484.72s)
that we've seen. It has a
[08:06] (486.48s)
self-discovery mechanism. So, when it launches, it
[08:09] (489.12s)
looks at all the agents that it's aware
[08:10] (490.64s)
of and takes an inventory of all of
[08:12] (492.48s)
their descriptions. Those descriptions
[08:14] (494.48s)
become part of the tool set that it
[08:16] (496.80s)
gives to an LLM when it says, "Hey, the
[08:18] (498.72s)
user asked me to do X. Here's some tools
[08:20] (500.72s)
I know about. Maybe they're useful. Let
[08:22] (502.88s)
me know if you want me to call one of
[08:24] (504.24s)
those." So, it's using this as a tool
[08:26] (506.88s)
that the LLM might go, "Oh, yeah. You
[08:28] (508.96s)
know what you should do? You should roll
[08:30] (510.40s)
dice using that dice rolling agent that
[08:32] (512.24s)
was given to you." Because it looks like,
[08:33] (513.92s)
in the description, which is the most
[08:35] (515.60s)
critical aspect of this, they indicate
[08:37] (517.68s)
that it can roll dice. So let's give
[08:39] (519.52s)
this thing a shot now that we understand
[08:41] (521.68s)
what's going on here. And of course,
[08:43] (523.68s)
if we do /agents, we should see our dice
[08:45] (525.92s)
roller here. Roll a d6. Now, of course, it's not
[08:48] (528.96s)
fast. This is definitely not the most
[08:50] (530.40s)
efficient way to roll a d6 I wouldn't
[08:52] (532.32s)
think because it's going to an LLM. The
[08:54] (534.16s)
LLM is saying okay why don't you
[08:56] (536.16s)
actually call this tool that you told me
[08:57] (537.92s)
about. Then it's saying, "Oh, I'll call
[08:59] (539.92s)
this tool. Let me call the tool." The
[09:01] (541.68s)
tool's going to an LLM saying, "Hey, I'm
[09:03] (543.60s)
supposed to do all this." It's kind of
[09:04] (544.80s)
nuts to think that this is how you would
[09:06] (546.08s)
roll dice, but that's the idea of
[09:07] (547.52s)
agents. This is kind of a natural way
[09:09] (549.12s)
for agents to work. And you can see it
[09:10] (550.80s)
says you rolled a four on the d6.
[09:12] (552.56s)
Excellent. So, this is the output that's
[09:14] (554.64s)
actually coming out of this action here.
[09:17] (557.60s)
And if I go in and do a Ctrl+R, which
[09:20] (560.24s)
is a way that you can see the actions
[09:22] (562.16s)
that occurred, all the conversation that
[09:23] (563.92s)
was kind of collapsed, if you will, you
[09:26] (566.16s)
can actually see some of what was
[09:27] (567.60s)
happening. So, it started to call the
[09:29] (569.20s)
dice roller the first time. It doesn't
[09:31] (571.04s)
know about a slash roll command that it
[09:33] (573.20s)
tried to do. Here's the commands that
[09:34] (574.72s)
you're allowed to do. And it says, "Huh,
[09:36] (576.24s)
maybe that's not what I need to do. Let
[09:37] (577.52s)
me try with a simpler prompt." Calls the
[09:39] (579.28s)
dice roller and the dice roller goes to
[09:41] (581.44s)
the agent or to the LLM, does the
[09:43] (583.44s)
rolling, comes back with this kind of
[09:45] (585.12s)
output, and then what it finally
[09:47] (587.20s)
summarizes it to is this. So, that's all
[09:49] (589.52s)
of what's happening inside of this
[09:51] (591.12s)
agent. That's how agents work. You can
[09:53] (593.20s)
see we didn't have to say use the dice
[09:55] (595.76s)
roller agent. All we said was roll a d6.
[09:59] (599.04s)
That description is doing that work.
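For reference, the generated file ends up looking roughly like this: a small YAML frontmatter block holding the metadata that Claude Code reads at discovery time, and then the system prompt as the markdown body. This is a reconstruction of the shape, not the exact file from the video.

```markdown
---
name: dice-roller
description: Use this agent when you need to simulate dice rolls for games, probability examples, or random number generation, such as rolling a d6 or a handful of d20s.
---

You are a dice rolling assistant. When asked to roll dice:

1. Parse the request (number of dice, number of sides, any modifiers).
2. Generate each roll and show the individual results and the total.
3. Reply with a short, clearly formatted summary, e.g. "You rolled a 4 on the d6."
```

The body is just a prompt, so you can come back and edit it like any other text file.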
[10:01] (601.04s)
Remember this discovery thing is super
[10:03] (603.20s)
super important. And so if you are
[10:05] (605.44s)
writing these agents, what you want to
[10:07] (607.04s)
do is make sure the description is very
[10:08] (608.64s)
unique. You don't want something like
[10:10] (610.32s)
writes files or reads files because
[10:12] (612.48s)
you're going to be reading files a lot
[10:13] (613.92s)
and it could get confused and these
[10:16] (616.00s)
agents are always kind of in memory or
[10:18] (618.40s)
in access for the LLM. So if it just
[10:20] (620.88s)
sees something generic saying, "Oh, I
[10:22] (622.32s)
read a file." Well, at any given moment,
[10:24] (624.32s)
the LLM might send back a tool call
[10:26] (626.16s)
saying, "You need to read a file." And
[10:27] (627.68s)
it could choose the wrong one. So, just
[10:29] (629.60s)
be aware that you want these to be very
[10:31] (631.52s)
discrete and do work that's meaningful
[10:33] (633.68s)
for you and meaningful that you might
[10:35] (635.52s)
describe when you're asking for it. I
[10:37] (637.36s)
need to query my local database for X.
[10:39] (639.60s)
If you have a tool or an agent that
[10:42] (642.16s)
locally queries the database using these
[10:44] (644.72s)
psql commands, blah blah blah, that's
[10:46] (646.72s)
going to be a very valuable kind of
[10:48] (648.40s)
interaction. If it's just "can read
[10:50] (650.32s)
databases," that might be more of a challenge.
[10:52] (652.80s)
You'd probably get there in most cases,
[10:54] (654.48s)
but I think you might get some false
[10:56] (656.00s)
positives in that case as well.
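As a sketch of that advice, compare a generic description with a scoped one. These are two separate, made-up agent files' frontmatter, shown side by side; only the second gives the routing LLM something unambiguous to match against.

```markdown
<!-- Too generic (hypothetical): overlaps with things the main session does constantly. -->
---
name: file-reader
description: Reads and writes files.
---

<!-- Scoped (hypothetical): only plausibly matches one kind of request. -->
---
name: local-db-reader
description: Use this agent only when the user explicitly asks to query the local Postgres database with psql for reports or ad hoc inspection. Never use it for reading ordinary project files.
---
```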
[10:57] (657.52s)
All right, that's enough; we built our
[10:59] (659.84s)
first agent. Let's take a look at how I
[11:02] (662.32s)
can tell that they are passing
[11:03] (663.76s)
information when you chain them. Okay,
[11:05] (665.52s)
so I've dropped a few new agents in
[11:08] (668.00s)
here. One of them is our
[11:10] (670.48s)
dice roller, as we saw before. We have
[11:12] (672.32s)
another one that summarizes and knows
[11:14] (674.40s)
how to write a markdown kind of summary
[11:16] (676.96s)
from information that it's given and
[11:18] (678.96s)
another one that knows how to write
[11:21] (681.20s)
files that don't have a name. So if you
[11:23] (683.04s)
have a big string and you're just trying
[11:24] (684.24s)
to say save a file, hopefully this one
[11:26] (686.32s)
will kick in and be used, and it'll call
[11:28] (688.32s)
the LLM with the content to try to
[11:30] (690.00s)
figure out what the name of
[11:31] (691.76s)
the file should be and what file type it
[11:33] (693.76s)
is. So we're going to try to chain these
[11:35] (695.52s)
together. So we saw calling dice roller
[11:38] (698.24s)
and it'll come back with its value. But
[11:40] (700.64s)
how about if we said okay use dice
[11:42] (702.56s)
roller and then save the information
[11:45] (705.04s)
directly as a file. So let's give that a
[11:48] (708.00s)
shot. Now this is one of those examples
[11:49] (709.84s)
I'm saying save the output to a file.
[11:52] (712.56s)
Hopefully my description is strong
[11:54] (714.48s)
enough to say if it doesn't have a name
[11:57] (717.04s)
and you have a string this is a good
[11:59] (719.36s)
agent to use to write a file. But it
[12:01] (721.68s)
might not. So it does seem that it's
[12:03] (723.52s)
picked it up here. Nameless writer and
[12:05] (725.52s)
dice roller. So it went and called the
[12:07] (727.44s)
dice roller. It's now calling the
[12:09] (729.20s)
nameless writer. So this all looks
[12:11] (731.20s)
successful. It also picked text; maybe
[12:13] (733.92s)
it didn't turn it into markdown, which
[12:15] (735.92s)
is fine. And that's great. So this is
[12:17] (737.76s)
probably output that we saw that would
[12:19] (739.92s)
have come out of the dice roller. Right?
[12:21] (741.92s)
So this is kind of proof that what we're
[12:23] (743.84s)
seeing is the dice roller's return value.
[12:26] (746.72s)
The string, essentially, that's in
[12:28] (748.48s)
the response is then going in, since I'm
[12:31] (751.20s)
chaining those two together. This is
[12:32] (752.96s)
kind of called chaining. When you're
[12:34] (754.80s)
mentioning two things in a row, roll
[12:37] (757.04s)
some dice and save the output. I'm
[12:39] (759.36s)
chaining two actions together.
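That chaining worked because each agent's description told the orchestrator what kind of hand-off it accepts. A sketch of what the nameless writer's definition might look like (hypothetical wording and tool choice, not the actual file):

```markdown
---
name: nameless-writer
description: Use this agent when there is a block of content that needs to be saved to disk but no filename was given. It inspects the content, picks an appropriate name and extension, and writes the file.
tools: Write
---

Given content with no filename, decide on a short descriptive filename and
the right extension (.txt, .md, .json, ...), write the file, and report back
the path you chose.
```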
[12:40] (760.96s)
Theoretically in a normal agent system,
[12:43] (763.28s)
that's a way that you would pass context
[12:44] (764.80s)
from one to the other. You're seeing
[12:46] (766.24s)
that happen here. So, let's try the same
[12:48] (768.24s)
thing. And I'm going to use the up arrow
[12:50] (770.24s)
here to give us roll some dice, save the
[12:52] (772.40s)
output, and summarize, and then finally
[12:55] (775.44s)
save the output to a file. So, let's see
[12:57] (777.28s)
what it comes up with on this
[12:58] (778.96s)
one because we have a summarizer agent.
[13:00] (780.64s)
It's also using the to-do write tool to keep track
[13:03] (783.28s)
of these tasks. Okay, let's see if it
[13:05] (785.52s)
works. Dice roller. Excellent.
[13:07] (787.28s)
Summarize. Excellent. Using the
[13:09] (789.04s)
summarizer and you can see the colors
[13:10] (790.72s)
coming into play here so that you can
[13:12] (792.08s)
really tell what's going on that you're
[13:14] (794.00s)
using kind of a smart system or
[13:15] (795.92s)
potentially a very smart system that's
[13:17] (797.76s)
housed away in one of these agents. And
[13:19] (799.84s)
then hopefully we see the file writer
[13:21] (801.68s)
happening here at the end. Nameless
[13:23] (803.44s)
writer. I like it. Excellent. This is
[13:25] (805.28s)
working. Okay. So hopefully this will
[13:27] (807.20s)
give us a new dice roll result.
[13:28] (808.64s)
Hopefully it doesn't pick the exact same
[13:30] (810.64s)
title. Dice roll summary this time.
[13:32] (812.56s)
Perfect. And it's a markdown because if
[13:34] (814.72s)
you remember the summarizer turns
[13:36] (816.40s)
everything into a markdown file.
[13:38] (818.00s)
Excellent. So let's take a look at the
[13:39] (819.60s)
result. All right.
[13:41] (821.76s)
The executive summary of a series of
[13:44] (824.32s)
dice rolls. Uh total dice rolled 11.
[13:47] (827.76s)
Wow. I think we just told it
[13:49] (829.92s)
to roll some dice. It rolled five sets of
[13:52] (832.00s)
dice. Created a comprehensive summary.
[13:54] (834.80s)
It does. Okay. And it's got five sets.
[13:56] (836.88s)
Great. Okay. So, we're going to call it
[13:59] (839.36s)
good. So, this is something that I want
[14:01] (841.04s)
to talk about, and you might be
[14:02] (842.48s)
hearing me as I'm saying, I hope it
[14:04] (844.40s)
calls this. I hope it does this other
[14:06] (846.00s)
thing. It is probabilistic. I will get
[14:08] (848.16s)
to that at the end. It's really a very
[14:10] (850.88s)
important thing to understand that
[14:12] (852.32s)
you're introducing here. And it is one
[14:14] (854.48s)
of the warnings that I would say comes
[14:16] (856.32s)
along with this kind of action. All
[14:18] (858.56s)
right. First, let's look at a much much
[14:21] (861.04s)
more advanced use of something like this
[14:23] (863.20s)
that has nothing to do with programming.
[14:25] (865.04s)
And by the way, if you haven't noticed,
[14:26] (866.88s)
we haven't written any programs yet.
[14:28] (868.64s)
It's a way that I've been using Claude
[14:30] (870.16s)
Code, and I'm kind of going to advocate
[14:31] (871.92s)
that you think about using Claude Code this
[14:33] (873.84s)
way as well. For the next example,
[14:36] (876.00s)
okay, I'm going to give you a little
[14:37] (877.20s)
peek behind the curtain on how this has
[14:39] (879.68s)
been happening. I've done this
[14:41] (881.20s)
traditionally in ChatGPT even. I have
[14:44] (884.16s)
multiple custom GPTs. And by the way,
[14:46] (886.80s)
maybe you're starting to see that these
[14:48] (888.88s)
agents here in Claude Code are pretty
[14:51] (891.76s)
similar to what you can do with a custom
[14:53] (893.68s)
GPT. And I would say that's very very
[14:55] (895.84s)
true. Now, of course, these you can use
[14:58] (898.40s)
while you're coding or doing other
[15:00] (900.08s)
things. They're in service of something
[15:01] (901.76s)
else. And a custom GPT is kind of the
[15:03] (903.92s)
end result. So, it's a little bit of a
[15:06] (906.16s)
different pattern, just in regards
[15:08] (908.24s)
to what role it's playing in
[15:10] (910.64s)
your pipeline. But here I will say I
[15:13] (913.44s)
have notes for the different videos that
[15:15] (915.76s)
I create. Okay. So, today this is
[15:18] (918.00s)
today's video, Claude Code agents, and I
[15:20] (920.72s)
did a talk with myself. I do this very
[15:23] (923.28s)
frequently. That is kind of all of the
[15:25] (925.68s)
things that I want to talk about. I just
[15:27] (927.52s)
do a little meeting and I record it and
[15:29] (929.52s)
transcribe it. And so I've put it here.
[15:31] (931.52s)
And so what I'm going to tell this
[15:33] (933.12s)
system is I want to create an outline
[15:35] (935.36s)
for this video. And I have a very
[15:37] (937.36s)
explicit kind of definition of how
[15:39] (939.76s)
outlines should be created, what I need
[15:41] (941.92s)
from them, the format of them, a whole
[15:43] (943.68s)
bunch of stuff that's really opinionated
[15:45] (945.36s)
and important to me so that I can kind
[15:47] (947.68s)
of shoot one of these videos. For
[15:49] (949.84s)
example, let me show you; I'll pull this on
[15:52] (952.32s)
screen real quick. These are the notes
[15:54] (954.08s)
that I'm currently working from to film
[15:56] (956.56s)
this video just to keep me on track.
[15:58] (958.72s)
Okay, with that in mind, let's see if we
[16:01] (961.12s)
can have that happen. I will show you in
[16:03] (963.36s)
Claude up here in the agents, I have two
[16:05] (965.44s)
different agents. One is an outliner
[16:07] (967.12s)
agent and another one is the info agent.
[16:09] (969.60s)
I'll show you that in one second. The
[16:11] (971.12s)
outliner agent is the one that we're
[16:12] (972.56s)
going to run. Get rid of all of that so
[16:14] (974.40s)
you can kind of see what's going on
[16:15] (975.76s)
here. And I'm going to reduce the size
[16:17] (977.76s)
of this for a second just so that you
[16:19] (979.68s)
can see what's going on in here. This is
[16:21] (981.68s)
that front matter, that YAML front
[16:23] (983.44s)
matter again that describes when it
[16:25] (985.44s)
should be used. You can see that I have
[16:26] (986.88s)
a hard thing here that's used for
[16:29] (989.12s)
all video outlining. So that I don't
[16:31] (991.20s)
have to be explicit on how I ask for it.
[16:33] (993.20s)
If I ever ask for, hey, I kind of want a
[16:34] (994.88s)
video outline. That should be enough
[16:36] (996.64s)
of a clue for this. I don't have to name the
[16:38] (998.32s)
video outliner agent. But it also, as
[16:40] (1000.72s)
you can tell, is a very big prompt. And
[16:43] (1003.44s)
this is a single-shot, which means I'm
[16:46] (1006.08s)
giving it a full example to work
[16:48] (1008.72s)
from. I'm saying, "Okay, here's some
[16:50] (1010.64s)
things to think about when you create
[16:52] (1012.96s)
this outline for me. I want you to be
[16:54] (1014.56s)
able to tell a story, help me kind of
[16:56] (1016.32s)
craft things in the right order, figure
[16:58] (1018.16s)
out how they flow together, give me some
[17:00] (1020.08s)
samples of things I might say, but more
[17:02] (1022.40s)
than that, I kind of need the
[17:04] (1024.00s)
organizational help." And so, that's
[17:05] (1025.84s)
what it's kind of doing. And I've
[17:07] (1027.20s)
written in a workflow, or kind of a
[17:10] (1030.08s)
report, of an example that I like. And
[17:12] (1032.80s)
it's not a full example. It's not an
[17:14] (1034.48s)
actual example. It's telling it little
[17:16] (1036.40s)
notes in each one of these, but giving
[17:18] (1038.16s)
it the full layout of what I just pulled
[17:20] (1040.24s)
on screen. So that's kind of a
[17:21] (1041.76s)
single-shot example.
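To give a feel for the shape of that outliner agent, without reproducing the real file, it's the same format as before, just with a much longer body that carries the single-shot example. Everything below is a paraphrase of what was described on screen:

```markdown
---
name: video-outliner
description: Use this agent for all video outlining. Given raw talk notes or a transcript, produce a structured outline for the video.
---

You turn my raw, rambling talk notes into a video outline that tells a story.

Follow this report layout (the single-shot example), with short notes in each
section rather than full prose:

## Hook
- one or two candidate opening lines

## Promise
- what the viewer will be able to do by the end

## Sections
- the ordered beats of the story, with sample phrasing I might say

## Outro
- call to action and what comes next
```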
[17:24] (1044.08s)
If we come back here, I'll pull this open so you know
[17:26] (1046.24s)
what I'm asking about. I'm going to ask
[17:28] (1048.00s)
about Claude Code agents. And so this is
[17:30] (1050.64s)
kind of a sophisticated use. I want to
[17:32] (1052.72s)
kind of describe that this is a new way of
[17:34] (1054.56s)
using Claude Code. Anthropic has come
[17:36] (1056.72s)
out and said that they have usage across
[17:39] (1059.44s)
their company, across all areas, not just
[17:41] (1061.92s)
engineering, using Claude Code. Other
[17:44] (1064.08s)
people have started sharing how they're
[17:45] (1065.76s)
using Claude Code for things that have
[17:47] (1067.44s)
nothing to do with code. This is my note
[17:49] (1069.68s)
system. You might imagine, you know,
[17:51] (1071.28s)
Obsidian or something like that. This is
[17:53] (1073.36s)
essentially what's going on there,
[17:54] (1074.56s)
except I can keep things that
[17:56] (1076.56s)
are not files, not just markdown
[17:58] (1078.56s)
files. I can do a lot more here, in that
[18:00] (1080.56s)
programmatically I'm using an LLM to
[18:03] (1083.04s)
create the next notes for me. And so it
[18:05] (1085.36s)
knows I've said Claude Code agents. It's
[18:08] (1088.48s)
going to go and figure out where Claude
[18:10] (1090.00s)
Code agents is because of what I have in
[18:12] (1092.72s)
my CLAUDE.md file. My CLAUDE.md file is
[18:15] (1095.52s)
kind of descriptive, saying this is
[18:17] (1097.28s)
what's going on in this whole
[18:18] (1098.48s)
repository. This whole area is very
[18:20] (1100.24s)
specific. Here's where you'll find
[18:22] (1102.00s)
things in the raw notes form. That's
[18:24] (1104.48s)
where you want to read from. The reports
[18:26] (1106.40s)
is where you write to. You should never
[18:28] (1108.16s)
write to raw notes or update things
[18:29] (1109.76s)
there. All of those kinds of rules are
[18:31] (1111.76s)
in this CLAUDE.md file.
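A minimal sketch of the kind of CLAUDE.md being described, with the read and write rules spelled out. The folder names are assumptions based on what was said, not the actual file:

```markdown
# CLAUDE.md

This repository holds my video notes, not code.

- `raw-notes/` contains transcripts of my recorded talks. Read from here, but
  never write to or modify anything in it.
- `reports/` is where generated outlines and summaries go. Write new files here.
- Each video has its own dated folder named after the video.
```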
[18:34] (1114.08s)
And so that's how it knows how to pull this
[18:35] (1115.28s)
information out. And I'm just showing
[18:36] (1116.64s)
you all of this because I think this is
[18:38] (1118.48s)
a critical concept of saying, "Let me
[18:40] (1120.96s)
allow Claude Code to be something that I
[18:45] (1125.04s)
just ask questions of." I highly
[18:45] (1125.04s)
advise go get a CSV from somewhere. The
[18:47] (1127.28s)
next time you end up with a little bit
[18:48] (1128.48s)
of data or something else, start a brand
[18:50] (1130.32s)
new folder, start Claude Code, save your
[18:52] (1132.64s)
file there, and just say, "Hey, Claude,
[18:54] (1134.32s)
tell me about this file. Can you tell
[18:56] (1136.08s)
me, you know, give me some ideas?
[18:58] (1138.56s)
Give me some reports here. Tell me
[19:00] (1140.56s)
kind of the information that's in that
[19:02] (1142.32s)
file. Give me some insights that I can't
[19:04] (1144.24s)
tell from it." The things you might ask
[19:05] (1145.68s)
ChatGPT, because right after that you
[19:07] (1147.76s)
can then say, oh, why don't you write a
[19:09] (1149.20s)
file for me? Oh, could you write a program
[19:10] (1150.80s)
that could show it to me? How about a
[19:12] (1152.08s)
Streamlit application that's a little
[19:13] (1153.44s)
dashboard around it? Oh, could you publish
[19:15] (1155.44s)
this? Could you also send it to Instagram?
[19:17] (1157.84s)
Is there a way that you can push this
[19:19] (1159.20s)
over to my notes? All of this stuff
[19:20] (1160.96s)
becomes really possible, and that's what
[19:22] (1162.96s)
they're really outlining, and I think
[19:24] (1164.48s)
that's what agents are really for. They're
[19:27] (1167.36s)
going to be really useful in the
[19:28] (1168.64s)
engineering practice as well, of course,
[19:30] (1170.72s)
but it's really valuable when you start
[19:32] (1172.72s)
saying, hey, I have an agent that knows
[19:34] (1174.32s)
how to send something crafted in the
[19:36] (1176.80s)
right way to Notion, not just a
[19:39] (1179.04s)
connector that knows how to connect to
[19:40] (1180.64s)
Notion. All the rules around the way
[19:42] (1182.64s)
that I like to store it, where it goes
[19:44] (1184.24s)
within my Notion system, all of those
[19:46] (1186.08s)
other things. That's something that I'll
[19:47] (1187.60s)
be putting in an agent and it knows how
[19:49] (1189.36s)
to use the right MCP. So, no longer
[19:52] (1192.00s)
are we here trying to actually do
[19:53] (1193.76s)
everything ourselves. We get to
[19:55] (1195.36s)
encapsulate a lot of that into what we
[19:57] (1197.20s)
call agents at this point. If you can't
[19:58] (1198.88s)
tell, I'm kind of excited about where
[20:00] (1200.72s)
this is going. All right. So, it is
[20:02] (1202.64s)
finished and we have two files.
[20:05] (1205.60s)
Interestingly, you'll see I have v01
[20:07] (1207.60s)
and v02 here. I have my own versioning
[20:10] (1210.32s)
mechanism that I outlined inside of my
[20:13] (1213.12s)
agent that said this is how you should
[20:15] (1215.20s)
save files. Here are the considerations
[20:17] (1217.04s)
for how you save and name files. I want
[20:19] (1219.76s)
to make sure that you're always using
[20:21] (1221.36s)
the YMD date that you find on the folder as
[20:24] (1224.48s)
well as the video name, which can be
[20:26] (1226.40s)
found on the folder. And then the number
[20:28] (1228.88s)
is a zero-padded two-digit thing with an
[20:31] (1231.20s)
.md after it. That kind of thing.
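Written out as a rule inside the agent prompt, that naming convention might look something like this; the exact pattern is my reconstruction of what was described, not the real text:

```markdown
When saving an outline, name the file:

    {YYYY-MM-DD}-{video-name}-outline-v{NN}.md

- The date and video name come from the folder the raw notes live in.
- NN is a zero-padded two-digit version number.
- Never overwrite an earlier version; write the next vNN alongside it.
```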
[20:33] (1233.44s)
All of that is done so that I can just have
[20:35] (1235.20s)
conversation after conversation. And
[20:36] (1236.96s)
every edit that it makes, every time we
[20:38] (1238.80s)
make an update to this outline, it will
[20:41] (1241.12s)
just save another version right behind
[20:43] (1243.28s)
this one of the same name. To me, really
[20:45] (1245.76s)
useful that I can roll back to previous
[20:47] (1247.68s)
versions very, very easily and see the
[20:49] (1249.60s)
progression of the changes that we've
[20:50] (1250.88s)
made. All right, enough of that. Let's
[20:52] (1252.16s)
take a look. Did we get good enough
[20:53] (1253.84s)
notes? All right, hook, some out takes,
[20:56] (1256.32s)
promise. What are Claude Code agents?
[20:59] (1259.04s)
Building your first agent. Yeah, this
[21:00] (1260.72s)
all looks really, really good. This
[21:02] (1262.32s)
looks exactly like what I need to work
[21:04] (1264.24s)
from. At first, the non-determinism
[21:06] (1266.32s)
freaked me out. Then I realized it might
[21:08] (1268.48s)
be the whole point. Hey, guess what? At
[21:10] (1270.48s)
first, the non-determinism freaked me
[21:12] (1272.16s)
out. And then I realized it might be the
[21:14] (1274.40s)
whole point. Okay, that's pretty funny
[21:17] (1277.52s)
because that's kind of from the notes,
[21:19] (1279.60s)
that talk that I told you that I
[21:21] (1281.76s)
recorded and used from my notes. So,
[21:23] (1283.68s)
that's kind of neat that it actually is
[21:25] (1285.44s)
something from an LLM that's from me
[21:27] (1287.20s)
that I have to read to give to you. It's
[21:28] (1288.96s)
very meta here. Sorry. Uh, but really,
[21:31] (1291.52s)
one of the things you need to be
[21:32] (1292.56s)
cautious about is very much like MCPs,
[21:36] (1296.16s)
these things are not something you have
[21:38] (1298.56s)
to name every time to call. Okay, that's
[21:41] (1301.20s)
really cool. Like you saw that my
[21:43] (1303.12s)
nameless file writer was writing files
[21:45] (1305.20s)
when there was no name to be found and
[21:47] (1307.20s)
it had to determine the name. So, it
[21:48] (1308.56s)
went and sent the content off, got the name.
[21:50] (1310.56s)
Cool. That's kind of a really neat idea
[21:52] (1312.24s)
that I would have this thing sitting
[21:53] (1313.60s)
around. However, boy, I could get into
[21:56] (1316.16s)
trouble if while the system is normally
[21:58] (1318.88s)
doing coding for me, it sees that agent
[22:01] (1321.84s)
hanging out inside of my codebase and
[22:04] (1324.40s)
says, "Oh, wait a second. I don't really
[22:05] (1325.92s)
have a name for this, do I? Let me use
[22:08] (1328.00s)
the nameless file writer and all of a
[22:09] (1329.92s)
sudden it's starting to rename my files
[22:12] (1332.32s)
accidentally or duplicate them into new
[22:14] (1334.96s)
names and things really go haywire from
[22:17] (1337.20s)
there. I absolutely can see something
[22:19] (1339.04s)
like this occurring. There's no great
[22:20] (1340.64s)
guard rails for something like that
[22:22] (1342.64s)
because you're in an agentic calling
[22:24] (1344.72s)
process. So at any given moment, at
[22:26] (1346.88s)
least my assumption, I have not seen
[22:28] (1348.56s)
this happen in the wild, but I haven't
[22:30] (1350.24s)
used them that much yet. I could
[22:32] (1352.16s)
absolutely imagine that halfway through
[22:34] (1354.00s)
a process, it's determining what tool
[22:36] (1356.32s)
should I use because it's just giving
[22:38] (1358.08s)
all of these tools, as we talked about,
[22:40] (1360.00s)
to the LLM every time, saying, "Here's a
[22:42] (1362.64s)
a list of things that you can choose
[22:44] (1364.56s)
from to solve this problem." it
[22:46] (1366.72s)
absolutely could accidentally choose one
[22:49] (1369.60s)
of the tools that you're describing when
[22:52] (1372.08s)
you don't really want it to. So, we need
[22:54] (1374.96s)
to be cautious about that. I think
[22:56] (1376.56s)
writing your descriptions in a way to
[22:58] (1378.32s)
say "absolutely only use this when..." or "never
[23:01] (1381.68s)
use this for...", those kinds of things, might
[23:04] (1384.24s)
help the discovering LLM, which has
[23:08] (1388.08s)
got this as kind of the definition of
[23:09] (1389.76s)
the tool to say, "Oh, yeah, maybe I
[23:11] (1391.60s)
shouldn't use that right now." it might
[23:13] (1393.36s)
It might make it harder to
[23:15] (1395.60s)
use them, to make them happen
[23:17] (1397.28s)
accidentally. But one of the values here
[23:19] (1399.36s)
that I think would be very
[23:21] (1401.36s)
interesting is you saw me save
[23:23] (1403.92s)
these scripts or these agents inside of
[23:26] (1406.80s)
the project itself, right? That would be
[23:28] (1408.96s)
in kind of your version control
[23:31] (1411.84s)
system, which basically means the next
[23:33] (1413.52s)
person that checks out the project will
[23:35] (1415.28s)
also get this information and there's
[23:38] (1418.16s)
some real value in that. They also
[23:40] (1420.72s)
just released, and I am talking about just
[23:43] (1423.12s)
10 minutes ago or so, a way to
[23:45] (1425.68s)
be able to load your settings from JSON
[23:48] (1428.48s)
files. So you'll be able to denote where
[23:50] (1430.56s)
to go get your settings and load them in
[23:52] (1432.16s)
from. So they're really starting to lean
[23:54] (1434.16s)
into this idea of when you're in a
[23:56] (1436.24s)
shared environment, how do you load in
[23:58] (1438.24s)
the context that's useful, so that everyone
[24:00] (1440.64s)
on the team can get the same kind of
[24:02] (1442.32s)
context plus you can get your own
[24:04] (1444.24s)
personal tools. And I think that's what
[24:06] (1446.08s)
we're starting to see. It would be great
[24:07] (1447.68s)
if I wandered into somebody else's
[24:09] (1449.52s)
system, started doing a little bit of
[24:11] (1451.28s)
coding, and the tools that would help
[24:13] (1453.52s)
clean up the files for the way the team
[24:15] (1455.52s)
wants them are actually already in
[24:17] (1457.84s)
there, and the agent knows to say, "Oh,
[24:19] (1459.52s)
if you're about to do a commit, go
[24:22] (1462.40s)
make sure all the files are XYZ." That
[24:25] (1465.28s)
could be an agent that knows all of the
[24:27] (1467.20s)
considerations that it needs to know to
[24:28] (1468.80s)
be able to do that and is just aware
[24:30] (1470.88s)
that commits are the things that it's
[24:32] (1472.64s)
looking for.
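That kind of team-level agent might look roughly like this, checked into the repo under .claude/agents/ so everyone who clones the project gets it. It's entirely hypothetical, sketching the idea rather than any real team's rules:

```markdown
---
name: pre-commit-checker
description: Use this agent whenever the user is about to commit changes. It checks that files follow the team's conventions before the commit happens.
tools: Read, Grep, Glob, Bash
---

Before a commit:

1. Check that formatting and lint rules pass.
2. Check that filenames and directory layout follow the team conventions.
3. Report anything that needs fixing; do not make the commit yourself.
```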
[24:34] (1474.96s)
So it is very interesting to think that we're going to be able to
[24:36] (1476.16s)
share these things gracefully. But I
[24:38] (1478.40s)
will say that non-determinism
[24:41] (1481.12s)
really is a bit of a concern, in that
[24:44] (1484.24s)
we are not sure when it's going to call
[24:46] (1486.24s)
some of these features. MCPs are in the
[24:48] (1488.88s)
exact same boat that you might have a
[24:50] (1490.80s)
tool that is callable and you don't go
[24:52] (1492.88s)
through the front door of saying call
[24:54] (1494.24s)
this MCP and use a tool. As that gets a
[24:57] (1497.52s)
little bit more traction inside of these
[24:59] (1499.28s)
systems, we might be making calls that
[25:01] (1501.60s)
we didn't intend, or using tools
[25:04] (1504.16s)
that we didn't necessarily intend, that
[25:06] (1506.32s)
could have artifacts that, you know,
[25:08] (1508.88s)
that we're not thinking of. I don't know
[25:10] (1510.48s)
how destructive they'll be, but you
[25:12] (1512.72s)
know, of course, they absolutely could
[25:14] (1514.16s)
be destructive. Okay, after spending a
[25:16] (1516.40s)
day with agents in Claude Code and kind
[25:19] (1519.36s)
of this new paradigm that they're trying
[25:21] (1521.36s)
to introduce,
[25:23] (1523.12s)
I'm kind of convinced that we're getting
[25:25] (1525.04s)
a glimpse into a new way of coding or a
[25:28] (1528.16s)
new way of interoperating with these
[25:30] (1530.00s)
tools in general, one where we describe
[25:33] (1533.04s)
what we want much more than we describe
[25:35] (1535.92s)
how to do it. Now, we've been moving
[25:37] (1537.92s)
that direction with kind of plan driven
[25:40] (1540.08s)
development and a couple other things
[25:41] (1541.36s)
like that. I will have another video
[25:43] (1543.76s)
shortly because that's what I was in the
[25:45] (1545.76s)
middle of recording when this hit. I
[25:48] (1548.64s)
will have a video that kind of talks
[25:50] (1550.08s)
about plan driven development or hints
[25:52] (1552.16s)
at some of the ways you might be able to
[25:53] (1553.92s)
use that. Subscribe to get that. I think
[25:55] (1555.84s)
that's kind of an interesting set of
[25:57] (1557.68s)
some of the findings that I've had
[25:59] (1559.68s)
from using Claude over the course of a
[26:01] (1561.92s)
month and the things that I think are
[26:03] (1563.44s)
really useful. But I think this idea of
[26:06] (1566.40s)
instead of saying, "Oh, please go do
[26:08] (1568.08s)
this, update this file, I need you to
[26:09] (1569.92s)
use this database, use this schema."
[26:12] (1572.16s)
Those kinds of things, we only start
[26:14] (1574.16s)
saying those when we really have to. Or
[26:16] (1576.96s)
we put them in something like agents
[26:18] (1578.64s)
that guard us from ever making a
[26:20] (1580.72s)
different decision. I really think this
[26:22] (1582.64s)
is just the beginning, but this is
[26:24] (1584.64s)
really obviously something that we will
[26:27] (1587.44s)
start doing much more often
[26:30] (1590.24s)
when we're coding and in fact working
[26:32] (1592.56s)
with our notes and many many other
[26:35] (1595.12s)
things I believe. All right, this one
[26:37] (1597.52s)
was kind of a quick one even though it
[26:39] (1599.60s)
probably wasn't terribly short; I
[26:42] (1602.40s)
had to turn it around very quickly. So,
[26:44] (1604.00s)
I hope it made a lot of sense. I hope it
[26:45] (1605.76s)
helps untangle a little bit of what's going on
[26:47] (1607.52s)
with these agents in Claude Code. Where
[26:50] (1610.08s)
else are we going to see them? Because I
[26:51] (1611.36s)
know they're going to start popping up
[26:52] (1612.48s)
in a lot of places. Thanks for coming
[26:54] (1614.72s)
along for the ride on this one and I'll
[26:57] (1617.12s)
see you in the next one.