[00:00] (0.16s)
Let's just say you have a code base
[00:01] (1.84s)
right here like I do and you want to
[00:03] (3.52s)
make lots of changes to it. I have a
[00:05] (5.20s)
Next.js app here and I messed up a lot in
[00:07] (7.68s)
it. I actually gave AI full control and
[00:10] (10.40s)
it built everything based on the
[00:11] (11.92s)
implementation plan but it built the
[00:13] (13.76s)
whole app wrong. And now there are a lot
[00:15] (15.68s)
of things I want to change. I found
[00:17] (17.28s)
something that I thought was pretty
[00:18] (18.48s)
cool. It's a GitHub repository called
[00:20] (20.40s)
Shotgun Code. And what it basically does
[00:22] (22.48s)
is allow you to give your entire
[00:24] (24.24s)
codebase to it. It generates a prompt
[00:26] (26.24s)
and then you give that prompt to any
[00:27] (27.76s)
well-performing LLM like Gemini 2.5 Pro
[00:30] (30.72s)
and implement changes recursively as
[00:32] (32.72s)
I've written right here. You save your
[00:34] (34.40s)
context with Shotgun by providing it the
[00:37] (37.12s)
whole codebase and it creates a truly
[00:39] (39.52s)
gigantic prompt here. They're using
[00:41] (41.68s)
Google AI Studio and it's going to
[00:43] (43.68s)
return a massive patch for your code. It
[00:45] (45.76s)
will tell you what things to change and
[00:47] (47.76s)
exactly where to change them. Google AI
[00:50] (50.16s)
Studio gives 25 free prompts per day
[00:52] (52.56s)
which is pretty good. After that, you
[00:54] (54.32s)
just drop the change file or the diff
[00:56] (56.32s)
file into Cursor or Windsurf and apply
[00:58] (58.72s)
all the changes in a single request.
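To make that concrete, the patch the model returns is essentially a diff. Here is a purely hypothetical snippet showing the general shape, applied manually with git as an alternative to pasting it into an editor (the file names and contents are invented for illustration):

```
# Hypothetical example: save the model's output as a patch file and apply it
# with git. In the video the diff is pasted into Cursor/Windsurf instead.
cat > shotgun-changes.diff <<'EOF'
--- a/components/SaveButton.tsx
+++ b/components/SaveButton.tsx
@@ -1,3 +1,3 @@
-import { CustomButton } from "./CustomButton";
+import { Button } from "@/components/ui/button";
 
 export function SaveButton() {
EOF
git apply shotgun-changes.diff
```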
[01:00] (60.88s)
That's how powerful this is. I'm going
[01:02] (62.56s)
to show you how to install it, how to
[01:04] (64.40s)
get it up and running. And I'll also
[01:06] (66.08s)
show you how I took my app, which had a
[01:08] (68.08s)
lot of problems and completely
[01:09] (69.52s)
overhauled it. No implementation plan,
[01:11] (71.84s)
no proper documentation, just used
[01:14] (74.08s)
Shotgun and transformed my entire
[01:15] (75.84s)
project. First up, we have our
[01:17] (77.68s)
installation. Just for the
[01:18] (78.96s)
prerequisites, you need to have Go
[01:20] (80.88s)
installed. The Go programming language.
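For reference, the whole setup described over the next couple of minutes boils down to roughly these commands. Treat this as a sketch: the repository URL and the frontend folder name are assumptions, so check the Shotgun Code README for the exact steps.

```
go version    # check whether Go is already installed
node -v       # check whether Node.js is already installed

# Install the Wails CLI (assuming Shotgun Code is built on Wails v2)
go install github.com/wailsapp/wails/v2/cmd/wails@latest

# Clone the repo and install the front-end dependencies
# (URL assumed; copy the real one from the GitHub page)
git clone https://github.com/glebkudr/shotgun_code.git
cd shotgun_code
cd frontend && npm install && cd ..

# Fire up the app in development mode
wails dev
```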
[01:23] (83.04s)
It's really easy to install. Just ask
[01:25] (85.12s)
ChatGPT. It's one command. This command
[01:27] (87.76s)
right here will check if you already
[01:29] (89.44s)
have Go installed or not, and you can
[01:31] (91.20s)
use that to confirm. Then you need to
[01:33] (93.28s)
have Node.js installed, and you also
[01:35] (95.44s)
need to have the Wails CLI installed to
[01:38] (98.32s)
actually use the app. The installation
[01:40] (100.08s)
command for Wails is right here. I'll
[01:42] (102.00s)
leave it in the description below so you
[01:43] (103.76s)
can check that out. The actual install
[01:45] (105.52s)
is pretty much this entire block. You
[01:47] (107.44s)
just copy it, go into your terminal,
[01:49] (109.44s)
paste it in, and it will install the
[01:51] (111.20s)
Shotgun Code app for you. I already have
[01:53] (113.36s)
it installed, so I'm going to go inside
[01:55] (115.28s)
the directory. If we go back to the
[01:57] (117.04s)
GitHub repo, you don't need to do
[01:58] (118.80s)
anything else. You just paste this
[02:00] (120.48s)
command followed by this one. It is
[02:02] (122.40s)
already going to change the directory
[02:04] (124.00s)
for you and install the front-end
[02:06] (126.16s)
dependencies as well. Just copy this, go
[02:08] (128.88s)
back, and in the root of the project, as
[02:11] (131.04s)
you can see right here, you don't need
[02:12] (132.80s)
to manually cd into the shotgun
[02:15] (135.04s)
directory. That's already handled. Just
[02:17] (137.04s)
run this command and it will fire up the
[02:19] (139.28s)
app for you. Now you can see the app has
[02:21] (141.12s)
opened up and we can actually start
[02:22] (142.80s)
using it. Now that we've opened up the
[02:24] (144.88s)
app, let me give you a general overview
[02:26] (146.88s)
of how to use it and what's inside. The
[02:29] (149.36s)
first step is preparing the context.
[02:31] (151.36s)
This means selecting your project folder
[02:33] (153.28s)
and opening it. The app I told you
[02:35] (155.04s)
about, timetrack copy, is the original
[02:37] (157.20s)
app and timetrack is the version I made
[02:39] (159.52s)
edits to. For now, let's just open the
[02:41] (161.60s)
original app because we're not going to
[02:43] (163.12s)
make any changes. When we open it,
[02:45] (165.20s)
you'll see an error saying there was a
[02:46] (166.96s)
problem generating the context. The size
[02:49] (169.12s)
has obviously exceeded the limit. What
[02:51] (171.04s)
we need to do is only select the
[02:52] (172.88s)
necessary folders. You'll often find a
[02:55] (175.04s)
lot of stuff in your projects that you
[02:56] (176.56s)
don't actually need. It's just
[02:57] (177.92s)
dependency files. Let's just say we open
[03:00] (180.40s)
the main project folder with the
[03:02] (182.00s)
front-end app. You'll see a .next folder.
[03:04] (184.40s)
Uncheck it. Now, we're good to go. Since
[03:06] (186.48s)
we're only selecting what's necessary,
[03:08] (188.40s)
we can also deselect this Cursor rules
[03:10] (190.48s)
file. Then we have these MD files. These
[03:13] (193.04s)
are just my own documentation, so I'll
[03:14] (194.88s)
uncheck those as well. The package.json
[03:17] (197.44s)
files define the dependencies, so I'll
[03:19] (199.52s)
keep them. Now that the generated
[03:21] (201.12s)
project context is ready, we can move on
[03:23] (203.20s)
to the next step, composing the actual
[03:25] (205.36s)
prompt. This is where you write the task
[03:27] (207.36s)
you want to give the AI. Let's just say
[03:29] (209.68s)
I want the entire codebase, which was
[03:31] (211.76s)
originally built using custom React
[03:33] (213.76s)
components to now use shadcn/ui
[03:35] (215.76s)
components. If you look at the prompt
[03:37] (217.36s)
section, you'll see it now includes our
[03:39] (219.36s)
user task, which is what we just typed.
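Roughly, the composed prompt has a shape like the sketch below. The wording and section markers here are hypothetical, not Shotgun's exact template; it only illustrates how the task, rules, and codebase context are kept separate.

```
USER TASK:
Convert the custom React components in this codebase to shadcn/ui components.

CUSTOM RULES:
(any project-specific rules you pasted in, e.g. a library to avoid)

PROJECT CONTEXT:
(the file tree plus the contents of every file you left checked)

EXPECTED OUTPUT:
(a patch/diff listing exactly which files to change and how)
```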
[03:41] (221.36s)
The prompt is structured in a way that's
[03:43] (223.28s)
easier for LLMs to read because it's
[03:45] (225.36s)
clearly divided and includes additional
[03:47] (227.52s)
elements that help them understand it
[03:49] (229.28s)
better. We also have a section for
[03:51] (231.04s)
custom rules. If you're using specific
[03:53] (233.28s)
rules for your project, you just drop
[03:55] (235.12s)
them in here. Mine aren't that important
[03:57] (237.04s)
in this case, but if your Cursor rules
[03:59] (239.12s)
file includes something specific, like
[04:01] (241.20s)
avoiding a certain library, you can
[04:03] (243.20s)
paste that here. After you've written
[04:04] (244.96s)
your task and added your custom rules,
[04:06] (246.96s)
your finalized prompt will appear right
[04:08] (248.88s)
here. You can hit copy all and it will
[04:10] (250.80s)
copy the whole thing. Next, we move on
[04:12] (252.88s)
to executing the prompt. This step
[04:14] (254.72s)
involves going to Google AI Studio and
[04:16] (256.80s)
pasting in your prompt. I'll show you
[04:18] (258.64s)
exactly how to do that later. Then we
[04:20] (260.56s)
have the apply patch step, which isn't
[04:22] (262.40s)
available yet. The developer mentioned
[04:24] (264.24s)
that these two steps will be added to
[04:26] (266.08s)
the app in the future. He will probably
[04:28] (268.00s)
use the Gemini API key here and
[04:30] (270.24s)
implement a coding agent to run the
[04:32] (272.16s)
changes directly. In my opinion, using
[04:34] (274.56s)
Cursor or Windsurf, whichever you
[04:36] (276.56s)
prefer, as the task executor works
[04:38] (278.48s)
really well. That's a general overview
[04:40] (280.24s)
of the application. Now, let's move on
[04:42] (282.16s)
to what to do with the prompt you just
[04:44] (284.16s)
copied. The next step is going to Google
[04:46] (286.16s)
AI Studio and actually getting the diff
[04:48] (288.48s)
prompt from the Gemini model. First,
[04:50] (290.48s)
make sure you have Gemini 2.5 Pro
[04:53] (293.04s)
selected, not Flash, the Pro model.
[04:55] (295.44s)
After selecting it, change the
[04:56] (296.96s)
temperature to 0.1. The temperature is a
[04:59] (299.68s)
measure of how much creativity the model
[05:01] (301.76s)
is allowed to use in its output. At 1,
[05:04] (304.24s)
the default, it behaves normally. When you force it
[05:06] (306.08s)
to 0.1, there is basically no creativity
[05:09] (309.04s)
involved at all, meaning it will follow
[05:11] (311.04s)
the instructions exactly as given. Paste
[05:13] (313.36s)
your entire prompt right here. If you
[05:15] (315.12s)
scroll up, you'll see the prompt we
[05:16] (316.72s)
generated earlier. Just paste it here.
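If you ever want to skip the AI Studio UI, the same request can be sent straight to the Gemini API with the temperature pinned low. This is only a sketch; it assumes the model id gemini-2.5-pro and a GEMINI_API_KEY environment variable.

```
# Sketch of calling the Gemini generateContent REST endpoint directly,
# with the temperature pinned to 0.1 (model id is an assumption).
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-pro:generateContent?key=$GEMINI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "contents": [{ "parts": [{ "text": "PASTE THE SHOTGUN PROMPT HERE" }] }],
        "generationConfig": { "temperature": 0.1 }
      }'
```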
[05:18] (318.72s)
You don't need to add anything extra.
[05:20] (320.64s)
Everything needed for the output is
[05:22] (322.32s)
already included in the prompt,
[05:23] (323.84s)
including what to generate and how to do
[05:25] (325.76s)
it. Then run it. It will start
[05:27] (327.52s)
processing and provide you with a
[05:29] (329.28s)
complete diff. This takes a little time,
[05:31] (331.12s)
but it gives you a very solid set of
[05:33] (333.12s)
changes, which when implemented will
[05:35] (335.28s)
accurately make the changes you want in
[05:37] (337.20s)
your app. If you're enjoying the video,
[05:39] (339.12s)
I'd really appreciate it if you could
[05:40] (340.80s)
subscribe to the channel. We're aiming
[05:42] (342.56s)
to hit 25,000 subscribers by the end of
[05:45] (345.28s)
this month, and your support really
[05:46] (346.72s)
helps. We share videos like this three
[05:48] (348.80s)
times a week, so there's always
[05:50] (350.40s)
something new and useful for you to
[05:52] (352.00s)
check out. After getting the diff from
[05:56] (356.40s)
Google AI Studio, I came back into Cursor,
[05:58] (358.64s)
typed apply diff, and pasted the diff
[06:00] (360.64s)
right here. This diff includes all the
[06:00] (360.64s)
changes with references to the relevant
[06:02] (362.64s)
files that need updating. Gemini
[06:04] (364.56s)
processed the entire codebase and gave
[06:06] (366.64s)
us exactly the files that need to be
[06:08] (368.40s)
changed. Now, we're going to execute our
[06:10] (370.40s)
changes using Cursor. I have five chats
[06:12] (372.80s)
in total, but I only made
[06:15] (375.36s)
three implementation changes. The reason
[06:17] (377.52s)
there are more chats is because in one I
[06:19] (379.76s)
had a network error and in another I
[06:21] (381.92s)
switched the model away from Gemini 2.5 Pro.
[06:24] (384.96s)
Gemini 2.5 Pro was getting too slow. It
[06:28] (388.08s)
had already done the thinking. But
[06:29] (389.68s)
another thing Gemini does in Cursor
[06:31] (391.60s)
which I haven't seen in Windsurf is that
[06:33] (393.68s)
it frequently stops and asks the user
[06:36] (396.16s)
questions even when it's not specified
[06:38] (398.40s)
in the rules file. That's why I switched
[06:40] (400.32s)
to Claude 3.5 Sonnet and the changes to
[06:43] (403.20s)
the code became even faster. This is
[06:45] (405.44s)
exactly what I did. Let me show you the
[06:47] (407.52s)
changes I implemented. First of all, I
[06:50] (410.08s)
improved the speed of the site. The site
[06:52] (412.08s)
was very slow. It lagged a lot. The
[06:54] (414.24s)
animations were slow and the movement
[06:56] (416.32s)
was very jittery. Not a good experience
[06:58] (418.40s)
at all. That was the first thing I
[07:00] (420.16s)
fixed. I used simple shadcn/ui components
[07:02] (422.88s)
and for some reason that made the web
[07:04] (424.80s)
app look really bad. I asked it to use
[07:07] (427.20s)
Aceternity UI components only, which look
[07:09] (429.84s)
really minimal and I also changed the
[07:11] (431.76s)
overall workflow or user flow of the app
[07:14] (434.00s)
a bit. Here's the original app,
[07:16] (436.04s)
timetrack.dev, and you'll notice the
[07:18] (438.00s)
slow speed right away. There's visible
[07:19] (439.92s)
lag even while scrolling. The loading
[07:21] (441.84s)
times are slow, and even hovering over
[07:24] (444.00s)
something feels sluggish. Definitely not
[07:26] (446.00s)
a good experience. That was the first
[07:27] (447.76s)
thing I needed to change. Gemini gave me
[07:29] (449.84s)
a diff that told me exactly what to
[07:31] (451.76s)
modify. Here's the new version of the
[07:33] (453.60s)
site. As I hover over the tables, you
[07:35] (455.84s)
can clearly see it's much smoother.
[07:37] (457.68s)
Scrolling is smooth with no jitter at
[07:39] (459.84s)
all. Even when adding something via a
[07:41] (461.68s)
pop-up, it slides in smoothly. That was
[07:43] (463.76s)
the first major improvement: performance.
[07:45] (465.92s)
Next was the use of Aceternity UI. I
[07:48] (468.56s)
initially asked it to use shadcn/ui
[07:50] (470.56s)
components and it came out looking
[07:52] (472.24s)
really horrible. I didn't like it at
[07:54] (474.08s)
all, especially the glowing look. I
[07:55] (475.92s)
really hated it. If I had just asked
[07:57] (477.76s)
Cursor to implement the changes
[07:59] (479.52s)
directly, it would have only done half
[08:01] (481.36s)
the work. The rest of the code base
[08:03] (483.12s)
would have been left for me. I'd have to
[08:05] (485.04s)
go back and manually tell Cursor what
[08:07] (487.20s)
parts were missing. It would have been a
[08:08] (488.88s)
whole process. But here you can see a
[08:10] (490.80s)
clean, minimal UI using Aceternity UI. It
[08:14] (494.24s)
looks really good. We have dark and
[08:15] (495.92s)
light mode switching. There is some
[08:17] (497.68s)
repeated UI, but that's just one prompt
[08:19] (499.84s)
to clean up. Not a big deal. Down here,
[08:22] (502.00s)
we have the settings to toggle the
[08:23] (503.60s)
format, and overall, it just looks
[08:25] (505.36s)
great. The next thing I had to improve
[08:27] (507.12s)
was the user flow. Previously, I had
[08:29] (509.36s)
prompted it to build what's now working
[08:31] (511.28s)
well, but in the earlier version, the
[08:33] (513.20s)
pop-up was even worse. I didn't like it.
[08:35] (515.68s)
I don't know why the UI came out that
[08:37] (517.68s)
bad. It basically allocated one task to
[08:40] (520.56s)
each hour, which didn't make sense to
[08:42] (522.32s)
me. I reworked the entire workflow. Now,
[08:44] (524.56s)
I just go ahead and add a time period.
[08:46] (526.56s)
Let's just say the category is work.
[08:48] (528.80s)
Start time is 4:00 p.m. End time is 5:00
[08:51] (531.60s)
p.m. Hit save activity. And you can see
[08:54] (534.24s)
we now have a nice looking panel and the
[08:56] (536.40s)
activity is recorded. This is the
[08:58] (538.24s)
refactoring. These are the changes I
[09:00] (540.16s)
made to the code. The website looks
[09:01] (541.92s)
amazing now. I'm really loving it. The
[09:03] (543.92s)
tool is a new way of doing things that
[09:06] (546.00s)
hasn't really been seen before. You give
[09:07] (547.92s)
the LLM one task, it scans the entire
[09:10] (550.56s)
codebase, sees all the connections, and
[09:12] (552.80s)
figures out how to achieve that task.
[09:14] (554.64s)
It's a nice alternative way to get
[09:16] (556.48s)
things done. There are a lot of other
[09:18] (558.16s)
approaches like making proper
[09:19] (559.68s)
documentation and all that, but if you
[09:21] (561.68s)
don't have it, you can just use Shotgun
[09:23] (563.76s)
and shotgun your codebase. That brings
[09:25] (565.84s)
us to the end of this video. If you'd
[09:27] (567.68s)
like to support the channel and help us
[09:29] (569.52s)
keep making tutorials like this, you can
[09:31] (571.60s)
do so by using the super thanks button
[09:33] (573.60s)
below. As always, thank you for watching
[09:35] (575.68s)
and I'll see you in the next one.