[00:00] (0.08s)
So, you want to build apps, and right now most YouTube videos you see are focused on building really simple apps that don't use any external integrations. That's mostly because those integrations are tough to configure and connect properly.

[00:12] (12.00s)
Now, what do you actually need for those external integrations? You need APIs. APIs are basically connections that websites or services offer so you can plug into their app and use their features inside your own app. Take ChatGPT as an example. You've probably seen that a lot of apps now come with AI built in, but they're not running any AI model themselves. It's just the ChatGPT API: you send a request to it, and it sends a response back. You're basically using ChatGPT as a service inside your app.
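To make that concrete, here's a minimal sketch of what "using ChatGPT as a service" looks like in code: one authenticated HTTP call to OpenAI's chat completions endpoint. The model name and prompt handling are illustrative choices, not anything from the video.

```ts
// Minimal sketch: calling the OpenAI API ("ChatGPT as a service") from TypeScript.
// Assumes an OPENAI_API_KEY environment variable; the model is a placeholder.
async function askChatGPT(prompt: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`OpenAI request failed: ${res.status}`);
  const data = await res.json();
  // The generated text lives on the first choice's message.
  return data.choices[0].message.content;
}
```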
[00:39] (39.60s)
The problem is that a lot of these API services are tricky to set up in your own apps. Tools like Lovable, Bolt, or even Firebase Studio often don't fetch external API documentation automatically. And every API needs some kind of documentation to explain how it works. For instance, if you're building a ChatGPT app, there's a full set of docs you either need to read yourself or feed into your AI model so it knows how to properly integrate that API into your code. Most of these tools use pretty simple AI models that actually perform really well on things like React and TypeScript, but they don't have any external knowledge.
[01:13] (73.84s)
That's where Tempo Labs has come in with a pretty solid fix. All right, so what they've done is introduce this new MCP app store, and right now I'm in a new Tempo Labs project. I'm going to show you how we can build a project using these MCP integrations. What they've basically done is take these APIs, these external integrations, and put them on an MCP server. The app just needs to access that MCP server, and the full documentation for the external integration is already built in. So the AI knows exactly how to connect to each individual service, and that's the fun part. You just install it, and the integration is ready to go. No need to read the documentation or explain anything to the AI. Just install it, drop in your API keys (those are still required), and it works. It really is that simple.
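I don't have Tempo's actual server code, but the general shape of an MCP server is worth seeing: each tool carries a name, a description, and a typed schema, which is exactly the "built-in documentation" the AI reads. Here's a minimal sketch using the official TypeScript SDK (@modelcontextprotocol/sdk); the tool name and behavior are invented for illustration.

```ts
// Sketch of an MCP server exposing one tool. The description and schema act
// as built-in documentation for the AI. The tool itself is hypothetical.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "quotes-service", version: "1.0.0" });

server.tool(
  "get_daily_quote",
  "Returns an inspirational quote and its author.",
  { topic: z.string().describe("Theme of the quote, e.g. 'future'") },
  async ({ topic }) => ({
    content: [{ type: "text", text: `A quote about ${topic} goes here.` }],
  })
);

// Serve over stdio so a host app can connect and discover the tool.
await server.connect(new StdioServerTransport());
```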
[01:58] (118.88s)
First, you need to connect it to Supabase to get started. Just go ahead and connect to Supabase. If you haven't signed up yet, you'll need to create an account, make an organization like we've done here, and then authorize Tempo Labs. Once you're connected, you can pick any of your projects (I picked "test") and connect it. You'll see that Supabase is now linked and ready to use.
[02:17] (137.60s)
Now, let's go back to the app store. You'll see a browse section where you can look through all the external integrations you might want to use. There are a bunch of them, like Firecrawl. In their demo, Tempo Labs showed how they built a web crawler by pasting a link to any website, and it automatically crawled that site. They also connected an LLM, and you can do that too. For example, you can connect Gemini to process the crawled data, summarize it, and give you the main points from any site. Or, if you just want the raw data, Firecrawl works great on its own; it's a solid web crawler. Installing it is super easy: just grab your API key from the Firecrawl dashboard, paste it in, click connect, and you're done. After that, you can tell the AI that the app is installed, and it'll start using it.
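For reference, calling Firecrawl directly is a single authenticated POST. This is a sketch against their v1 scrape endpoint as I understand it (double-check the Firecrawl docs for the current request and response shape):

```ts
// Sketch: scraping a page with Firecrawl's REST API.
// Assumes a FIRECRAWL_API_KEY environment variable.
async function scrapePage(url: string): Promise<string> {
  const res = await fetch("https://api.firecrawl.dev/v1/scrape", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.FIRECRAWL_API_KEY}`,
    },
    body: JSON.stringify({ url, formats: ["markdown"] }),
  });
  if (!res.ok) throw new Error(`Firecrawl request failed: ${res.status}`);
  const { data } = await res.json();
  return data.markdown; // page content as markdown, ready to hand to an LLM
}
```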
[02:58] (178.88s)
For this demo, we're going to install ElevenLabs and actually build something with it. So, let's click on install. You'll see we need to paste our API key here. I've got mine saved somewhere safe, so I'll paste that in, hit connect, and then I'll show you what we're about to build.
[03:12] (192.56s)
Okay, so this is our PRD right here. Right now I'm building a quotes app that uses the ElevenLabs API to read our daily quotes aloud. This is just to show how it uses the external MCP apps from the app store. Based on the PRD, it has already generated the user flow, and if you expand it, you'll see the full Mermaid diagram laid out. If you want to change anything, you can just edit your PRD or add more detail to it. Over on the right, you'll see we've got our Next.js starter template kit, which also includes authentication. They've given us a template that starts with a landing page and then continues into the app we actually want to build. Now, I'm just going to prompt it to keep going and start building the app based on the PRD and the user flow.
[03:53] (233.52s)
If you're enjoying the video, I'd really appreciate it if you could subscribe to the channel. We're aiming to hit 25,000 subscribers by the end of this month, and your support really helps. We share videos like this three times a week, so there's always something new and useful for you to check out.
[04:08] (248.88s)
Okay, so it has finished building the basic app now, and I've just prompted it to add a test mode that lets users sign in anonymously.
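For context, supabase-js supports anonymous sign-in out of the box (you have to enable it in the project's auth settings first), so the test mode presumably boils down to something like this sketch:

```ts
// Sketch: a "test mode" handler using Supabase anonymous auth.
// Requires "Allow anonymous sign-ins" to be enabled in the Supabase dashboard.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

export async function enterTestMode() {
  const { data, error } = await supabase.auth.signInAnonymously();
  if (error) throw error;
  return data.user; // an anonymous user with a real session
}
```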
[04:15] (255.76s)
While that's happening, we can take a look at what it has built so far on the landing page. This is what it's generated using the starter template, and we can see that it's using shadcn components with a clean, minimalistic style. If we scroll down to the bottom, we can also see a preview of what the inner UI will look like. It's not functional yet, but they've given us a glimpse of how the component cards are going to work. We'll have our quote displayed right here, the name of the person who said it, and an option to listen to the quote as well. I'm probably going to add a drop-down, too, so we can switch between voices and pick from the different options that ElevenLabs provides.
[04:50] (290.08s)
as you can see, we're here on the
[04:51] (291.52s)
dashboard right now. It's pretty clean
[04:53] (293.52s)
and minimal. Tempo is really a React
[04:55] (295.92s)
builder, so it knows how to write React
[04:58] (298.08s)
code. And honestly, this looks amazing.
[05:00] (300.56s)
On the left, we've got multiple
[05:02] (302.16s)
sections, but to be honest, I don't
[05:03] (303.92s)
think most of them are working right
[05:05] (305.36s)
now. The only one that works is the
[05:07] (307.28s)
quote generator. And at the moment,
[05:08] (308.88s)
we're not generating any new quotes
[05:10] (310.72s)
since we already have some saved. So, if
[05:12] (312.72s)
I click this, it won't generate a new
[05:14] (314.32s)
quote. It'll just fetch one from the
[05:15] (315.84s)
database. We can easily add that
[05:17] (317.76s)
functionality, too, since we've got MCP
[05:19] (319.92s)
stores for OpenAI, Tropic, and Gemini.
[05:22] (322.40s)
We just need to plug in our API keys for
[05:24] (324.56s)
those, and we'll be able to generate
[05:26] (326.24s)
quote ourselves. It's that simple to add
[05:28] (328.40s)
new features directly into your app.
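That "fetch one from the database" step is probably little more than a select against a quotes table in Supabase. A sketch, with the table and column names assumed:

```ts
// Sketch: pulling a stored quote out of Supabase instead of generating one.
// The "quotes" table and its columns are assumptions about the generated schema.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

export async function fetchQuote() {
  const { data, error } = await supabase
    .from("quotes")
    .select("text, author")
    .limit(1)
    .single();
  if (error) throw error;
  return data; // { text, author }
}
```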
[05:30] (330.48s)
Now, I haven't tested this fully yet, but let's go ahead and give it a listen. "The best way to predict the future is to create it." You can see we got a notification that the quote was being played, and that's because I added some error logging earlier. There were issues with voice generation, and I added that logging to help figure out what was wrong. But now that it's fixed, we got the transcription, and it looks great.
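Under the hood, ElevenLabs text-to-speech is one POST that returns MP3 audio. Here's a sketch of the kind of call the generated app is presumably making; the voice ID and model ID are illustrative defaults, and in production you'd route this through a server endpoint so the API key never reaches the browser.

```ts
// Sketch: ElevenLabs text-to-speech. POST the quote text, get MP3 bytes back,
// and play them in the browser. voiceId and model_id are illustrative.
export async function speakQuote(text: string, voiceId: string) {
  const res = await fetch(
    `https://api.elevenlabs.io/v1/text-to-speech/${voiceId}`,
    {
      method: "POST",
      headers: {
        "xi-api-key": process.env.NEXT_PUBLIC_ELEVENLABS_API_KEY!,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ text, model_id: "eleven_multilingual_v2" }),
    }
  );
  if (!res.ok) throw new Error(`ElevenLabs request failed: ${res.status}`);
  const audio = new Audio(URL.createObjectURL(await res.blob()));
  await audio.play();
}
```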
[05:50] (350.88s)
One thing that's still missing is the ability to switch between multiple voices and choose from a drop-down menu. So, let's go ahead and ask Tempo to add that in, too.
[06:00] (360.88s)
Okay, so if you look right here, you'll see that I gave it the prompt to add the ability to choose between multiple voices provided by ElevenLabs. I also asked it to include a drop-down and explained how the entire multiple-voices setup should work. This is the prompt I gave it. Now, if we go back to our app, you can see that we have this beautiful drop-down menu with multiple voices to choose from. It pulled in a lot of voices; I believe it fetched everything available in my ElevenLabs library. Let's go ahead and give it a listen. "The only limit to our realization of tomorrow will be our doubts of today." This is the default voice that gets selected when the app opens.
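The dropdown is presumably populated from the ElevenLabs voices endpoint, which returns every voice available to your account. A sketch:

```ts
// Sketch: listing the voices on your ElevenLabs account, e.g. to fill a
// <select> dropdown with { voice_id, name } pairs.
interface Voice {
  voice_id: string;
  name: string;
}

export async function listVoices(): Promise<Voice[]> {
  const res = await fetch("https://api.elevenlabs.io/v1/voices", {
    headers: { "xi-api-key": process.env.ELEVENLABS_API_KEY! },
  });
  if (!res.ok) throw new Error(`ElevenLabs request failed: ${res.status}`);
  const { voices } = await res.json();
  return voices;
}
```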
[06:37] (397.36s)
Okay, that was pretty good. Now, let's switch to another one. Let's say we pick the voice of George. "The only limit to our realization of tomorrow will be our doubts of today." That works, too. So, you can see it's working really well. We can switch to any voice we want. Honestly, the voice integration with ElevenLabs was super seamless. I didn't even have to check the API documentation.
[07:00] (420.56s)
I haven't explored the rest of the app yet, but I noticed that the user flow also generated a share button. So, if I click that, it actually brings up the default macOS sharing menu. That means I can share the quote or even the audio. That's a really nice touch. We also have a save button that lets us store the quotes. And when I go into the saved quotes section, I can see that the quote has been added. That's pretty impressive.
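That native menu is the browser's Web Share API rather than anything macOS-specific in the app: Safari and Chrome hand navigator.share() off to the operating system's share sheet. A sketch of what the handler likely looks like:

```ts
// Sketch: a share-button handler using the Web Share API. The browser
// forwards this to the OS share sheet (the macOS menu seen in the video).
export async function shareQuote(text: string, author: string) {
  const payload = { title: "Daily Quote", text: `"${text}" (${author})` };
  if (navigator.share) {
    await navigator.share(payload);
  } else {
    // Fallback for browsers without Web Share support.
    await navigator.clipboard.writeText(payload.text);
  }
}
```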
[07:21] (441.76s)
So, this is a really cool feature that Tempo Labs has introduced. It makes integrating tools like these into your app incredibly simple. You just tell the AI what you want, and it handles everything. You don't need to read documentation or load it into the AI, because it's already built in. Right now, they've got a solid collection of these external connections available, and I think even more are on the way. That brings us to the end of this video. If you'd like to support the channel and help us keep making tutorials like this, you can do so by using the Super Thanks button below. As always, thank you for watching, and I'll see you in the next one.