[00:00] (0.12s)
Hello, my name is Roy.
[00:01] (1.42s)
I founded this company called Cluely,
and I just got kicked out of Columbia for
[00:05] (5.09s)
building this tool called Interview Coder.
[00:07] (7.09s)
It's like a cheating tool
for software engineering
[00:09] (9.43s)
technical interviews.
[00:10] (10.38s)
I used the technology
to build a much bigger company.
[00:12] (12.97s)
Cluely, the desktop app that lets
you cheat on everything right now.
[00:16] (16.60s)
We just launched about a month ago.
[00:18] (18.64s)
We're closing in on $5 million
in annual recurring revenue, and we also
[00:22] (22.27s)
just closed a $5.3 million seed round led
by Abstract Ventures and Susa Ventures.
[00:32] (32.45s)
I was a pretty wild kid.
[00:34] (34.24s)
I got in a lot of trouble,
but I was pretty smart.
[00:36] (36.33s)
I think I was pretty good at math.
[00:37] (37.62s)
I did a bit of math competition
when I was younger.
[00:40] (40.04s)
I was on the debate team.
I played some cello.
[00:42] (42.58s)
I loved girls, you know,
every year I had a new girlfriend.
[00:45] (45.50s)
Yeah, my mom made me do a lot of studying,
but I hated studying.
[00:48] (48.80s)
I was always trying to go out
and like, have fun,
[00:51] (51.26s)
play with my friends.
[00:52] (52.38s)
I've always been very competitive.
[00:53] (53.72s)
When I was a kid, I had a
brother who was two years older than me,
[00:56] (56.93s)
and I always wanted to be
smarter than him.
[00:58] (58.85s)
I always wanted to do better
and get better grades at school than him.
[01:01] (61.23s)
I'm always trying to win.
I'm trying to beat "Juhn-Gyo Eel-Deung" at everything.
[01:04] (64.19s)
Top of the class
[01:05] (65.73s)
You know, in the world of software engineering, if you want
[01:07] (67.48s)
to get a job at a big tech company,
[01:09] (69.07s)
you have to answer these sort
of riddle-esque questions
[01:11] (71.86s)
that are called LeetCode questions.
[01:13] (73.78s)
And pretty much every developer you know
at a big company has gone through
[01:16] (76.62s)
the gauntlet of grinding through 300-600 riddles,
just sort of memorizing the solutions
[01:21] (81.87s)
and regurgitating them in interviews.
[01:23] (83.58s)
I'm very, very competitive.
[01:25] (85.04s)
So the second I knew that there
was a global ranking on LeetCode,
[01:28] (88.46s)
I knew I had to be one of the best.
[01:31] (91.22s)
So I spent hundreds of hours studying,
grinding the riddles,
[01:34] (94.22s)
even though I don't care
about LeetCode, I didn't enjoy it.
[01:36] (96.81s)
I didn't really have a good time,
but I was just competitive.
[01:38] (98.97s)
So I thought, if there's a ranking,
then I've got
[01:40] (100.72s)
to be on the top of the ranking.
[01:42] (102.10s)
But I mean, it just ended up
with me wasting a bunch of hours.
[01:44] (104.98s)
LeetCode just has nothing to do
with what you do on the job.
[01:47] (107.86s)
It's like the modern day equivalent
of asking how many balloons
[01:50] (110.69s)
fit in the Empire State Building.
[01:52] (112.19s)
It's supposed to test your critical
thinking, but the questions are online
[01:55] (115.74s)
to the extent that rather than
practicing critical thinking,
[01:58] (118.62s)
you just practice memorizing the riddles.
[02:00] (120.29s)
You're going to sit through and memorize
all 1,000 questions, because it
[02:02] (122.87s)
means you get a $200K-a-year job.
[02:04] (124.67s)
This is not good for anybody.
[02:06] (126.13s)
You don't learn anything
from practicing these riddles,
[02:08] (128.79s)
and you just end up wasting time
when you should be programming.
[02:11] (131.21s)
I thought this was pretty stupid
and this has been going on
[02:13] (133.34s)
for around 20 years now.
[02:14] (134.80s)
The technology was there to develop
this tool that would invisibly let
[02:18] (138.39s)
you use AI to cheat on these interviews.
[02:20] (140.51s)
So I built the tool.
[02:21] (141.47s)
I publicly recorded myself
using it on the Amazon interview.
[02:24] (144.60s)
I got the job and I posted this
everywhere saying, look how easy it is
[02:27] (147.86s)
to hack these interviews.
[02:29] (149.23s)
Eventually this got me in some trouble.
[02:31] (151.40s)
But the impetus for everything was when I
decided that it's just a stupid industry
[02:36] (156.20s)
practice and I wanted to change it.
[02:39] (159.28s)
A lot of people think it was explosive
from the start, but it took about a month
[02:43] (163.16s)
before it really started to take off,
and for a month I posted everything.
[02:46] (166.83s)
Amazon was getting mad at me,
[02:48] (168.46s)
and Columbia saw it
and was getting mad at me,
[02:50] (170.38s)
and everyone was just upset and it didn't
really go that viral for about a month.
[02:53] (173.72s)
And during that time
I was really stressed.
[02:55] (175.38s)
I just gave up my entire career
and my entire education
[02:58] (178.39s)
for the hope of a company.
[02:59] (179.68s)
But it didn't even go viral.
I did all this for, like, 15,000 views.
[03:02] (182.89s)
And I was really worried.
[03:03] (183.81s)
Everybody in my life, including
even my co-founders, was telling me like,
[03:06] (186.48s)
hey, we should probably stop.
[03:07] (187.31s)
We should probably shut this down.
But I don't know.
[03:09] (189.10s)
There's just a voice in my head
that said,
[03:10] (190.98s)
this has potential.
[03:12] (192.57s)
Like I have to keep going.
And I did keep going.
[03:14] (194.44s)
And then at one point it did go viral,
like super viral.
[03:17] (197.74s)
And everybody in tech saw it.
At that point, I was safe.
[03:21] (201.24s)
Virality protected me
from further punishment
[03:24] (204.04s)
from Columbia.
[03:24] (205.00s)
It made the path to entrepreneurship
a lot easier and clearer.
[03:27] (207.83s)
I've sort of committed my life
to building companies.
[03:30] (210.54s)
Once I made that decision,
it was very easy for me
[03:32] (212.88s)
to decide to leave Columbia.
[03:34] (214.34s)
And once I did leave Columbia, I
knew I was going to do the only thing that I
[03:37] (217.68s)
could do now, which is build companies.
[03:39] (219.55s)
I think it also helps position me.
I'm a very unique person now
[03:43] (223.26s)
who got kicked out of an Ivy League.
[03:44] (224.68s)
And as a result, there's a lot
of interest from Silicon Valley
[03:48] (228.06s)
about what I'm going to do next.
[03:49] (229.73s)
At the time, it felt like things
were moving really slow.
[03:52] (232.31s)
As soon as Interview Coder went viral,
I knew that I had to capitalize
[03:55] (235.94s)
on the moment because the attention
wasn't going to be there for long.
[03:58] (238.66s)
Interview Coder was a product designed to die:
a product to cheat
[04:01] (241.87s)
on technical interviews.
[04:02] (242.82s)
So the second companies change
technical interviews, the product dies.
[04:05] (245.87s)
Meaning I had a few short moments
before my spot in the limelight died,
[04:10] (250.58s)
and I thought, I have to raise a round.
[04:12] (252.17s)
I have to start a bigger company that's
more sustainable and defensible long term.
[04:16] (256.09s)
So for me, it felt like things were moving
so slow. I had to push back my fundraise
[04:20] (260.26s)
like two weeks.
[04:21] (261.13s)
I was going to fundraise two weeks earlier
than I did. To me at the time,
[04:24] (264.18s)
I felt like I was in such a time crunch
and I had to wrap things up and do the next thing,
[04:30] (270.10s)
and it felt like things were moving super slow.
[04:31] (271.90s)
But I guess that was just the pressure of the situation.
[04:36] (276.19s)
I mean, we planned for me
to use Interview Coder to
[04:39] (279.36s)
cheat on a bunch of big tech interviews
and get the jobs, and we thought
[04:42] (282.32s)
that was going to be a viral moment.
[04:43] (283.78s)
So in that sense, we planned it.
[04:45] (285.29s)
But for the last three months I've made
probably a thousand
[04:48] (288.16s)
tweets, and since then, I've figured
out how to make tweets that will go viral.
[04:51] (291.87s)
How to make tweets that will be more
controversial and get more engagement.
[04:55] (295.55s)
I think for X especially,
[04:57] (297.30s)
I've only really cracked X,
because I think people on Twitter
[05:00] (300.59s)
are a very unique type of people.
[05:02] (302.55s)
They love controversy,
they love drama, they love attention,
[05:05] (305.51s)
and they love to either dog on people
or watch people get dogged on.
[05:09] (309.31s)
I think every single time
you tweet something, if you don't think
[05:12] (312.06s)
half the people in the world
would feel very negative about this,
[05:15] (315.52s)
then it's probably not going to go viral
[05:17] (317.07s)
as a tweet.
[05:17] (317.69s)
All of the tweets that you're
planning to make go viral
[05:21] (321.11s)
need to have a very strong,
controversial twist that makes people
[05:24] (324.28s)
pause and be like, what the fuck?
[05:25] (325.49s)
And this is not the case for Instagram,
TikTok or LinkedIn or whatever,
[05:29] (329.16s)
but it is the case for Twitter.
[05:30] (330.70s)
And yeah, with Twitter,
the more controversial
[05:32] (332.87s)
you're willing to make your tweet,
the better and the more viral you will go.
[05:35] (335.92s)
Interview Coder on Twitter
was received so positively.
[05:39] (339.13s)
It was just like this scrappy young kid
who was trying to fight back
[05:42] (342.93s)
against big industry, big tech, and
everyone on Twitter was very supportive.
[05:47] (347.35s)
I think as I got bigger and my
account grew, people grew less supportive,
[05:51] (351.18s)
which is to be expected.
[05:52] (352.69s)
I'm generally very good
at receiving hate and criticism.
[05:55] (355.69s)
I'm a very polarizing personality,
and I do a lot of crazy stuff
[05:59] (359.19s)
throughout my life.
[05:59] (359.78s)
I've always had people giving me hate.
[06:01] (361.61s)
None of the negative comments really
stood out, but I was very surprised to see
[06:04] (364.53s)
how positively Interview Coder was received
on Twitter when I first launched it.
[06:07] (367.49s)
I think people are often so worried that
they're going to say something bad online,
[06:11] (371.12s)
and it's just going to get back to them
and their reputation is over.
[06:14] (374.62s)
And I don't know,
like it's going to bury them.
[06:16] (376.71s)
But I think in reality, "all press is good press."
[06:19] (379.21s)
I say a ton of super controversial stuff.
[06:22] (382.38s)
And in every video I'm in,
there's like a bunch of comments saying,
[06:25] (385.09s)
oh, this guy's evil.
[06:26] (386.05s)
This guy's like, this and that,
and there's always some hate.
[06:28] (388.43s)
But it's like,
this stuff really doesn't matter.
[06:30] (390.72s)
I mean, I've learned
that it really doesn't matter if everybody
[06:33] (393.94s)
in the world just doesn't like you.
[06:35] (395.35s)
Well, actually,
it's done pretty much the opposite.
[06:37] (397.23s)
I've realized that even if I say
extremely crazy shit online,
[06:40] (400.40s)
it will just make people more
interested in me and the company,
[06:43] (403.53s)
and it'll just drive more downloads
and conversions
[06:45] (405.61s)
and get more eyeballs onto Cluely.
[06:47] (407.28s)
If anything, I've learned that I need
to become crazier online
[06:50] (410.49s)
so that people will keep funneling
attention towards the core product.
[06:54] (414.75s)
I think it's very, very rare
that you will say something online
[06:58] (418.79s)
that translates to something negative
[07:00] (420.42s)
happening in the real world,
like online is not real life.
[07:03] (423.30s)
I'm a pretty chill person in real life,
but online I'm crazy because it gets
[07:07] (427.68s)
me engagement and attention.
[07:09] (429.22s)
When you are so polarizing
and controversial online, you need to very
[07:14] (434.27s)
clearly distinguish this is my real world
life and this is the online life.
[07:18] (438.86s)
And in my real world life, there are
very few people who I trust fully
[07:23] (443.36s)
and whom I love and who love me.
[07:25] (445.49s)
My parents will always be on my side,
no matter what crazy shit I do online
[07:29] (449.07s)
and my future wife and kids
[07:30] (450.24s)
will always be on my side
no matter what crazy shit I do online.
[07:32] (452.95s)
And I think it is very,
very important to distinguish.
[07:35] (455.75s)
This is my real life family and friends
and they love me unconditionally
[07:39] (459.29s)
and I love them
and everything online is just noise.
[07:41] (461.84s)
Even if everyone online
or everyone outside this box hates me,
[07:45] (465.21s)
it doesn't matter because the most
important people are in this box, and the
[07:48] (468.63s)
people that I love and who love me back.
[07:51] (471.80s)
Interview Coder is a tool designed
to let you cheat on technical interviews,
[07:56] (476.35s)
but what we realized as we built
Interview Coder is that an AI
[08:00] (480.77s)
that sees your screen and hears your audio
can present itself
[08:04] (484.19s)
as this translucent screen overlay.
[08:05] (485.78s)
This has never really
been attempted before.
[08:07] (487.82s)
This is a completely novel user
experience, and it's very shortsighted to
[08:10] (490.99s)
think that this is only good for cheating.
[08:12] (492.70s)
Ultimately, we're building for a future
where models are
[08:16] (496.41s)
multimodal. The models are not there
yet, and they probably won't
[08:19] (499.17s)
be there for another three years.
[08:20] (500.83s)
Nobody's really thought of what happens
when chatbots are no longer relevant.
[08:24] (504.80s)
What happens when you don't want
to prompt GPT anymore?
[08:27] (507.55s)
And it just knows what you want?
Then how will you interact with AI?
[08:31] (511.18s)
Nobody's really attempted this before,
and I think Interview Coder was the first
[08:35] (515.31s)
proof of concept of a user experience
that could work in this world.
[08:38] (518.60s)
So we realized that.
[08:39] (519.81s)
And that's why we're building Cluely.
[08:42] (522.10s)
I mean, Cluely is the new way
you will use AI in five years.
[08:46] (526.07s)
Hopefully if we do things right
then in two years.
[08:48] (528.11s)
The phrase "cheat on everything"
is intentionally ambiguous.
[08:51] (531.32s)
Like, what is "cheat on everything"?
[08:52] (532.32s)
I know what "cheat on a test" means,
but I don't know what "cheat
[08:54] (534.37s)
on everything" means.
[08:55] (535.16s)
It's left intentionally confusing to make
you sit with it and reflect for a moment.
[08:59] (539.25s)
When you see someone
using AI for everything,
[09:01] (541.50s)
it makes you think this is unfair.
[09:02] (542.79s)
They're not supposed to be doing that.
They're cheating. In reality,
[09:05] (545.42s)
if you can use this for everything,
like what does
[09:08] (548.09s)
cheating on a meeting look like?
[09:09] (549.47s)
It's not really a thing.
[09:10] (550.17s)
It's just our gut human reaction to think,
this is so different.
[09:12] (552.88s)
This is such a big advantage
that it's unfair.
[09:15] (555.10s)
And what we hope to do is we hope
to give everyone this advantage.
[09:18] (558.22s)
When every single person is using AI
to cheat on meetings,
[09:21] (561.43s)
then it's not that you're cheating anymore.
[09:23] (563.81s)
This is just how humans
will operate and think in the future.
[09:26] (566.94s)
I think when you can use AI,
you should use AI. If it helps you,
[09:30] (570.95s)
then you should use it.
[09:32] (572.11s)
If using a calculator will help you,
then you should use it.
[09:34] (574.28s)
If using spellcheck will help you,
then you should use it.
[09:36] (576.16s)
Eventually, spellcheck will teach you
how to spell the right words, because
[09:39] (579.16s)
you'll get used to it so much or you just
won't need to know how to spell anymore.
[09:43] (583.21s)
You'll just need to know what the word is.
[09:45] (585.17s)
If you can use AI to help,
then you should.
[09:47] (587.59s)
And if AI can already do the job,
then you'll never need
[09:49] (589.63s)
to do the job in the future.
[09:50] (590.88s)
Assuming AI is everywhere,
which it will be.
[09:54] (594.30s)
All technical interviews need to
change, not just in software engineering,
[09:57] (597.72s)
but everywhere in the world.
[09:58] (598.93s)
If you get asked a question that AI can
answer, then that question should
[10:02] (602.52s)
probably come out of your interview.
[10:04] (604.35s)
I think interviews
will be a lot more holistic,
[10:06] (606.77s)
and I really question whether we even need
job interviews at all in the future.
[10:10] (610.28s)
If there is an AI that knows everything
about you, everything you're good at,
[10:13] (613.20s)
why do you need a one hour interview
to assess anything other than culture fit?
[10:17] (617.16s)
I already know all the work you've done,
or at least the AI already knows
[10:19] (619.95s)
the work you've done.
[10:20] (620.58s)
It knows how good it is.
[10:21] (621.45s)
It knows what skills you're good at,
and if there is a skill match,
[10:24] (624.08s)
then I should just be
able to match you directly to the job.
[10:26] (626.71s)
Assuming that we get along
after like a 30 minute conversation.
[10:30] (630.05s)
I really don't know that there is
a need for interviews in today's age, but
[10:33] (633.13s)
right now what we use
is really just a conversation.
[10:36] (636.55s)
We check if you're a culture fit,
we talk about past work you've done,
[10:39] (639.51s)
and that's pretty much it.
[10:40] (640.64s)
The whole point of Cluely is to get
everybody used to a
[10:44] (644.35s)
life where they use AI for everything.
[10:46] (646.14s)
Once everybody uses AI in every
instance possible, there's going
[10:49] (649.23s)
to be a lot of jobs that get replaced,
[10:50] (650.82s)
and there's going to be a lot of people
who are able to do
[10:53] (653.44s)
so much more than they previously were.
[10:55] (655.32s)
If every scientist decided one day,
like today, I'm going to start using AI
[10:58] (658.99s)
as much as possible,
[10:59] (659.95s)
they will be 100 times more productive.
When scientists are
[11:02] (662.37s)
a hundred times more productive,
[11:03] (663.37s)
we cure cancer ten years earlier.
We cure Alzheimer's ten years earlier.
[11:06] (666.50s)
Everyone lives to 400 years old
and we're on the next flight
[11:09] (669.09s)
to fucking Mars in like two years.
[11:10] (670.96s)
The rate of societal progress will just
accelerate exponentially
[11:16] (676.17s)
once everyone comes around
to the fact that we're all using AI now.
[11:20] (680.26s)
And that's what Cluely hopes to achieve,
is to get everybody used to
[11:23] (683.60s)
"we're all using AI now."
[11:28] (688.52s)
I think for the user experience,
I spent a lot of time making
[11:32] (692.19s)
the user experience very seamless.
[11:33] (693.98s)
It's less of a technical challenge,
I think, and more of a taste challenge.
[11:37] (697.70s)
The concept of a translucent
screen overlay is something that really
[11:41] (701.74s)
has never been attempted before.
[11:43] (703.24s)
And it's something that I tried,
and I think I only got to it after like
[11:47] (707.37s)
dozens of iterations of different tools
[11:49] (709.50s)
that would be a more seamless
use of AI in your life.
[11:52] (712.59s)
Yeah, I think that was probably
the biggest challenge,
[11:55] (715.46s)
just figuring out what exactly is the best user experience
[11:58] (718.63s)
for someone using this tool.
[11:59] (719.88s)
I mean, latency, meaning response speed,
and accuracy
[12:03] (723.05s)
are the two biggest things.
[12:04] (724.93s)
This is what every model provider,
like OpenAI, is working
[12:07] (727.56s)
to improve: latency and accuracy.
[12:09] (729.89s)
There's ways that we can get
to a much faster response, for example,
[12:13] (733.73s)
if we host models on our own servers.
[12:15] (735.94s)
This eliminates a lot of latency
that comes from the load balancing
[12:19] (739.28s)
and request handling that
is just inherent in OpenAI's servers.
[12:23] (743.53s)
That's probably what we will end up doing.
[12:25] (745.45s)
There's ways that we can cache
and parameterize the inputs
[12:29] (749.87s)
so that you get
the same information,
[12:33] (753.04s)
just condensed into a smaller form. And the smaller the input
size, the faster the time to first token.
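The input-condensation idea described here can be sketched in a few lines. This is a hypothetical illustration, not Cluely's actual implementation: it approximates token counts by word count and trims older conversation history to a fixed budget, so the prompt stays small and time to first token stays low.

```python
# Hypothetical sketch: condense chat history to a token budget so the
# prompt stays small and time-to-first-token stays low. Token counts
# are approximated by whitespace-separated words for illustration.

def approx_tokens(text: str) -> int:
    """Rough stand-in for a real tokenizer."""
    return len(text.split())

def condense_history(messages: list[dict], budget: int) -> list[dict]:
    """Keep the newest messages that fit within `budget` tokens,
    always preserving the first (system) message."""
    system, rest = messages[0], messages[1:]
    kept: list[dict] = []
    used = approx_tokens(system["content"])
    # Walk backwards so the most recent context survives trimming.
    for msg in reversed(rest):
        cost = approx_tokens(msg["content"])
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return [system] + list(reversed(kept))

history = [
    {"role": "system", "content": "You are a concise meeting assistant."},
    {"role": "user", "content": "Here is a very long transcript " + "blah " * 50},
    {"role": "user", "content": "What should I say next?"},
]
# With a 20-token budget, the long transcript is dropped but the
# system message and the latest question survive.
condensed = condense_history(history, budget=20)
```

A production version would use a real tokenizer and summarize dropped messages rather than discard them, but the principle is the same: smaller input, faster first token.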
[12:39] (759.34s)
Also, generally, accuracy can be improved
by specific system prompts.
[12:43] (763.35s)
We're developing custom evals in-house
based on a lot of the analytics
[12:47] (767.43s)
and usage that we're seeing.
[12:48] (768.73s)
And like everything is getting better.
[12:50] (770.06s)
Like every single day,
Cluely gets more accurate and faster.
[12:53] (773.69s)
At a certain point, we're going to know
exactly what type of responses you prefer
[12:58] (778.49s)
as an individual, what sort
of conversations that you're in, and we
[13:02] (782.74s)
can use all that data to generate a very,
[13:05] (785.24s)
very hyper specific, personalized,
fine-tuned model for you that knows that,
[13:09] (789.12s)
hey, I'm a media reporter,
I conduct these sorts of interviews, and I
[13:12] (792.21s)
generally want these types of responses.
[13:14] (794.25s)
The tonality of my emails is this.
[13:15] (795.92s)
So I would like you
to respond in this way.
[13:17] (797.71s)
And we can just get the most
personalized model in the world.
[13:20] (800.30s)
And once we have that data as like a moat,
defending us from
[13:24] (804.01s)
the other big tech companies,
then we'll pretty much be unstoppable.
[13:26] (806.72s)
More so than the data,
I think the user experience
[13:29] (809.56s)
is just interesting, untapped and novel.
[13:31] (811.77s)
If we're correct about this,
then we'll be the first to market.
[13:34] (814.40s)
And there's a huge first mover advantage
when you're trying
[13:37] (817.77s)
a new form of UX. And if we can
capture the market quickly enough
[13:41] (821.90s)
by going sufficiently viral,
[13:43] (823.86s)
then I think it will be
very hard to compete with us.
[13:49] (829.62s)
The entire way we're going
to think will be changed.
[13:52] (832.58s)
Every single one of my thoughts
is formulated by the information
[13:55] (835.67s)
I have at this moment.
[13:57] (837.25s)
But what happens when that information
I have isn't just what's in my brain,
[14:00] (840.63s)
but it's everything that humanity
has ever collected and put online, ever?
[14:04] (844.22s)
What happens when AI literally helps
me think in real time?
[14:07] (847.51s)
The entire way that humans will interact
with each other, with the world,
[14:10] (850.31s)
all of our thoughts, will be changed.
[14:11] (851.56s)
Like what happens when I know about every
single post you've made online, ever,
[14:15] (855.02s)
and I use that to distill it
down into a condensed blurb
[14:19] (859.69s)
of everything about you?
[14:21] (861.65s)
What does our interaction look like then?
[14:23] (863.61s)
It's really hard to say, but I think
this is a turning point for humanity,
[14:27] (867.66s)
and it will fundamentally change
the way that we think
[14:30] (870.16s)
and the way that we behave as humans.
[14:31] (871.75s)
Well, if you're not building a company
in AI right now, then you're
[14:33] (874.00s)
probably not doing the right thing.
[14:35] (875.46s)
AI just enables you to build such cool
stuff, and it's such a new technology
[14:40] (880.84s)
that even if you're 19 and you've
been playing with it for two months,
[14:43] (883.67s)
you are one of the brightest minds.
[14:46] (886.26s)
You are one of the pioneers of the field.
[14:47] (887.93s)
It's not like biology, where if you haven't
studied for ten or 20 years,
[14:51] (891.43s)
then you're not an expert in biology.
[14:52] (892.85s)
You can study AI for two months
and you'll be an expert in AI. This
[14:55] (895.81s)
technology is so gigantic, and it's so new
that you can be really, really young.
[15:00] (900.40s)
And you can know it more deeply
than anyone else, and you'll have
[15:03] (903.74s)
the opportunity to build, like, a billion-dollar
or $10 billion company out of it.
[15:07] (907.07s)
I would say take bigger risks.
[15:08] (908.95s)
This is the only advice
I have for anyone, really.
[15:11] (911.33s)
You are smart enough.
[15:12] (912.33s)
You're capable enough,
you're hardworking enough.
[15:13] (914.00s)
Just take bigger risks.
[15:15] (915.33s)
If you take bigger risks
and force yourself into positions
[15:17] (917.75s)
where you have to make it,
[15:18] (918.83s)
you'll find that you're a lot more
hardworking than you thought you were,
[15:21] (921.67s)
and you'll also find
that life gets a lot more interesting.
[15:24] (924.17s)
And very often the downside risk is much
smaller than you think, and the upside
[15:27] (927.97s)
is much bigger than you think.
[15:29] (929.55s)
I wasn't really like this five months ago.
[15:31] (931.72s)
I mean, literally like, half
a year ago, I was thinking, I just want
[15:35] (935.14s)
to get a job at a big tech company.
[15:36] (936.68s)
That's all I want to do.
[15:37] (937.77s)
And it wasn't until very recently
that I thought, oh, I actually want
[15:40] (940.56s)
to build companies and go all in on this.
[15:44] (944.36s)
Taking risks, initially, means being willing to
get rid of the constructs and limiting
[15:48] (948.53s)
beliefs in your mind that make you think,
hey, when I graduate, I have to be an
[15:51] (951.12s)
engineer or lawyer or doctor or whatever.
[15:52] (952.95s)
Just being willing to see
what would happen if you didn't do it.
[15:55] (955.75s)
Every risk starts very small.
[15:57] (957.83s)
Right now, I think people look at me
and think I'm a crazy risk taker.
[16:00] (960.17s)
But it didn't start like this.
It started with me taking smaller risks.
[16:02] (962.59s)
Like the first risk I took was,
hey, what if I built this tool
[16:05] (965.67s)
and told nobody about it?
[16:06] (966.72s)
Then the next risk I took was, what if I
posted it online but made it free and didn't
[16:09] (969.76s)
really associate myself with it, so that
[16:11] (971.35s)
more people would see it? And eventually
the risk just snowballed and snowballed.
[16:13] (973.97s)
Until now, I'm like fully posting
whatever confidential document Columbia
[16:17] (977.89s)
gives me because like, I don't care.
[16:19] (979.27s)
Like the risk is not that much of a risk anymore,
[16:21] (981.56s)
and I've grown used to it.
So grow used to taking bigger risks.
[16:27] (987.53s)
I feel like my life is very easy.
[16:30] (990.95s)
My life has been very easy.
I mean, I've got loving parents.
[16:33] (993.37s)
My mom made me study even when I didn't want to.
[16:37] (997.75s)
She would just say,
Roy, you should go study.
[16:39] (999.16s)
So I studied and as a result, I did well
in school and I hung out with smart kids
[16:42] (1002.75s)
and they helped me do better in life.
[16:44] (1004.63s)
Like, I have two amazing, great parents
and I come from a great family.
[16:47] (1007.92s)
I don't really feel like my life
has been all that challenging.
[16:51] (1011.05s)
Getting kicked out of Columbia?
[16:52] (1012.30s)
It's not that challenging
when you're out there building companies,
[16:54] (1014.76s)
and I was going to drop out anyways.
Getting rescinded from Harvard?
[16:57] (1017.60s)
This is also not that challenging
when you have a loving family at home.
[17:00] (1020.56s)
There's kids out there
who are starving in Uganda
[17:03] (1023.31s)
and like my life is not that hard.
[17:04] (1024.77s)
Really.
[17:05] (1025.36s)
More than anything, I think
you should just try
[17:08] (1028.19s)
and think more positively about life
and be more optimistic about things.
[17:11] (1031.45s)
It's very rare
that you're going to be in America,
[17:13] (1033.82s)
you're going to have the opportunity
to go to college, and still really be
[17:16] (1036.91s)
in an actually challenging situation.
[17:19] (1039.20s)
In reality, we're in the most
interesting time in history.
[17:21] (1041.75s)
If you live in America
and you're not in poverty
[17:23] (1043.96s)
and your parents aren't crackheads,
[17:25] (1045.21s)
you have the opportunity to make billions
of dollars and make generational wealth
[17:30] (1050.13s)
and do the most interesting thing ever.
[17:31] (1051.80s)
There's very few challenging situations
that are so challenging
[17:34] (1054.76s)
that you're just, like, limited right now.
[17:36] (1056.76s)
Anybody can do anything, and you
should just try and take risks and be bold
[17:40] (1060.73s)
because you're very privileged right now
to be living in this world.
[17:45] (1065.98s)
Success is having a wife, having 12 kids
and having people remember me.
[17:51] (1071.15s)
I think Steve Jobs and Elon Musk are very
cool in that everyone has a strong opinion
[17:57] (1077.33s)
about them, whether it's good or bad.
[17:58] (1078.95s)
Everyone has something to say about Elon
Musk, and I think that's really cool.
[18:02] (1082.79s)
We're all gonna die eventually
and nobody's gonna remember us
[18:05] (1085.38s)
in a thousand years.
[18:06] (1086.38s)
I might as well be remembered
as strongly as possible for
[18:09] (1089.71s)
the time that I'm here.
[18:10] (1090.88s)
I think the biggest thing is confidence.
[18:12] (1092.92s)
Like, truly, you hear all the time that
the people that build big companies
[18:16] (1096.43s)
are not geniuses.
[18:17] (1097.43s)
They're not smarter than you.
[18:18] (1098.60s)
They just take more risks
than you, and they're
[18:22] (1102.39s)
harder workers than you.
[18:23] (1103.35s)
And I think this is generally true.
[18:25] (1105.02s)
Five months ago, I was just some random
student at some random school and I
[18:28] (1108.07s)
didn't really have anything going for me.
[18:29] (1109.86s)
And now I just raised $5 million,
and I'm in this giant office,
[18:33] (1113.11s)
and I'm building a company
that I hope will change the world one day.
[18:35] (1115.57s)
And very little has changed about me
except the fact that I took a risk.
[18:39] (1119.28s)
Even moving forward,
[18:40] (1120.29s)
if I do end up becoming
the next trillionaire,
[18:42] (1122.62s)
like as big as Mark Zuckerberg, there
will be nothing about me that changed.
[18:46] (1126.21s)
It'll just be a series of well-calculated
risks that I took that will lead me there.
[18:50] (1130.30s)
And I think the gap between Mark
Zuckerberg and your average human,
[18:53] (1133.30s)
it's really not that big.
[18:54] (1134.55s)
And if you just have the confidence
to take bigger risks,
[18:57] (1137.18s)
then very often you will win.