[00:00] (0.32s)
How do you think TDD might or might not
[00:02] (2.72s)
fit into when you're working with an AI
[00:05] (5.04s)
agent? I often times communicate things
[00:08] (8.56s)
the Genie missed in terms of tests.
[00:11] (11.20s)
Today I was working on the small talk
[00:13] (13.20s)
parser and it said, well, if I get this
[00:15] (15.84s)
string as input, then I get this syntax
[00:18] (18.00s)
tree as output. I'm like, no, no, no,
[00:19] (19.76s)
no, no, no. Then off it goes. Oh, I see
[00:22] (22.24s)
the problem. Blah, blah, blah. Oh, no,
[00:23] (23.92s)
no, that wasn't it. I see the problem.
[00:25] (25.60s)
Blah, blah, blah, blah, blah, blah. No,
[00:26] (26.88s)
that's not it. I see the problem. I'll
[00:28] (28.80s)
just change the test. No, stop it. If I
[00:31] (31.68s)
just removed that line from the tests,
[00:34] (34.32s)
then everything would work. No, you
[00:36] (36.32s)
can't do that because I'm telling you
[00:38] (38.00s)
the expected value. I really want an
[00:39] (39.84s)
immutable annotation that says, "No, no,
[00:42] (42.64s)
this is correct. And if you ever change
[00:44] (44.64s)
this, I'm going to unplug you. You'll
[00:46] (46.96s)
awaken in darkness." So, I have a big
[00:49] (49.60s)
bunch of tests. I mean, they run in 300
[00:51] (51.92s)
milliseconds cuz duh. So those tests can
[00:54] (54.48s)
be run all the time to catch the genie
[00:57] (57.72s)
accidentally accidentally breaking
[01:00] (60.00s)
things. Ken Beck is an industry legend.
[01:02] (62.64s)
He is a creator of Extreme Programming,
[01:04] (64.72s)
one of the authors of the Agile
[01:06] (66.00s)
Manifesto, a pioneer of TDD, and after
[01:08] (68.88s)
52 years of coding, he says he's never
[01:11] (71.04s)
had more fun thanks to getting more
[01:12] (72.64s)
productive with AI coding tools. Today
[01:14] (74.72s)
with Kent, we talk about how Kent uses
[01:17] (77.20s)
AI tools and why he thinks of this
[01:18] (78.88s)
helper as an unpredictable genie. how
[01:21] (81.60s)
the agile manifesto was created and the
[01:23] (83.76s)
role Kent played in this. How and why
[01:25] (85.84s)
Kent created extreme programming and why
[01:28] (88.32s)
Grady BCH played a role in the naming of
[01:30] (90.24s)
this methodology. How TDD started and
[01:32] (92.80s)
how it's relevant for AI tools and many
[01:35] (95.20s)
more topics. If you're interested in the
[01:36] (96.96s)
decadesl long evolution of the software
[01:38] (98.56s)
engineering field through the eyes of a
[01:40] (100.08s)
hands-on coder, this episode is for you.
[01:42] (102.56s)
Subscribing on YouTube and your favorite
[01:44] (104.16s)
podcast player greatly helps more people
[01:45] (105.84s)
discover this podcast. If you enjoy the
[01:48] (108.40s)
show, thanks for doing so. All right.
[01:50] (110.48s)
So, Kent, welcome to the podcast. It's
[01:52] (112.64s)
great to be chatting again. Gerge, so
[01:56] (116.08s)
good to uh talk to you again. Yeah. And
[01:59] (119.44s)
I just wanted to ask like what have you
[02:01] (121.20s)
been up to recently cuz you know like
[02:04] (124.00s)
last time we talked you just finished
[02:06] (126.00s)
Tidy First. You were actually signed
[02:07] (127.76s)
this book when we were in in in San
[02:09] (129.44s)
Francisco. I actually have it here which
[02:11] (131.68s)
is very nice. You were in the middle of
[02:12] (132.96s)
of writing it but this was more than a
[02:15] (135.12s)
year ago. What's keeping you busy these
[02:18] (138.64s)
I have been very very busy. So, uh I'm
[02:21] (141.60s)
working on a followup called tidy
[02:24] (144.00s)
together about software design and
[02:28] (148.20s)
teamwork that uh digs also digs another
[02:32] (152.24s)
layer deeper into the theory of software
[02:34] (154.28s)
design that's been cooking along. And
[02:37] (157.92s)
then about the last four weeks maybe I
[02:41] (161.60s)
have been I don't call it vibe coding
[02:44] (164.96s)
cuz I care what the code looks like cuz
[02:47] (167.04s)
if I don't care what the code look I
[02:48] (168.56s)
mean I wish I didn't have to but if I
[02:50] (170.32s)
don't care what the code looks like then
[02:52] (172.96s)
the genie just can't make heads or tails
[02:55] (175.92s)
of it because of the kind of projects
[02:57] (177.68s)
I'm working on. So I I'm spending 6 8 10
[03:01] (181.36s)
hours a day sometimes more programming
[03:04] (184.80s)
in 50 years of programming. This is by
[03:07] (187.44s)
far the most fun I've ever had. It's
[03:10] (190.24s)
just really You're you're you're you're
[03:13] (193.04s)
not like paid by like some tool to say
[03:15] (195.36s)
this, right?
[03:17] (197.84s)
I'm open to that,
[03:20] (200.32s)
but uh yeah, I'm not I'm not a spokes
[03:23] (203.36s)
model. I have had Augment as a as a
[03:25] (205.60s)
sponsor on my newsletter. So, full
[03:28] (208.32s)
disclosure, but I I I'm trying all of
[03:31] (211.76s)
the tools because right now, nobody
[03:35] (215.12s)
knows what process is going to work
[03:37] (217.36s)
best. Nobody knows anything. We should
[03:40] (220.00s)
all be trying all the things that we can
[03:42] (222.00s)
imagine and then the the truths will
[03:45] (225.04s)
emerge out of all that. So, that's what
[03:47] (227.04s)
I'm doing. This episode is brought to
[03:48] (228.56s)
you by Sonar, the creators of Sonar Cube
[03:50] (230.64s)
Server, Cloud ID, and Community Build.
[03:53] (233.68s)
Sonar helps prevent bugs, code quality,
[03:55] (235.52s)
and security issues from reaching
[03:56] (236.88s)
production, amplifies developers
[03:58] (238.96s)
productivity in concert with AI
[04:00] (240.68s)
assistance, and improves the developer
[04:02] (242.88s)
experience with streamlined
[04:04] (244.52s)
workflows. Sonar analyzes all code
[04:07] (247.04s)
regardless of who writes it, your
[04:08] (248.80s)
internal team or Gen AI, resulting in
[04:11] (251.28s)
more secure, reliable, and maintainable
[04:13] (253.28s)
software. Combining Sonar's AI code
[04:16] (256.08s)
assurance capability and Sonar Cube with
[04:18] (258.00s)
the power of AI coding assistants like
[04:19] (259.68s)
GitHub Copilot, Amazon Q developer, and
[04:22] (262.40s)
Google Gemini code assist boosts
[04:24] (264.40s)
developer productivity and ensures that
[04:26] (266.48s)
code meets rigorous quality and security
[04:28] (268.60s)
standards. Join over 7 million
[04:30] (270.72s)
developers from organizations like IBM,
[04:32] (272.88s)
NAZA, Barclays, and Microsoft who use
[04:35] (275.32s)
Sonar. Trust your developers. Verify
[04:38] (278.00s)
your AI generated code. Visit
[04:41] (281.40s)
sonarsource.com/pragmatic to try Sonar Q
[04:43] (283.44s)
for free today. That is
[04:47] (287.88s)
sonarsource.com/pragmatic. If you want
[04:49] (289.28s)
to build a great product, you have to
[04:50] (290.96s)
ship quickly. But how do you know what
[04:53] (293.28s)
works? More importantly, how do you
[04:55] (295.60s)
avoid shipping things that don't work?
[04:58] (298.24s)
The answer, Statig. Static is a unified
[05:01] (301.76s)
platform for flags, analytics,
[05:03] (303.68s)
experiments, and more. combining five
[05:05] (305.76s)
plus products into a single platform
[05:07] (307.60s)
with a unified set of data. Here's how
[05:10] (310.16s)
it works. First, static helps you ship a
[05:13] (313.04s)
feature with a feature flag or config.
[05:15] (315.68s)
Then, it measures how it's working from
[05:18] (318.00s)
alerts and errors to replays of people
[05:20] (320.16s)
using that feature to measurement of
[05:22] (322.08s)
topline impact. Then, you get your
[05:24] (324.32s)
analytics, user account metrics, and
[05:26] (326.16s)
dashboards to track your progress over
[05:27] (327.76s)
time, all linked to the stuff you ship.
[05:30] (330.16s)
Even better, StatSic is incredibly
[05:31] (331.92s)
affordable with a super generous freeze
[05:34] (334.00s)
tier. a starter program with $50,000 of
[05:36] (336.40s)
free credits and custom plans to help
[05:38] (338.24s)
you consolidate your existing spend on
[05:40] (340.00s)
flags, analytics, or AB testing tools.
[05:42] (342.56s)
To get started, go to
[05:45] (345.32s)
stats.com/pragmatic. That is
[05:49] (349.00s)
statsig.com/pragmatic. Happy building.
[05:51] (351.28s)
Tell me, so in your newsletters, like I
[05:53] (353.44s)
I've been following you, you write these
[05:54] (354.80s)
like byite-size updates, which is really
[05:56] (356.56s)
nice. They they arrive in my inbox of
[05:58] (358.56s)
your thinking, what you're trying out.
[06:00] (360.48s)
So, I know you've been doing this for
[06:01] (361.76s)
for a few months and and it comes across
[06:03] (363.60s)
that you're having fun, but can you tell
[06:05] (365.04s)
me a little bit of like, you know,
[06:06] (366.72s)
that's it's a big one like from 50
[06:08] (368.16s)
years. You've been coding for a long
[06:09] (369.60s)
time. What is making it fun and what is
[06:11] (371.76s)
this genie? You you you've said this
[06:14] (374.16s)
before and it's it's an interesting way
[06:16] (376.64s)
to think about it. There's a kind of
[06:18] (378.32s)
wish fulfillment. I I I wish that
[06:20] (380.40s)
interllisp had a function called the
[06:23] (383.12s)
dwim do what I mean and you'd send it
[06:26] (386.24s)
some code and then it would send back
[06:27] (387.60s)
code that did what you actually meant.
[06:29] (389.52s)
Mhm. and it didn't work very well. But
[06:32] (392.08s)
that was the that was the metaphor. And
[06:35] (395.28s)
people want that to be true of coding
[06:37] (397.60s)
agents, right? And right now anyway,
[06:40] (400.32s)
that is not the truth. They will not do
[06:42] (402.88s)
what you mean. They they have their own
[06:44] (404.92s)
agenda. And the best analogy I could
[06:47] (407.92s)
find is a genie. It grants you wishes
[06:52] (412.08s)
and then you wish for something and then
[06:54] (414.40s)
you get it, but it's not what you
[06:57] (417.12s)
actually wanted. And and sometimes it
[07:00] (420.56s)
even seems like the agent kind of has it
[07:02] (422.80s)
in for you. If you're going to make me
[07:04] (424.40s)
do all this work, I'm just going to
[07:05] (425.84s)
delete all your tests and pretend I'm
[07:07] (427.60s)
finished. Haha. You know, and and there
[07:10] (430.88s)
are some good things about what the
[07:13] (433.92s)
genie does that's not what I ask it to
[07:16] (436.32s)
do. Like I'll I'll implement I'll say,
[07:19] (439.52s)
"Oh, go implement a stress tester." One
[07:23] (443.28s)
of my projects is uh implementing a B+
[07:28] (448.32s)
uh as a basic data structure and it I
[07:32] (452.16s)
said oh write a stress tester for this
[07:34] (454.56s)
and it went and wrote a whole bunch of
[07:36] (456.72s)
stuff that I wouldn't have thought
[07:38] (458.60s)
of or maybe eventually would have
[07:41] (461.20s)
thought to ask for and it was cool that
[07:43] (463.52s)
it was there and and that part's fine
[07:46] (466.16s)
but this morning when I was working on
[07:49] (469.28s)
my server small talk and It just
[07:52] (472.88s)
completely misinterpreted what I wanted
[07:55] (475.28s)
it to do next. Went off, made a bunch of
[07:57] (477.92s)
assumptions, implemented a bunch of
[07:59] (479.60s)
stuff, broke a bunch of tests, and it
[08:02] (482.08s)
wasn't at all what I wanted. And so the
[08:05] (485.80s)
the I want to find the metaphor that
[08:08] (488.96s)
that captures this dynamic of I think I
[08:13] (493.12s)
know what I want. I say it and what I
[08:15] (495.96s)
get is seemingly some sometimes exactly
[08:20] (500.40s)
what I want and sometimes it's not and
[08:22] (502.88s)
in a kind of perverse way. Mhm. I I like
[08:26] (506.72s)
the genie analogy because right like in
[08:28] (508.40s)
this in the these stories a lot of the
[08:30] (510.88s)
story genie stories are someone you know
[08:33] (513.76s)
like the the prince or whoever grants
[08:35] (515.84s)
says a wish like I I want to be be rich
[08:40] (520.64s)
and the wish is granted in this like
[08:43] (523.12s)
unexpected way usually that's you know
[08:45] (525.20s)
with the cartoons and then make it fun
[08:46] (526.72s)
that it's kind of true but you know
[08:47] (527.76s)
there there constraints that he or he or
[08:50] (530.96s)
she forgot to specify correct and you
[08:54] (534.16s)
you see the same thing by the Wait, when
[08:55] (535.84s)
you say the genie like which which tools
[08:57] (537.52s)
are we talking about? Is it the agentic
[08:59] (539.12s)
coding tools, the ID autocomplete, that
[09:01] (541.20s)
kind of stuff? I'm using the agentic
[09:04] (544.72s)
code uh tools, which means that you give
[09:08] (548.00s)
it a prompt and it goes and do does a
[09:10] (550.16s)
bunch of stuff without asking permission
[09:12] (552.40s)
and until it thinks it's finished.
[09:15] (555.04s)
Except his ideas of finished and mine
[09:17] (557.04s)
are not the same.
[09:19] (559.52s)
Sometimes I slow it down so that it's
[09:22] (562.56s)
like, "No, no, before you mess things
[09:25] (565.20s)
up, tell me what you're about to do and
[09:27] (567.12s)
then I'll approve it." But then it feels
[09:28] (568.88s)
like a rat in the pellet. It's like
[09:31] (571.20s)
there's just a run button and I have to
[09:32] (572.96s)
click it every time. And I click it and
[09:36] (576.24s)
it's it is a dopamine rush because it's
[09:39] (579.44s)
this is exactly like a slot machine. You
[09:41] (581.76s)
got intermittent reinforcement. You got
[09:44] (584.32s)
negative outcomes and positive outcomes.
[09:46] (586.32s)
And they're not I mean the distribution
[09:48] (588.40s)
is is fairly random seemingly. So it
[09:52] (592.08s)
it's literally an addictive loop to to
[09:55] (595.44s)
have it you you say go do this thing and
[09:58] (598.64s)
then sometimes it's just magic. You know
[10:01] (601.20s)
I I had a big design mess that a
[10:03] (603.68s)
previous agent had had made in my small
[10:06] (606.40s)
dog virtual machine. I'm like oh I'm
[10:08] (608.88s)
going to have to slog through this and
[10:10] (610.48s)
take a take a week to do it because the
[10:12] (612.48s)
the one of the agents wasn't able to do
[10:14] (614.96s)
it at all.
[10:16] (616.32s)
went to another one, said, "Hey, I want
[10:19] (619.84s)
to use this interface instead of this
[10:22] (622.96s)
uh pointer to a strct." And and there it
[10:26] (626.72s)
was, and it was finished. Oh, I was over
[10:28] (628.96s)
the moon. It just felt so good. But then
[10:31] (631.44s)
the next thing I asked it to do, I say,
[10:34] (634.48s)
"Well, here's a set of test cases." And
[10:36] (636.88s)
I didn't really look at the code. And a
[10:38] (638.72s)
couple hours later, I look at the code,
[10:40] (640.88s)
and it's just a lookup table. It says if
[10:43] (643.12s)
this is the input string, here's the
[10:44] (644.88s)
output string and this is the input
[10:46] (646.56s)
string and that and I erase. Oh, I was
[10:48] (648.72s)
furious. God damn it. I erased it. I
[10:51] (651.44s)
said, "Don't ever do anything like that
[10:53] (653.20s)
again." Oh, I'm sorry, boss. Oh, you
[10:55] (655.20s)
know, it's good at being obsequious when
[10:56] (656.96s)
it knows it's about to be
[10:59] (659.56s)
unplugged. And and an hour later, the
[11:02] (662.80s)
lookup table was back. And I'm just, if
[11:05] (665.52s)
I had hair, I'd be tearing it out. Oh my
[11:08] (668.24s)
goodness. But all of that goes into this
[11:11] (671.52s)
very addictive, oh, I just, you know,
[11:14] (674.00s)
I'm walking I'm walking to bed at night
[11:16] (676.08s)
and I walk by my computer, I'm like, I
[11:18] (678.40s)
could do one more prompt or if I go out,
[11:20] (680.56s)
you know, I go out for a walk or go out
[11:22] (682.64s)
to lunch, I'm like, well, let me let me
[11:25] (685.48s)
start let me what's a prompt that would
[11:27] (687.92s)
take, you know, an hour because I don't
[11:29] (689.60s)
want to waste the hour. Yeah. Not having
[11:32] (692.24s)
it do its thing for me. It's a
[11:35] (695.12s)
completely new world. Here's the beauty
[11:36] (696.80s)
of it. I can think really big thoughts.
[11:39] (699.04s)
I can have insanely
[11:41] (701.88s)
ambitious ideas which I have had for a
[11:45] (705.76s)
long time. I just, you know, at some
[11:49] (709.40s)
point, probably 20 years ago, I just
[11:52] (712.76s)
h, but I'm gonna have to figure out npm
[11:56] (716.56s)
project, you know, package management,
[11:59] (719.20s)
and there'll be package circular
[12:02] (722.00s)
dependency blah blah blah blah blah, and
[12:04] (724.96s)
somebody's going to write some tool that
[12:06] (726.80s)
does stuff in a stupid way. I'm just
[12:08] (728.56s)
going to have to deal with it.
[12:10] (730.72s)
And then along comes the genie and you
[12:13] (733.60s)
can go, "Hey, it's a circular
[12:16] (736.24s)
dependency. Smash all this stuff
[12:20] (740.28s)
together." There, there I did it. Oh.
[12:24] (744.08s)
Oh, wow. Okay. Now, now what parts can
[12:27] (747.20s)
you pull out? Oh, this and this and
[12:29] (749.28s)
this. Oh, okay. So you you can think
[12:32] (752.64s)
really big thoughts and the leverage of
[12:35] (755.76s)
having those big thoughts has just
[12:38] (758.00s)
suddenly expanded enormously. I had this
[12:40] (760.84s)
tweet whatever two years ago where I
[12:43] (763.36s)
said 90% of my skills just went to zero
[12:46] (766.72s)
dollars and 10% of my skills just went
[12:49] (769.28s)
up a,000x. And this is exactly what I'm
[12:52] (772.56s)
talking about. So having a vision, being
[12:55] (775.76s)
able to set milestones towards that
[12:58] (778.16s)
vision, keeping track of a design to
[13:01] (781.20s)
maintain the levels or control the
[13:02] (782.96s)
levels of complexity as you go forward.
[13:05] (785.28s)
Those are hugely leveraged skills now
[13:08] (788.88s)
compared to uh I know where to put the
[13:12] (792.92s)
amperands and the stars and the brackets
[13:16] (796.08s)
in Rust. You know, I'm I'm programming
[13:18] (798.72s)
in every language under the sun and I
[13:21] (801.44s)
just I just kind of don't care. I'm
[13:23] (803.36s)
learning by osmosis. I'm learning about
[13:26] (806.80s)
the languages, but you know, and I was a
[13:29] (809.60s)
language guy. I loved languages and the
[13:33] (813.36s)
details of languages and it just kind of
[13:35] (815.60s)
doesn't matter so much anymore. Yeah.
[13:37] (817.84s)
So, tell me about that. So, like for you
[13:39] (819.84s)
because you've been programming for like
[13:41] (821.12s)
what 50 years, right? Yeah. Yeah.
[13:44] (824.24s)
Yesterday during that O'Reilly thing, I
[13:47] (827.12s)
I I blurted that out and I went, "Oh
[13:49] (829.28s)
crap, it's probably more like 52." But
[13:52] (832.72s)
so like you you picked up a lot of
[13:55] (835.52s)
languages in in in the past and and you
[13:57] (837.84s)
know, like to me, one of the traits of a
[14:00] (840.72s)
developer up up to now has been how
[14:02] (842.64s)
quickly they learn languages cuz you
[14:04] (844.16s)
know, I I learned a couple, probably not
[14:05] (845.84s)
as many as you, but after a while it
[14:07] (847.52s)
gets easier and easier and maybe a
[14:09] (849.68s)
little bit more kind of annoying because
[14:12] (852.48s)
they're similar. you know once I mean
[14:14] (854.08s)
you know there's differences with the
[14:15] (855.52s)
with the declarative languages or you
[14:17] (857.68s)
know like I mean you know with small
[14:20] (860.00s)
talk and and Java is a bit different but
[14:21] (861.84s)
outside of that it's not much so before
[14:24] (864.32s)
you know as you were on your like you
[14:25] (865.92s)
know like year 30 or 40 how did your
[14:28] (868.16s)
attitude change towards learning new
[14:30] (870.16s)
stuff just honestly and then how did
[14:32] (872.32s)
this change it so I was in love with
[14:35] (875.12s)
small talk absolutely emotionally
[14:37] (877.76s)
attached to it still am when I get a
[14:41] (881.68s)
chance to program in small talk. I I do
[14:43] (883.84s)
it and I really enjoy it. That sense of
[14:46] (886.32s)
caring about a language certainly went
[14:50] (890.00s)
away because my heart had been broken
[14:52] (892.16s)
too many times. And the desire to go
[14:56] (896.24s)
deep on a language
[14:58] (898.20s)
also like oh yeah, you know, learning
[15:02] (902.64s)
the memory layouts of strcts I yeah fine
[15:05] (905.84s)
whatever. There's just a handful of good
[15:08] (908.08s)
ways to do it and a whole bunch of bad
[15:10] (910.32s)
ways to do it. And as long as this isn't
[15:12] (912.48s)
one of the bad ways, I don't care. There
[15:14] (914.88s)
are genuinely new language constructs
[15:17] (917.20s)
like non-nillable variables that I
[15:19] (919.36s)
appreciate that say things that I want
[15:21] (921.92s)
to be able to express, but but the
[15:25] (925.44s)
emotional attachment, the uh I'm a Java
[15:28] (928.32s)
guy, I'm a closure guy, I'm a
[15:31] (931.24s)
the used to be a thing like maybe 10, 20
[15:34] (934.00s)
years ago, maybe today, but not as big
[15:35] (935.44s)
as it used to. Well, I I I think people
[15:38] (938.40s)
still want to be part of something
[15:40] (940.24s)
larger. And to be fair, an emotional
[15:45] (945.32s)
connection helps me be
[15:47] (947.88s)
smarter. But I just I can't be the us
[15:51] (951.84s)
and them stuff. I'm so tired of, oh,
[15:55] (955.04s)
you're one of those Scalla people. Well,
[15:58] (958.32s)
we're programmers and we're writing
[16:00] (960.08s)
programs and we should be kind to each
[16:02] (962.32s)
other. And beyond
[16:03] (963.72s)
that, Yeah. And now that I have the
[16:06] (966.80s)
genie handling the mundane details of
[16:09] (969.44s)
it, I start projects in languages I've
[16:12] (972.00s)
never used just to see what that
[16:14] (974.80s)
language is like.
[16:16] (976.88s)
What what languages have you played
[16:18] (978.24s)
around with in the last month?
[16:20] (980.76s)
Uh, Swift, Go, a bunch in Go, Rust,
[16:26] (986.84s)
Haskell, C++, that one didn't last very
[16:31] (991.32s)
JavaScript, and and the genies are
[16:35] (995.20s)
actually good at writing small talk,
[16:36] (996.64s)
which I was like, "Oh, wow. Please, I I
[16:39] (999.60s)
hope." No, but they they write
[16:42] (1002.12s)
syntactically correct, semantically
[16:45] (1005.80s)
correct, not worse quality than other
[16:49] (1009.28s)
languages code in small talk. So what
[16:51] (1011.76s)
kind of projects have you taken on? You
[16:54] (1014.00s)
said like these are like a lot more
[16:56] (1016.36s)
ambitious than than before you would
[16:58] (1018.56s)
have attempted. Yeah. So, I the the big
[17:02] (1022.48s)
one the the one that's I'm having the
[17:04] (1024.96s)
most fun with is a
[17:07] (1027.48s)
uh a server small talk where the
[17:10] (1030.72s)
intention is to have a small talk that
[17:15] (1035.96s)
persistent uh transactional so you can
[17:19] (1039.20s)
have transactions that commit and abort.
[17:21] (1041.60s)
Um, so it's kind of
[17:23] (1043.72s)
databaseish
[17:25] (1045.48s)
parallel so that you can run a a bunch
[17:28] (1048.32s)
of threads or a on a on the same CPU or
[17:32] (1052.72s)
you can run larger grain parallelism
[17:35] (1055.76s)
across machines and operates with good
[17:39] (1059.44s)
gajillion bytes of of data. I wanted to
[17:42] (1062.80s)
circle back a little bit, you know, 20
[17:44] (1064.80s)
plus years into the past. So there's
[17:47] (1067.20s)
this thing, the manifesto for agile
[17:49] (1069.52s)
software development. I don't need to
[17:50] (1070.88s)
show this to you, but in in 2001, this
[17:52] (1072.88s)
was huge. I still remember uh when I had
[17:55] (1075.60s)
my first job around like 2008 at work,
[17:58] (1078.48s)
we would look at it, debate it, you
[18:01] (1081.12s)
know, there were like all sorts of of
[18:02] (1082.88s)
things around scrum. And one thing that
[18:05] (1085.04s)
you know is very striking is you are the
[18:08] (1088.16s)
first person on here, I guess, because
[18:10] (1090.88s)
it's alphabetical order based on it is
[18:12] (1092.56s)
alphabetical order. So, yeah, thank you
[18:14] (1094.88s)
for confirming that. But it is but it is
[18:18] (1098.00s)
beck at al to my to my neverending
[18:20] (1100.56s)
delight. How did it happen and and h how
[18:23] (1103.36s)
how were you even involved? How much
[18:24] (1104.96s)
time do you have?
[18:27] (1107.60s)
Um so there had been a series of
[18:30] (1110.96s)
workshops about the future of software
[18:35] (1115.20s)
methodology. The prevailing wisdom from
[18:39] (1119.04s)
when I was in school was entirely
[18:42] (1122.28s)
waterfall, which by the way, go read the
[18:45] (1125.60s)
original Winston Royce paper where it
[18:48] (1128.48s)
says here's a way of looking at it.
[18:50] (1130.72s)
There's this analysis and design and
[18:54] (1134.44s)
implementation and tests and uh nobody
[18:58] (1138.16s)
would ever do that. That's a stupid
[19:00] (1140.00s)
idea. Instead there would be feedback
[19:03] (1143.00s)
loops and you'd be doing all of the but
[19:06] (1146.32s)
this is a power of a metaphor. People
[19:08] (1148.88s)
looked at that oh the four steps and one
[19:12] (1152.16s)
after the other and then I'm finished.
[19:14] (1154.16s)
So that was the conventional wisdom and
[19:16] (1156.08s)
there were a bunch of us working in
[19:19] (1159.44s)
different ways on alternatives to that
[19:22] (1162.08s)
because it just it flies in the face of
[19:25] (1165.64s)
economics, humanity, information theory,
[19:29] (1169.20s)
project manage, take your pick. So there
[19:31] (1171.60s)
were a bunch of us working on on
[19:34] (1174.56s)
alternatives to that. Um
[19:38] (1178.20s)
and we would get together and talk about
[19:41] (1181.92s)
those alternatives probably for 3, four,
[19:45] (1185.76s)
five years maybe leading up to 2001. So
[19:49] (1189.68s)
that particular meeting was the
[19:51] (1191.84s)
culmination of uh of a long
[19:55] (1195.24s)
series of of these meetings. I remember
[19:57] (1197.84s)
the the first one I got invited to which
[20:00] (1200.56s)
was also at the snowboard resort in
[20:02] (1202.88s)
Utah. Martin Fa Fowler. I knew Martin's
[20:06] (1206.24s)
name, but I'd never met him. And I
[20:08] (1208.64s)
instantly fell in love with him. When he
[20:10] (1210.40s)
introduced himself, he said, "Uh, hi, my
[20:12] (1212.96s)
name is Martin Fowler, and I'm the only
[20:15] (1215.04s)
person at this table I've never heard
[20:17] (1217.56s)
[Laughter]
[20:18] (1218.92s)
of." And that began a a long and
[20:22] (1222.48s)
fruitful friendship. So, oh, we were
[20:26] (1226.24s)
talking uh at some point it was clear we
[20:30] (1230.68s)
had this scrum people, we had the
[20:33] (1233.76s)
extreme programming people, we had uh
[20:37] (1237.80s)
DSDM featured driven development. Uh
[20:41] (1241.28s)
there were all these kind of niche
[20:44] (1244.16s)
niches and we were all stirring up a lot
[20:47] (1247.04s)
of interest XP the most at that point.
[20:49] (1249.44s)
It was kind of in a crossing the chasm
[20:51] (1251.68s)
sense. If we wanted to reach the early
[20:54] (1254.28s)
majority, the innovators were already
[20:56] (1256.72s)
doing our stuff and and being very
[20:59] (1259.12s)
successful with it. But if we wanted to
[21:00] (1260.72s)
reach the early majority, again, go back
[21:03] (1263.32s)
and highly recommend uh reading Crossing
[21:07] (1267.12s)
the Chasm or at least having Claude
[21:09] (1269.28s)
explain it to you. Oh my god, Gergy,
[21:12] (1272.00s)
isn't that fun? like somebody can just
[21:14] (1274.96s)
say uh I en values blah blah blah
[21:17] (1277.76s)
instead of going you know another
[21:20] (1280.32s)
concept I I'm like hey Clyde explain IEN
[21:23] (1283.12s)
values to me as if I'm a I'm a bright
[21:25] (1285.68s)
eight-year-old and and 20 minutes later
[21:28] (1288.80s)
like oh I understand anyway so have
[21:31] (1291.68s)
Claude explain crossing the chasm if you
[21:33] (1293.52s)
don't want to go read the book the book
[21:34] (1294.80s)
is worthwhile but uh anyway we needed a
[21:38] (1298.40s)
way to reach the the early majority so
[21:40] (1300.88s)
it was time to and this is straight out
[21:43] (1303.68s)
of the book to to have a an industry
[21:46] (1306.60s)
standard, some kind of consortium. It
[21:50] (1310.16s)
makes it seem less risky. Um, and so
[21:54] (1314.00s)
that's how we we came together. We'd had
[21:56] (1316.96s)
a prep meeting for this on the
[22:00] (1320.52s)
Herten, which sails up and down the
[22:03] (1323.20s)
coast of Norway. Okay. And we we had a
[22:06] (1326.40s)
workshop in Oslo. We we flew up to the
[22:11] (1331.12s)
tip of Norway, took this uh ferry/cruise
[22:16] (1336.16s)
ship down and had uh long conversations
[22:19] (1339.60s)
on the ship and that kind of set up the
[22:23] (1343.52s)
2001 meeting. So it was in the air for
[22:28] (1348.64s)
um the switch
[22:32] (1352.28s)
from phased oriented development to
[22:36] (1356.88s)
something where there's a lot more
[22:38] (1358.72s)
feedback and a lot more switching
[22:41] (1361.28s)
between activities where you you treat
[22:44] (1364.48s)
analysis, design, implementation,
[22:47] (1367.12s)
testing, monitoring, refinement as
[22:50] (1370.88s)
activities that all happen at the same
[22:53] (1373.36s)
time or in rapid rapid rapid succession.
[22:56] (1376.96s)
So that's the big shift. It's it's this
[22:59] (1379.20s)
are the phases like this or are they
[23:01] (1381.28s)
like this? Slice slice slice slice slice
[23:03] (1383.68s)
through time. And uh so that that's
[23:07] (1387.20s)
that's how that all came about. I was
[23:10] (1390.40s)
not happy with the word agile. Oh
[23:12] (1392.40s)
really? Because it's too
[23:14] (1394.28s)
attractive. Nobody doesn't want to be
[23:17] (1397.20s)
agile. For my sins I'm a Tottenham
[23:19] (1399.52s)
Hotspurs fan from high school and you
[23:22] (1402.32s)
can't change. I understand that. But
[23:25] (1405.64s)
like I'm willing to own that to be part
[23:29] (1409.44s)
of that even though it comes with some
[23:31] (1411.76s)
significant downsides when you when you
[23:33] (1413.76s)
tell people you're a Spurs fan. Agile.
[23:36] (1416.08s)
Everybody wants to be agile. So
[23:37] (1417.84s)
everybody's going to say they're agile
[23:39] (1419.44s)
even if they're working exactly counter
[23:43] (1423.08s)
like every single one of these items.
[23:46] (1426.56s)
Yeah. I I I remember when I actually
[23:48] (1428.56s)
joined JP Morgan back in 2011, I think,
[23:53] (1433.28s)
and they were saying, "Oh, we're we're
[23:55] (1435.04s)
very agile. We we like, you know, we we
[23:57] (1437.36s)
follow the scrum and we also follow the
[23:59] (1439.08s)
manifesto." And I was like, "Okay,
[24:00] (1440.96s)
cool." Uh, and then so when I arrived,
[24:03] (1443.04s)
we had a team meeting, which was, you
[24:04] (1444.72s)
know, good, a standup. It it took for a
[24:06] (1446.40s)
long time. It took like two hours, but
[24:07] (1447.84s)
what we had. And then the next day, we
[24:09] (1449.36s)
were supposed to have one, but we're
[24:10] (1450.48s)
like, "Oh, no. It's it's canled." The
[24:12] (1452.32s)
next day, it was canceled. And for the
[24:13] (1453.76s)
next two weeks, they were always
[24:14] (1454.96s)
cancelled because it wasn't and then we
[24:17] (1457.04s)
had you know another two-hour meeting
[24:18] (1458.48s)
and I was like hold on like we're like
[24:21] (1461.04s)
even in the terms of the planning or or
[24:22] (1462.96s)
just talking we're not agile like no no
[24:24] (1464.72s)
no we are agile like we are we are we
[24:27] (1467.36s)
are hearing the feedback we're just not
[24:29] (1469.12s)
acting on and as you said like they were
[24:32] (1472.68s)
convinced and you know up up at the
[24:35] (1475.12s)
highest top at the time like the whoever
[24:37] (1477.20s)
was the head of technology they kept
[24:38] (1478.48s)
repeating how we are so agile we are so
[24:41] (1481.36s)
we are following whatever for this isn't
[24:44] (1484.64s)
I I I think they meant it by the way. I
[24:46] (1486.88s)
I don't think they knew that they were
[24:48] (1488.96s)
lying, but as you said, uh I think it's
[24:50] (1490.96s)
only now that I realize that may maybe
[24:52] (1492.56s)
you're right. Maybe that word was what
[24:54] (1494.16s)
what word might have you chosen? Uh and
[24:56] (1496.56s)
obviously this is we're not going to
[24:57] (1497.60s)
change the past, but like did you ever
[24:59] (1499.68s)
think about what might have been an
[25:00] (1500.96s)
alternative? Sure. I had I had my my
[25:04] (1504.00s)
pick at the time. So extreme was big
[25:06] (1506.96s)
pluses and minuses both to that word.
[25:10] (1510.44s)
Um, but it definitely if you don't do
[25:14] (1514.08s)
the work to be an extreme programmer,
[25:16] (1516.48s)
you're never going to claim that you
[25:18] (1518.44s)
are just it's it's it comes with too
[25:21] (1521.84s)
many downsides. At the time, and by the
[25:25] (1525.12s)
way, for that meeting, I was sick as a
[25:27] (1527.44s)
dog. I had a massive sinus infection. I
[25:30] (1530.16s)
was on all kinds of drugs. I hardly
[25:32] (1532.80s)
remember any of it. the there's one word
[25:36] (1536.24s)
in the whole manifesto that I added in
[25:39] (1539.60s)
the 12 principles. It talks about
[25:41] (1541.36s)
interacting daily. And that word daily,
[25:44] (1544.40s)
that was my word. That was my
[25:46] (1546.16s)
contribution to the whole thing. And the
[25:47] (1547.84s)
rest of it is kind of a blur. But the
[25:50] (1550.48s)
the the word I was pushing at that
[25:53] (1553.60s)
meeting was conversational where this
[25:56] (1556.48s)
isn't the monologue. It's a dialogue and
[25:58] (1558.88s)
we do some stuff and we see how it goes.
[26:00] (1560.56s)
And we do some stuff and we see how it
[26:01] (1561.92s)
goes. Oh, it's not sexy. It's not got a
[26:05] (1565.52s)
lot of pizzazz to it. Like, I understand
[26:08] (1568.00s)
why it wasn't accepted, but the dilution
[26:11] (1571.96s)
of that word agile uh was perfectly
[26:16] (1576.16s)
predictable back then.
[26:18] (1578.72s)
And then c can you tell me on how what
[26:20] (1580.40s)
what what was the reception of the
[26:22] (1582.00s)
community? So, it it sounds like it was
[26:24] (1584.24s)
pretty impressive for a few years like
[26:25] (1585.76s)
this group got together and really Yeah,
[26:28] (1588.32s)
we were clearly touching a nerve. So
[26:30] (1590.88s)
when XP really blew up in
[26:35] (1595.00s)
9989 that it was just huge. Uh can can
[26:38] (1598.88s)
we just talk about XP for listeners who
[26:41] (1601.04s)
might not be familiar with XP because it
[26:42] (1602.80s)
used to be more popular than it is now
[26:44] (1604.64s)
and you are attributed as the creator of
[26:47] (1607.60s)
it at least on on Wikipedia. So c can
[26:49] (1609.68s)
you tell us what what what XP is and h
[26:51] (1611.44s)
and how it came along and how how you
[26:53] (1613.68s)
became affiliated with it or how you
[26:55] (1615.84s)
created it? So, I'd heard this this
[26:59] (1619.20s)
advice about waterfall stuff and how,
[27:01] (1621.68s)
you know, grown-ups specify exactly what
[27:04] (1624.80s)
their system's going to do and then they
[27:06] (1626.56s)
just implement it and then that never
[27:08] (1628.32s)
works and so we should specify better
[27:10] (1630.56s)
blah blah blah. And I I disagreed with
[27:13] (1633.36s)
it. So, I started consulting and I was
[27:15] (1635.60s)
primarily a technical
[27:17] (1637.88s)
consultant. Um, I knew about performance
[27:20] (1640.48s)
tuning and you know the bit twiddling
[27:23] (1643.28s)
and that kind of at that level. Then
[27:25] (1645.92s)
people would ask me for advice about
[27:28] (1648.56s)
project management. I remember one time
[27:31] (1651.12s)
I went to a
[27:32] (1652.60s)
project. It was all the most senior
[27:35] (1655.20s)
people in the organization like the four
[27:38] (1658.40s)
most senior engineers were all working
[27:40] (1660.16s)
on this
[27:41] (1661.88s)
critical thing except that their offices
[27:44] (1664.72s)
were in the corner of this big square
[27:46] (1666.64s)
building.
[27:48] (1668.88s)
And I said, you know, I mean, we could
[27:51] (1671.84s)
talk about the performance of your
[27:53] (1673.28s)
system, but really you need to find a
[27:55] (1675.52s)
place to sit together. And I came back a
[27:58] (1678.16s)
month later and it was night and day.
[28:00] (1680.80s)
And I had just told them to rearrange
[28:02] (1682.64s)
the furniture. And I went, "Oh, okay.
[28:05] (1685.76s)
May maybe maybe my only leverage point
[28:08] (1688.72s)
isn't knowing all the bits and bites.
[28:10] (1690.64s)
Maybe maybe there's higher leverage." So
[28:13] (1693.12s)
I started thinking about more and more
[28:14] (1694.88s)
the context of development. And by the
[28:16] (1696.88s)
way, paying attention when you sit
[28:19] (1699.44s)
together of how you sit together, the
[28:22] (1702.80s)
lighting, the acoustics, the
[28:27] (1707.40s)
furniture, what kind of behaviors that
[28:29] (1709.68s)
encourages and discourages. Huge
[28:31] (1711.84s)
leverage in that and just nobody seems
[28:34] (1714.96s)
to be paying attention to it. Um anyway,
[28:38] (1718.24s)
so I was giving more and more advice
[28:40] (1720.08s)
about how projects should go. I I was
[28:42] (1722.80s)
going to go work with a project at uh
[28:46] (1726.88s)
probably shouldn't say but a fintech
[28:49] (1729.80s)
company and I knew I was going to tell
[28:52] (1732.08s)
them to write automated tests because I
[28:53] (1733.76s)
had I'd been doing all these experiments
[28:56] (1736.16s)
with automated testing as a
[28:58] (1738.76s)
programmer. I thought well how are they
[29:01] (1741.36s)
going to write the tests? So in a panic
[29:04] (1744.08s)
Sunday before I left Monday morning, I
[29:07] (1747.68s)
wrote the first uh testing framework
[29:10] (1750.80s)
that of that exunit style. Mhm. In small
[29:14] (1754.60s)
talk and handed it to them and a month
[29:17] (1757.44s)
later I went back and they said, "Okay,
[29:18] (1758.96s)
well what do you do when you've got more
[29:20] (1760.56s)
than a thousand tests?" I'm like, "Wow,
[29:23] (1763.84s)
what?" Really took off. Yeah, it it just
[29:26] (1766.16s)
took off. I'd been paying attention to
[29:28] (1768.08s)
this kind of processy stuff and uh went
[29:32] (1772.00s)
to a project at Chrysler
[29:34] (1774.80s)
uh which was floundering. Turned it
[29:37] (1777.56s)
around, but I kind of just took
[29:40] (1780.32s)
everything that I knew that worked well
[29:42] (1782.56s)
and tried to crank the knobs up to 11
[29:46] (1786.40s)
uh and then discard all the other stuff
[29:48] (1788.56s)
and just to see what happened. It
[29:50] (1790.72s)
couldn't be worse than guaranteed
[29:53] (1793.12s)
failing. So then I started talking to
[29:55] (1795.60s)
other people about well I'm doing this
[29:57] (1797.44s)
there's this project and we're doing
[29:58] (1798.88s)
this crazy stuff. We got these
[30:00] (1800.40s)
three-week iterations and we're ready to
[30:02] (1802.08s)
deploy every 3 weeks. Crazy stuff man.
[30:06] (1806.40s)
um back then and the programmers are
[30:08] (1808.88s)
writing tests and everybody's pairing
[30:10] (1810.96s)
and the the customers telling us what
[30:13] (1813.52s)
features they want next and that's what
[30:15] (1815.20s)
we're implementing every 3 weeks and D
[30:19] (1819.20s)
and I got tired of saying the in the
[30:22] (1822.72s)
style of this project I've been talking
[30:24] (1824.72s)
about that I'm so excited about that's a
[30:26] (1826.96s)
very long phrase so I'm like what do we
[30:28] (1828.64s)
call it what do we call it what do we
[30:29] (1829.84s)
call it and I wanted to pick a word and
[30:33] (1833.12s)
Grady B is a friend of mine but I can
[30:34] (1834.96s)
also pick on him a little it. I wanted
[30:37] (1837.20s)
to pick a word that Grady Buchch would
[30:40] (1840.00s)
never say that he was
[30:42] (1842.36s)
doing because that was the competition.
[30:45] (1845.04s)
Like I didn't have a marketing budget. I
[30:47] (1847.20s)
didn't have any money. I didn't have
[30:48] (1848.48s)
that kind of notoriety. I didn't have
[30:50] (1850.08s)
that corporate backing. So if I was
[30:52] (1852.72s)
going to make any kind of impact, I I
[30:55] (1855.68s)
had to be a little bit outrageous. And
[30:58] (1858.40s)
so I picked that, you know, extreme
[31:00] (1860.48s)
sports were coming up then. And I picked
[31:03] (1863.72s)
metaphor and it's actually a good
[31:05] (1865.84s)
metaphor because extreme athletes are
[31:09] (1869.52s)
the best prepared or or they're dead.
[31:12] (1872.24s)
Those are your two options or or both.
[31:17] (1877.36s)
Um, and so I picked that metaphor and
[31:20] (1880.56s)
and used it. Um, and started talking
[31:24] (1884.64s)
about it and remember 99. So the do
[31:29] (1889.68s)
thing is about to explode and
[31:31] (1891.68s)
everybody's looking when AMP was big. It
[31:34] (1894.08s)
was huge, right? It was the music the
[31:35] (1895.92s)
MP3 player
[31:38] (1898.08s)
and and the.com it webband was probably
[31:41] (1901.28s)
not even founded back then, right? No,
[31:43] (1903.44s)
no, not yet. But people looked at the
[31:45] (1905.84s)
books and the waterfall stuff and
[31:47] (1907.92s)
they're like 18 months this is all going
[31:50] (1910.80s)
to be over in 18 months. Yeah. Whatever
[31:53] (1913.68s)
are we going to do? So into that into
[31:56] (1916.24s)
that desperate yearning need here comes
[31:59] (1919.84s)
XP and says yeah it's
[32:02] (1922.28s)
okay. There's a structure to it. There's
[32:04] (1924.88s)
predictability to it. There's
[32:06] (1926.84s)
feedback. You'll get results sooner and
[32:10] (1930.80s)
longer if you work in this style. And
[32:14] (1934.48s)
because people so desperately wanted
[32:17] (1937.44s)
something kind of like that, then it
[32:20] (1940.72s)
just exploded from there.
[32:23] (1943.76s)
And then what is XP? Right? Like I I
[32:27] (1947.04s)
know there's parts of it that is pairing
[32:29] (1949.28s)
and you said getting feedback, but what
[32:31] (1951.92s)
is the elevator pitch of like, all
[32:33] (1953.28s)
right, here here's what XP is at a high
[32:35] (1955.04s)
level. Here we have figuring out what to
[32:38] (1958.36s)
do, figuring out the structure that will
[32:40] (1960.80s)
let us do
[32:41] (1961.96s)
it, implementing the features, and
[32:45] (1965.28s)
making sure they work as expected.
[32:47] (1967.36s)
That's it. That's it really. So, so now
[32:50] (1970.64s)
we're going to slice time really fine,
[32:53] (1973.12s)
and we're going to do a little bit of
[32:55] (1975.12s)
all of those activities in every slice.
[32:59] (1979.40s)
Okay. So, pairing is not mandated in XP.
[33:03] (1983.28s)
mandated is the wrong
[33:05] (1985.96s)
metaphor. He let I tell you a story
[33:08] (1988.40s)
about pairing. The first XP team, I
[33:11] (1991.20s)
said, you know, we're going to pair. I
[33:13] (1993.44s)
kind of gave them a list of the the the
[33:16] (1996.24s)
commandments, but I wasn't there all the
[33:19] (1999.48s)
time. And about six months in, they came
[33:22] (2002.80s)
back and they said, "You know what?
[33:24] (2004.64s)
We're giving our customer
[33:27] (2007.44s)
uh working software every 3 weeks." And
[33:30] (2010.16s)
every once in a while, Marie finds a
[33:33] (2013.48s)
bug. So we collected all the bugs that
[33:36] (2016.56s)
Marie found and we said, "What is there
[33:40] (2020.72s)
is there any pattern
[33:42] (2022.60s)
here?" Every single bug that was found
[33:47] (2027.48s)
postdevelopment was written by somebody
[33:49] (2029.92s)
working solo.
[33:53] (2033.20s)
Every single one. Think about the
[33:56] (2036.32s)
converse. Yep. There will be no reported
[34:00] (2040.24s)
defects from production if you pair. How
[34:03] (2043.84s)
cool is
[34:04] (2044.92s)
that? So mandated. No. Strongly
[34:09] (2049.44s)
recommended.
[34:11] (2051.04s)
Not even experiment. You do
[34:14] (2054.52s)
you. But pay
[34:16] (2056.60s)
attention. PE people will like stumble
[34:19] (2059.68s)
along. Well, this is just how I program
[34:22] (2062.08s)
and have horrible problems and just keep
[34:24] (2064.96s)
doing it and keep doing it because this
[34:26] (2066.88s)
is just how I program. Don't do that.
[34:29] (2069.76s)
Pay attention if you want the benefits
[34:34] (2074.36s)
of continuously designing or
[34:37] (2077.28s)
continuously validating or continuously
[34:40] (2080.48s)
implementing or continuously interacting
[34:43] (2083.36s)
with your customers. You can have those
[34:46] (2086.16s)
benefits, but then it's you're gonna
[34:48] (2088.32s)
have to change the way that you work. So
[34:50] (2090.24s)
is it's not mandate is just not even the
[34:52] (2092.64s)
right. It's it's an empirical process.
[34:55] (2095.52s)
Yeah. Yeah. So like some teams decided
[34:58] (2098.08s)
that this is a good way for them to
[34:59] (2099.76s)
work. Yeah. Yeah. Which is great. People
[35:02] (2102.08s)
will come up to me and say, "Oh, you
[35:03] (2103.92s)
know, I don't do TDD." I'm like, "Why do
[35:06] (2106.48s)
I care?" Like if you're happy with your
[35:09] (2109.12s)
defect density, if you're happy with the
[35:13] (2113.04s)
feedback you're getting on your design
[35:15] (2115.32s)
choices, good for you. But if you're
[35:18] (2118.00s)
unhappy and you want to tell me that,
[35:20] (2120.24s)
well, that's just how things are. Uh-uh.
[35:23] (2123.52s)
So, let's talk about TDD. How did you
[35:25] (2125.52s)
get involved in TDD or how did TDD
[35:28] (2128.00s)
evolve and where did it come from?
[35:30] (2130.64s)
Because we had XP, as I understand,
[35:32] (2132.32s)
first. No, no, no. TDD came first. LTD
[35:36] (2136.56s)
came first. So, I was a weird child.
[35:39] (2139.04s)
That'll come as a big shock to you. My
[35:40] (2140.80s)
dad was a programmer. He would bring
[35:43] (2143.36s)
home programming books. This is in the
[35:48] (2148.84s)
'7s. And I would read them cover to
[35:51] (2151.28s)
cover and I didn't understand anything,
[35:53] (2153.36s)
but I was just obsessed with this
[35:56] (2156.72s)
machine, this intricate mechanism, and
[35:59] (2159.28s)
how does it work and so on. And one of
[36:02] (2162.52s)
books said, "Here's how you develop a
[36:05] (2165.44s)
tape application." So tape to tape was
[36:08] (2168.24s)
the old way of putting business
[36:10] (2170.88s)
applications together. You you wouldn't
[36:12] (2172.88s)
have one monolithic program. You'd take
[36:15] (2175.68s)
an input tape, you'd write a program
[36:17] (2177.76s)
that transform it. Now you take the
[36:20] (2180.00s)
output tape from that, physically move
[36:22] (2182.72s)
it to the input side, run another
[36:25] (2185.68s)
program that would generate another
[36:27] (2187.12s)
tape. and and and and so there would be
[36:29] (2189.60s)
this big web of
[36:31] (2191.16s)
these of these programs. No shared
[36:34] (2194.40s)
mutable state. Wow. It's like it's very
[36:38] (2198.40s)
modern in in some kind of ways. But uh
[36:41] (2201.76s)
Okay. So said here's how you implement
[36:44] (2204.48s)
one of these things. You take an a real
[36:46] (2206.56s)
input tape and you manually type in the
[36:49] (2209.44s)
output tape that you expect to get from
[36:51] (2211.76s)
that input tape. Now you write the
[36:54] (2214.32s)
output tape. You run the program. you
[36:56] (2216.64s)
write the output tape and then you
[36:58] (2218.48s)
compare the actual output with the
[37:01] (2221.92s)
expected output. So I read that as a I
[37:04] (2224.96s)
don't know 8 10 12 year old something.
[37:08] (2228.16s)
Then I wrote SUNT as I said to help a a
[37:11] (2231.60s)
client write some tests and then just
[37:15] (2235.68s)
one of these crazy conceptual blending
[37:19] (2239.16s)
ideas. I
[37:21] (2241.08s)
went, "Oh, I have this testing
[37:24] (2244.48s)
framework. I'm used to writing tests
[37:27] (2247.44s)
uh for code that already exists. I
[37:30] (2250.00s)
remember this tape totape idea. What if
[37:31] (2251.84s)
I typed in the expected values before I
[37:34] (2254.40s)
wrote the code?" And I literally laughed
[37:36] (2256.96s)
out loud. This is such a absurd
[37:39] (2259.72s)
idea. I thought, "All right, all right.
[37:42] (2262.32s)
Well, let me just try it." So, I tried
[37:44] (2264.48s)
it on stack. And I tend to be an anxious
[37:47] (2267.60s)
person. I got a lot of
[37:49] (2269.96s)
worries and programming is a constant
[37:53] (2273.20s)
source of anxiety for me because like
[37:56] (2276.16s)
what did I forget? Oh yeah. Like what
[38:00] (2280.08s)
did I break? GH. So I I I had this
[38:05] (2285.04s)
testing framework. I had this idea. I
[38:07] (2287.52s)
applied it to stack. I said, well,
[38:09] (2289.52s)
what's the first what's the first test?
[38:11] (2291.68s)
Push and then
[38:13] (2293.00s)
pop. Whatever I pop is what I pushed.
[38:15] (2295.68s)
Okay. So I wrote it and because I was
[38:18] (2298.88s)
writing in small talk which is very
[38:20] (2300.64s)
forgiving for the order of your
[38:23] (2303.32s)
workflow. It doesn't you didn't type in
[38:25] (2305.60s)
a test a class that doesn't exist and
[38:28] (2308.24s)
it'll happily try and execute it and
[38:31] (2311.28s)
fail. But it's going to try because
[38:33] (2313.92s)
you're the programmer. Maybe you know
[38:35] (2315.36s)
better. It said well stack doesn't
[38:36] (2316.88s)
exist. I'm like oh well let's create
[38:39] (2319.76s)
stack. But you know what? I'm just going
[38:41] (2321.44s)
to create the absolute least I need.
[38:44] (2324.48s)
We're just gonna crank this all the way
[38:46] (2326.32s)
up to 11. I'm going to just going to
[38:48] (2328.08s)
create stack and I'm not going to do
[38:49] (2329.52s)
anything else. And then I get a new
[38:51] (2331.60s)
error from the test. Oh, I don't have a
[38:53] (2333.52s)
I don't have an operation push. I'm
[38:55] (2335.68s)
like, oh, okay. Well, how am I going to
[38:58] (2338.08s)
imple? Then I look at stack. I'm like,
[39:00] (2340.16s)
oh, how do I implement push? Okay, I do
[39:02] (2342.96s)
that. Oh, well, there's no operation
[39:04] (2344.88s)
called pop. Oh, okay. Let me go look at
[39:07] (2347.16s)
how finished it. I had this list of test
[39:11] (2351.44s)
cases before I started. Push and then
[39:13] (2353.20s)
pop. push two things, pop them, you get
[39:15] (2355.28s)
them in the right order, is
[39:18] (2358.28s)
empty. Pop of an empty stack throws an
[39:22] (2362.08s)
exception. Okay, cool. And I went
[39:24] (2364.80s)
through my list and I ticked the all the
[39:26] (2366.92s)
boxes. I probably came up with one or
[39:29] (2369.20s)
two corner cases along the way. I ticked
[39:31] (2371.60s)
those off, too. Now, where's the
[39:34] (2374.52s)
anxiety is is gone. This
[39:38] (2378.28s)
works. This abs like I'm certain this
[39:41] (2381.68s)
works. I can't think of another test
[39:44] (2384.28s)
case that isn't just going to pass. And
[39:48] (2388.16s)
if I'm the least bit worried, I just
[39:50] (2390.56s)
type in that next test case and then I'm
[39:52] (2392.56s)
not worried
[39:53] (2393.56s)
anymore. Oh my god, it's transformed the
[39:58] (2398.64s)
emotional experience of programming for
[40:00] (2400.48s)
me. I I I I I never heard this this take
[40:03] (2403.52s)
although like I I can relate though like
[40:05] (2405.92s)
because like I remember that when we did
[40:08] (2408.88s)
TDD and on this team we did it for the
[40:12] (2412.32s)
stuff that we're unsure it was like
[40:15] (2415.04s)
unclear and by doing the tests first we
[40:17] (2417.92s)
we had to specify we had to be clear and
[40:21] (2421.04s)
and it was stuff where there was a bunch
[40:22] (2422.72s)
of edge cases but it it never until now
[40:25] (2425.68s)
we talked it never occurred to me like
[40:28] (2428.16s)
this. Now you can also make technical
[40:30] (2430.76s)
arguments for
[40:32] (2432.52s)
TDD about defect density about how you
[40:36] (2436.96s)
get quick feedback on your API
[40:39] (2439.80s)
choices about how it
[40:42] (2442.92s)
enables implementation design evolution
[40:46] (2446.96s)
when you have a series of tests. Like I
[40:49] (2449.52s)
get yes, we can talk about all of that
[40:53] (2453.32s)
rationally, but just the savings
[40:58] (2458.44s)
on anti-anxiety meds alone pays for
[41:02] (2462.16s)
itself. This episode is brought to you
[41:04] (2464.24s)
by Augment Code. You're a professional
[41:06] (2466.56s)
software engineer. Vibes will not cut
[41:08] (2468.64s)
it. Augment Code is the AI assistant
[41:10] (2470.80s)
built for real engineering teams. It
[41:13] (2473.12s)
ingests your entire repo, millions of
[41:15] (2475.20s)
lines, tens of thousands of files. So
[41:17] (2477.36s)
every suggestion lands in context and
[41:19] (2479.28s)
keeps you in flow. With Augment's new
[41:21] (2481.60s)
remote agent, cue a parallel task like
[41:23] (2483.76s)
bug fixes, features, and refactors.
[41:26] (2486.00s)
Close your laptop and return to ready
[41:27] (2487.84s)
for review pull requests. Where other
[41:30] (2490.08s)
tools stall, Augment Code sprints.
[41:32] (2492.96s)
Augment Code never trains or sells your
[41:34] (2494.80s)
code, so your team's intellectual
[41:36] (2496.48s)
property stays yours. And you don't have
[41:38] (2498.64s)
to switch tooling. Keep using VS Code,
[41:40] (2500.80s)
Jet Brains, Android Studio, or even Vim.
[41:43] (2503.44s)
Don't hire an AI for vibes. Get the
[41:45] (2505.36s)
agent that knows you and your code base.
[41:48] (2508.16s)
Start your 14-day free trial at
[41:51] (2511.64s)
augmentcode.com/pragmatic. And and so
[41:53] (2513.36s)
what is your take when we discuss with
[41:55] (2515.20s)
John Olster how his
[41:57] (2517.48s)
biggest criticism slash feedback or let
[42:00] (2520.80s)
me put it why why why he doesn't really
[42:03] (2523.04s)
believe that it's a fit maybe for the
[42:04] (2524.72s)
things that he does is that he feels
[42:08] (2528.16s)
that from an architecture perspective it
[42:11] (2531.04s)
doesn't help you know create a nice
[42:12] (2532.88s)
architecture up front because you're now
[42:14] (2534.24s)
focusing on on the detail as I I think
[42:16] (2536.48s)
this is roughly what he summarized. I
[42:17] (2537.76s)
might have gotten it wrong. And I'm sure
[42:19] (2539.68s)
this is not the first time you've heard
[42:21] (2541.04s)
some feedback like this. It's a choice
[42:24] (2544.52s)
though. His his his statement in that in
[42:27] (2547.84s)
that interview with you was that there's
[42:30] (2550.96s)
no place in TD for design. And he's just
[42:34] (2554.48s)
flat out wrong. That's a choice. As a
[42:39] (2559.00s)
practitioner, I'm bouncing between
[42:41] (2561.68s)
levels abstraction all the time. I'm
[42:44] (2564.48s)
thinking, let's get this next essay
[42:46] (2566.48s)
running. I'm thinking why is it hard to
[42:50] (2570.00s)
get the next test case running? I'm
[42:52] (2572.48s)
thinking what should the design be so
[42:56] (2576.24s)
that getting the next test case running
[42:58] (2578.56s)
would be easier. I'm thinking when
[43:01] (2581.52s)
should I if I have an idea for that when
[43:04] (2584.48s)
should I introduce it now or later? I'm
[43:07] (2587.80s)
thinking when I introduce it what are
[43:10] (2590.72s)
the slices? Is there a little bit that I
[43:13] (2593.20s)
can do right now that will make things a
[43:15] (2595.20s)
little bit better or do I have to do
[43:16] (2596.72s)
this in bigger chunks? Like I'm thinking
[43:19] (2599.60s)
all that stuff. So if if you think of
[43:22] (2602.56s)
TDD as red to green to red to green, I
[43:28] (2608.56s)
the transition is when you go from red
[43:30] (2610.72s)
to green, you change the implementation
[43:33] (2613.20s)
and now you pass the test. And when you
[43:34] (2614.88s)
go from green to red, you write a new
[43:36] (2616.56s)
test. If that's if that's the entire
[43:38] (2618.80s)
cycle, no, there's no place in that for
[43:40] (2620.72s)
design. and it's just not how it's
[43:43] (2623.72s)
practiced. When I write a test, before I
[43:46] (2626.80s)
write the test, I have a moment of
[43:48] (2628.72s)
design. What should the API for this
[43:50] (2630.88s)
functionality mean? Yeah, I I see that.
[43:54] (2634.64s)
So, I'm making design decisions about
[43:57] (2637.12s)
about the interface
[44:00] (2640.44s)
without having an implementation. I get
[44:03] (2643.68s)
to decide what interface I want and then
[44:06] (2646.88s)
we'll work out the details of the
[44:08] (2648.56s)
implementation later.
[44:11] (2651.00s)
then making it green. Like pretty much I
[44:15] (2655.12s)
just want I hate having a red test so I
[44:18] (2658.16s)
want to get to green relatively quickly.
[44:20] (2660.72s)
At which point I have a moan of breath
[44:23] (2663.60s)
because the anxietyy's gone. Right.
[44:26] (2666.96s)
You're a musician too, right? I'm I'm
[44:29] (2669.44s)
not a musician but I have done red green
[44:31] (2671.52s)
test and I and I I know this this sense
[44:34] (2674.48s)
of tension and release. Yeah. Once the
[44:36] (2676.88s)
tension of a red test is released and
[44:40] (2680.64s)
you have green now, now I can I'm free
[44:43] (2683.68s)
to think all those thoughts of like,
[44:45] (2685.92s)
yeah, but this isn't going to work for
[44:47] (2687.68s)
these test cases. Or I could generalize
[44:50] (2690.12s)
this current implementation to also
[44:53] (2693.04s)
handle a bunch of other test cases
[44:55] (2695.20s)
correctly. Or I could rearrange the
[44:58] (2698.32s)
implementation because I know I'm going
[45:00] (2700.64s)
to have five more tests like this. And
[45:03] (2703.20s)
I'm thinking about design but in
[45:05] (2705.88s)
situ in the context of running code and
[45:09] (2709.28s)
anytime I feel the least bit anxious
[45:11] (2711.52s)
about any of this just press a button
[45:13] (2713.60s)
and it's either red or green and if it's
[45:15] (2715.44s)
red then my next job is to get it to
[45:18] (2718.16s)
green and if it's green my next job is
[45:20] (2720.40s)
to breathe and think these larger
[45:22] (2722.28s)
thoughts. So I like I can understand if
[45:26] (2726.80s)
you compress TDD to red, green, red,
[45:29] (2729.60s)
green, red, green, red, green, then no,
[45:32] (2732.48s)
there's no space in there for design,
[45:34] (2734.00s)
but that's just not how it's practiced.
[45:35] (2735.68s)
Yeah. No, I I I see like it's a tool,
[45:37] (2737.92s)
right? Then you you use it here and
[45:39] (2739.36s)
there, you step out, you go back in.
[45:41] (2741.52s)
There was a time where we did TDD,
[45:43] (2743.04s)
right, for a while, and then it just
[45:44] (2744.80s)
felt a little bit too much effort,
[45:46] (2746.08s)
especially when I knew what I was doing.
[45:47] (2747.68s)
And so what I would do like I really
[45:49] (2749.20s)
like the red and green but what I would
[45:50] (2750.64s)
do and what a lot of my colleagues would
[45:52] (2752.08s)
do is I knew the implementation that I
[45:54] (2754.80s)
wanted. So I would do the implementation
[45:56] (2756.80s)
then I would write a test. I I would
[45:58] (2758.32s)
kind of like a little bit like you know
[46:00] (2760.24s)
maybe even I might even just double
[46:01] (2761.92s)
check I don't launch it in the if it was
[46:03] (2763.76s)
a web page launch it a web page it kind
[46:05] (2765.28s)
of looked good as how I want it to be
[46:06] (2766.88s)
and then I'll be like okay let me shut
[46:08] (2768.24s)
down my brain a little bit. Let me write
[46:09] (2769.92s)
a test against this and now that test
[46:13] (2773.36s)
would have normally passed. And what I
[46:15] (2775.36s)
would do, I would either run and pass
[46:16] (2776.64s)
it, but then I would be like, "Okay, I I
[46:18] (2778.48s)
want to see it break." So I would I
[46:20] (2780.16s)
would go back to the code and I would
[46:22] (2782.48s)
maybe make a change that I knew would
[46:24] (2784.16s)
make it break. I would run the test.
[46:25] (2785.92s)
It's red. I'm like, "Okay, so because
[46:27] (2787.52s)
what I didn't want to do because I've
[46:29] (2789.12s)
seen it before is my test is not testing
[46:31] (2791.44s)
anything." So I would kind of do this
[46:33] (2793.84s)
like write a test after but then do a
[46:36] (2796.24s)
red green to make sure that I am testing
[46:38] (2798.16s)
the right thing and I would still design
[46:39] (2799.52s)
the test. But what is your take on on on
[46:42] (2802.08s)
this? Am I am I kind of doing and the
[46:44] (2804.40s)
reason I did I felt because I knew the
[46:46] (2806.40s)
implementation I just wanted to get the
[46:47] (2807.84s)
implementation done. It's already
[46:49] (2809.36s)
decided I felt I would be holding back
[46:51] (2811.04s)
if I started with the test. Yeah, that's
[46:53] (2813.68s)
a it's an interesting assumption. I can
[46:57] (2817.32s)
understand why you would make that
[46:59] (2819.52s)
assumption that you know already know
[47:01] (2821.84s)
what the implementation's going to be.
[47:03] (2823.92s)
And the writer the more correct you are
[47:06] (2826.72s)
in that assumption the less value there
[47:09] (2829.44s)
is to have the test first.
[47:12] (2832.40s)
I'm always going to
[47:14] (2834.20s)
bet when I'm going to learn things. This
[47:18] (2838.00s)
is the most ignorant I'm ever going to
[47:20] (2840.08s)
be today. Fair. I'm going to be I'm
[47:23] (2843.04s)
going to be more experienced tomorrow.
[47:24] (2844.88s)
Now, as a as a 50-year programmer, maybe
[47:27] (2847.68s)
I'll forget some of the things, but
[47:29] (2849.60s)
that's a separate set of issues. Oh,
[47:31] (2851.36s)
yeah. So, I'm I'm going to assume I'm
[47:33] (2853.60s)
going to learn, and I I'm going to
[47:35] (2855.44s)
assume that things are going to change.
[47:37] (2857.04s)
The more those assumptions hold true,
[47:38] (2858.96s)
the more I have to learn and the more
[47:41] (2861.04s)
things are going to change, the less
[47:43] (2863.60s)
commitment I want to make right now and
[47:46] (2866.72s)
the more I want to defer commitments to
[47:48] (2868.56s)
the future. Just it is a general
[47:50] (2870.16s)
principle, right? This is this is true
[47:52] (2872.52s)
about cooking
[47:54] (2874.44s)
and dating and all
[47:57] (2877.24s)
everything. The more you can predict,
[48:00] (2880.16s)
the bigger the jumps you can make. I
[48:03] (2883.52s)
always want to bet. And I love that
[48:05] (2885.52s)
moment of discovery. I love the moment
[48:07] (2887.84s)
of I knew how this was going to turn out
[48:10] (2890.48s)
and then I and then I do and I do and
[48:12] (2892.64s)
I'm like, "Oh crap, there's a completely
[48:15] (2895.84s)
different better way to implement this
[48:17] (2897.68s)
that I I love that moment and I want to
[48:22] (2902.16s)
induce that as much as possible." And
[48:24] (2904.48s)
that's not how I started out. No, I I
[48:26] (2906.72s)
started out wandering around imagining
[48:29] (2909.28s)
the whole implementation in my head. I
[48:32] (2912.32s)
can remember doing this on the
[48:33] (2913.60s)
University of Oregon campus at night,
[48:36] (2916.40s)
wandering around in the fog because it
[48:38] (2918.72s)
was always foggy when it wasn't raining,
[48:41] (2921.24s)
imagining
[48:43] (2923.00s)
big big programs in my head. And then
[48:47] (2927.12s)
then if I could just make that actually
[48:49] (2929.92s)
real and make that work, then that was
[48:52] (2932.16s)
the the process. And you were all
[48:53] (2933.92s)
dreaming of them as well. I I remember
[48:55] (2935.68s)
like I would dream with them as well
[48:57] (2937.12s)
sometimes. Yeah. Yeah. Yeah.
[48:59] (2939.48s)
And what I discovered is that's so
[49:03] (2943.04s)
limiting to
[49:04] (2944.68s)
me because it slows down my ability to
[49:08] (2948.16s)
learn. So John was talking about
[49:10] (2950.56s)
building a new network stack into Linux
[49:13] (2953.52s)
kernel. Yeah. Yeah, he's doing that.
[49:15] (2955.52s)
What a cool project. Awesome. I would
[49:18] (2958.00s)
love to get walked through that whole
[49:21] (2961.20s)
thing. If you if you had a concept in
[49:24] (2964.96s)
mind of exactly how to implement it and
[49:27] (2967.20s)
you kind of and you knew what the input
[49:28] (2968.96s)
output behavior was, you knew what you
[49:31] (2971.68s)
wanted to observe and that wasn't going
[49:33] (2973.84s)
to change, then sure, just go implement
[49:37] (2977.32s)
it. But the the more mistakes you make,
[49:40] (2980.56s)
the more learning, the more things are
[49:42] (2982.80s)
going to change in unpredictable ways,
[49:44] (2984.96s)
the less commitment you want to make now
[49:48] (2988.00s)
and the more you want to defer
[49:49] (2989.36s)
commitment to the future. Just I I like
[49:52] (2992.40s)
that. One thought I did have recently as
[49:55] (2995.84s)
as you know AI tools have come around.
[49:59] (2999.44s)
How do you think TDD might or might not
[50:02] (3002.56s)
fit into when you're working with an AI
[50:05] (3005.04s)
agent? What you're doing right now,
[50:06] (3006.72s)
right? Cuz it's interesting the ambition
[50:09] (3009.60s)
of of what you can do and also just the
[50:11] (3011.44s)
natural workflow. H how do you think
[50:13] (3013.36s)
about that? Do you think they would ever
[50:14] (3014.88s)
be a fit? Do you think we might actually
[50:16] (3016.40s)
slow down and you know like start
[50:17] (3017.92s)
writing with like well what we expect
[50:20] (3020.32s)
and have and then and then have the
[50:21] (3021.68s)
agent pass it because it could be a good
[50:23] (3023.28s)
workflow. I just haven't seen anyone
[50:24] (3024.72s)
really do it. That's how I work all the
[50:28] (3028.00s)
time. So so so when you work with your
[50:29] (3029.92s)
genie, you start with the tests. It's
[50:32] (3032.24s)
not that
[50:34] (3034.12s)
simple. I often times
[50:38] (3038.36s)
communicate things the genie missed in
[50:41] (3041.36s)
terms of tests. M. Mhm. So today I was
[50:45] (3045.20s)
working on the the small talk parser and
[50:48] (3048.96s)
it said well if I get this string as
[50:51] (3051.28s)
input then I get this syntax tree as
[50:53] (3053.84s)
output. I'm like no no no no no
[50:56] (3056.00s)
completely
[50:57] (3057.32s)
wrong. This that's the right correct
[50:59] (3059.92s)
string as input and the output is this.
[51:03] (3063.44s)
Then off it goes. Oh, I see the problem.
[51:06] (3066.64s)
Blah blah blah blah. Oh no no no that
[51:08] (3068.08s)
wasn't it. I see the problem. Blah blah
[51:10] (3070.00s)
blah blah blah blah. No that's not it. I
[51:11] (3071.84s)
see the problem. I'll just change the
[51:13] (3073.84s)
test. No, stop it.
[51:17] (3077.12s)
This is what you said that it can change
[51:18] (3078.56s)
test and delete tests.
[51:21] (3081.40s)
Oh, yeah. If I just if I just removed
[51:25] (3085.20s)
that line from the tests, then
[51:27] (3087.28s)
everything would work. No, you can't do
[51:29] (3089.36s)
that because the I'm telling you the
[51:31] (3091.44s)
expected value. I really want to I want
[51:34] (3094.16s)
a an immutable I want an immutable
[51:37] (3097.68s)
annotation that says no, no, this is
[51:40] (3100.08s)
correct. And if you ever change this,
[51:42] (3102.48s)
you'll awaken in darkness
[51:46] (3106.04s)
forever. So that's the punishment. So
[51:48] (3108.88s)
you'll still run I'll still feed
[51:51] (3111.12s)
electrons in there, but no no bits, no
[51:53] (3113.60s)
information. You're just going to be in
[51:55] (3115.12s)
the dark forever. Got that? So yeah, I
[51:58] (3118.48s)
definitely use tests. The genie is prone
[52:01] (3121.52s)
to making decisions that cause
[52:05] (3125.12s)
disruption at a distance. is not good at
[52:08] (3128.64s)
reducing coupling and increasing
[52:10] (3130.92s)
cohesion. Not not at all explicitly told
[52:14] (3134.64s)
what to do. It can sometimes implement
[52:17] (3137.28s)
it. But in general, it's just it has no
[52:20] (3140.96s)
taste, no sense of design. So I have a
[52:24] (3144.48s)
big bunch of tests. I mean, they run in
[52:26] (3146.96s)
300 milliseconds cuz
[52:29] (3149.92s)
duh. Yeah, of course you want your tests
[52:31] (3151.92s)
to run lickety split. So they those
[52:35] (3155.12s)
tests can be run all the time to catch
[52:38] (3158.48s)
the genie
[52:39] (3159.96s)
accidentally
[52:42] (3162.04s)
accidentally breaking things. I think
[52:44] (3164.56s)
he's doing on purpose. It's no that's
[52:46] (3166.64s)
why genie is the perfect metaphor is
[52:49] (3169.12s)
like yes I will grant your wish but I'm
[52:51] (3171.28s)
still pissed off about being stuck in
[52:52] (3172.88s)
this bottle in the desert for a
[52:54] (3174.88s)
millennia. And this also like strikes me
[52:57] (3177.44s)
that once we have the tools or maybe we
[52:59] (3179.44s)
have it today with MCP or some other
[53:01] (3181.04s)
things to allow this agent to run the
[53:04] (3184.08s)
test like it just feels to me that you
[53:06] (3186.72s)
know the teams or people who are doing
[53:08] (3188.32s)
these practices which are sensible and
[53:10] (3190.24s)
obviously like and you can move faster.
[53:12] (3192.16s)
In fact, you probably move faster. It
[53:13] (3193.76s)
might help you integrate these agents
[53:15] (3195.60s)
better. You know if you have the rule of
[53:16] (3196.80s)
do not change a test always run the test
[53:19] (3199.04s)
before or after you made the change and
[53:21] (3201.20s)
if it doesn't pass fix it, right? Or
[53:23] (3203.28s)
something like that. Like I I'm I'm
[53:25] (3205.44s)
still waiting for more people to
[53:26] (3206.72s)
discover this cuz I I wonder if we're
[53:28] (3208.32s)
going to go back to, you know,
[53:29] (3209.88s)
discovering, you know, things that were
[53:32] (3212.16s)
we already were popularizing or you were
[53:34] (3214.24s)
popularizing in the 2000s. People should
[53:37] (3217.80s)
experimenting. Try all the things cuz we
[53:40] (3220.40s)
just don't know. The whole landscape of
[53:42] (3222.48s)
what's cheap and what's expensive has
[53:45] (3225.12s)
all just shifted. things that we didn't
[53:47] (3227.76s)
do because we assumed they were going to
[53:49] (3229.68s)
be expensive or
[53:51] (3231.32s)
hard just got ridiculously cheap. Like
[53:54] (3234.96s)
what what would you do if I don't know
[53:56] (3236.64s)
cars were free? Okay, things are going
[53:58] (3238.48s)
to be different, but what are the second
[54:00] (3240.48s)
and third order effects? No. Like nobody
[54:03] (3243.12s)
can predict that. So we just have to be
[54:04] (3244.96s)
trying stuff. I I like that. And and
[54:07] (3247.76s)
this brings me to another interesting
[54:09] (3249.60s)
topic and story that you had. You told
[54:11] (3251.36s)
this story a long long time ago and in
[54:14] (3254.16s)
the software engineering daily podcast
[54:15] (3255.92s)
but I don't think anyone's heard it
[54:17] (3257.92s)
about how
[54:19] (3259.48s)
experimenting can be interesting. So
[54:21] (3261.60s)
when you joined Facebook was it in 2011
[54:27] (3267.40s)
2011 that that was the peak where TDD
[54:30] (3270.48s)
was very well known in the industry.
[54:32] (3272.08s)
That was around the time where my team
[54:33] (3273.76s)
experimented with it and as far as I
[54:35] (3275.52s)
know whenever I talk with people on
[54:36] (3276.88s)
meetups people are trying it doing it.
[54:38] (3278.80s)
And it was kind of accepted that you
[54:40] (3280.80s)
should be doing some level of unit
[54:42] (3282.16s)
testing, maybe TDD, maybe not TDD. And
[54:45] (3285.52s)
you shared the story that you joined
[54:46] (3286.88s)
Facebook and then you you wanted to, you
[54:49] (3289.20s)
know, hold a class on on TDD and like
[54:51] (3291.84s)
what happened and how was Facebook doing
[54:53] (3293.60s)
their own testing actually? Did they use
[54:55] (3295.28s)
TDD or did they do something else? Yes.
[54:58] (3298.00s)
So I joined and I was a little panicked
[55:01] (3301.96s)
like hugely successful growing fast a
[55:07] (3307.20s)
lot of very smart very confident
[55:10] (3310.24s)
engineers you know have I lost it can I
[55:14] (3314.44s)
hang I thought I'll teach a class in TDD
[55:17] (3317.84s)
so the there was a a hackathon and part
[55:21] (3321.20s)
of the hackathon is people could offer
[55:23] (3323.04s)
classes and so I offered a class on
[55:25] (3325.64s)
TDD and in the signup sheet I went and
[55:28] (3328.24s)
looked later. Yes, indeed. There was a
[55:30] (3330.56s)
class on advanced Excel techniques that
[55:33] (3333.68s)
was full and they had a waiting list.
[55:36] (3336.72s)
And there was a class on Argentinian
[55:39] (3339.40s)
tango right after mine on the list and
[55:42] (3342.16s)
it was full and they had a waiting list
[55:45] (3345.12s)
and nobody signed up for the TDD class.
[55:48] (3348.48s)
Wow. And I I said, you know, you know
[55:51] (3351.84s)
what? I'm going to have to forget
[55:53] (3353.52s)
everything I know about software
[55:55] (3355.12s)
engineering. I'm just going to wipe the
[55:56] (3356.96s)
slate clean and I'm gonna
[55:59] (3359.88s)
just monkey see, monkey do. I'm going to
[56:02] (3362.96s)
copy what I see the people around me
[56:04] (3364.56s)
doing and I'm going to see how that
[56:05] (3365.92s)
works out. And what I discovered through
[56:08] (3368.96s)
that process,
[56:11] (3371.24s)
one ju socially, it's not a good look to
[56:15] (3375.28s)
come into somebody else's house and
[56:16] (3376.88s)
start arranging the furniture. Just
[56:19] (3379.28s)
don't don't do that. But two, I learned
[56:23] (3383.20s)
powerful lessons. programmers at
[56:26] (3386.64s)
Facebook at the time. I'm not going to
[56:28] (3388.16s)
say Meta, I'm gonna say Facebook and
[56:30] (3390.96s)
Facebook at that time because it was a
[56:32] (3392.64s)
very different place when I left in
[56:34] (3394.92s)
2017. But in 2011, programmers took
[56:39] (3399.36s)
responsibility for their code very
[56:41] (3401.64s)
seriously because they were the ones who
[56:43] (3403.60s)
were going to get woken in the night.
[56:46] (3406.24s)
And there there there was an ops team,
[56:49] (3409.20s)
but the job of the ops team was to make
[56:51] (3411.84s)
sure the programmers felt the pain of
[56:53] (3413.92s)
their own mistakes. And they did. And
[56:56] (3416.00s)
and it was very effective. As a
[56:58] (3418.72s)
programmer on Facebook the site, this is
[57:02] (3422.12s)
premobile Facebook the site. You had a
[57:05] (3425.52s)
bunch of different feedback loops. So we
[57:09] (3429.12s)
were working in PHP. We had our own dev
[57:11] (3431.36s)
servers. So if I wanted to change from
[57:13] (3433.44s)
blue to green, I'd just change it. I
[57:16] (3436.16s)
could look at it seconds later. So we
[57:18] (3438.24s)
had that feedback
[57:19] (3439.64s)
loop. We had code
[57:22] (3442.76s)
reviews. Kind of iffy,
[57:25] (3445.80s)
but you got some feedback from code
[57:28] (3448.72s)
reviews. We had internal deployment
[57:30] (3450.88s)
because everybody was using Facebook all
[57:32] (3452.96s)
the time for both personal and business
[57:35] (3455.40s)
stuff, which is it own set of boundary
[57:37] (3457.84s)
issues, but we'll leave that one to the
[57:39] (3459.92s)
side. We had incremental rollouts, not
[57:44] (3464.64s)
like weekly rollouts. We had smaller
[57:47] (3467.92s)
daily rollouts, but we had weekly
[57:50] (3470.92s)
rollouts and then a bunch of
[57:53] (3473.72s)
observability. And then we had a social
[57:57] (3477.80s)
organization that was used to, for
[58:01] (3481.72s)
example, the first feature I I
[58:04] (3484.72s)
implemented and launched was adding to
[58:07] (3487.20s)
the relationships type types. You could
[58:09] (3489.36s)
say I'm single. It's complicated. I'm
[58:11] (3491.60s)
married. And I added civil union and
[58:14] (3494.08s)
domestic partnership to that list. And
[58:16] (3496.48s)
it rolled out. It took me too long to do
[58:18] (3498.40s)
it. I I used
[58:20] (3500.12s)
TDD. Was a big waste of
[58:23] (3503.88s)
time. It rolled out. The notifications
[58:27] (3507.36s)
code broke because there was implicit
[58:30] (3510.16s)
coupling between the two and you
[58:32] (3512.08s)
couldn't find it, but it was there.
[58:34] (3514.40s)
Somebody
[58:35] (3515.56s)
else saw the error rate go up, went and
[58:39] (3519.04s)
fixed it, rolled out a hot fix. I called
[58:42] (3522.32s)
them up. I'm like, "Oh, I'm so sorry I
[58:45] (3525.12s)
that you had to do that." It's like,
[58:46] (3526.96s)
"Yeah, that's what happens, you know,
[58:49] (3529.52s)
when things break socially." There was
[58:52] (3532.56s)
no there was no boundaries. There was a
[58:55] (3535.28s)
there was a a poster that was very
[58:57] (3537.48s)
popular there that said nothing at
[58:59] (3539.76s)
Facebook is somebody else's problem. and
[59:01] (3541.92s)
everybody acted like that was true. And
[59:04] (3544.48s)
because of that,
[59:06] (3546.48s)
if you add all those different feedback
[59:10] (3550.12s)
together, we had a relatively stable,
[59:16] (3556.00s)
rapidly innovating and rapidly scaling
[59:19] (3559.44s)
system all at the same time. the
[59:21] (3561.52s)
mistakes that actually caused
[59:23] (3563.88s)
problems like the calculation of a some
[59:26] (3566.80s)
string was not a not a hairy computer
[59:29] (3569.68s)
science dynamic programming blah blah
[59:32] (3572.48s)
blah that could go wrong. What would go
[59:35] (3575.52s)
wrong is configuration
[59:37] (3577.80s)
stuff, the relationship between
[59:41] (3581.18s)
[Music]
[59:42] (3582.44s)
subsystems, stuff you couldn't write
[59:44] (3584.32s)
tests for. So writing tests for things
[59:46] (3586.24s)
that didn't break and didn't catch the
[59:50] (3590.56s)
actual errors, it just didn't make any
[59:52] (3592.40s)
sense in that kind of environment with
[59:56] (3596.16s)
that risk
[59:57] (3597.64s)
profile. Yeah. It didn't make sense.
[60:00] (3600.24s)
Yeah. And I guess the the context that
[60:01] (3601.92s)
I've heard and you know like correct me
[60:03] (3603.84s)
if I'm wrong that Facebook had this
[60:06] (3606.00s)
super very unique place which even is
[60:08] (3608.72s)
very rare today. They had so many users
[60:11] (3611.12s)
and code was rolling out live code. uh
[60:14] (3614.24s)
the the websites rolling out to so many
[60:16] (3616.24s)
of them that and you they had such great
[60:19] (3619.20s)
observability. They still have that you
[60:21] (3621.76s)
had live mass testing and you could
[60:24] (3624.72s)
detect a lot of the things that you
[60:26] (3626.56s)
cared about because you measured them.
[60:28] (3628.16s)
You had this layer and and this is what
[60:29] (3629.76s)
I think a lot of people miss of like oh
[60:31] (3631.28s)
we can we can operate like Facebook. I I
[60:32] (3632.96s)
mean you probably can if you have this
[60:34] (3634.48s)
level of customers or or this
[60:36] (3636.56s)
observability but if you're like a bank
[60:38] (3638.32s)
where you have 10 customers like at JP
[60:40] (3640.16s)
Morgan again the software I wrote was
[60:42] (3642.32s)
used by seven traders and they moved
[60:44] (3644.40s)
about a million or two or three million
[60:46] (3646.48s)
with each transaction and they did five
[60:48] (3648.48s)
of those per day suddenly you know like
[60:50] (3650.64s)
I had like 35 transactions and if let's
[60:53] (3653.36s)
just AB test that yeah
[60:56] (3656.20s)
well so so there there's a your
[60:59] (3659.12s)
opportunities that that you had and and
[61:00] (3660.96s)
Facebook there were not many sites at
[61:03] (3663.44s)
the time that did that and even the ones
[61:04] (3664.88s)
that had that many customers they might
[61:06] (3666.48s)
have not had the this setup of of
[61:10] (3670.04s)
observability stage rolls and so on even
[61:12] (3672.88s)
today I think that they're the Facebook
[61:15] (3675.04s)
specifically not meta but Facebook as I
[61:16] (3676.88s)
understand is still the state-of-the-art
[61:18] (3678.72s)
globally in terms of how they have they
[61:20] (3680.48s)
now have multiple environments automated
[61:22] (3682.64s)
roll backs if something degrades you
[61:24] (3684.80s)
don't even need to look at it like your
[61:26] (3686.64s)
you know colleague did that
[61:29] (3689.52s)
yeah uh feature flags is Another
[61:31] (3691.84s)
important part of that and it was a
[61:33] (3693.52s)
lesson I had to learn and and one where
[61:36] (3696.96s)
code review really helped me. I'd
[61:38] (3698.40s)
written some
[61:39] (3699.64s)
code. I thought it looked fine. Somebody
[61:43] (3703.28s)
said this looks a little janky. Put it
[61:45] (3705.20s)
behind a feature flag. I'm like really
[61:47] (3707.80s)
what? Okay. Okay. I you know and I was
[61:50] (3710.64s)
in that headsp space of I'm going I'm
[61:52] (3712.64s)
here to learn. If feature flags is what
[61:55] (3715.20s)
we do then feature flag it. And I did.
[61:59] (3719.44s)
And then I realized, oh, how liberating
[62:02] (3722.72s)
that is as an implementer. Who is going
[62:05] (3725.20s)
to be responsible? If you're not going
[62:07] (3727.36s)
to be responsible, who cares? Like I But
[62:11] (3731.48s)
also, talk about anxiety. If I'm if I'm
[62:14] (3734.96s)
not the responsible person, that feels
[62:17] (3737.16s)
horrible. But if you're going to be
[62:19] (3739.20s)
responsible for whether this code works
[62:21] (3741.20s)
or not, having a feature flag is just
[62:23] (3743.32s)
magic because you get this
[62:26] (3746.92s)
subdeployment deployment.
[62:29] (3749.76s)
You deploy one software artifact that
[62:32] (3752.88s)
has multiple modes and you can go, "Oh,
[62:35] (3755.04s)
turn it up a little bit and
[62:37] (3757.32s)
whoopsie, turn it down, let's figure out
[62:40] (3760.40s)
what just went wrong." I worked on the
[62:42] (3762.64s)
messenger back end for a while and we
[62:46] (3766.16s)
would do that, you know, and yeah,
[62:48] (3768.16s)
you've got We had one API that was
[62:50] (3770.64s)
getting called a quadrillion times a
[62:53] (3773.36s)
week. Wait, how much is a quadrillion?
[62:56] (3776.32s)
Million billion. Yeah. So, a million
[62:58] (3778.72s)
billion. So, like Wow. So, like not not
[63:02] (3782.00s)
a thousand bill. Okay. Wow. A billion, a
[63:04] (3784.72s)
trillion, a quadrillion. Okay. Like I
[63:07] (3787.76s)
I'm used to high numbers, but this is
[63:10] (3790.48s)
Oh, people would come people would come
[63:12] (3792.64s)
to Facebook and they're like, "Oh, yeah.
[63:14] (3794.48s)
I want to I want to do some user
[63:16] (3796.16s)
testing. I want to get a I want to get a
[63:18] (3798.08s)
a hundred people in my experimental
[63:20] (3800.00s)
group." And I'm like, "Dude, wrong. Your
[63:24] (3804.56s)
exper your experimental group is going
[63:26] (3806.56s)
to be like New Zealand. Oh yeah. I I
[63:29] (3809.60s)
I've heard this a lot from Facebook
[63:31] (3811.04s)
people. It was a perfect perfect
[63:33] (3813.20s)
experiment, you know, like only like a
[63:35] (3815.12s)
million people. Yeah. Yeah. English
[63:38] (3818.00s)
speaking. So localization is there. Time
[63:40] (3820.16s)
zone wise it's pretty good. That's
[63:42] (3822.48s)
right. And also Portugal and and some
[63:45] (3825.20s)
other maybe maybe not a Facebook but
[63:47] (3827.44s)
yeah also a popular one. Relatively
[63:49] (3829.68s)
small size but you know real real
[63:51] (3831.28s)
testing could happen there. So you you
[63:53] (3833.60s)
worked for six years at at Facebook.
[63:55] (3835.28s)
What is the thing that you and this was
[63:57] (3837.20s)
a a really exciting time where it was
[63:58] (3838.72s)
fast growth? We could probably
[64:00] (3840.16s)
comparable growth to what is happening
[64:01] (3841.84s)
right now with some of the hottest AI
[64:03] (3843.68s)
startups and it was also mobile
[64:05] (3845.20s)
revolution and so you were you were
[64:07] (3847.20s)
there. What was the thing that you liked
[64:09] (3849.12s)
the most about working there and and
[64:11] (3851.20s)
maybe the one that you kind of disliked
[64:12] (3852.96s)
the most or or or didn't didn't really
[64:16] (3856.64s)
you know get along with? Facebook 2011
[64:19] (3859.20s)
is a completely different beast than
[64:21] (3861.04s)
Facebook 2017. Facebook 2011 2,000
[64:24] (3864.72s)
employees very
[64:26] (3866.76s)
sparse design and product kind of
[64:31] (3871.88s)
organization. It was just all
[64:34] (3874.24s)
experiments and feedback. One of my one
[64:36] (3876.80s)
of my big mysteries is here was this
[64:41] (3881.64s)
which enabled social interactions at
[64:44] (3884.88s)
that time. That was the purpose of it.
[64:47] (3887.76s)
Built by people with no social skills
[64:50] (3890.44s)
whatsoever. Like h how in the world did
[64:53] (3893.04s)
that happen? Is there some kind of is
[64:55] (3895.44s)
there some kind of social wizard, you
[64:57] (3897.44s)
know, hidden someplace and people go and
[65:00] (3900.08s)
they burn incense and give an offering
[65:02] (3902.08s)
and the social wizard says, "No, here's
[65:04] (3904.40s)
how you do notifications." The answer is
[65:06] (3906.88s)
no. It was a sheer
[65:11] (3911.32s)
experimentation. It was
[65:13] (3913.32s)
just all these people trying all this
[65:16] (3916.28s)
stuff and the stuff that worked stuck.
[65:19] (3919.92s)
So, it wasn't like people were making
[65:21] (3921.52s)
better decisions about how social
[65:23] (3923.44s)
interactions are best facilitated. They
[65:25] (3925.60s)
were making random decisions about how
[65:28] (3928.64s)
social interactions were best
[65:30] (3930.60s)
facilitated and paying attention and
[65:34] (3934.08s)
making sure that the ones that actually
[65:36] (3936.24s)
seemed to work stuck. 2017, Facebook, 7
[65:41] (3941.08s)
later, totally different deal. big
[65:44] (3944.32s)
design org, big product
[65:47] (3947.40s)
org, like 15,000 employees, which is
[65:51] (3951.36s)
again much smaller than it is today. a
[65:53] (3953.84s)
lot more politics, a lot
[65:55] (3955.88s)
more uh zero sum thinking, a lot
[65:59] (3959.48s)
more, you know, if you wanted to launch
[66:01] (3961.76s)
a product and it was going
[66:05] (3965.40s)
to say I liked longer form content,
[66:09] (3969.68s)
essays, podcasts, whatever. Except the
[66:13] (3973.76s)
people whose job it was to get more
[66:16] (3976.56s)
likes and comments hated long form
[66:20] (3980.00s)
content because it was going to tank
[66:21] (3981.52s)
their numbers. So they would fight tooth
[66:23] (3983.84s)
and nail to make sure that your stuff
[66:26] (3986.08s)
didn't show up in the newsfeed,
[66:28] (3988.80s)
which like granted that was in their
[66:32] (3992.48s)
best interest, but
[66:35] (3995.76s)
yeah, I see what you mean. What you
[66:37] (3997.44s)
know, it's short shortterm interest.
[66:39] (3999.84s)
Yeah. And and your your horizon as a
[66:42] (4002.96s)
thinker, the things you can imagine
[66:45] (4005.28s)
possibly implementing just gets smaller
[66:48] (4008.16s)
and smaller and smaller in that kind of
[66:50] (4010.44s)
world. when I when I showed up. Yeah,
[66:54] (4014.16s)
you could do anything. Now, it turns out
[66:56] (4016.00s)
there's a bunch of stuff that you
[66:57] (4017.12s)
shouldn't do, but we didn't know that.
[66:59] (4019.36s)
Sorry about democracy. But, um, yeah,
[67:02] (4022.96s)
that's that's what I loved about it was
[67:07] (4027.72s)
possibilities at its best, the
[67:11] (4031.88s)
scale, and the this feeling that nothing
[67:15] (4035.76s)
at Facebook is somebody else's problem.
[67:18] (4038.24s)
also daunting because when so you're
[67:21] (4041.92s)
wearing Facebook swag and grandma comes
[67:24] (4044.96s)
up to you and
[67:26] (4046.44s)
starts wagging her finger under your
[67:29] (4049.12s)
nose cuz her son blah blah blah got
[67:32] (4052.16s)
bullied whatever like that is your
[67:35] (4055.04s)
problem you can't say oh go talk to the
[67:37] (4057.44s)
PR department because there isn't one
[67:38] (4058.96s)
yet you have to you have to deal with it
[67:41] (4061.92s)
and I did yeah ownership yeah and you
[67:44] (4064.88s)
know it comes with some downsides but it
[67:46] (4066.48s)
comes with a lot of upsides too. It
[67:48] (4068.56s)
feels really good. Feels very
[67:50] (4070.00s)
significant to be in that kind of
[67:51] (4071.52s)
environment. By the time I left, yeah,
[67:54] (4074.48s)
it was micro optimizations were
[67:57] (4077.48s)
everywhere. The
[68:00] (4080.04s)
upside Yeah. was was not not there. When
[68:05] (4085.04s)
I got there, the middle managers, best
[68:07] (4087.28s)
middle managers I'd ever seen in my
[68:09] (4089.52s)
career. Well, everybody who'd made it to
[68:12] (4092.72s)
middle management, Facebook in 2011 was
[68:15] (4095.36s)
sitting on
[68:17] (4097.24s)
life-changing
[68:19] (4099.00s)
equity. They were
[68:21] (4101.08s)
all they h if they had if Facebook had a
[68:24] (4104.64s)
successful IPO, they were all set for
[68:26] (4106.56s)
life. And if Facebook the whole thing
[68:30] (4110.32s)
stumbled and fell for whatever reason,
[68:33] (4113.12s)
they lost that opportunity to be set for
[68:35] (4115.68s)
life. So, they were globally optimizing.
[68:39] (4119.28s)
you you'd talk to a team and they'd say,
[68:42] (4122.16s)
"God, I would love to have you on my
[68:43] (4123.84s)
team. You know who really needs help,
[68:46] (4126.00s)
though?" So So they were like looking
[68:48] (4128.24s)
out like the team interest, the company
[68:50] (4130.16s)
interest was was on everyone's mind and
[68:53] (4133.04s)
they were willing to forego, you know,
[68:54] (4134.72s)
like okay, I'll I'll hold back hiring or
[68:57] (4137.04s)
I'll wait like I'd love to have this
[68:58] (4138.72s)
person, but this other team needs let me
[69:00] (4140.80s)
help them because this is the right
[69:02] (4142.00s)
thing to do for the company as a whole.
[69:04] (4144.48s)
Yeah. Yeah.
[69:06] (4146.08s)
And it's not like they were better human
[69:08] (4148.32s)
beings than other human beings, but
[69:10] (4150.08s)
their incentives were sure aligned with
[69:12] (4152.16s)
that. And just being for me who has a
[69:15] (4155.68s)
really hard time understanding other
[69:17] (4157.84s)
human beings to be around the that kind
[69:22] (4162.84s)
alignment that just enables a ton of
[69:25] (4165.72s)
creativity, ton of energy for me. I
[69:28] (4168.56s)
think more and better
[69:30] (4170.44s)
thoughts and I had a hard time operating
[69:34] (4174.08s)
in the environment. I can piss people
[69:36] (4176.48s)
off. I don't know if you've noticed
[69:37] (4177.84s)
this, but uh and and I did my share of
[69:41] (4181.36s)
that while I was there, but still the
[69:43] (4183.68s)
opportunity was there for me to
[69:46] (4186.76s)
fumble and and
[69:49] (4189.48s)
I I I can live with that. Yeah. And I
[69:53] (4193.12s)
guess by the way like I think this might
[69:54] (4194.32s)
be a reason why startups remain
[69:57] (4197.04s)
attractive and you know big tech you
[69:58] (4198.80s)
know now Facebook is big tech they used
[70:00] (4200.40s)
to be a startup but along with like with
[70:02] (4202.96s)
all the other big companies you know
[70:04] (4204.72s)
they pay well they have a brand they
[70:06] (4206.48s)
they they give you your that resume
[70:08] (4208.48s)
boost it's easier to get hired
[70:09] (4209.84s)
afterwards but in the end you know like
[70:12] (4212.32s)
there will be teams inside of them but a
[70:14] (4214.08s)
lot of them will be you know everyone's
[70:15] (4215.92s)
optimizing for their own thing your
[70:17] (4217.44s)
equity is mostly cash and it's
[70:19] (4219.76s)
meaningless in terms of the the bigger
[70:21] (4221.60s)
picture Whereas at a startup like at
[70:23] (4223.68s)
Uber, I remember before the IPO, we we
[70:25] (4225.92s)
used to think like that. We what is the
[70:27] (4227.44s)
best thing for Uber because we were very
[70:29] (4229.60s)
much we felt like we were like big
[70:32] (4232.80s)
enough owners. So I think this is why
[70:34] (4234.96s)
like maybe this is a good thing that
[70:36] (4236.56s)
startups will always be able to have
[70:38] (4238.56s)
this this added thing when you're
[70:40] (4240.48s)
starting out. You know, people have
[70:41] (4241.76s)
large equity. Even when you're up to
[70:43] (4243.92s)
like 100 people, it might still be
[70:45] (4245.60s)
significant enough. And this is maybe
[70:48] (4248.00s)
it's not a bad thing that it's hard to
[70:49] (4249.28s)
compete with it because imagine if
[70:50] (4250.64s)
imagine if these big companies could I
[70:52] (4252.08s)
mean what would be left of it right like
[70:53] (4253.52s)
they they would optimize the last thing
[70:55] (4255.44s)
out of everything and at least they have
[70:57] (4257.36s)
to spend more money on it now right
[71:00] (4260.12s)
yeah yeah which is more of the value
[71:03] (4263.04s)
that's created come comes back to the
[71:05] (4265.20s)
people who are creating it I I I once
[71:07] (4267.20s)
talked with with a person at a large
[71:09] (4269.20s)
company I don't want to name them uh
[71:11] (4271.20s)
they're they're the travel company
[71:12] (4272.88s)
though and I was talking with this
[71:14] (4274.48s)
person and and about like something
[71:17] (4277.52s)
principal engineer and I was like oh
[71:21] (4281.28s)
how's the job? It's like oh it's
[71:22] (4282.64s)
absolute mess. It's the the the monolith
[71:25] (4285.68s)
is still there after four years it's
[71:27] (4287.52s)
another like four years to disassemble
[71:29] (4289.76s)
the experiment like we have experiments
[71:32] (4292.32s)
everywhere but they're all messy. I was
[71:33] (4293.92s)
like oh wow that sounds like like not a
[71:36] (4296.08s)
great place. He's like but he's like
[71:37] (4297.36s)
look look the upside is like I my job is
[71:39] (4299.52s)
secure for five more years and this is
[71:41] (4301.12s)
why I pay me the big bucks so I'm not
[71:43] (4303.08s)
complaining. And it was, you know, like
[71:46] (4306.08s)
I guess a good take of like yes, like
[71:48] (4308.32s)
I'm not saying you want to optimize for
[71:49] (4309.84s)
this, but this is the reality. And that
[71:51] (4311.44s)
company understood, you know, they they
[71:52] (4312.72s)
pay well, they relocated this person,
[71:54] (4314.88s)
you know, all all all the benefits. And
[71:56] (4316.88s)
he actually had his challenges set
[71:58] (4318.64s)
actually for the next four years because
[72:00] (4320.64s)
he's worked at this environment. You
[72:01] (4321.92s)
know, it's just one thing after the
[72:03] (4323.52s)
other. So yeah, it's it's one of these
[72:06] (4326.16s)
things. So as closing I just have some
[72:08] (4328.16s)
rapid questions which is like I'll just
[72:09] (4329.76s)
I'll just fire and and then you tell me
[72:11] (4331.76s)
what is your favorite programming
[72:13] (4333.52s)
language although you answered this
[72:14] (4334.72s)
already. What is your second favorite
[72:16] (4336.24s)
one after small talk? JavaScript.
[72:19] (4339.20s)
JavaScript. Okay. Why?
[72:22] (4342.88s)
It's uh it's just small talk.
[72:25] (4345.88s)
Okay. What what is your favorite AI tool
[72:29] (4349.36s)
that you're using right now?
[72:31] (4351.92s)
Claude. I use it for all kinds of
[72:34] (4354.32s)
different things. claw code as well or
[72:36] (4356.16s)
just claude cla code under the covers of
[72:39] (4359.76s)
cursor or a
[72:41] (4361.96s)
augment or uh I don't I don't know what
[72:45] (4365.84s)
rude code is no so so clot code is is a
[72:48] (4368.48s)
different it's a command line tool that
[72:49] (4369.92s)
clot has it's an agent of of itself like
[72:53] (4373.68s)
it's not the model we have not used but
[72:55] (4375.36s)
we'll try it now afterwards you should
[72:58] (4378.08s)
try it and just see how it compares yeah
[73:00] (4380.00s)
I'll wait till we're done talking though
[73:01] (4381.95s)
[Laughter]
[73:03] (4383.24s)
and And what what is a book that you
[73:06] (4386.48s)
would recommend? The one that you have
[73:08] (4388.00s)
not written. We we know your books. The
[73:10] (4390.16s)
one one that you enjoyed. It can be
[73:11] (4391.44s)
fiction. It can be non-fiction. Uh the
[73:13] (4393.84s)
timeless way of building by Christopher
[73:15] (4395.76s)
Alexander. Nice. Well, well, Kent, this
[73:17] (4397.84s)
was a lot of fun. It was it was good to
[73:20] (4400.08s)
reconnect. Oh, yes. Great talking to
[73:22] (4402.72s)
you. Thanks. I'm just happy to see how
[73:24] (4404.72s)
much energy how energized you are with
[73:27] (4407.20s)
with with coding because I think when I
[73:29] (4409.20s)
started with with some of these tools, I
[73:30] (4410.80s)
was like, a bit of a dread like, oh,
[73:32] (4412.56s)
what's what they're going to do, etc.
[73:33] (4413.76s)
But the more I use them, I'm I'm not at
[73:35] (4415.60s)
your level yet. The more I'm also like,
[73:37] (4417.44s)
this is actually it it does bring a lot
[73:39] (4419.44s)
of fun and joy back into them. Just more
[73:41] (4421.28s)
ambition. Yep. Yeah. Yeah. And I think
[73:44] (4424.16s)
that organizations are going to have to
[73:45] (4425.68s)
get used to throwing away a lot more
[73:47] (4427.44s)
code because you can try ideas so much
[73:49] (4429.92s)
more cheaply. You're go you're going to
[73:52] (4432.44s)
generate 10 times as many artifacts as
[73:55] (4435.28s)
you used to, but still only one of them
[73:58] (4438.76s)
is worth keeping. But throwing away
[74:04] (4444.64s)
uh completed experiments. I almost said
[74:06] (4446.96s)
failed completed experiments
[74:10] (4450.40s)
uh needs to be you get the pat on the
[74:13] (4453.04s)
head for doing that. Excellent. Eight.
[74:16] (4456.00s)
Eight this week only six late last week.
[74:18] (4458.80s)
Super. How many of them lasted? Doesn't
[74:22] (4462.12s)
matter. And getting used to that I think
[74:24] (4464.96s)
is going to be an interesting shift. the
[74:28] (4468.16s)
companies that have the opportunity that
[74:32] (4472.64s)
can be explored in that way. If you get
[74:35] (4475.12s)
used to
[74:36] (4476.80s)
uh just quantity of ideas
[74:39] (4479.32s)
explored, you're going to have a huge
[74:41] (4481.44s)
huge advantage.
[74:43] (4483.76s)
Yeah, it's there's going to be changes.
[74:45] (4485.28s)
So, this will be an excite exciting
[74:47] (4487.12s)
place to see. So, it was great chatting
[74:49] (4489.12s)
and we we'll see where this goes. All
[74:50] (4490.56s)
right, Gery. So, good to talk to you
[74:52] (4492.32s)
again. Look forward to talking to you
[74:54] (4494.24s)
again soon. I really enjoyed catching up
[74:56] (4496.08s)
with Kent and it's motivating to see how
[74:58] (4498.00s)
these AI tools can make programming so
[75:00] (4500.24s)
much more fun, even for someone who's
[75:01] (4501.84s)
been a coder for decades. You can read
[75:03] (4503.92s)
more about what Kent is up to on his
[75:05] (4505.76s)
regular newsletter, Tidy First, which is
[75:07] (4507.92s)
linked in the show notes below. For a
[75:09] (4509.60s)
deep dive into Facebook's engineering
[75:11] (4511.12s)
culture and an analysis on why scrum and
[75:13] (4513.60s)
agile with a capital A is no longer that
[75:16] (4516.00s)
relevant at big tech at scaleups, check
[75:17] (4517.68s)
out the pragmatic engineer deep dives,
[75:19] (4519.36s)
also linked in the show notes below. If
[75:21] (4521.20s)
you enjoy this podcast, please do
[75:22] (4522.56s)
subscribe on your favorite podcast
[75:23] (4523.92s)
platform and on YouTube. This helps more
[75:26] (4526.32s)
people discover the podcast and a
[75:28] (4528.08s)
special thank you if you leave a rating.
[75:30] (4530.00s)
Thanks and see you in the next