[00:00] (0.08s)
Right now there's a huge explosion of AI
[00:02] (2.16s)
and LLM tools. How might this change
[00:04] (4.48s)
software engineering? There are things
[00:06] (6.56s)
that seem relatively clear and things
[00:08] (8.32s)
that seem less clear. I think on the
[00:09] (9.76s)
relatively clear side is that the AI
[00:11] (11.60s)
tools will make it easier to churn out
[00:13] (13.60s)
low-level code, as you call it,
[00:15] (15.04s)
autocomplete. Autocomplete will get
[00:16] (16.64s)
better and better. Could actually be
[00:18] (18.08s)
reasonably high quality code that gets
[00:19] (19.84s)
turned out. So it seems like that will
[00:21] (21.52s)
probably happen. Then the big question
[00:23] (23.28s)
is to what degree can the AI tools
[00:25] (25.84s)
actually replace the higher level design
[00:28] (28.00s)
tasks? I don't know the answer to that.
[00:30] (30.16s)
So far, I don't see anything in the
[00:32] (32.72s)
current tools that makes me think
[00:34] (34.88s)
they're going to do that. But what's
[00:36] (36.64s)
interesting about this, I think, is that
[00:38] (38.24s)
by handling more and more of the
[00:40] (40.24s)
low-level programming tasks, what
[00:42] (42.40s)
software designers do is going to be
[00:44] (44.08s)
more and more design. I think software
[00:46] (46.08s)
design is going to become more and more
[00:48] (48.24s)
important. That'll be a larger and
[00:49] (49.84s)
larger fraction of where developers
[00:51] (51.36s)
spend their time, which makes it even
[00:53] (53.20s)
more sad that we don't really teach
[00:54] (54.96s)
software design in our universities at
[00:56] (56.32s)
all. John is also the author of A
[00:58] (58.08s)
Philosophy of Software Design and
[00:59] (59.76s)
currently a professor of computer
[01:01] (61.04s)
science at Stanford. John co-founded two
[01:03] (63.28s)
tech companies, worked at Sun
[01:04] (64.96s)
Microsystems, created the Tcl scripting
[01:06] (66.80s)
language, and invented the Raft consensus
[01:08] (68.96s)
algorithm that is used across databases
[01:10] (70.72s)
and systems like MongoDB, CockroachDB,
[01:13] (73.12s)
Kafka, and others. In our conversation,
[01:15] (75.44s)
we cover the impact of AI on software
[01:18] (78.08s)
engineering and why software design
[01:19] (79.84s)
could become more important than before,
[01:22] (82.32s)
the concept of deep and shallow modules
[01:24] (84.16s)
and why deep modules are so important to
[01:26] (86.08s)
deal with complexity in software, John's
[01:28] (88.48s)
advice on error handling and commenting,
[01:30] (90.72s)
why he suggests avoiding test-driven
[01:32] (92.48s)
development, and many more topics. If
[01:35] (95.36s)
you're interested in practical software
[01:36] (96.80s)
design tactics and strategies and want
[01:38] (98.80s)
to take a step back from tactical
[01:40] (100.48s)
coding, this episode is for you. If you
[01:42] (102.80s)
enjoy the show, please subscribe to the
[01:44] (104.24s)
podcast on any podcast platform and on
[01:46] (106.08s)
YouTube. Welcome to the podcast. Thank
[01:49] (109.04s)
you. I'm excited to be here. You have
[01:50] (110.80s)
been working in academia at Stanford for
[01:52] (112.64s)
quite a while but you were in the
[01:53] (113.92s)
industry beforehand. What made you
[01:56] (116.16s)
change from working at tech companies
[01:58] (118.40s)
and founding your own companies to
[02:00] (120.24s)
academia, and why did you do it?
[02:03] (123.60s)
I've always wanted to do both academia
[02:06] (126.56s)
and industry. When I got my PhD I
[02:09] (129.12s)
debated what to do because I love the
[02:12] (132.08s)
creative freedom of academia and I love
[02:14] (134.56s)
teaching also but I also love building
[02:17] (137.92s)
real software. For me, coding is one of
[02:20] (140.00s)
the passions that I live for in my
[02:21] (141.76s)
life. And so, I always kind of figured
[02:24] (144.00s)
I'd probably do both over my career. But
[02:25] (145.60s)
when I got my PhD, the advice I was
[02:27] (147.36s)
given was if you're going to do both,
[02:29] (149.04s)
it's always better to do academia first
[02:30] (150.80s)
because it's easier to switch from
[02:32] (152.72s)
academia to industry than the other way
[02:34] (154.40s)
around. So, I went to Berkeley, had a
[02:36] (156.40s)
fabulous 14 years there. But over time,
[02:39] (159.12s)
this nagging feeling of wanting to
[02:42] (162.96s)
build commercial software had built
[02:42] (162.96s)
up. And so, I finally decided to take
[02:44] (164.32s)
the plunge. I gave up my position at
[02:45] (165.84s)
Berkeley, moved to Silicon Valley,
[02:47] (167.60s)
worked at Sun for a few years, started a
[02:49] (169.28s)
couple of only moderately successful
[02:51] (171.52s)
companies, and had a great time doing
[02:53] (173.84s)
that. Just learned a ton of stuff doing
[02:56] (176.76s)
startups. But over time, the
[02:59] (179.84s)
nagging of sort of the call of academia
[03:02] (182.56s)
kind of built up in me, and I gradually
[03:05] (185.12s)
realized, although I liked doing startups,
[03:07] (187.68s)
that being a professor is
[03:10] (190.00s)
my real sort of true passion in
[03:12] (192.16s)
life. So after doing startups for 14
[03:14] (194.56s)
years, I was fortunate enough to get a
[03:16] (196.00s)
position at Stanford and I've had
[03:17] (197.20s)
another 14 or 15 years back at Stanford
[03:19] (199.12s)
where I've been really happy. So it's
[03:20] (200.80s)
been really fun. Really fun getting to
[03:22] (202.80s)
see both sides. And honestly, I think
[03:25] (205.84s)
you know doing both has actually made me
[03:27] (207.92s)
better at both. What do you see? I
[03:31] (211.68s)
talk with a lot of people obviously who
[03:33] (213.28s)
are only in the industry and maybe
[03:35] (215.12s)
they're thinking of one day doing
[03:37] (217.20s)
academia. I at some point thought of
[03:39] (219.76s)
this myself. My dad is a
[03:41] (221.52s)
university teacher, or was a university
[03:43] (223.52s)
teacher, and obviously there's people in
[03:45] (225.12s)
academia. Having done both, what
[03:47] (227.44s)
are big differences between, as
[03:49] (229.76s)
we call it, the tech world, which is industry,
[03:51] (231.68s)
and academia? Well, first it wasn't
[03:54] (234.48s)
as different as I thought it was going
[03:56] (236.16s)
to be when I went to industry because in
[03:59] (239.20s)
both cases, well, particularly
[04:00] (240.88s)
doing startups, in both cases
[04:02] (242.16s)
you're working with a relatively small
[04:03] (243.76s)
team. You're trying to do something
[04:05] (245.68s)
relatively new uh you know trying to
[04:08] (248.48s)
build software that really works. So
[04:09] (249.84s)
even when I was in academia, we tried to
[04:11] (251.44s)
build stuff that really works, not just
[04:12] (252.80s)
sort of throwaway research prototypes.
[04:15] (255.44s)
So in many ways it wasn't that
[04:17] (257.04s)
different, but in other ways it was, in
[04:19] (259.04s)
that you deal with a much broader
[04:20] (260.88s)
spectrum of people, you know,
[04:22] (262.16s)
salespeople and marketing people and
[04:24] (264.00s)
financiers and venture capitalists. That
[04:26] (266.32s)
was one of the fun things about it is
[04:27] (267.76s)
seeing a much greater diversity of
[04:29] (269.84s)
people. The difference that
[04:32] (272.56s)
contributed to my going back to
[04:34] (274.28s)
academia: one of the
[04:36] (276.48s)
downsides of
[04:37] (277.96s)
industry is that there's tremendous
[04:40] (280.68s)
pressure in startups particularly to
[04:43] (283.12s)
make yourself look bigger and better
[04:44] (284.88s)
than you are. You know, you're trying to
[04:46] (286.32s)
survive and get funding and so there's a
[04:50] (290.24s)
lot of pressure
[04:51] (291.80s)
to spin your company and exaggerate and
[04:56] (296.48s)
market yourself. I mean, everybody does
[04:58] (298.64s)
it to some degree. Some people do it in
[05:00] (300.72s)
ways so extreme it's illegal. They get
[05:02] (302.64s)
in trouble. But that kind of bothered me
[05:06] (306.08s)
and within my companies, you know, of course
[05:08] (308.48s)
we had to market ourselves externally,
[05:10] (310.08s)
but internally we tried to be very
[05:11] (311.52s)
honest: I don't
[05:12] (312.88s)
want to hear lies and spinning
[05:14] (314.08s)
internally, because that puts our company
[05:15] (315.44s)
at risk if you do that. But nonetheless,
[05:18] (318.32s)
the thing I like about academia is you
[05:20] (320.80s)
do projects: some of them work, some of
[05:22] (322.72s)
them don't. If a project doesn't work,
[05:24] (324.16s)
you just say, oops, okay, that wasn't a
[05:26] (326.88s)
good idea, here's why, and you go on to
[05:28] (328.80s)
the next thing. In a company, if it's not
[05:31] (331.12s)
working, you can't really say, oops, that
[05:33] (333.76s)
didn't work, sorry everybody, we're
[05:35] (335.04s)
shutting down tomorrow. You try and
[05:36] (336.32s)
somehow figure some way around it. So
[05:38] (338.32s)
I think that was the one thing
[05:40] (340.16s)
about industry that I liked least. This
[05:42] (342.48s)
episode is brought to you by CodeRabbit,
[05:44] (344.32s)
the AI code review platform transforming
[05:46] (346.40s)
how engineering teams ship faster
[05:48] (348.08s)
without sacrificing code quality. Code
[05:51] (351.04s)
reviews are critical but time-consuming.
[05:53] (353.76s)
CodeRabbit acts as your AI co-pilot,
[05:55] (355.84s)
providing instant code review comments
[05:57] (357.60s)
and potential impacts of every pull
[05:59] (359.36s)
request. Beyond just flagging
[06:01] (361.76s)
issues, CodeRabbit provides one-click
[06:03] (363.52s)
fix solutions and lets you define custom
[06:05] (365.76s)
code quality rules using AST Grep
[06:08] (368.16s)
patterns, catching subtle issues that
[06:10] (370.32s)
traditional static analysis tools might
[06:12] (372.36s)
miss. CodeRabbit has so far reviewed
[06:14] (374.96s)
more than 5 million pull requests, is
[06:16] (376.96s)
installed on 1 million repositories, and
[06:19] (379.20s)
is used by 50,000 open source projects.
[06:22] (382.24s)
Try CodeRabbit free for one month at
[06:24] (384.60s)
coderabbit.ai using the code pragmatic. That
[06:28] (388.16s)
is
[06:29] (389.48s)
coderabbit.ai, and use the code pragmatic.
[06:32] (392.96s)
This episode is brought to you by Modal,
[06:34] (394.88s)
the cloud platform that makes AI
[06:36] (396.48s)
development simple. Need GPUs without
[06:38] (398.88s)
the headache. With modal, just add one
[06:41] (401.20s)
line of code to any Python function and
[06:42] (402.88s)
boom, it's running in the cloud on your
[06:44] (404.80s)
choice of CPU or GPU. And the best part,
[06:48] (408.16s)
you only pay for what you use. With
[06:50] (410.56s)
sub-second container start and instant
[06:52] (412.48s)
scaling to thousands of GPUs, it's no
[06:54] (414.48s)
wonder companies like Suno, Ramp, and
[06:56] (416.24s)
Substack already trust Modal for their
[06:58] (418.08s)
AI applications. Getting an H100 is just
[07:00] (420.88s)
a pip install away. Go to
[07:03] (423.40s)
modal.com/pragmatic to get $30 in free
[07:05] (425.60s)
credits every month. That is m-o-d-a-l
[07:07] (427.92s)
dot com/pragmatic.
[07:11] (431.20s)
Now, thanks to you having
[07:13] (433.28s)
worked in academia as well, you
[07:15] (435.52s)
taught a course, which we'll
[07:17] (437.28s)
later talk about, and you wrote
[07:18] (438.64s)
this book, A Philosophy of Software
[07:20] (440.16s)
Design. And a lot of people in the tech
[07:21] (441.60s)
world who I talk with point to this book
[07:24] (444.64s)
as one of the best books that gives
[07:27] (447.40s)
them specific actionable things, ways to
[07:30] (450.64s)
think about software design. And I just
[07:32] (452.64s)
want to go into some of the things
[07:34] (454.40s)
in the book. People who've read it
[07:36] (456.48s)
will find this refreshing, and people who
[07:38] (458.40s)
haven't might find new ideas. In the
[07:41] (461.28s)
very beginning of the book in chapter I
[07:43] (463.36s)
think two or three, in the first
[07:45] (465.52s)
10 or 15 pages, you write about
[07:47] (467.52s)
something called tactical tornadoes, and
[07:49] (469.36s)
let me just quote a little bit from
[07:50] (470.96s)
this. You wrote: a tactical
[07:54] (474.16s)
tornado is a prolific programmer who
[07:56] (476.16s)
pumps out code faster than others but
[07:58] (478.40s)
works in a totally tactical fashion.
[08:00] (480.64s)
When it comes to implementing a quick
[08:02] (482.40s)
feature, nobody gets it done faster than
[08:04] (484.24s)
a tactical tornado. In some
[08:05] (485.92s)
organizations, management treats
[08:07] (487.36s)
tactical tornadoes as heroes. However,
[08:09] (489.60s)
tactical tornadoes leave a wave of
[08:12] (492.00s)
destruction behind, and typically other
[08:13] (493.84s)
engineers must clean up the messes left
[08:15] (495.60s)
by this tactical tornado. So, I wanted
[08:18] (498.08s)
to ask you first of all, how did you
[08:19] (499.44s)
come across your first tactical tornado?
[08:22] (502.68s)
And why do you think these types of
[08:26] (506.08s)
folks are still around in most
[08:27] (507.76s)
companies? Because I had an aha moment
[08:29] (509.76s)
when I was like, "Oh, yeah, I know
[08:31] (511.44s)
the tactical tornadoes as well." I
[08:34] (514.00s)
can't point to a specific incident or
[08:36] (516.00s)
person and I wouldn't want to identify
[08:37] (517.36s)
them anyway, but I've encountered
[08:39] (519.84s)
them over my career. I bet everybody who
[08:42] (522.08s)
has significant software development
[08:43] (523.52s)
experience has encountered these people
[08:45] (525.20s)
over their career. And just kind of
[08:47] (527.92s)
observing them uh and the frustration
[08:50] (530.48s)
with that, you know, I think it's just a
[08:52] (532.88s)
particular personality type. There are
[08:54] (534.56s)
people who are very detail
[08:57] (537.84s)
focused, sort of closers, who want to get
[08:59] (539.68s)
absolutely everything right and
[09:01] (541.28s)
finish everything, and there's people who
[09:03] (543.44s)
love getting started on
[09:05] (545.76s)
projects and doing the first 80 or 90%,
[09:08] (548.00s)
but that last 10 or 20% doesn't matter
[09:11] (551.12s)
to them so much. You know,
[09:13] (553.44s)
there's neater
[09:15] (555.12s)
people and there's sloppier people in
[09:16] (556.48s)
the world, so the tactical tornado is just sort
[09:18] (558.32s)
of sloppy; they don't care about
[09:20] (560.00s)
leaving cruft behind, it doesn't really
[09:21] (561.60s)
bother them at all. It's just a
[09:23] (563.52s)
personality type. So I suspect they'll
[09:26] (566.56s)
always be there, particularly because
[09:29] (569.52s)
some organizations value speed above all
[09:33] (573.60s)
especially startups, right? Yeah. Yeah.
[09:35] (575.36s)
A lot of them. Right, there are
[09:37] (577.44s)
startups that are probably completely
[09:38] (578.80s)
staffed by tactical tornadoes. You call
[09:40] (580.80s)
them tactical tornadoes, but like some
[09:42] (582.56s)
of the synonyms that came into my mind
[09:46] (586.00s)
as I was thinking of what people use,
[09:47] (587.76s)
sometimes they use 10x engineer,
[09:49] (589.60s)
especially when management sees, oh,
[09:51] (591.12s)
they just get things done faster. Sometimes
[09:53] (593.20s)
it's, you know, a hacky way of
[09:55] (595.04s)
doing things, or someone who just hacks
[09:56] (596.80s)
quickly, or at some point maybe a decade
[09:58] (598.88s)
ago people called it a hacker, in
[10:01] (601.76s)
the sense of working fast. I don't
[10:03] (603.60s)
think we have that anymore, but in a
[10:05] (605.44s)
lot of cases it is a positive
[10:07] (607.60s)
thing indeed. Well, I would say positive.
[10:10] (610.96s)
Initially, when I think of 10x engineers
[10:14] (614.00s)
and the way Google defines it, I don't
[10:15] (615.60s)
think of this as tactical tornadoes. I
[10:17] (617.44s)
think these are people who come up
[10:19] (619.76s)
with really clean designs
[10:22] (622.32s)
that can be implemented in very small
[10:24] (624.48s)
amounts of code. And so they might
[10:27] (627.20s)
actually write less code per day than
[10:30] (630.00s)
other people, but the functionality that
[10:32] (632.16s)
they implement is way higher and it
[10:34] (634.88s)
comes with higher stability and
[10:36] (636.40s)
evolvability and so on. So to me at
[10:38] (638.24s)
least that's the 10X engineer. I agree
[10:41] (641.84s)
with you. I'll just add the note that I
[10:43] (643.68s)
think the 10x engineer is so ambiguous
[10:46] (646.08s)
that it means something different to every person.
[10:48] (648.00s)
And especially, when I've
[10:49] (649.76s)
seen it, it was mostly less
[10:52] (652.56s)
technical CEOs refer to "my 10x engineer"
[10:55] (655.68s)
who was a tactical tornado because what
[10:57] (657.60s)
they saw is: oh, I'm getting output,
[10:59] (659.76s)
they're not saying no, they're building
[11:01] (661.28s)
it. But of course they had trouble
[11:03] (663.04s)
understanding the tech debt that is
[11:04] (664.56s)
going in there, how it's slowing others
[11:06] (666.16s)
down and you know some of the people
[11:07] (667.60s)
that you talked about in this
[11:08] (668.64s)
environment might have not been
[11:09] (669.92s)
perceived as a 10x engineer by this
[11:12] (672.88s)
person who is not that
[11:14] (674.56s)
close to the code. And I totally agree
[11:16] (676.32s)
with you, by the way, but this is
[11:18] (678.08s)
probably a good reminder that
[11:20] (680.40s)
everything's relative, right?
[11:22] (682.32s)
Everything is based on your vantage
[11:23] (683.68s)
point: if you are close to the code,
[11:25] (685.36s)
you're technical, you will value other
[11:27] (687.12s)
things, you should value other things.
[11:28] (688.64s)
You know, I think a lot of this
[11:31] (691.52s)
comes down to strategic versus tactical
[11:34] (694.24s)
again. There are people who value all the
[11:36] (696.00s)
short-term stuff, and there are people
[11:37] (697.04s)
who place more emphasis on the long term.
[11:41] (701.52s)
Design is all about the long-term stuff,
[11:43] (703.12s)
but as you've said, there are many
[11:44] (704.32s)
people who think of it all in terms of
[11:46] (706.24s)
short-term stuff. And so, I can see how
[11:48] (708.08s)
they would have a different idea of what
[11:49] (709.36s)
a 10x engineer is. And speaking of
[11:52] (712.32s)
long-term versus short-term, one thing
[11:54] (714.32s)
that came to my mind rereading
[11:56] (716.56s)
this tactical tornado part is: right now
[11:58] (718.96s)
there's a huge explosion of AI and
[12:01] (721.28s)
LLM tools, which, as you're
[12:04] (724.56s)
probably seeing, are really good
[12:06] (726.00s)
at generating code rapidly. A little bit
[12:08] (728.08s)
like the tactical tornado, to be honest.
[12:09] (729.68s)
You can prompt them. They can do short,
[12:11] (731.36s)
they can do long, they can do
[12:12] (732.48s)
autocomplete, they can do whole skeletons.
[12:14] (734.88s)
I was wondering so far what you've
[12:17] (737.36s)
observed. What do you think the
[12:21] (741.36s)
long-term impact of these tools will be? If we
[12:22] (742.48s)
assume that this is how they're going to
[12:24] (744.88s)
stay, let's just assume that,
[12:26] (746.96s)
every engineer will have tactical
[12:29] (749.12s)
tornadoes at their fingertips, who do
[12:30] (750.80s)
not say no and will, you know,
[12:30] (750.80s)
turn out
[12:31] (751.88s)
code. How might this change software
[12:34] (754.88s)
engineering?
[12:37] (757.36s)
That's a really interesting question and
[12:39] (759.76s)
I wish I knew the answer. I don't. I can
[12:41] (761.60s)
only guess. I think, you
[12:43] (763.68s)
know, I think we're going to see big
[12:44] (764.96s)
changes over the next 5 to 10 years and
[12:47] (767.20s)
it's really hard to predict what
[12:48] (768.64s)
direction they're going to go.
[12:50] (770.56s)
There are things that seem
[12:52] (772.08s)
relatively clear and things that seem
[12:53] (773.76s)
less clear. I think on the relatively
[12:56] (776.00s)
clear side is that the AI tools will
[12:58] (778.72s)
make it easier to churn out low-level
[13:00] (780.72s)
code. As you call it, autocomplete.
[13:02] (782.80s)
Autocomplete will get better and better.
[13:04] (784.88s)
And I think it won't necessarily be
[13:06] (786.88s)
sort of tactical tornado stuff. It could
[13:08] (788.40s)
actually be reasonably high quality code
[13:10] (790.32s)
that gets turned out. So it seems like
[13:12] (792.72s)
that will probably happen. Then the big
[13:15] (795.68s)
question is to what degree can the AI
[13:18] (798.80s)
tools actually replace the higher level
[13:22] (802.04s)
tasks. And you know I don't know the
[13:24] (804.80s)
answer to that. So far I don't see
[13:28] (808.08s)
anything in the current tools that makes
[13:30] (810.64s)
me think they're going to do that. But,
[13:33] (813.20s)
you know, never underestimate what could
[13:35] (815.60s)
happen with this. But what's
[13:38] (818.16s)
interesting about this, I think, is that
[13:39] (819.96s)
by handling more and more of the
[13:43] (823.20s)
low-level programming
[13:44] (824.84s)
tasks, what software designers do is
[13:47] (827.52s)
going to be more and more design. So,
[13:49] (829.76s)
I think software design is
[13:51] (831.92s)
going to become more and more important.
[13:54] (834.08s)
That'll be a larger and larger fraction
[13:56] (836.00s)
of where developers spend their time,
[13:58] (838.48s)
which makes it even more sort of sad
[14:01] (841.04s)
that we don't really teach software
[14:02] (842.40s)
design in our universities at all. So,
[14:04] (844.56s)
the skills we're teaching students may
[14:06] (846.96s)
actually be the skills that are going to
[14:08] (848.88s)
be replaced by the AI tools.
[14:12] (852.24s)
Can we just double click on software
[14:15] (855.48s)
design? How do you see software design?
[14:18] (858.48s)
What is it, and why is it important?
[14:21] (861.20s)
And the reason I'm asking the question,
[14:22] (862.72s)
it might seem like a naive
[14:25] (865.28s)
question, but these days you really can
[14:27] (867.12s)
get started building an application by,
[14:30] (870.08s)
you know, taking a template and
[14:32] (872.88s)
using it, and you will soon be
[14:34] (874.88s)
able to prompt an AI tool that will
[14:38] (878.08s)
generate things for you. How do you
[14:39] (879.76s)
think about software design? You know,
[14:43] (883.12s)
it's abstract, right, by nature. So,
[14:45] (885.76s)
well, I think of it as a
[14:48] (888.48s)
decomposition problem. It's how do you
[14:51] (891.28s)
take a large complicated system and
[14:54] (894.64s)
divide it up into smaller units that you
[14:57] (897.04s)
can implement relatively independently.
[14:59] (899.52s)
And by the way, when I give
[15:02] (902.40s)
talks, I often ask people: what do you
[15:03] (903.68s)
think is the most important idea in all
[15:05] (905.20s)
of computer science? I ask the audience. To
[15:07] (907.60s)
me, I think it's decomposition. That's the
[15:09] (909.92s)
key thing that threads through
[15:11] (911.04s)
everything we do in computer science.
[15:12] (912.48s)
How do you take large complicated
[15:13] (913.84s)
problems and break them up? That's
[15:15] (915.44s)
design, and then implementation is when
[15:17] (917.12s)
you go build the individual pieces. But
[15:19] (919.28s)
again you may do more design of those to
[15:21] (921.12s)
break them up into still smaller pieces
[15:23] (923.60s)
but that's the way I think about
[15:24] (924.80s)
software design. Now an interesting
[15:27] (927.36s)
thing that I've noticed over the past
[15:29] (929.68s)
few decades is: about 20 years or so ago,
[15:33] (933.60s)
maybe a little bit more, there was a lot
[15:35] (935.20s)
of focus on architectural practices
[15:37] (937.92s)
that felt closer to software design,
[15:39] (939.68s)
things like TDD, test-driven development,
[15:42] (942.16s)
architecture approaches like
[15:43] (943.44s)
architecture patterns, I think they were
[15:45] (945.60s)
called design patterns, the Gang of Four,
[15:47] (947.76s)
like factory, etc., patterns.
[15:50] (950.36s)
However, the focus seems to be moving
[15:53] (953.68s)
away, at least in the industry;
[15:55] (955.68s)
there are fewer people referencing
[15:58] (958.00s)
these models, these ideas. From your
[16:00] (960.64s)
observation, both in academia and
[16:02] (962.48s)
industry, why do you think this
[16:04] (964.56s)
might have been? Why did we have such a
[16:05] (965.84s)
big focus on architecture design topics
[16:10] (970.00s)
10 or 20 years ago, and maybe now we're just
[16:12] (972.32s)
talking about it less, or maybe even
[16:14] (974.24s)
thinking about it less? Well, you
[16:15] (975.76s)
know, of course there's always fads
[16:17] (977.44s)
that everybody gets excited about and
[16:18] (978.88s)
they gradually fade away. So maybe
[16:21] (981.76s)
that's part of it. Now, to me, the things
[16:24] (984.48s)
you mentioned, TDD and
[16:26] (986.84s)
patterns, I don't think of those as design,
[16:30] (990.92s)
certainly not TDD. In fact, I would
[16:33] (993.68s)
argue, and if you want to get into a TDD
[16:35] (995.20s)
discussion we could have that, but I
[16:36] (996.08s)
would argue it actually works against
[16:37] (997.96s)
design. Patterns, that's an alternative to
[16:40] (1000.88s)
design: rather than designing
[16:42] (1002.80s)
something, you pull something off the
[16:44] (1004.16s)
shelf. And I think where it works,
[16:47] (1007.12s)
it works; certainly there's areas where,
[16:48] (1008.88s)
of course, if a pattern works you should
[16:50] (1010.48s)
just use it. But to me that only
[16:52] (1012.64s)
addresses a tiny fraction of what I
[16:55] (1015.04s)
think of as software design.
[16:56] (1016.48s)
There's so much more to design than just
[16:58] (1018.40s)
picking one of a half a dozen patterns.
[17:00] (1020.72s)
When you're
[17:03] (1023.20s)
building new software and you're saying,
[17:05] (1025.12s)
"Okay, I'm going to start to design" —
[17:06] (1026.96s)
how do you start designing? Is
[17:09] (1029.44s)
it sitting down with pen and paper? And if
[17:12] (1032.56s)
we can talk about something
[17:13] (1033.84s)
specific, I think that could be even more
[17:15] (1035.68s)
useful, if there was a project that
[17:17] (1037.52s)
you did, because I'm just really curious
[17:19] (1039.28s)
about your thinking. I can't
[17:22] (1042.40s)
give a recipe for design. I wish I could
[17:24] (1044.24s)
say, you know, follow these steps in this
[17:26] (1046.24s)
order and you'll get a great design.
[17:28] (1048.32s)
I wish I could. But there are two
[17:30] (1050.88s)
general approaches, each of which has its
[17:33] (1053.44s)
pitfalls: the top-down approach and
[17:35] (1055.20s)
the bottom-up approach. I think what
[17:37] (1057.84s)
people tend to use, particularly less
[17:40] (1060.64s)
experienced engineers or people who are
[17:43] (1063.28s)
in a domain that they're not familiar
[17:44] (1064.88s)
with is you do bottom up. You just say
[17:47] (1067.00s)
okay I'm going to need this
[17:49] (1069.36s)
functionality of this system. I have no
[17:50] (1070.96s)
idea what the overall system is going to
[17:52] (1072.32s)
look like, but I need this piece. So,
[17:53] (1073.60s)
I'll just build this piece. And then you
[17:55] (1075.68s)
pick another piece and you build it.
[17:56] (1076.80s)
Then you start trying to put the
[17:58] (1078.16s)
pieces together and you gradually layer
[18:00] (1080.72s)
in your design. So that's the bottom-
[18:04] (1084.00s)
up approach. The top down approach is
[18:06] (1086.24s)
you start at the top and you try and
[18:07] (1087.52s)
break the system up into what you think
[18:09] (1089.12s)
are relatively independent components
[18:10] (1090.64s)
and then decompose those until you
[18:12] (1092.64s)
eventually get something that you
[18:14] (1094.24s)
can design. My personal... and by the way,
[18:17] (1097.04s)
in the 1970s back when I was in graduate
[18:19] (1099.60s)
school there was a big discussion about
[18:20] (1100.88s)
should you do design top down or bottom
[18:22] (1102.64s)
up and people argued about it. My
[18:25] (1105.20s)
personal opinion is that it's hard to do
[18:26] (1106.96s)
either one purely; the actual process of
[18:29] (1109.44s)
design is some combination where you
[18:31] (1111.20s)
think about big pieces and then you
[18:32] (1112.72s)
start thinking about some little pieces,
[18:34] (1114.08s)
and you build some stuff and you
[18:36] (1116.08s)
discover it actually didn't work very well,
[18:37] (1117.76s)
you throw it away and you build some
[18:38] (1118.88s)
more stuff and it's this iterative kind
[18:40] (1120.72s)
of back and forth process.
[18:44] (1124.28s)
And it's interesting that you
[18:46] (1126.40s)
mentioned the two ways, because
[18:48] (1128.48s)
the book itself starts with
[18:52] (1132.08s)
you writing how design is all
[18:54] (1134.80s)
about managing complexity and then you
[18:56] (1136.72s)
bring these two approaches. Well, what
[18:58] (1138.32s)
are the two approaches to complexity? So
[19:01] (1141.52s)
one is that there are certain things you
[19:03] (1143.12s)
can do with design that simply eliminate
[19:04] (1144.80s)
complexity completely. Yeah, eliminating
[19:07] (1147.28s)
special cases and things like
[19:09] (1149.04s)
that. So those are the best, most
[19:10] (1150.96s)
powerful approaches but you can't
[19:13] (1153.44s)
eliminate all complexity. So then the
[19:16] (1156.08s)
second approach is where you use modular
[19:18] (1158.72s)
design where you basically try and hide
[19:20] (1160.32s)
the complexity. That is, you take things
[19:22] (1162.56s)
that are relatively complicated and put
[19:24] (1164.00s)
them off to the side, where
[19:25] (1165.52s)
somebody can solve this problem and deal
[19:27] (1167.20s)
with this complexity, and nobody else in
[19:29] (1169.28s)
the system has to be aware of that
[19:31] (1171.08s)
complexity. So those were
[19:33] (1173.28s)
the two overall ways of dealing with
[19:35] (1175.36s)
complexity. Yeah. And
[19:37] (1177.40s)
and good software design will help us
[19:41] (1181.44s)
wrangle complexity, right? Do
[19:44] (1184.64s)
I sense that? That's one of the...
[19:46] (1186.16s)
it'll do both of those. It will
[19:47] (1187.68s)
both give you ideas for how to eliminate
[19:49] (1189.36s)
it and also for how to again modularize
[19:53] (1193.20s)
it so most people don't have to be aware
[19:54] (1194.96s)
of most of the complexity. One
[19:57] (1197.36s)
interesting idea that resonated with me
[19:59] (1199.60s)
in the book is you wrote about
[20:02] (1202.24s)
how it's worth designing things twice.
[20:05] (1205.12s)
In fact, this is a whole chapter in the
[20:06] (1206.64s)
book, chapter 11. It's called
[20:08] (1208.24s)
Design it Twice. And you wrote, I'll
[20:11] (1211.04s)
quote from you. Unfortunately, I often
[20:13] (1213.44s)
see smart people who insist on implementing
[20:15] (1215.20s)
the first idea that comes to mind, and
[20:16] (1216.80s)
this causes them to underperform their
[20:18] (1218.56s)
true potential. It also makes them
[20:20] (1220.48s)
frustrating to work with. Uh how did you
[20:24] (1224.00s)
come across this observation? And
[20:27] (1227.60s)
one more thing that you added is that it
[20:30] (1230.56s)
doesn't take as much more time to design
[20:32] (1232.80s)
twice as most people think.
[20:36] (1236.24s)
Yeah. So first the way I came across
[20:39] (1239.20s)
this is because I've been a professor at
[20:40] (1240.64s)
both Stanford and Berkeley which are you
[20:42] (1242.32s)
know two of the top universities in
[20:44] (1244.72s)
the world and both places get brilliant
[20:46] (1246.80s)
graduate students and I've noticed it's
[20:49] (1249.68s)
common for students at both places to
[20:51] (1251.60s)
have bad work habits and the reason is
[20:55] (1255.36s)
all their lives everything's always been
[20:58] (1258.08s)
easy for them. They've always been the
[20:59] (1259.76s)
best at everything they did. In high
[20:59] (1259.76s)
school they were smarter than their
[21:01] (1261.04s)
teachers; in college, perhaps as smart as
[21:02] (1262.08s)
their professors, top of their class.
[21:06] (1266.24s)
Their first idea, just the first thing
[21:08] (1268.40s)
that came off their mind, was always good
[21:09] (1269.92s)
enough to get great grades. And so
[21:13] (1273.76s)
there's never been any real incentive
[21:15] (1275.92s)
for them to think twice. And I
[21:18] (1278.00s)
think they start kind of latching on to
[21:20] (1280.00s)
the idea that whatever comes to my mind
[21:22] (1282.48s)
is going to be good. But, you know, as
[21:25] (1285.04s)
you get into harder and harder problems,
[21:27] (1287.44s)
that characteristic goes away. And
[21:29] (1289.76s)
particularly when we're doing research
[21:30] (1290.96s)
at top universities, honestly, we're
[21:33] (1293.68s)
working on really hard problems.
[21:35] (1295.44s)
Nobody's first idea is going to be the
[21:37] (1297.28s)
best idea. And so I often have to
[21:40] (1300.64s)
work with students to force them to
[21:42] (1302.96s)
think about things. And so for example,
[21:45] (1305.36s)
a common technique I will use is I will
[21:47] (1307.24s)
say, suppose I told you that you may not
[21:50] (1310.32s)
implement this thing you just suggested.
[21:51] (1311.60s)
And if you try and do it, I'm going to
[21:52] (1312.64s)
stop funding you and have you thrown out
[21:53] (1313.92s)
as a grad
[21:55] (1315.32s)
student. What would be your second
[21:57] (1317.52s)
approach? Would you come up with
[21:58] (1318.88s)
something or would you say, "Okay, I
[22:00] (1320.16s)
better look for a new adviser." And the
[22:02] (1322.72s)
interesting thing is, and by the way, I
[22:04] (1324.00s)
say this typically when I think there's
[22:05] (1325.20s)
a better way of doing it. And when I
[22:07] (1327.12s)
force them and they go away and come
[22:09] (1329.20s)
back, the second idea is always better.
[22:11] (1331.68s)
It's better. And I've seen this myself.
[22:14] (1334.12s)
Um, the best example I can think of
[22:17] (1337.20s)
over my whole career was when I was
[22:18] (1338.96s)
designing the Tk toolkit, which is a
[22:22] (1342.00s)
part of the Tcl/Tk system. And I was
[22:25] (1345.12s)
trying to figure out what should be the
[22:26] (1346.64s)
API for programming graphical widgets.
[22:30] (1350.16s)
And I spent, actually, I think it
[22:32] (1352.16s)
was mostly a long airplane ride and I
[22:35] (1355.44s)
basically did two designs for this. I
[22:38] (1358.48s)
did kind of the first idea that came to
[22:40] (1360.00s)
mind and then I said, "Okay, suppose I'm
[22:41] (1361.20s)
going to throw that out. What would what
[22:42] (1362.32s)
would be something that's really
[22:43] (1363.36s)
different that I could do?" I did a
[22:44] (1364.72s)
second one. And after comparing them, I
[22:47] (1367.76s)
ended up choosing the second idea. And
[22:50] (1370.00s)
honestly, that was one of the best ideas
[22:52] (1372.88s)
I've had in my professional career.
[22:54] (1374.64s)
One of the things that made Tcl/Tk
[22:56] (1376.40s)
popular was that the API for Tk was
[22:59] (1379.12s)
just a very sweet, simple, powerful API
[23:02] (1382.80s)
and it was my second choice. It was the
[23:04] (1384.56s)
second thing that came to my mind. So I
[23:07] (1387.36s)
have that example that always motivates
[23:08] (1388.88s)
me and when I'm doing new things, I
[23:11] (1391.04s)
usually try and force myself to think of
[23:12] (1392.72s)
two ways. Even if I think one of them is
[23:14] (1394.48s)
really bad, even come up with a what you
[23:16] (1396.88s)
think is a bad alternative and compare
[23:19] (1399.28s)
it to what you did. You'll learn
[23:21] (1401.60s)
something from that and you may discover
[23:23] (1403.12s)
the bad thing wasn't as bad as you
[23:24] (1404.48s)
thought it was.
[23:26] (1406.48s)
I'm actually starting to realize when
[23:28] (1408.48s)
when I worked at Uber, we had this
[23:30] (1410.16s)
concept of design docs, RFCs, where you
[23:33] (1413.28s)
were expected to just, you know, like
[23:34] (1414.88s)
write down why you're doing something,
[23:37] (1417.20s)
how you're doing it, etc. And after a
[23:41] (1421.04s)
while, we started to add a thing called
[23:42] (1422.92s)
trade-offs, or alternatives
[23:45] (1425.36s)
considered. And we were asking people to
[23:47] (1427.12s)
write down at least one, ideally
[23:49] (1429.52s)
multiple, alternatives considered,
[23:51] (1431.60s)
and we did it because
[23:54] (1434.48s)
we got better, you know, designs,
[23:56] (1436.48s)
ideas, plans, etc. And better
[23:58] (1438.56s)
discussions as well because it was clear
[24:00] (1440.48s)
why you wouldn't do it. And now I'm
[24:05] (1445.20s)
just reflecting on what you said:
[24:06] (1446.72s)
yeah, we were kind of doing a little
[24:08] (1448.56s)
bit of what you were saying, forcing
[24:11] (1451.28s)
people to not stop at their
[24:13] (1453.60s)
first idea, to put down others as
[24:15] (1455.20s)
well and compare them. But as you said,
[24:17] (1457.28s)
oftentimes it will be people just doing,
[24:17] (1457.28s)
you know okay here's some things that
[24:18] (1458.88s)
will surely not work and usually fair
[24:20] (1460.96s)
enough it doesn't work but then
[24:22] (1462.00s)
sometimes like oh actually that's a
[24:23] (1463.76s)
simple thing and we could just use that
[24:25] (1465.36s)
instead or maybe we combine these two
[24:27] (1467.20s)
things together. Yeah. Yeah. And you
[24:29] (1469.12s)
know you asked about does this take a
[24:30] (1470.48s)
lot of time and slow down the process.
[24:32] (1472.80s)
Well it will take some extra time of
[24:35] (1475.04s)
course, nothing is free, but honestly the
[24:38] (1478.24s)
design at this level is a very high-
[24:40] (1480.00s)
level design. It's not like
[24:42] (1482.56s)
we're completely building a second
[24:44] (1484.32s)
alternative. I kind of think at the
[24:47] (1487.28s)
high level. So with Tcl and Tk, this
[24:49] (1489.12s)
Tk design, I might have spent maybe a few
[24:52] (1492.20s)
days on the design thinking about these
[24:54] (1494.80s)
two alternatives and comparing them but
[24:56] (1496.64s)
then it took a year you know to
[24:58] (1498.08s)
implement Tk, or more than that. So we're
[25:01] (1501.28s)
talking something on the order of one or
[25:03] (1503.44s)
2% of the total time to build a
[25:06] (1506.96s)
system. It's just it's not that big of a
[25:09] (1509.20s)
deal. And if it gives you a better
[25:10] (1510.96s)
design, you get back way more than one
[25:12] (1512.96s)
or two percent. Yeah, I feel this is a
[25:15] (1515.68s)
lesson
[25:16] (1516.52s)
that people, teams, engineers learn
[25:19] (1519.52s)
again and again. You know, first there's
[25:21] (1521.44s)
always a point where you jump into
[25:22] (1522.48s)
coding, thinking I'm faster, and then when it's
[25:24] (1524.56s)
complex enough, you learn, oh, we
[25:26] (1526.32s)
probably should have planned a little
[25:27] (1527.28s)
bit better. I would have saved myself
[25:28] (1528.64s)
time. And then after a while, the
[25:31] (1531.92s)
cycle goes into too much planning,
[25:33] (1533.92s)
or people feel it's too much planning, and
[25:35] (1535.84s)
then it starts again. So I feel
[25:37] (1537.68s)
there's a little bit of a never-ending
[25:38] (1538.96s)
cycle. You know, people still talk
[25:40] (1540.64s)
about waterfall, which used to be a
[25:43] (1543.04s)
thing like 20 or 30 years ago; it's not
[25:45] (1545.20s)
really a thing anymore, but there is some
[25:48] (1548.32s)
disdain, especially for fast-moving teams,
[25:50] (1550.56s)
for planning too much, because it's seen
[25:52] (1552.16s)
as, you know, too much talking, not enough
[25:53] (1553.92s)
doing. Well, and I get that too.
[25:56] (1556.88s)
So, you know, I think the
[25:59] (1559.68s)
fundamental thing about design is it
[26:01] (1561.44s)
always involves tradeoffs. And if you
[26:04] (1564.00s)
take any one idea and you push it too
[26:06] (1566.48s)
far, you'll probably end up in a bad
[26:08] (1568.00s)
place. So this idea about, you know,
[26:10] (1570.16s)
doing multiple designs, you could
[26:12] (1572.16s)
certainly take that too far also. And I
[26:15] (1575.12s)
think one of the things that
[26:17] (1577.44s)
separates the really good designers from
[26:19] (1579.04s)
the not as good ones is they kind of
[26:20] (1580.40s)
know how to make those trade-offs and
[26:22] (1582.24s)
how to combine these different ideas and
[26:24] (1584.16s)
different phases and balance them off.
[26:26] (1586.48s)
And that's the kind of thing probably
[26:29] (1589.12s)
you just have to learn from experience.
[26:30] (1590.80s)
You know, you try both
[26:32] (1592.24s)
approaches and see the problems
[26:33] (1593.84s)
with them and then you eventually get a
[26:35] (1595.84s)
gut feel for what the trade-offs are.
[26:38] (1598.32s)
Now, in the book, one idea that stuck
[26:40] (1600.56s)
with me is this concept of deep modules
[26:42] (1602.72s)
and shallow modules. And you introduce
[26:45] (1605.36s)
this concept that a deep module is
[26:47] (1607.36s)
a piece of code or
[26:49] (1609.52s)
functionality, well, a module, that has
[26:51] (1611.44s)
a pretty simple interface but has a
[26:53] (1613.20s)
lot of depth to it. It has a lot of
[26:55] (1615.04s)
functionality and complexity. And then a
[26:57] (1617.04s)
shallow module is something that has a,
[26:58] (1618.88s)
you know, wide interface but
[27:00] (1620.48s)
doesn't do too much. So it
[27:03] (1623.60s)
might be almost transparent at
[27:05] (1625.84s)
some points, in that it doesn't
[27:07] (1627.92s)
do much hiding. And in the book you
[27:10] (1630.48s)
emphasize how deep modules are
[27:13] (1633.52s)
important for good software design. Why
[27:16] (1636.16s)
is this so? And how did you
[27:18] (1638.00s)
come across this realization? Well, it
[27:20] (1640.00s)
all comes back to complexity, and
[27:22] (1642.80s)
everything in the book really derives
[27:24] (1644.72s)
from this idea about how do we
[27:27] (1647.68s)
eliminate or manage complexity. And so a
[27:31] (1651.68s)
deep module is what gives us leverage
[27:35] (1655.12s)
against
[27:36] (1656.04s)
complexity. And the way it does
[27:38] (1658.16s)
that is it provides this very simple
[27:40] (1660.16s)
interface, so people using the module
[27:42] (1662.40s)
have almost no cognitive load; it's very easy
[27:44] (1664.40s)
to learn. But inside the module there's
[27:47] (1667.60s)
a tremendous amount of functionality
[27:49] (1669.28s)
and complexity that is hidden from
[27:51] (1671.04s)
everybody else. And so in the
[27:54] (1674.56s)
deep notion I was trying to
[27:56] (1676.48s)
capture the trade-offs. And basically
[27:59] (1679.28s)
it's this trade-off between two things:
[28:01] (1681.44s)
the complexity of the interface and
[28:03] (1683.68s)
how much functionality you have in the
[28:05] (1685.64s)
module. And so what you want to do is
[28:08] (1688.24s)
get the most
[28:09] (1689.84s)
functionality you can for the simplest
[28:12] (1692.88s)
possible interface on top.
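As a rough illustration of that trade-off, here is a minimal Python sketch; it is not taken from the book, and the names SettingsStore and DictWrapper are made up for this example. The deep module exposes two methods and hides everything else behind them, while the shallow one is a thin pass-through that hides nothing:

```python
import json
import os
import tempfile


class SettingsStore:
    """Deep module: a two-method interface hiding a lot of functionality.

    Callers only see save() and load(); JSON encoding, atomic writes,
    and missing-file handling all stay inside.
    """

    def __init__(self, path: str):
        self._path = path

    def save(self, settings: dict) -> None:
        # Write to a temp file, then rename, so readers never see a partial file.
        directory = os.path.dirname(self._path) or "."
        fd, tmp = tempfile.mkstemp(dir=directory)
        with os.fdopen(fd, "w") as f:
            json.dump(settings, f)
        os.replace(tmp, self._path)

    def load(self) -> dict:
        # A missing file is not an error the caller has to think about.
        try:
            with open(self._path) as f:
                return json.load(f)
        except FileNotFoundError:
            return {}


class DictWrapper:
    """Shallow module: the interface is about as big as the functionality.

    Every method just forwards to a dict, so callers learn a new API
    but nothing is hidden from them.
    """

    def __init__(self):
        self._data = {}

    def set_value(self, key, value):
        self._data[key] = value

    def get_value(self, key):
        return self._data[key]

    def has_value(self, key):
        return key in self._data
```

The point is the ratio: the deep module gives callers a lot of behavior per unit of interface they have to learn, while the shallow one adds interface without hiding any complexity.

In the book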
[28:15] (1695.60s)
you have a chapter titled defining
[28:17] (1697.76s)
errors out of existence, and in this
[28:19] (1699.44s)
one you talk about error handling. What
[28:21] (1701.44s)
is your take on error handling and how
[28:23] (1703.52s)
to design for you know sensible error
[28:26] (1706.00s)
handling? Right. So again, this comes
[28:28] (1708.00s)
back to the complexity issue, and
[28:30] (1710.72s)
anybody who's built a lot of software
[28:32] (1712.16s)
knows that error handling is a huge
[28:34] (1714.16s)
source of complexity. Basically, it's
[28:37] (1717.52s)
all the special cases, all
[28:39] (1719.04s)
these weird special cases you have to
[28:40] (1720.56s)
deal with, and so it's easy for
[28:43] (1723.04s)
error handling to impose tremendous
[28:44] (1724.80s)
complexity on software. So then the
[28:47] (1727.44s)
question I'm constantly asking myself
[28:49] (1729.12s)
is: how can we reduce the impact? I mean,
[28:52] (1732.72s)
we do have some situations we have to
[28:54] (1734.56s)
deal with. There are, you know,
[28:56] (1736.32s)
many exceptions that simply can't
[28:58] (1738.00s)
be avoided; they're fundamental to the
[28:59] (1739.36s)
system, so you have to deal with them.
[29:01] (1741.12s)
Other cases are not so
[29:02] (1742.76s)
important. So in that chapter, what I was
[29:05] (1745.44s)
trying to argue is
[29:07] (1747.00s)
that having more exceptions
[29:10] (1750.72s)
is not better. It is sometimes necessary,
[29:14] (1754.64s)
but I sometimes think that designers
[29:16] (1756.64s)
think that the more exceptions I throw
[29:18] (1758.72s)
from a class, the better a programmer I'm
[29:21] (1761.12s)
being, I'm being a more cautious, careful
[29:22] (1762.96s)
programmer. And I would say no: the
[29:25] (1765.68s)
problem is every exception you throw is
[29:27] (1767.52s)
imposing complexity on the users of your
[29:30] (1770.12s)
class. And so if you can
[29:34] (1774.08s)
reduce the number of exceptions you
[29:36] (1776.12s)
throw, the number of special cases you
[29:38] (1778.56s)
generate, that will reduce system
[29:40] (1780.88s)
complexity. And I gave a bunch of
[29:42] (1782.72s)
examples in that chapter where, in fact,
[29:45] (1785.36s)
by just a slight change in the design of
[29:48] (1788.48s)
the system, whole classes of errors
[29:51] (1791.12s)
simply disappear. They can't happen, and
[29:52] (1792.64s)
then there is no error to deal with.
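One sketch in the spirit of that chapter, a hedged Python illustration rather than an example taken from the book: a range-extraction function can be specified so that no combination of indices is an error, and a whole exception disappears from its interface.

```python
def substring_throwing(s: str, start: int, end: int) -> str:
    """Narrow definition: out-of-range indices are an error the caller must handle."""
    if start < 0 or end > len(s) or start > end:
        raise IndexError("substring indices out of range")
    return s[start:end]


def substring_total(s: str, start: int, end: int) -> str:
    """Broader definition: return the overlap between the requested range and
    the string. No combination of indices is an error, so there is nothing
    for callers to catch."""
    start = max(0, start)
    end = min(len(s), end)
    return s[start:end] if start < end else ""


# With the second definition, a whole class of errors can no longer occur:
assert substring_total("design", 3, 100) == "ign"
assert substring_total("design", -5, 2) == "de"
assert substring_total("design", 4, 2) == ""
```

The second version simply has nothing to throw, which is the sense in which the error has been defined out of existence.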
[29:56] (1796.00s)
But I will say, this is something that
[30:01] (1801.60s)
happens occasionally, but you have
[30:01] (1801.60s)
to be very careful about this. And in
[30:04] (1804.32s)
the class I teach on software
[30:06] (1806.84s)
design, virtually every
[30:09] (1809.52s)
class there are people that
[30:11] (1811.04s)
misunderstand this. And in the first
[30:12] (1812.96s)
project they have essentially zero
[30:14] (1814.88s)
exception handling. They're building a
[30:16] (1816.00s)
distributed server where machines can
[30:17] (1817.28s)
crash, and they don't even
[30:18] (1818.40s)
check for errors in the network. And I
[30:20] (1820.88s)
say, 'You have no error checks here.
[30:22] (1822.88s)
What happens if a system crashes?' And
[30:25] (1825.20s)
they say, 'Oh, we just defined those
[30:27] (1827.04s)
errors out of
[30:28] (1828.20s)
existence.' And I said, 'Well,
[30:31] (1831.68s)
uh, no, the errors are still there.
[30:34] (1834.16s)
You're just ignoring them. You can't do
[30:35] (1835.92s)
that.' Yeah. So, anyhow, I think this
[30:39] (1839.12s)
this is a chapter I have to warn people
[30:40] (1840.64s)
about. It's really easy to take this one
[30:42] (1842.88s)
in bad directions. Be very careful. It's
[30:45] (1845.84s)
kind of like a spice. You know, you use
[30:47] (1847.44s)
tiny amounts of it in your cooking and
[30:48] (1848.80s)
you get a good result, but if you use
[30:50] (1850.00s)
very much, you end up with a mess. Yeah.
[30:52] (1852.24s)
But I feel errors,
[30:57] (1857.28s)
exceptions, things going wrong,
[30:58] (1858.88s)
they're one of the things that we
[31:01] (1861.36s)
don't really talk enough about
[31:01] (1861.36s)
when we think about design. When I look
[31:04] (1864.08s)
at outages, I remember a lot of
[31:05] (1865.92s)
outages that we had, specifically at
[31:08] (1868.56s)
Uber, but even at other companies. A lot
[31:10] (1870.72s)
of the time, when we looked into what
[31:12] (1872.24s)
caused it, it was, you know, a
[31:13] (1873.76s)
mishandling of an error: it came from
[31:16] (1876.48s)
one system, we mapped it incorrectly as
[31:18] (1878.64s)
a success. And there were a lot of these
[31:20] (1880.56s)
edge cases where we
[31:22] (1882.88s)
misunderstood either errors or
[31:25] (1885.04s)
unexpected responses. You know, they're
[31:26] (1886.72s)
all the same thing: it was something
[31:28] (1888.08s)
with errors. And I think we struggled to
[31:30] (1890.00s)
put a finger on it. You know, we
[31:31] (1891.76s)
looked at ways of, like, okay, let's map
[31:33] (1893.52s)
responses better. Let's have white
[31:35] (1895.76s)
lists, let's have blacklists. But during
[31:39] (1899.28s)
the planning phase, I don't really
[31:40] (1900.72s)
remember. I feel for planning, people
[31:43] (1903.36s)
tend to be optimistic: here's how it will
[31:45] (1905.28s)
work, here's how these things
[31:47] (1907.28s)
will communicate. And I rarely recall
[31:50] (1910.00s)
planning sessions where we thought, okay,
[31:51] (1911.60s)
what could go bad, how
[31:54] (1914.32s)
will we catch it, how will we
[31:56] (1916.24s)
cover it. So I think you make
[31:58] (1918.64s)
a good point, in that I think people
[32:01] (1921.80s)
don't... I don't think exception
[32:04] (1924.40s)
handling is top of people's minds when
[32:06] (1926.32s)
they are doing the overall system
[32:09] (1929.48s)
design; it just sort of happens tactically
[32:12] (1932.64s)
as people discover potential problems.
[32:15] (1935.60s)
Yes, this is, I think, you're
[32:18] (1938.48s)
putting into words what I'm trying to put
[32:20] (1940.80s)
as this observation. It's just a very
[32:22] (1942.64s)
interesting one. So the one general
[32:24] (1944.72s)
piece of advice I have for people is
[32:27] (1947.44s)
that when you're building a module and
[32:29] (1949.84s)
thinking about what kinds of special
[32:32] (1952.00s)
cases and exceptions you're going to
[32:33] (1953.48s)
export through your
[32:35] (1955.88s)
interface, you should think about how
[32:39] (1959.52s)
you think the callers are going to deal
[32:42] (1962.00s)
with these and try and put yourself in
[32:45] (1965.68s)
the mindset of the user as you're
[32:48] (1968.16s)
thinking through these, and see,
[32:50] (1970.96s)
rather than having 10 different
[32:52] (1972.24s)
exceptions, whether actually
[32:53] (1973.68s)
there's only really two different ways
[32:55] (1975.60s)
people are going to handle this and so I
[32:57] (1977.20s)
can boil it all down into two kinds of
[32:59] (1979.68s)
exceptions rather than 10 kinds of
[33:01] (1981.52s)
exceptions for
[33:02] (1982.60s)
example.
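A hedged Python sketch of that idea; TransientError, PermanentError, fetch_record, and _do_network_fetch are hypothetical names invented for this illustration, not from the book or any real library. The point is only that many low-level failures collapse into the two ways a caller can actually respond:

```python
class TransientError(Exception):
    """The caller's remedy: back off and retry."""


class PermanentError(Exception):
    """The caller's remedy: give up and report the failure."""


def _do_network_fetch(key: str) -> bytes:
    # Stand-in for the real transport layer (hypothetical).
    raise TimeoutError("server did not respond")


def fetch_record(key: str) -> bytes:
    """Instead of exposing many low-level failure types (timeouts, connection
    resets, missing keys, bad credentials, ...), the interface maps them onto
    the two kinds of handling callers actually do."""
    try:
        return _do_network_fetch(key)
    except (TimeoutError, ConnectionError) as e:
        raise TransientError(str(e)) from e   # worth retrying
    except (KeyError, PermissionError) as e:
        raise PermanentError(str(e)) from e   # not worth retrying
```

A caller now writes one retry branch and one give-up branch instead of ten separate handlers.

So I think the key thing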
[33:05] (1985.76s)
is to think about it from the caller
[33:07] (1987.96s)
standpoint. By the way, that brings
[33:10] (1990.56s)
me to what I think is one
[33:12] (1992.48s)
of the most important attributes of a
[33:14] (1994.64s)
really good designer, which is, what is
[33:17] (1997.44s)
it? They can change their mindset and
[33:20] (2000.40s)
think about things from very different
[33:22] (2002.08s)
point of views. So when I'm designing a
[33:25] (2005.60s)
module, you know, I'm thinking about
[33:27] (2007.20s)
all of the details of that module, but
[33:29] (2009.60s)
then I can change my mindset to think
[33:31] (2011.76s)
about the user of this module and
[33:34] (2014.24s)
realize I don't want to have any
[33:35] (2015.84s)
awareness of those details. And in
[33:38] (2018.32s)
particular, you know, when I'm using a
[33:39] (2019.84s)
module, I don't want to take advantage
[33:41] (2021.20s)
of things I might know about the
[33:42] (2022.80s)
insides of that module. I only want to
[33:44] (2024.40s)
use things in the interface. And so
[33:46] (2026.48s)
being able to shift your mindset and
[33:49] (2029.44s)
think about things at one point, but
[33:51] (2031.12s)
then completely put those out of your
[33:53] (2033.28s)
mind and take a totally different view
[33:55] (2035.36s)
some other time. That's really powerful.
[33:57] (2037.68s)
That's how you come up with good
[33:58] (2038.56s)
designs, I think, is again is you're
[34:00] (2040.56s)
when you're designing something, you can
[34:02] (2042.24s)
also think about it from the standpoint
[34:04] (2044.24s)
of somebody that's going to be using it.
[34:07] (2047.60s)
Interesting. So are you saying that
[34:11] (2051.04s)
having empathy and being able
[34:13] (2053.44s)
to, you know, put yourself in the shoes of
[34:15] (2055.92s)
another role, be that a customer or
[34:18] (2058.32s)
another developer, you know,
[34:21] (2061.68s)
a developer who will be working on this,
[34:23] (2063.52s)
that could make us a better
[34:24] (2064.96s)
software engineer. I was about to say
[34:26] (2066.72s)
the word empathy.
[34:28] (2068.68s)
Yes. What I was going to say is I think
[34:31] (2071.12s)
this skill set has tremendous value in
[34:34] (2074.32s)
social contexts as well as engineering
[34:36] (2076.40s)
contexts: the ability to think about
[34:39] (2079.04s)
things from some other person's
[34:40] (2080.84s)
viewpoint. By the way, one of the things
[34:43] (2083.12s)
I love about computer science, people,
[34:45] (2085.68s)
you know, people think of us as these
[34:47] (2087.28s)
sort of geeky, nerdy people, but but
[34:50] (2090.72s)
many of the ideas that we use in
[34:52] (2092.80s)
computer systems actually have
[34:54] (2094.48s)
interesting analogies in social systems
[34:56] (2096.64s)
as well.
[34:57] (2097.92s)
Yeah, it's just interesting, because
[35:01] (2101.04s)
I, you know, like you, went
[35:05] (2105.12s)
on a software engineering journey:
[35:06] (2106.80s)
start to learn to code, go to
[35:09] (2109.20s)
college, get my first job, etc. Initially
[35:11] (2111.68s)
I thought the hard part is the coding,
[35:13] (2113.68s)
and every developer or
[35:16] (2116.16s)
engineer I talk to above a certain
[35:17] (2117.92s)
number of years, we always have this
[35:19] (2119.92s)
discussion that the hardest part in
[35:21] (2121.68s)
software engineering, computers, and
[35:23] (2123.12s)
programming is the people. You know,
[35:25] (2125.92s)
when we think back, after your first
[35:28] (2128.00s)
few years, once you get the syntax
[35:29] (2129.68s)
and you learn how to debug and those
[35:31] (2131.52s)
things, you think about: what was the
[35:33] (2133.12s)
hardest project, what was the
[35:34] (2134.80s)
biggest problem, what caused me the most
[35:36] (2136.24s)
headache? And it's oftentimes
[35:38] (2138.00s)
miscommunication, you know, like we
[35:39] (2139.68s)
misunderstood each other,
[35:41] (2141.36s)
the spec was off, etc.
[35:44] (2144.44s)
Uh, of course there's outages as well,
[35:46] (2146.88s)
but usually the root cause is we
[35:48] (2148.96s)
did not expect this to happen. We didn't
[35:51] (2151.36s)
have the empathy for the user.
[35:54] (2154.40s)
And yeah, usually the biggest
[35:56] (2156.56s)
one is conflicts with teammates,
[35:59] (2159.68s)
with people that you work
[36:01] (2161.28s)
with. It's all human things. So
[36:04] (2164.00s)
it's interesting. You know,
[36:05] (2165.36s)
there's this joke that the hardest things
[36:06] (2166.56s)
in computer science are, I think,
[36:08] (2168.24s)
caching and naming things, but I will
[36:10] (2170.16s)
add people as well.
[36:12] (2172.40s)
Yeah, that sounds interesting. Yep.
[36:15] (2175.20s)
One thing that your book
[36:17] (2177.12s)
doesn't touch on (of course, you know,
[36:19] (2179.44s)
the book cannot touch on everything), but
[36:20] (2180.96s)
it's a practice that a lot of teams
[36:23] (2183.28s)
and tech companies use, is these things
[36:24] (2184.96s)
called design reviews and discussions.
[36:26] (2186.56s)
The idea is that someone before building
[36:29] (2189.12s)
a system will write down a plan. Um,
[36:31] (2191.76s)
people will have a meeting. Uh,
[36:33] (2193.76s)
Amazon famously has this writing culture
[36:36] (2196.72s)
where people get into a room and
[36:38] (2198.24s)
they read the plan and then they
[36:40] (2200.00s)
discuss. In other places it's done with
[36:41] (2201.68s)
Google Docs, and even some others
[36:44] (2204.64s)
like Figma use their own tools
[36:46] (2206.40s)
to kind of like draw visuals and comment
[36:49] (2209.60s)
on it. What is your take
[36:52] (2212.28s)
on, before building a
[36:54] (2214.68s)
system, explaining a plan, either by
[36:57] (2217.04s)
whiteboarding or in other ways, and then
[36:58] (2218.16s)
kind of criticizing each other's plans?
[37:00] (2220.00s)
Have you done this in some
[37:02] (2222.24s)
of the things that you do? Do you
[37:03] (2223.60s)
encourage students in your
[37:05] (2225.68s)
design classes to do this?
[37:07] (2227.84s)
Oh yeah, we do this. Certainly all the
[37:10] (2230.08s)
projects I work on and certainly when I
[37:12] (2232.32s)
was doing startups, we would do design
[37:14] (2234.48s)
reviews. They were relatively informal.
[37:16] (2236.64s)
We didn't have, you know,
[37:18] (2238.32s)
lengthy written documents, but we'd get
[37:20] (2240.32s)
together and talk about ideas. And this
[37:23] (2243.04s)
is where again, you get the multiple
[37:24] (2244.88s)
designs and you talk about the
[37:26] (2246.64s)
trade-offs. And you know, it's just if
[37:29] (2249.12s)
you can get multiple minds to think
[37:31] (2251.44s)
about a topic, it's pretty clear you're
[37:33] (2253.52s)
going to come up with better ideas than
[37:35] (2255.68s)
if just one person does it. Again, this
[37:37] (2257.44s)
is an area where smart
[37:38] (2258.96s)
people sometimes have to get past
[37:40] (2260.88s)
their history because they've been for
[37:43] (2263.28s)
much of their life in environments where
[37:44] (2264.56s)
they couldn't expect to get
[37:46] (2266.48s)
a lot of useful input from other people.
[37:48] (2268.24s)
But when you're working at the very
[37:49] (2269.36s)
highest level with very very smart
[37:51] (2271.16s)
people, you
[37:52] (2272.84s)
know, two minds are better than one
[37:55] (2275.36s)
clearly. So I'm all for it. Again, you
[37:57] (2277.52s)
know, you don't want to get caught
[37:58] (2278.88s)
in analysis paralysis and so trying to
[38:02] (2282.88s)
figure out what's the right place to do
[38:04] (2284.56s)
it, how much time to spend on it. There's a
[38:06] (2286.08s)
certain art to getting just the right
[38:07] (2287.44s)
level, but I'm all for that. And I think
[38:10] (2290.64s)
this might have changed in the last few
[38:12] (2292.08s)
years because of remote work. You know,
[38:13] (2293.44s)
we've had an explosion of remote work
[38:14] (2294.88s)
and now it's a bit of a contraction. But
[38:16] (2296.72s)
I remember before 2020, some of the most
[38:21] (2301.04s)
innovative companies when you went into
[38:22] (2302.88s)
their offices, or startups that were gaining
[38:24] (2304.56s)
momentum, what they had is whiteboards
[38:27] (2307.52s)
and these erasable whiteboards
[38:29] (2309.92s)
everywhere. And what would happen in
[38:32] (2312.00s)
these places, and I even saw this at one
[38:33] (2313.68s)
of my older teams where we actually
[38:35] (2315.60s)
requested a whiteboard, is people, you
[38:37] (2317.76s)
know, are just doing their thing,
[38:39] (2319.28s)
someone says something, you're like hold
[38:40] (2320.96s)
on and then you go to the whiteboard you
[38:43] (2323.04s)
start whiteboarding and you know you can
[38:44] (2324.72s)
erase it sometimes you leave it there
[38:46] (2326.40s)
sometimes you photograph it. Basically,
[38:48] (2328.32s)
it's just on demand: when there's
[38:50] (2330.56s)
a trigger, and it could be any trigger,
[38:52] (2332.32s)
it could be just a mishearing, you just
[38:55] (2335.20s)
have people come together, share their ideas,
[38:57] (2337.28s)
and there's something special about uh
[39:00] (2340.00s)
there have been startups who have tried to
[39:01] (2341.44s)
replicate this digitally. But there's
[39:02] (2342.72s)
something special about in person,
[39:04] (2344.56s)
just, you know, getting
[39:06] (2346.88s)
your ideas out there especially when
[39:08] (2348.88s)
we're talking about something like
[39:09] (2349.92s)
software, where, you know, boxes and
[39:11] (2351.68s)
arrows do help. I totally agree. I love
[39:14] (2354.48s)
whiteboards and I like having meetings
[39:16] (2356.72s)
that are in person. I've also over my
[39:19] (2359.44s)
career developed a technique for using
[39:22] (2362.52s)
whiteboards to resolve complex issues
[39:25] (2365.36s)
which may or may not even be technical
[39:27] (2367.52s)
other kinds of management issues in a
[39:29] (2369.20s)
company. And what I found is that often
[39:31] (2371.52s)
times people get in meetings and they
[39:33] (2373.04s)
just kind of talk past each other,
[39:34] (2374.88s)
repeat the same arguments and
[39:36] (2376.48s)
counter arguments and it just goes round
[39:38] (2378.88s)
and round and round and never reaches a
[39:40] (2380.32s)
conclusion. So what I do in these
[39:42] (2382.76s)
situations is I stand at the whiteboard
[39:46] (2386.04s)
and basically, typically we're
[39:49] (2389.44s)
arguing for or against something, and I just
[39:51] (2391.20s)
list all the arguments for and all the
[39:52] (2392.88s)
arguments against. And the rule of the
[39:55] (2395.52s)
discussion is you can make any argument
[39:58] (2398.08s)
you think is reasonable. No one is
[40:00] (2400.64s)
allowed to tell you your argument has to
[40:02] (2402.64s)
be removed because it's a bad argument. Every
[40:04] (2404.16s)
argument goes on the board. But you are
[40:06] (2406.80s)
not allowed to repeat an argument that
[40:08] (2408.64s)
is already on the
[40:10] (2410.68s)
board. And this is really important. So
[40:13] (2413.12s)
having the arguments up so everybody can
[40:14] (2414.80s)
see. Everybody's argument is valid. You
[40:16] (2416.80s)
know, everyone's allowed to contribute.
[40:18] (2418.72s)
No one can stop you from putting your
[40:20] (2420.40s)
argument on the board. So everybody's
[40:21] (2421.92s)
arguments go up and then the
[40:23] (2423.76s)
discussion just naturally reaches a
[40:25] (2425.36s)
point where nobody has anything more to
[40:26] (2426.72s)
say. And so this shortens the discussion
[40:29] (2429.92s)
and then what I do after that is I take
[40:31] (2431.68s)
a straw poll. And what's amazing to me
[40:36] (2436.40s)
and you should try this sometime because
[40:37] (2437.92s)
it's really quite shocking, is that
[40:40] (2440.56s)
people will be arguing violently on both
[40:42] (2442.80s)
sides and you'll think there's total
[40:44] (2444.92s)
disagreement, and then at the end
[40:46] (2446.96s)
we do a straw poll. We tell
[40:48] (2448.40s)
everybody, you get to weight these
[40:50] (2450.24s)
arguments in any way that you think is
[40:52] (2452.08s)
appropriate. You decide which ones you
[40:53] (2453.60s)
value, which ones you don't value.
[40:55] (2455.44s)
Should we do A or B? We almost always
[40:58] (2458.40s)
end up with a really really strong
[41:00] (2460.08s)
consensus. It's shocking. Like I'll
[41:02] (2462.40s)
be in these meetings where I
[41:03] (2463.68s)
think we have total disagreement here
[41:05] (2465.76s)
and then we do the vote and it's
[41:07] (2467.84s)
unanimous. Well, I have not
[41:10] (2470.56s)
done it, so, you know, it sounds like
[41:12] (2472.00s)
one of those things where I almost
[41:13] (2473.12s)
wouldn't believe you. I'm
[41:14] (2474.32s)
sorry, I do believe you. But yeah, as
[41:16] (2476.64s)
you said, I think, you know, I'm
[41:18] (2478.16s)
going to try this out at
[41:19] (2479.60s)
the whiteboard, I think. Yeah.
[41:22] (2482.32s)
And I think anyone listening or
[41:24] (2484.08s)
watching, this is why I
[41:26] (2486.56s)
love these discussions: just
[41:28] (2488.00s)
getting a tactic that you can try out,
[41:30] (2490.16s)
you know. And now that more and more
[41:32] (2492.80s)
companies are doing at least a few
[41:34] (2494.32s)
days in the office, you get a
[41:35] (2495.28s)
whiteboard, try it out. By the way, this
[41:37] (2497.52s)
works best in environments where people
[41:39] (2499.64s)
are first generally pretty smart and
[41:42] (2502.40s)
reasonable and goal aligned like you
[41:44] (2504.16s)
know in a startup for example or an
[41:45] (2505.68s)
engineering team. You're typically goal
[41:47] (2507.36s)
aligned. You all want to achieve the
[41:48] (2508.88s)
same result. You know would this work in
[41:51] (2511.20s)
Congress with opposing political
[41:53] (2513.20s)
parties? No. No. Not in political
[41:55] (2515.04s)
environments. Yeah. Fortunately, we
[41:56] (2516.80s)
don't operate mostly in those
[41:58] (2518.00s)
environments. Well, yeah. And I
[42:00] (2520.64s)
think you know one nice thing about how
[42:02] (2522.16s)
the tech industry seems to be evolving
[42:03] (2523.92s)
from my perspective is it's more and
[42:06] (2526.16s)
more common and accepted even at large
[42:08] (2528.32s)
companies that you do want teams to have
[42:10] (2530.00s)
clear goals, to have those goals,
[42:11] (2531.84s)
because if you don't have them, you know,
[42:13] (2533.52s)
everyone's going to go in different
[42:14] (2534.56s)
directions. So I think it's becoming
[42:16] (2536.00s)
more common; also, companies that don't do
[42:17] (2537.60s)
it fizzle out pretty quickly.
[42:19] (2539.88s)
So, similar question on design: there's
[42:23] (2543.84s)
two schools of thought when it comes
[42:25] (2545.76s)
to design. One is let's design up front
[42:27] (2547.36s)
and the other one is let's not
[42:28] (2548.64s)
design up front, let's just do
[42:29] (2549.68s)
prototyping or just, you know, build
[42:31] (2551.20s)
something and let the code decide.
[42:33] (2553.36s)
Do you subscribe to either of these
[42:35] (2555.04s)
approaches? You know, taking time
[42:36] (2556.88s)
upfront or just dropping in and
[42:39] (2559.20s)
prototyping, or have you seen
[42:41] (2561.52s)
times or types of projects where one
[42:43] (2563.68s)
just works better than the other? My
[42:46] (2566.16s)
personal belief is that design permeates
[42:48] (2568.80s)
the entire development process. You do
[42:50] (2570.96s)
it up front, you do it while you're
[42:52] (2572.56s)
coding, you do it while you're testing,
[42:54] (2574.16s)
you do it while you're fixing bugs;
[42:56] (2576.08s)
you should constantly be thinking about
[42:58] (2578.28s)
design. I think you should
[43:02] (2582.16s)
always do at least a little bit of
[43:03] (2583.60s)
design up front. Again, you know, not
[43:06] (2586.16s)
the waterfall method. It's important
[43:09] (2589.12s)
to realize that our software systems are
[43:11] (2591.20s)
so complicated that we are not able to
[43:14] (2594.64s)
predict the consequences of our design
[43:16] (2596.40s)
decisions. We simply can't. But I
[43:19] (2599.04s)
think it's really important to do some
[43:21] (2601.04s)
design and to have some hypotheses to
[43:23] (2603.68s)
work from and then you start
[43:25] (2605.92s)
implementing and of course you know once
[43:27] (2607.92s)
you get into battle, all your plans kind of
[43:29] (2609.68s)
fall apart and you'll discover lots of
[43:31] (2611.20s)
problems with it and so you have to be
[43:33] (2613.04s)
prepared to revise as soon as you
[43:35] (2615.12s)
discover problems. But if you don't do
[43:37] (2617.04s)
any design up front I think you're
[43:38] (2618.48s)
wasting your time on code that's
[43:41] (2621.04s)
just very unlikely to be useful. The
[43:44] (2624.16s)
only time I'd argue for coding
[43:45] (2625.68s)
without design is if you're so young and
[43:47] (2627.60s)
inexperienced you really have no clue
[43:49] (2629.28s)
how to do design. Yeah. Then you know
[43:51] (2631.36s)
you may just have to write some code and
[43:52] (2632.80s)
start learning from it. But in any
[43:55] (2635.04s)
case, absolutely be prepared to redesign
[43:57] (2637.84s)
you know as you discover problems.
[44:00] (2640.32s)
Yeah. Like I had one of my colleagues a
[44:03] (2643.36s)
long time ago many years ago. He said
[44:06] (2646.24s)
something super interesting because we
[44:07] (2647.52s)
were talking about why software is hard
[44:09] (2649.44s)
and we were experienced engineers at
[44:10] (2650.80s)
this point. Like why do some projects
[44:12] (2652.24s)
take a lot longer even if we do proper
[44:14] (2654.16s)
planning up front? Why are some projects
[44:16] (2656.48s)
actually, you know,
[44:17] (2657.44s)
pretty easy? And he said something
[44:18] (2658.56s)
interesting. He's like, building software
[44:20] (2660.64s)
is a little bit like
[44:22] (2662.40s)
we're in a terrain, you know,
[44:25] (2665.12s)
and we need to march to that location.
[44:27] (2667.68s)
We see the target, but
[44:29] (2669.84s)
the terrain is unknown. And sometimes
[44:31] (2671.76s)
you just walk there and it's exactly how
[44:33] (2673.84s)
you'd expect. Other times you're walking
[44:36] (2676.00s)
and, oh, suddenly a big kind of
[44:37] (2677.84s)
boulder appears out of nowhere and you
[44:39] (2679.60s)
kind of climb over it. Then another one
[44:41] (2681.44s)
comes. Sometimes you walk and like a
[44:43] (2683.44s)
mine explodes right in front of you and
[44:45] (2685.12s)
suddenly you have this massive hole and
[44:46] (2686.96s)
you didn't know this was a
[44:48] (2688.80s)
minefield. And he said, and this
[44:52] (2692.20s)
analogy kind of resonated with me
[44:54] (2694.88s)
because I realized a lot of it is about
[44:57] (2697.04s)
like how many of the unknowns are
[44:59] (2699.48s)
unknown, and the approach is just very
[45:01] (2701.92s)
different right like if you're building
[45:03] (2703.20s)
a tech stack or product that you've done
[45:05] (2705.04s)
before versus something that is
[45:06] (2706.80s)
completely new, if you are using new
[45:08] (2708.48s)
technology, using a beta version of
[45:10] (2710.64s)
a framework that could have issues, but
[45:12] (2712.32s)
maybe it'll work and so on. Yeah, I
[45:16] (2716.08s)
think a lot of it has to do with whether
[45:17] (2717.44s)
you're in a domain you've been in before
[45:19] (2719.20s)
or not. It's not that you got lucky and
[45:21] (2721.84s)
there was a flat field. I think it's
[45:23] (2723.76s)
more that, oh, I've
[45:25] (2725.52s)
actually walked through this mountain
[45:26] (2726.80s)
range before and so I know the right
[45:28] (2728.28s)
paths. And so, you know, if you're
[45:30] (2730.80s)
building a
[45:33] (2733.04s)
driver for a new device and you've built
[45:34] (2734.72s)
drivers for five devices before that,
[45:36] (2736.64s)
you kind of know what the main problems
[45:38] (2738.40s)
are. And the process is much more
[45:40] (2740.64s)
predictable and smooth. If you're in a
[45:43] (2743.40s)
new environment, well, I don't know,
[45:45] (2745.28s)
maybe I'm just unlucky, but I don't
[45:47] (2747.12s)
think I've ever had a smooth experience
[45:48] (2748.64s)
when I'm in a new environment. It's
[45:50] (2750.56s)
just too hard to predict.
[45:52] (2752.80s)
Neither have I. Like the only kind of
[45:54] (2754.64s)
new environment, well, it's not really a
[45:56] (2756.00s)
new environment, is like a new version of
[45:57] (2757.36s)
the framework and it's not a big
[45:58] (2758.96s)
change, or, you know, a new language
[46:02] (2762.08s)
release which is backwards compatible
[46:03] (2763.92s)
with all the important stuff and you're
[46:05] (2765.36s)
not using the new features, that kind of
[46:06] (2766.80s)
stuff. But it's kind of fake, right,
[46:08] (2768.08s)
it's kind of the same as
[46:11] (2771.80s)
before. One thing I'd like to get
[46:13] (2773.76s)
into is the writing of this book,
[46:15] (2775.44s)
because, again, this book
[46:19] (2779.28s)
feels to me like it has just a lot of really
[46:19] (2779.28s)
kind of practical insights.
[46:21] (2781.92s)
Now first of all, I haven't seen many
[46:24] (2784.24s)
books written about software design or
[46:26] (2786.32s)
software architecture
[46:28] (2788.04s)
specifically. How did you get the idea
[46:30] (2790.32s)
to even start writing a book about this?
[46:33] (2793.68s)
You know, it's a pretty hard to
[46:35] (2795.44s)
tackle domain. It's kind of a long
[46:38] (2798.00s)
process but the path leads through the
[46:40] (2800.24s)
course I taught at Stanford. So the
[46:43] (2803.04s)
background for this is, as I
[46:45] (2805.04s)
mentioned earlier I love coding. I'm one
[46:46] (2806.88s)
of the few professors that still writes
[46:49] (2809.12s)
large amounts of code. You know, I try
[46:50] (2810.40s)
to write five or ten thousand lines of code a
[46:50] (2810.40s)
year at least. A significant
[46:53] (2813.28s)
fraction of my time has been coding. And I love
[46:56] (2816.08s)
design. I think design is one of the
[46:57] (2817.84s)
most amazing
[47:00] (2820.68s)
creative forms that has ever existed in
[47:03] (2823.60s)
the history of humankind. It's just a
[47:05] (2825.20s)
really fascinating, beautiful,
[47:08] (2828.44s)
challenging domain. And so I've often
[47:11] (2831.28s)
thought about it, and I try to
[47:12] (2832.80s)
think, what is a good design, and how do I
[47:15] (2835.20s)
design software that's
[47:16] (2836.72s)
good? What are the techniques for
[47:18] (2838.16s)
that? But over time, I noticed to my
[47:22] (2842.28s)
shock, nobody teaches it. Literally,
[47:25] (2845.28s)
other than the course I eventually
[47:26] (2846.56s)
started at Stanford, I don't think
[47:27] (2847.68s)
there's a single course anywhere in the
[47:29] (2849.64s)
world where software design, not tools
[47:32] (2852.80s)
or processes, but the act of software
[47:34] (2854.32s)
design is the primary element of the
[47:35] (2855.88s)
course. And so this just kind of grated
[47:39] (2859.36s)
on me for I don't know a decade or more.
[47:41] (2861.88s)
And once I got back to academia at
[47:44] (2864.32s)
Stanford, I now had this
[47:45] (2865.52s)
experience in industry. I'd had a lot
[47:47] (2867.12s)
more software development experience and
[47:48] (2868.72s)
more ideas myself. I started thinking
[47:51] (2871.28s)
about this and I finally decided I'm
[47:52] (2872.72s)
just going to, who knows what's going to
[47:55] (2875.04s)
happen, but I'm just going to try
[47:56] (2876.08s)
teaching a course and see what I can do. And
[47:58] (2878.56s)
so that forced me to start thinking
[48:00] (2880.40s)
about my ideas and try and capture them
[48:02] (2882.56s)
in sort of simple principles. You said
[48:04] (2884.96s)
no one teaches design and, you know,
[48:06] (2886.80s)
I'll tell you the perspective from
[48:09] (2889.92s)
someone working at a tech company:
[48:12] (2892.40s)
usually we don't interview new grads or
[48:15] (2895.68s)
people with a few years' experience on
[48:17] (2897.04s)
software architecture topics;
[48:18] (2898.64s)
usually we interview on coding, and then
[48:21] (2901.60s)
software architecture or designing a
[48:24] (2904.00s)
complex system, and the reason being, well,
[48:24] (2904.00s)
you need to work in the industry for
[48:25] (2905.52s)
several years to start to get a sense
[48:27] (2907.76s)
for this thing. But I have a
[48:29] (2909.92s)
feeling you, you know, turned
[48:31] (2911.68s)
this upside down. You've
[48:33] (2913.20s)
kind of proven, or, well, you're attempting
[48:35] (2915.28s)
to show with the class that design
[48:38] (2918.16s)
can be taught even for students who have
[48:41] (2921.76s)
recently learned to code. What was
[48:44] (2924.96s)
your approach, and what has the
[48:46] (2926.48s)
response been from the people
[48:47] (2927.76s)
who've done it, especially now that, you
[48:49] (2929.52s)
know, some of those folks have been
[48:51] (2931.20s)
out in the industry for years, and I'm sure
[48:52] (2932.96s)
they're coming back with some of their
[48:54] (2934.24s)
experience because they now have a skill
[48:55] (2935.76s)
set that you know a lot of their peers
[48:57] (2937.36s)
do not. Yeah. So I decided to teach
[49:00] (2940.16s)
this class and then the question is well
[49:01] (2941.68s)
how do I teach it? And the model I used
[49:04] (2944.72s)
was my English writing classes from high
[49:06] (2946.80s)
school. I don't know if this is the way
[49:08] (2948.00s)
it's still done today but back when I
[49:09] (2949.36s)
took English the way you learn to write
[49:11] (2951.04s)
is you would get an assignment you'd
[49:12] (2952.48s)
write something your teacher would mark
[49:14] (2954.40s)
it up and then you would rewrite it and
[49:15] (2955.92s)
submit it again, and you might do
[49:17] (2957.20s)
several iterations on it. And I
[49:19] (2959.52s)
think the key elements are first getting
[49:22] (2962.64s)
feedback so you have somebody that can
[49:25] (2965.52s)
criticize your work and then second
[49:28] (2968.88s)
redoing to incorporate that feedback and
[49:31] (2971.04s)
it's the process of redoing something I
[49:33] (2973.92s)
think where you really internalize the
[49:37] (2977.44s)
feedback and the concepts. So that's the
[49:39] (2979.92s)
way the class is taught. Over the
[49:41] (2981.36s)
course of a quarter, people do basically
[49:42] (2982.88s)
three stages where they build something
[49:44] (2984.24s)
significant and then we do extensive
[49:45] (2985.92s)
code reviews. For example, I read every
[49:47] (2987.68s)
line of code for all the teams on the
[49:49] (2989.20s)
class. What do
[49:51] (2991.36s)
people build? What is the project or
[49:53] (2993.12s)
what was one of the projects? So
[49:54] (2994.88s)
actually the first two projects happen
[49:56] (2996.08s)
to be the Raft consensus protocol
[49:57] (2997.60s)
divided up into two phases, and that
[49:59] (2999.44s)
actually turns out to have some really
[50:00] (3000.72s)
interesting design problems that
[50:02] (3002.72s)
confound students and lead to lots of
[50:04] (3004.32s)
mistakes. So it's this process.
[50:06] (3006.72s)
And when they start off, I give them no
[50:08] (3008.16s)
clues, no hints, no structure. They have
[50:10] (3010.32s)
to do it completely from scratch. This
[50:12] (3012.24s)
is the first time in their lives they've
[50:13] (3013.92s)
ever done anything like that. So it's
[50:16] (3016.00s)
actually both very fun and very
[50:17] (3017.92s)
scary for the
[50:19] (3019.24s)
students. And then we do this
[50:22] (3022.32s)
extensive code review where they review
[50:23] (3023.84s)
it in class and other students read
[50:25] (3025.44s)
their projects and give feedback. And
[50:27] (3027.36s)
then I read every line of code and I
[50:29] (3029.12s)
spend about an hour with each team. They
[50:30] (3030.80s)
typically get 50 to 100 comments from me
[50:33] (3033.36s)
on the two to three thousand lines
[50:35] (3035.36s)
of code they've written. And then they
[50:36] (3036.80s)
go back and they rework. And I think
[50:39] (3039.12s)
that's when the aha moments come for
[50:40] (3040.96s)
students when they get the feedback from
[50:43] (3043.04s)
me, where I can point out the code is
[50:45] (3045.68s)
complicated. People generally know that
[50:47] (3047.84s)
code is complicated. But then I can
[50:48] (3048.96s)
point out here's why it's complicated
[50:51] (3051.04s)
because you didn't follow this principle
[50:53] (3053.12s)
that we've been talking about and here's
[50:55] (3055.52s)
how if you apply the principle you can
[50:58] (3058.08s)
make it simpler. And then they come back
[51:00] (3060.56s)
with their second projects and the
[51:02] (3062.00s)
second projects are so much better. And
[51:04] (3064.00s)
you could tell the
[51:06] (3066.56s)
students are really excited because they
[51:09] (3069.04s)
can kind of feel their power. They can
[51:11] (3071.12s)
feel that I was able to make this a lot
[51:13] (3073.72s)
better. And so, you know, I was worried
[51:16] (3076.80s)
about how the students would react to
[51:18] (3078.00s)
the class, but it's an extremely positive
[51:20] (3080.56s)
experience for the students. You can
[51:22] (3082.24s)
tell over the course of the quarter the
[51:24] (3084.08s)
way they think about software has
[51:25] (3085.20s)
changed really in significant ways. And
[51:28] (3088.40s)
actually, I warn them at the end of
[51:29] (3089.84s)
the quarter I say you know I just want
[51:31] (3091.44s)
to warn you when you go into companies
[51:33] (3093.28s)
you're going to find you know a lot of
[51:34] (3094.56s)
stuff that people in the companies don't
[51:35] (3095.84s)
know even much more senior engineers. So
[51:38] (3098.56s)
then we talk about how you deal
[51:40] (3100.16s)
with that because as a junior employee
[51:41] (3101.76s)
you may not have much ability to change
[51:43] (3103.12s)
the company. But anyhow, I
[51:46] (3106.88s)
totally believe design can be taught.
[51:49] (3109.36s)
Now, you still need experience. You
[51:50] (3110.80s)
know, it's not like you're
[51:51] (3111.92s)
gonna be a world-class programmer
[51:53] (3113.20s)
after one quarter in my course, but I
[51:56] (3116.08s)
think it can start the process
[51:57] (3117.84s)
in motion and give you a new way of
[51:59] (3119.68s)
thinking about software that you've
[52:00] (3120.88s)
never thought about in your classes up
[52:02] (3122.16s)
until now. Yeah. And I think you can
[52:04] (3124.72s)
probably like my sense is that the
[52:06] (3126.88s)
students who go through this class,
[52:09] (3129.76s)
and, you know, they build up
[52:11] (3131.28s)
their experience. They also learn about
[52:13] (3133.20s)
themselves. They learn how they can
[52:14] (3134.96s)
challenge themselves, how they can talk
[52:16] (3136.32s)
about it, compared to the route
[52:18] (3138.48s)
other software developers take, you
[52:20] (3140.64s)
know, like when you enter the
[52:22] (3142.24s)
industry, like in the first few years,
[52:23] (3143.60s)
you're just going to like learn the hard
[52:25] (3145.44s)
way. Like I remember when I was a
[52:28] (3148.16s)
new grad or a junior developer, you
[52:30] (3150.64s)
know, starting out of college, or I was
[52:32] (3152.56s)
actually still doing college, but I was
[52:34] (3154.24s)
working at a workplace and a senior
[52:36] (3156.32s)
developer told me, I need to do this,
[52:38] (3158.08s)
you know, the planning, we're going to
[52:39] (3159.84s)
use this technology, you need to
[52:41] (3161.28s)
implement this and this. And I just had
[52:43] (3163.44s)
a bad feeling about it, but I did it.
[52:46] (3166.00s)
And it just became worse and worse
[52:48] (3168.00s)
and more and more complicated. And
[52:49] (3169.76s)
the solution, it was some
[52:52] (3172.00s)
in-house database and it had performance
[52:53] (3173.84s)
issues and it got bad. It got really bad
[52:56] (3176.48s)
and, you know, the senior
[52:57] (3177.92s)
developers stepped away and in the end
[53:00] (3180.20s)
like weeks into the project, the
[53:03] (3183.04s)
customer was frustrated because like a
[53:04] (3184.88s)
page load took 15 seconds, which
[53:08] (3188.48s)
was not okay. And then
[53:10] (3190.80s)
I just, over a weekend, I
[53:13] (3193.28s)
must have worked for, I
[53:14] (3194.72s)
don't know, 20-plus hours. I just rewrote
[53:16] (3196.48s)
the whole thing the way I would have
[53:18] (3198.88s)
done it, and all of the things were
[53:21] (3201.04s)
fixed, etc. But it felt to me like I
[53:24] (3204.48s)
was thinking, am
[53:26] (3206.48s)
I being crazy, like this experienced
[53:28] (3208.96s)
person told me to do this? I just didn't
[53:30] (3210.72s)
really have the vocabulary. I didn't
[53:32] (3212.64s)
have the faith in myself as well
[53:35] (3215.76s)
that I can do this. And everyone
[53:39] (3219.20s)
goes through these things eventually,
[53:40] (3220.48s)
right? Like you kind of learn, you
[53:42] (3222.16s)
burn yourself, but I feel you might be
[53:44] (3224.08s)
giving those students a bit of a
[53:45] (3225.28s)
shortcut to maybe avoid some of
[53:48] (3228.32s)
these pitfalls. I suspect that
[53:50] (3230.16s)
experience was really good for you. Oh,
[53:52] (3232.24s)
it was really good. Totally. To do it
[53:53] (3233.92s)
the wrong way. Yes. And see your
[53:57] (3237.52s)
gut feeling validated. Yep. So next
[54:00] (3240.80s)
time around, you know, you can probably
[54:03] (3243.36s)
kind of put some words to that and talk
[54:05] (3245.84s)
with more authority about why
[54:09] (3249.20s)
that's a problem and then going back and
[54:11] (3251.12s)
doing it the right way and seeing how
[54:12] (3252.32s)
much better it is. I mean that must
[54:13] (3253.44s)
have been a really great experience for
[54:14] (3254.64s)
you. Like the cool thing is
[54:16] (3256.64s)
that I think all these things are like
[54:18] (3258.16s)
really good learnings. Like, maybe I
[54:19] (3259.28s)
could have done without the suffering,
[54:20] (3260.88s)
but yeah, I feel that sometimes
[54:23] (3263.52s)
that's the most memorable learning. No.
[54:25] (3265.44s)
Uh honestly I think almost all learning
[54:29] (3269.12s)
is about making mistakes. Yeah. The
[54:31] (3271.52s)
most powerful ways to
[54:34] (3274.04s)
learn are to make
[54:36] (3276.60s)
mistakes, understand why they are
[54:38] (3278.96s)
mistakes and then fix them. And to me,
[54:41] (3281.76s)
education is basically about
[54:44] (3284.64s)
creating a safe space where people can
[54:46] (3286.64s)
make mistakes and learn from them. And I
[54:48] (3288.56s)
tell the students in the class about
[54:49] (3289.60s)
this. I tell them, you know, you're
[54:51] (3291.20s)
going to get a tremendous amount of
[54:52] (3292.24s)
criticism. I tell them, I am going to
[54:54] (3294.00s)
pick every single nit with your code.
[54:57] (3297.20s)
You're gonna
[54:58] (3298.24s)
probably be mad at me for things I'm
[54:59] (3299.44s)
picking, but I'm going to show you every
[55:00] (3300.72s)
single thing that's wrong because I want
[55:01] (3301.92s)
you to be aware of that. And one of
[55:04] (3304.24s)
the things I hope students get out of
[55:05] (3305.60s)
the class is realizing that's a really
[55:08] (3308.08s)
productive experience. It's actually
[55:09] (3309.68s)
good for me to have people come in and
[55:11] (3311.12s)
challenge me, scrutinize my code, and
[55:13] (3313.44s)
give me feedback because I think a lot
[55:15] (3315.12s)
of developers out there are kind of
[55:17] (3317.56s)
sensitive. Maybe they worry that
[55:20] (3320.80s)
if somebody criticizes their code, maybe
[55:22] (3322.64s)
that means they weren't such a good
[55:23] (3323.80s)
coder. Or maybe they think if I have to
[55:26] (3326.40s)
come up with a second idea, maybe I'm
[55:27] (3327.84s)
not that smart. Only dumb people
[55:30] (3330.48s)
have to do things twice. Smart people
[55:32] (3332.08s)
always get things right the first time.
[55:33] (3333.44s)
And so then you latch on to this: I
[55:35] (3335.12s)
can't ever admit that my first idea
[55:36] (3336.40s)
isn't great. And so I think the
[55:38] (3338.24s)
whole idea of making
[55:40] (3340.16s)
mistakes is so important and so
[55:42] (3342.56s)
constructive. We need to, you know, we
[55:44] (3344.88s)
need to honor that, I think, as
[55:46] (3346.00s)
engineers. Slightly different topic, but
[55:48] (3348.96s)
you and Uncle Bob, Robert Martin, had an
[55:51] (3351.92s)
interesting discussion about his book
[55:53] (3353.36s)
Clean Code online, where you both
[55:56] (3356.08s)
wrote your thoughts on
[55:59] (3359.20s)
different parts and there were a few
[56:00] (3360.80s)
parts where your opinions diverged,
[56:04] (3364.56s)
and I was interested in getting into
[56:06] (3366.40s)
some of these. So, first was on short
[56:09] (3369.20s)
methods and methods doing just one
[56:11] (3371.36s)
thing. You know, Robert Martin is a
[56:13] (3373.76s)
big fan of doing so. He's saying this
[56:16] (3376.40s)
will lead to cleaner code, clear
[56:18] (3378.08s)
responsibilities and so on. And you had
[56:20] (3380.24s)
a slightly different take on this. Boy,
[56:23] (3383.20s)
I sure do. So I mentioned earlier on
[56:25] (3385.92s)
that design is about tradeoffs and if
[56:28] (3388.72s)
you take any idea and take it to the
[56:31] (3391.32s)
extreme, you end up in a bad place. And
[56:34] (3394.40s)
so I think Clean Code has done that in
[56:36] (3396.40s)
many places. And this was my
[56:39] (3399.68s)
biggest overall concern. And so method
[56:41] (3401.52s)
size is one of them. So, of course, I
[56:44] (3404.08s)
think we agree that really long methods
[56:47] (3407.04s)
are probably harder to understand and
[56:48] (3408.80s)
deal with than shorter methods. So,
[56:50] (3410.40s)
there's some value in
[56:53] (3413.24s)
shortness, but it isn't the method
[56:55] (3415.84s)
length per se that's most important to
[56:57] (3417.84s)
me. It's this notion of depth. Are we
[56:59] (3419.52s)
hiding complexity?
[57:01] (3421.84s)
And so, what happened in Clean Code is
[57:04] (3424.64s)
that shortness was taken as an
[57:07] (3427.12s)
absolute good with no limits on it. The
[57:09] (3429.44s)
shorter the better. And so a
[57:12] (3432.00s)
three-line method is better than a
[57:13] (3433.44s)
five-line method according to Clean Code.
[57:16] (3436.24s)
And the problem with that is
[57:18] (3438.60s)
that well maybe that one method is a
[57:21] (3441.92s)
little bit easier to understand but by
[57:24] (3444.08s)
having shorter methods you now have a
[57:25] (3445.92s)
lot more methods and so you have a lot
[57:28] (3448.16s)
more interfaces and now the complexity
[57:30] (3450.88s)
of the interfaces ends up actually
[57:33] (3453.60s)
making the system more complicated than
[57:35] (3455.60s)
it was before. And in particular, he
[57:40] (3460.24s)
is comfortable, if methods
[57:43] (3463.52s)
are entangled, having multiple short
[57:45] (3465.84s)
methods where in fact they're so
[57:47] (3467.20s)
entangled that you can't understand one
[57:49] (3469.20s)
method without actually looking at the
[57:50] (3470.96s)
code of the other methods at the same
[57:52] (3472.24s)
time. Yeah. So to
[57:52] (3472.24s)
me, I see no benefit of that. I think to
[57:54] (3474.80s)
me that has made things worse, not better.
[57:56] (3476.24s)
If things are really related, you're
[58:00] (3480.32s)
better off pulling them together. So
[58:03] (3483.36s)
the thing about the notion of depth
[58:05] (3485.28s)
is it kind of captures the trade-off:
[58:07] (3487.76s)
we want to have a lot of
[58:10] (3490.16s)
functionality, but it needs to have a
[58:11] (3491.84s)
relatively simple interface. And so, I
[58:14] (3494.40s)
think that
[58:15] (3495.40s)
notion will keep you from going astray
[58:19] (3499.04s)
in either one direction or the other.
[58:20] (3500.80s)
Whereas, for example, the
[58:23] (3503.60s)
single responsibility principle, the do
[58:25] (3505.44s)
one thing principle, just pushes you
[58:27] (3507.68s)
relentlessly in one direction without
[58:29] (3509.68s)
any bounds, and you get in trouble.
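To make that trade-off concrete, here is a minimal Python sketch; the price-calculation scenario and every name in it are invented for illustration and are not from the book or this conversation. The first class splits a small calculation across tiny, entangled helper methods that can only be understood together, so the class gains extra interfaces without hiding anything; the second keeps the same functionality behind one slightly longer method with a single simple interface, which is roughly what a deeper module looks like.

```python
# Over-decomposed version: each method is short, but the helpers are
# entangled -- you cannot understand one without reading the others, and
# the class now has several extra interfaces for a reader to learn.
class PriceCalculatorShallow:
    def total(self, items):
        return self._apply_discount(self._subtotal(items), items)

    def _subtotal(self, items):
        return sum(price * qty for price, qty in items)

    def _apply_discount(self, subtotal, items):
        return subtotal * (1 - self._discount_rate(items))

    def _discount_rate(self, items):
        # Only meaningful in combination with _apply_discount above.
        return 0.1 if sum(qty for _, qty in items) >= 10 else 0.0


# Deeper version: the same functionality behind one simple interface.
# The method is a few lines longer, but it reads top to bottom and there
# is only one thing for a caller (or reader) to understand.
class PriceCalculatorDeep:
    def total(self, items):
        """Total price for (price, quantity) pairs, with a 10% discount
        on orders of ten or more units."""
        subtotal = sum(price * qty for price, qty in items)
        units = sum(qty for _, qty in items)
        discount = 0.1 if units >= 10 else 0.0
        return subtotal * (1 - discount)


if __name__ == "__main__":
    items = [(5.0, 4), (2.5, 8)]
    assert PriceCalculatorShallow().total(items) == PriceCalculatorDeep().total(items)
```

Both classes compute the same totals; the difference is only in how much a reader has to hold in their head at once.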
[58:32] (3512.48s)
And it's interesting. You know, you
[58:34] (3514.80s)
talked about methods being short,
[58:38] (3518.24s)
doing one thing and you can have a lot
[58:39] (3519.76s)
of them or you can like group them and
[58:41] (3521.52s)
have bigger methods that do more
[58:43] (3523.04s)
things but are more sensible. I was
[58:45] (3525.28s)
thinking as you're talking about this in
[58:47] (3527.28s)
the industry we've had a similar thing
[58:48] (3528.72s)
with microservices. So I happened to
[58:51] (3531.12s)
work at Uber at the time when it was
[58:52] (3532.88s)
popularized that the company had more
[58:54] (3534.72s)
than 5,000 microservices and we had a
[58:56] (3536.72s)
lot of really small microservices and
[58:58] (3538.80s)
what happened over the years is at least
[59:01] (3541.52s)
in my area, I cannot speak for the whole
[59:03] (3543.04s)
company but I can talk about the domain
[59:04] (3544.56s)
I was in, which was the payments
[59:04] (3544.56s)
domain: we had a lot of small services
[59:08] (3548.00s)
because they were easy to spin up. They
[59:09] (3549.68s)
did like one thing or two things or a
[59:11] (3551.20s)
few things and after a while we realized
[59:13] (3553.04s)
like, huh, it's just really tough to
[59:16] (3556.08s)
maintain, to know what lives where. A
[59:18] (3558.40s)
lot of communication back and forth. So
[59:19] (3559.84s)
we started to just pull them together
[59:21] (3561.20s)
and create, I would say,
[59:23] (3563.24s)
midsize services that kind of had a
[59:25] (3565.76s)
domain responsibility and it wasn't
[59:29] (3569.28s)
either extreme. But I think we saw
[59:30] (3570.80s)
that one extreme sounds good, it
[59:33] (3573.68s)
sounded really good by the way initially
[59:35] (3575.36s)
and it has benefits but over time it
[59:37] (3577.28s)
just wasn't as practical. So it's
[59:39] (3579.84s)
somewhat similar to what
[59:41] (3581.52s)
you say: extremes on one end are not
[59:43] (3583.92s)
great. It's not great to have like one
[59:45] (3585.20s)
massive service that does everything.
[59:47] (3587.28s)
You know, Uber used to have that. It
[59:49] (3589.12s)
was called API and it did everything and
[59:51] (3591.28s)
then it got broken into smaller things
[59:53] (3593.12s)
and then it came back into like kind of
[59:54] (3594.96s)
domains or reasonably sized ones. So
[59:59] (3599.04s)
you could actually keep track in your
[59:59] (3599.04s)
head. You could understand here are the
[60:00] (3600.96s)
different parts that are there. Okay,
[60:02] (3602.80s)
you can go inside and then you can
[60:04] (3604.56s)
understand the part. So, I think
[60:06] (3606.24s)
there's a level
[60:07] (3607.40s)
of when does it make sense? How can you
[60:10] (3610.64s)
follow those kind of things as well? I
[60:13] (3613.20s)
think there's
[60:14] (3614.36s)
a, you know, again, you can err on both
[60:16] (3616.64s)
sides, but I think these days people
[60:18] (3618.96s)
often err on the side of over-decomposing,
[60:21] (3621.84s)
and I think Clean Code,
[60:23] (3623.92s)
for example, advocates what I
[60:25] (3625.20s)
consider to be over-decomposing, which
[60:26] (3626.72s)
causes problems and so one of the things
[60:29] (3629.12s)
I tell my students is
[60:31] (3631.08s)
often you can make something deeper by
[60:34] (3634.08s)
actually combining things together. If
[60:36] (3636.72s)
they were closely related, you
[60:39] (3639.20s)
may discover that if you bring them
[60:40] (3640.40s)
together into one class or one method or
[60:42] (3642.56s)
one module or one subsystem, you end up
[60:45] (3645.12s)
with the combined functionality but with
[60:47] (3647.36s)
a simpler overall API and without having
[60:50] (3650.96s)
two separate things with a lot of
[60:52] (3652.08s)
dependencies between them, which is
[60:53] (3653.28s)
really bad. Again, you can overdo this,
[60:56] (3656.56s)
you know, so everything is trade-offs,
[60:58] (3658.16s)
but you can often make things
[60:59] (3659.76s)
better by making the units a little bit
[61:01] (3661.64s)
larger. Another area of disagreement was
[61:04] (3664.88s)
test-driven development, and we
[61:06] (3666.24s)
touched on it a little bit. And Robert
[61:10] (3670.00s)
Martin is a big fan of using TDD as a
[61:12] (3672.40s)
way to write code. Write the test first
[61:14] (3674.40s)
then write the code that makes it
[61:16] (3676.08s)
pass. This method was also pretty
[61:18] (3678.24s)
popular in the early 2000s. It's
[61:20] (3680.48s)
kind of gotten a little bit out of
[61:21] (3681.52s)
style. Your take was different as
[61:24] (3684.72s)
well. Yeah, I'm not a fan of TDD,
[61:27] (3687.52s)
because I think it works against design.
[61:30] (3690.08s)
So again, to me, tests are
[61:32] (3692.32s)
important. I love unit tests. I write
[61:33] (3693.92s)
them for everything I do. They're
[61:35] (3695.12s)
essential. If you're a
[61:36] (3696.80s)
responsible developer, you write unit
[61:38] (3698.48s)
tests with very high coverage. So let's
[61:40] (3700.16s)
agree on that. But we want the
[61:43] (3703.68s)
development process to be focused on
[61:45] (3705.36s)
design. I think that should be the
[61:47] (3707.44s)
center of everything; everything we do in
[61:48] (3708.72s)
development should be organized towards
[61:50] (3710.08s)
getting the best possible design. And I
[61:52] (3712.80s)
think TDD works against that because it
[61:55] (3715.44s)
encourages you to do a little tiny
[61:57] (3717.44s)
increment of design. I write one test
[62:00] (3720.56s)
and then I implement the functionality
[62:02] (3722.16s)
to make that test pass. Then I write one
[62:04] (3724.64s)
more test and write the functionality to
[62:07] (3727.12s)
make that test pass. And so there's no
[62:09] (3729.44s)
point in the process where you're
[62:10] (3730.80s)
encouraged to step back and think about
[62:13] (3733.20s)
the overall task, the big picture. How
[62:16] (3736.24s)
do all these pieces fit together? What's
[62:18] (3738.64s)
the most pleasing, simple, clean
[62:20] (3740.48s)
architecture that will
[62:22] (3742.24s)
solve 10 problems rather than coming up
[62:24] (3744.80s)
with 10 point solutions to individual
[62:27] (3747.64s)
problems? And so the risk, I
[62:30] (3750.32s)
believe, is that you end up in an
[62:32] (3752.80s)
extremely tactical style of development
[62:35] (3755.28s)
that produces just a horrible mess. So
[62:38] (3758.08s)
that's my concern and we had this long
[62:40] (3760.00s)
back and forth about it and I think the
[62:41] (3761.52s)
main reason that Bob likes TDD is
[62:44] (3764.56s)
that it ensures that the tests get
[62:46] (3766.16s)
written because you have to write the
[62:47] (3767.60s)
test before the code and so I agree the
[62:49] (3769.84s)
tests should be written, but
[62:51] (3771.84s)
he was not able to
[62:53] (3773.76s)
convince me that there's any particular
[62:55] (3775.80s)
advantage in writing the test before the
[62:58] (3778.32s)
code other than ensuring that the tests
[63:00] (3780.32s)
get written. I can't see
[63:02] (3782.00s)
any reason why it makes the design better,
[63:03] (3783.76s)
and I can see a lot of reasons why it
[63:05] (3785.28s)
might make the design a lot worse. I
[63:07] (3787.12s)
think an interesting question is what
[63:08] (3788.32s)
should be your units of development. You
[63:10] (3790.08s)
know, development is
[63:11] (3791.20s)
always in chunks of work. What should
[63:12] (3792.48s)
those chunks be? And I would argue those
[63:14] (3794.80s)
chunks should be
[63:16] (3796.36s)
abstractions, not individual tests.
[63:18] (3798.96s)
That's too small of a chunk. Again, you
[63:20] (3800.72s)
want to think in a big enough chunk that
[63:23] (3803.84s)
you can consider
[63:25] (3805.56s)
tradeoffs and try and come up with a
[63:27] (3807.92s)
fairly general-purpose solution that will
[63:30] (3810.00s)
solve many
[63:31] (3811.72s)
problems. By the way, one of
[63:34] (3814.16s)
the most important elements of design, I
[63:36] (3816.32s)
think, is pushing yourself towards
[63:38] (3818.08s)
general purpose to avoid specialization
[63:40] (3820.16s)
as much as you possibly can. And so the
[63:43] (3823.28s)
TDD approach encourages you to do
[63:45] (3825.44s)
something very specialized to pass each
[63:46] (3826.88s)
test rather than thinking about the
[63:48] (3828.80s)
general purpose thing that solves many problems in one place.
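As a hedged illustration of that point (the duration-formatting scenario is invented, not something John or Bob discussed): the first function has accumulated one special case per test, including a redundant branch left over from an early test, while the second steps back and applies one general rule, largest unit to smallest, that covers the same cases and any new ones.

```python
# Point-solution style: each branch was added to make one more test pass,
# so the function accumulates special cases (note the redundant
# `seconds == 60` branch left over from an early test).
def format_duration_special(seconds):
    if seconds == 0:
        return "0s"
    if seconds < 60:
        return f"{seconds}s"
    if seconds == 60:
        return "1m 0s"
    if seconds < 3600:
        return f"{seconds // 60}m {seconds % 60}s"
    return f"{seconds // 3600}h {(seconds % 3600) // 60}m {seconds % 60}s"


# General-purpose style: one rule, largest unit to smallest, chosen by
# thinking about the whole task rather than one test at a time.
def format_duration_general(seconds):
    parts = []
    for name, size in [("h", 3600), ("m", 60), ("s", 1)]:
        value, seconds = divmod(seconds, size)
        if value or parts or name == "s":
            parts.append(f"{value}{name}")
    return " ".join(parts)


if __name__ == "__main__":
    for s in (0, 45, 60, 90, 3600, 3661):
        assert format_duration_special(s) == format_duration_general(s)
```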
[63:50] (3830.64s)
Oh, all right.
[63:53] (3833.12s)
Yeah, that is true. That is true.
[63:54] (3834.96s)
In fact, I remember at
[63:56] (3836.48s)
some point we did some TDD
[63:58] (3838.24s)
mobbing where we would pass the
[64:00] (3840.00s)
keyboards around and someone would write
[64:01] (3841.52s)
a test, someone would make it pass. We
[64:03] (3843.76s)
would try it. It felt a bit artificial,
[64:05] (3845.68s)
because when you wrote more code than
[64:07] (3847.60s)
you needed, they're like, "Huh,
[64:08] (3848.80s)
no, no, don't write that. That doesn't
[64:10] (3850.40s)
matter, just make the test pass." So
[64:12] (3852.32s)
it almost felt a little bit
[64:13] (3853.52s)
artificial. And again, I'm sure
[64:15] (3855.84s)
there are times and cases where it
[64:18] (3858.48s)
could be useful, maybe in a more
[64:19] (3859.92s)
formal way, maybe where it's more about
[64:22] (3862.24s)
adding or extending additional code where
[64:24] (3864.88s)
it's like very heavy in business logic
[64:26] (3866.64s)
maybe that could be a good
[64:28] (3868.32s)
fit. But there might be a reason that it
[64:31] (3871.20s)
kind of slowly you know became less
[64:33] (3873.36s)
popular and we're not talking about it
[64:34] (3874.88s)
too much these days at least. The one
[64:36] (3876.64s)
place where I would recommend writing
[64:38] (3878.64s)
the test first is when you're fixing a
[64:41] (3881.08s)
bug. I would argue, write the
[64:44] (3884.64s)
test that will detect the bug and then
[64:47] (3887.04s)
fix it. Although actually what I
[64:48] (3888.64s)
typically do is, honestly, I kind of
[64:50] (3890.56s)
cheat on this one. I write the fix and
[64:54] (3894.72s)
then I write the test for the
[64:56] (3896.68s)
fix and then I back out the fix and make
[65:00] (3900.64s)
sure the test fails, so I know it catches the bug. But
[65:03] (3903.04s)
anyhow, that's one place where I
[65:04] (3904.40s)
think doing the tests early can be
[65:06] (3906.32s)
useful.
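For the one case where he does recommend writing the test early, a bug fix, here is a minimal sketch of that workflow using Python's unittest; parse_port and its bug are invented for illustration. The regression test pins down the failing case, and, following the workflow described above, you would temporarily back out the fix and confirm the test fails before keeping both.

```python
import unittest


def parse_port(value):
    """Parse a TCP port number from a string.

    The range check below is the bug fix: the original version happily
    accepted out-of-range values such as "70000".  To follow the workflow
    described above, temporarily remove the check, confirm that the
    regression test fails, then restore it.
    """
    port = int(value)
    if not 0 < port < 65536:          # <-- the fix
        raise ValueError(f"port out of range: {port}")
    return port


class ParsePortRegressionTest(unittest.TestCase):
    def test_rejects_out_of_range_port(self):
        # Regression test for the bug: must fail if the fix is backed out.
        with self.assertRaises(ValueError):
            parse_port("70000")

    def test_accepts_normal_port(self):
        self.assertEqual(parse_port("8080"), 8080)


if __name__ == "__main__":
    unittest.main()
```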
[65:08] (3908.40s)
Now, one more area where you disagree is
[65:10] (3910.68s)
comments. So, Uncle Bob says,
[65:15] (3915.32s)
unsurprisingly that you should have as
[65:17] (3917.44s)
few comments in the code as possible; the code
[65:19] (3919.04s)
should speak for
[65:20] (3920.28s)
itself. You also did not fully subscribe
[65:22] (3922.88s)
to this one. If the code could fully
[65:25] (3925.52s)
speak for itself, that would be
[65:26] (3926.80s)
wonderful. I would have no objection,
[65:29] (3929.04s)
but it can't and it never will, as far
[65:31] (3931.76s)
as I'm concerned. There's just so much
[65:34] (3934.12s)
stuff that can't be described in the
[65:36] (3936.72s)
code itself. And we ended up
[65:38] (3938.16s)
discussing examples where he said, "Look
[65:39] (3939.76s)
at this code. It needs no comments,
[65:41] (3941.36s)
right?" And I and then I well here's
[65:44] (3944.00s)
five questions people have that I don't
[65:45] (3945.36s)
see answered in this
[65:46] (3946.92s)
code. And then he kind of hedged
[65:49] (3949.60s)
about that. And so I don't know, he's
[65:52] (3952.56s)
very biased against comments. I'm
[65:55] (3955.04s)
not sure why, if he had some scarring
[65:56] (3956.88s)
experience. And he argues, I
[66:01] (3961.36s)
believe he basically said, that
[66:03] (3963.60s)
people lose more time by being misled by
[66:06] (3966.56s)
comments that are out of date than
[66:10] (3970.16s)
time lost because you don't have
[66:11] (3971.44s)
adequate comments. And that certainly is
[66:14] (3974.56s)
not my experience. You know,
[66:16] (3976.64s)
occasionally there are comments that are
[66:17] (3977.76s)
out of date, but rarely, and even then they usually
[66:20] (3980.96s)
carry useful information. And then he also
[66:23] (3983.04s)
argued that, well, if you're
[66:24] (3984.96s)
working on a project with other
[66:26] (3986.88s)
developers, you don't need any
[66:28] (3988.08s)
comments because everybody's got it all
[66:29] (3989.60s)
loaded into their
[66:30] (3990.84s)
minds. And again, I just don't agree
[66:32] (3992.96s)
with that. Now maybe his mind is better
[66:34] (3994.80s)
than my mind but you know I can't keep
[66:37] (3997.28s)
everything in my mind and I forget code
[66:39] (3999.44s)
within a few weeks of when I've written
[66:40] (4000.64s)
it. So I need those comments. So the
[66:43] (4003.04s)
bottom line is there's just a lot of
[66:44] (4004.56s)
stuff that you can't get from the code.
[66:46] (4006.88s)
And then what is your kind of
[66:48] (4008.72s)
approach on like how do you do comments?
[66:51] (4011.36s)
What do you like to put into comments or
[66:53] (4013.60s)
is it just like whenever you feel like
[66:55] (4015.04s)
it, just put it there because it's going to
[66:57] (4017.12s)
be additional information that could
[66:58] (4018.40s)
help later?
[67:00] (4020.48s)
Well, again, the number one rule is
[67:03] (4023.12s)
comments should tell you things that
[67:04] (4024.40s)
aren't obvious from the code. And so,
[67:07] (4027.76s)
you know, there are a lot of bad
[67:09] (4029.36s)
comments out there where people are
[67:10] (4030.56s)
basically just duplicating stuff that's
[67:12] (4032.40s)
obvious from the code. So, no need for
[67:14] (4034.56s)
comments in those situations. And you
[67:16] (4036.16s)
know in my student projects in my class
[67:18] (4038.16s)
I often take comments out and tell students,
[67:19] (4039.76s)
this comment just repeats the code, you
[67:21] (4041.20s)
don't need it. I think where comments
[67:23] (4043.28s)
are most important is for interfaces.
[67:25] (4045.76s)
This is where they're really really
[67:26] (4046.96s)
important
[67:28] (4048.28s)
because this is the assumption of
[67:30] (4050.32s)
interfaces: you don't want people to
[67:31] (4051.52s)
have to read the code of the thing
[67:33] (4053.36s)
that you're communicating with,
[67:35] (4055.52s)
talking with. You just want to look at
[67:36] (4056.48s)
the interface. And it is simply
[67:39] (4059.60s)
impossible in the function
[67:42] (4062.08s)
signatures of a module to provide all
[67:44] (4064.56s)
the information people need to use that
[67:46] (4066.08s)
module. And so that's where comments are
[67:48] (4068.16s)
most important. To me, that's
[67:50] (4070.72s)
the most important. The second most
[67:51] (4071.84s)
important thing is documenting the
[67:54] (4074.00s)
member variables of classes.
[67:56] (4076.00s)
That needs really extensive
[67:57] (4077.28s)
documentation. And then I tend to find
[67:59] (4079.92s)
that inside methods, I don't tend to
[68:03] (4083.12s)
need a lot of comments because if you
[68:04] (4084.96s)
know what the method's trying to do,
[68:06] (4086.80s)
the code typically speaks for itself
[68:08] (4088.72s)
pretty well there. I tend to have
[68:11] (4091.52s)
comments where things are tricky or
[68:12] (4092.96s)
where there was unexpected stuff that I
[68:14] (4094.56s)
only discovered, you know, when there
[68:15] (4095.76s)
were bugs and things like that. But
[68:18] (4098.08s)
often my methods won't have any internal
[68:20] (4100.08s)
comments. They'll just have the interface
[68:21] (4101.36s)
comment. I think comments are going
[68:24] (4104.00s)
to have a bit of a resurgence, this
[68:26] (4106.48s)
debate potentially because AI tools
[68:28] (4108.56s)
increasingly generate more code. They
[68:30] (4110.96s)
often generate whole methods. You know,
[68:33] (4113.20s)
you can ask, and what I've been noticing is,
[68:34] (4114.96s)
when I'm telling, you know, one
[68:36] (4116.80s)
of these AI assistants to generate
[68:38] (4118.72s)
something that does this for me, it
[68:40] (4120.16s)
generates the code, but it often adds
[68:41] (4121.68s)
inline comments which explain what it
[68:44] (4124.24s)
does. And actually in that case because
[68:46] (4126.16s)
it's just coming up with something quickly, it
[68:48] (4128.00s)
actually kind of helps me, because I
[68:50] (4130.32s)
know these tools hallucinate and I
[68:52] (4132.72s)
actually want to make sure of what it does.
[68:54] (4134.32s)
So it kind of helps me understand. At
[68:56] (4136.40s)
the same time it's a good question of
[68:57] (4137.92s)
why it's doing it. You know these are
[68:59] (4139.20s)
typically trained on code that's out
[69:01] (4141.20s)
there, either open source or who
[69:02] (4142.96s)
knows what kind of licensing. So it
[69:04] (4144.80s)
probably is copying however it's
[69:07] (4147.92s)
seen it. But, you know,
[69:11] (4151.04s)
usually I would agree with you but in
[69:12] (4152.80s)
this case maybe comments are not a bad
[69:15] (4155.12s)
thing. Who knows? It's an interesting
[69:16] (4156.88s)
area. Yeah. So
[69:18] (4158.12s)
I'm a little hesitant to say this
[69:20] (4160.80s)
because I'm fearing I may be
[69:22] (4162.64s)
justifying bad habits. But one thing I
[69:25] (4165.40s)
found is that AI tools can to some
[69:29] (4169.04s)
degree compensate for the lack of
[69:30] (4170.64s)
comments. So I've been working in the
[69:33] (4173.36s)
Linux kernel building a new network
[69:35] (4175.64s)
transport, and, you know, overall actually the
[69:38] (4178.16s)
Linux kernel is not a bad piece of
[69:39] (4179.76s)
software, but it's pretty
[69:41] (4181.28s)
significantly under-commented in my view,
[69:43] (4183.60s)
and I spent a lot of my time just trying
[69:45] (4185.20s)
to figure out what is going on in Linux.
[69:47] (4187.44s)
What are the interfaces? How do I hook
[69:48] (4188.48s)
into this? ChatGPT has become my very
[69:52] (4192.08s)
best friend. It's amazingly capable at
[69:55] (4195.84s)
answering questions about the Linux
[69:57] (4197.44s)
kernel that I can't answer via Google
[69:59] (4199.44s)
search or any other way. It's not always
[70:02] (4202.24s)
correct. Sometimes it hallucinates, but
[70:04] (4204.08s)
it's often right. And even when it's
[70:06] (4206.12s)
wrong, it typically gets me in the right
[70:09] (4209.04s)
vicinity and helps me to figure out
[70:10] (4210.64s)
where to look to figure out how to
[70:11] (4211.84s)
answer my questions.
[70:13] (4213.60s)
So I think the AI tools may be able
[70:16] (4216.88s)
to help. But even saying that, if the
[70:20] (4220.72s)
Linux developers had just spent a
[70:22] (4222.32s)
tiny amount of time putting comments
[70:24] (4224.24s)
in, it would have made things so much
[70:26] (4226.72s)
easier than using ChatGPT to try and
[70:28] (4228.64s)
figure it out. So I don't think the need
[70:30] (4230.80s)
for comments is going to go away
[70:32] (4232.16s)
completely. And we shouldn't use AI
[70:34] (4234.96s)
tools as an excuse for people not to
[70:36] (4236.40s)
write comments. Yeah. At least not yet.
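To tie back to the kind of comments John argues are most important, interface comments and member-variable comments, here is a minimal sketch; the RateLimiter class is invented for illustration. The class comment records what the signatures alone cannot express (units, blocking behaviour, thread-safety), the member variables are documented where they are declared, and the method body needs no internal comments.

```python
import time


class RateLimiter:
    """Allow at most `max_calls` calls per sliding window of `window_s` seconds.

    Interface details the signatures alone cannot express:
      * `window_s` is in seconds (float); `max_calls` must be at least 1.
      * allow() never blocks or sleeps; it only reports whether the caller
        should proceed right now, and records the call if it may.
      * Not thread-safe: callers sharing one instance across threads must
        add their own locking.
    """

    def __init__(self, max_calls: int, window_s: float):
        self.max_calls = max_calls   # maximum calls permitted per window
        self.window_s = window_s     # window length, in seconds
        # Timestamps (time.monotonic seconds) of the calls inside the
        # current window, oldest first; pruned lazily on each allow() call.
        self._recent = []

    def allow(self) -> bool:
        """Return True if a call may proceed now, recording it if so."""
        now = time.monotonic()
        self._recent = [t for t in self._recent if now - t < self.window_s]
        if len(self._recent) >= self.max_calls:
            return False
        self._recent.append(now)
        return True


if __name__ == "__main__":
    limiter = RateLimiter(max_calls=3, window_s=1.0)
    # First three calls in the window pass, the rest are refused.
    print([limiter.allow() for _ in range(5)])   # [True, True, True, False, False]
```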
[70:40] (4240.00s)
So before the podcast, one interesting
[70:42] (4242.32s)
thing that you mentioned is you are
[70:43] (4243.68s)
actively writing code. In fact, after
[70:45] (4245.76s)
this podcast, I think you're going to be
[70:47] (4247.52s)
getting back to writing code. What
[70:47] (4247.52s)
are you working on right now? You
[70:51] (4251.76s)
mentioned something with the Linux
[70:52] (4252.88s)
kernel. Yeah. So one of my PhD
[70:56] (4256.00s)
students, Behnam Montazeri, for his PhD
[70:58] (4258.56s)
dissertation six or eight years ago,
[71:01] (4261.28s)
developed a new transport protocol
[71:03] (4263.04s)
called Homa,
[71:04] (4264.28s)
which is intended for use in data
[71:07] (4267.04s)
centers. And the results he got in his
[71:08] (4268.96s)
dissertation are really fabulous. It's
[71:10] (4270.96s)
10 to 100 times faster than TCP for a
[71:13] (4273.28s)
whole lot of interesting cases. Wow. And
[71:16] (4276.12s)
so rather than let this just come and go
[71:18] (4278.96s)
as a you know PhD project that nobody
[71:21] (4281.44s)
ever cares about, I've taken it on as my
[71:23] (4283.04s)
own personal project to see if it is
[71:25] (4285.20s)
actually possible to bring this into
[71:26] (4286.72s)
widespread use and displace some of
[71:29] (4289.20s)
the TCP traffic in data centers. So I
[71:32] (4292.00s)
have developed a Linux kernel
[71:33] (4293.52s)
implementation of that which I'm
[71:35] (4295.12s)
continuing to enhance and I'm now in the
[71:37] (4297.68s)
process of upstreaming that into the
[71:39] (4299.04s)
kernel. And so actually what I'm
[71:40] (4300.88s)
currently dealing with is a whole bunch
[71:42] (4302.00s)
of comments I've gotten from the Linux
[71:43] (4303.20s)
kernel developers about this and trying
[71:44] (4304.64s)
to fix all the problems they pointed
[71:46] (4306.64s)
out. Another example of code reviews and
[71:48] (4308.48s)
you know the benefits of all of that and
[71:50] (4310.64s)
indeed this afternoon I'm getting
[71:52] (4312.40s)
ready to submit my next round of patches
[71:54] (4314.56s)
to the net. How are you
[71:56] (4316.88s)
finding it? Is this your first contribution
[71:58] (4318.24s)
to the Linux kernel, or have you previously
[72:00] (4320.08s)
worked on it? Yeah, this is my first
[72:02] (4322.24s)
involvement with the Linux kernel and I
[72:04] (4324.08s)
have not. How are you finding the
[72:05] (4325.44s)
process? Because we just had an
[72:07] (4327.52s)
episode with one of the main Linux
[72:11] (4331.20s)
kernel maintainers, and I'm curious: is
[72:15] (4335.28s)
it overwhelming? Is it easier? Is
[72:17] (4337.36s)
it harder than you thought? So, I'd
[72:19] (4339.28s)
been warned that it might be difficult
[72:21] (4341.36s)
and painful and take a long time.
[72:25] (4345.36s)
It's taking a fair amount of time.
[72:27] (4347.76s)
I think, boy, we may be coming
[72:29] (4349.60s)
up on six months since I made my first
[72:31] (4351.44s)
submission of this, but I have to say
[72:34] (4354.00s)
that the people have been very
[72:36] (4356.52s)
reasonable and the feedback I've gotten
[72:38] (4358.96s)
has been high-quality feedback. So, I've
[72:40] (4360.72s)
had to fix a lot of things, but these
[72:42] (4362.96s)
were all things where either I
[72:45] (4365.12s)
misunderstood something about the kernel
[72:47] (4367.36s)
or they showed me better ways to do
[72:49] (4369.36s)
it or pointed out complexities in my
[72:51] (4371.28s)
design, which there were plenty of. And
[72:53] (4373.40s)
so I have no objection to the process so
[72:56] (4376.72s)
far, and I'm actually pretty happy
[72:58] (4378.88s)
with it, and Homa is getting a lot better
[73:00] (4380.56s)
because of this. You know, I hope we'll
[73:02] (4382.24s)
eventually reach closure and it'll find
[73:03] (4383.68s)
its way into the kernel, but it has
[73:06] (4386.24s)
seemed pretty reasonable to me so far.
[73:08] (4388.80s)
Yeah, this is really encouraging to
[73:11] (4391.12s)
hear and you know, good luck with that.
[73:12] (4392.80s)
So what are ideas from the history of
[73:15] (4395.36s)
software design or even software
[73:17] (4397.20s)
engineering that you think might make a
[73:18] (4398.48s)
renaissance in the near future, as
[73:20] (4400.64s)
we're looking at, you know, a focus on
[73:22] (4402.56s)
tooling, distributed systems, and obviously
[73:25] (4405.36s)
AI tooling? I'm not sure I
[73:28] (4408.08s)
can think of any idea that has come and
[73:30] (4410.40s)
gone and is going to come back again.
[73:32] (4412.88s)
I think there
[73:35] (4415.04s)
are ideas that have come and gone
[73:36] (4416.80s)
because we've learned how to do things
[73:39] (4419.08s)
better, and hopefully we're
[73:42] (4422.96s)
getting better and better at this. We're
[73:44] (4424.32s)
not just oscillating from good to bad,
[73:46] (4426.08s)
good to bad, back and forth. So there's
[73:48] (4428.88s)
no idea I can
[73:51] (4431.92s)
think of, you know, that people
[73:53] (4433.36s)
once knew and forgot
[73:54] (4434.72s)
about that's going to come back into play
[73:56] (4436.32s)
again.
[73:57] (4437.92s)
And then the book: this was first
[74:01] (4441.44s)
published in 2018, so that's coming
[74:04] (4444.08s)
up on seven years now. If you were
[74:06] (4446.56s)
to write this book today, are there
[74:09] (4449.04s)
parts that you would add to it that you
[74:10] (4450.80s)
would rewrite, or even remove?
[74:13] (4453.84s)
Yeah. So, first I never actually
[74:15] (4455.12s)
finished the explanation of the book.
[74:16] (4456.24s)
So, the book came about after the class
[74:18] (4458.08s)
when I started giving talks about the
[74:19] (4459.60s)
class. People said you should write a
[74:21] (4461.36s)
book about this, and so I had a sabbatical,
[74:23] (4463.36s)
whenever it was, 2017 or 2018, and I used my
[74:25] (4465.92s)
sabbatical to write the book. So, as for
[74:29] (4469.92s)
feedback on those questions: first of all,
[74:31] (4471.12s)
there have been some things
[74:32] (4472.56s)
where I've adjusted my opinion since
[74:34] (4474.80s)
writing the book, but there's already
[74:36] (4476.16s)
been a second edition of the book where
[74:37] (4477.52s)
I've fixed most of those, and
[74:41] (4481.92s)
hopefully the book you have is actually
[74:43] (4483.04s)
the second edition. In fact, if you've
[74:44] (4484.32s)
gotten it in the last couple of years,
[74:46] (4486.56s)
it is the first edition. I need to update
[74:48] (4488.32s)
the book. Okay. It has not changed
[74:50] (4490.40s)
dramatically, but the biggest change
[74:53] (4493.36s)
is more emphasis on this notion of being
[74:55] (4495.76s)
general purpose and eliminating
[74:57] (4497.76s)
specializations. And that's something I
[75:00] (4500.00s)
learned actually not from the book but
[75:01] (4501.84s)
from the class. From observing student
[75:03] (4503.44s)
projects in the class over a period of
[75:04] (4504.88s)
years, I realized this idea is really
[75:07] (4507.68s)
fundamental to teaching students how
[75:09] (4509.52s)
to write less complicated code. So
[75:12] (4512.96s)
that's already in the book.
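To make "general purpose with fewer specializations" concrete, here is a small hypothetical C sketch; the names are invented for illustration and are not taken from the book or the class projects. It contrasts a text-buffer interface with one shallow method per UI action against a deeper, general-purpose pair that can express all of those actions and future ones:

    #include <stddef.h>

    struct text_buf;   /* opaque text buffer (hypothetical) */

    /* Specialized interface: one method per UI feature, so every new
     * feature tends to add another method. */
    void backspace_at_cursor(struct text_buf *b);
    void delete_selection(struct text_buf *b, size_t start, size_t end);
    void replace_selection(struct text_buf *b, size_t start, size_t end,
                           const char *text);

    /* General-purpose interface: two methods that cover the cases above
     * (and future ones) by composing simple insert/delete operations. */
    void tb_insert(struct text_buf *b, size_t pos, const char *text);
    void tb_delete(struct text_buf *b, size_t start, size_t end);

    /* Backspace becomes a one-line use of the general interface. */
    static inline void backspace(struct text_buf *b, size_t cursor)
    {
        if (cursor > 0)
            tb_delete(b, cursor - 1, cursor);
    }

The general-purpose version has a smaller surface area, and the editor-specific behavior moves up into the callers, which is roughly the kind of specialization removal being described here.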
[75:15] (4515.20s)
There's no big thing that I would
[75:17] (4517.60s)
want to change right now. As I
[75:20] (4520.56s)
mentioned earlier, I think the
[75:23] (4523.16s)
the part of the book that is most
[75:25] (4525.76s)
dangerous is this part of exception
[75:27] (4527.20s)
handling which as I mentioned is easy
[75:28] (4528.64s)
for people to misinterpret and apply in
[75:30] (4530.96s)
ways that I actually wouldn't agree
[75:32] (4532.48s)
with. So if I were rewriting the book
[75:35] (4535.12s)
from scratch,
[75:36] (4536.76s)
I might think about that more
[75:40] (4540.16s)
carefully to make sure I choose my
[75:41] (4541.68s)
wording more carefully to help people
[75:43] (4543.36s)
avoid misinterpretation of that.
[75:46] (4546.08s)
Now, one thing I was hoping when
[75:48] (4548.48s)
I wrote the book was
[75:49] (4549.92s)
that people would come to me
[75:51] (4551.12s)
with new ideas that I hadn't thought of
[75:53] (4553.04s)
and that I would learn from that and
[75:54] (4554.64s)
then get more ideas and gradually, you
[75:56] (4556.72s)
know, I'd be adding more and more stuff
[75:58] (4558.16s)
to the book. So that has not happened
[76:01] (4561.36s)
very much so far. I would
[76:04] (4564.00s)
have been delighted to have more. I'd
[76:05] (4565.52s)
love for people to come and say, "You are
[76:07] (4567.68s)
totally wrong. This idea here in the book
[76:09] (4569.20s)
is totally wrong. Here's why. Here's how
[76:10] (4570.64s)
you should do it instead." And there's
[76:12] (4572.24s)
not actually been as much of that as I
[76:14] (4574.88s)
had actually hoped for when I wrote the
[76:16] (4576.32s)
book. I have a theory about why this
[76:21] (4581.12s)
might happen, which is that your
[76:23] (4583.44s)
class is so unique. So software design
[76:26] (4586.08s)
in the wild is typically: you have a
[76:28] (4588.32s)
problem, you build a solution, you might
[76:31] (4591.04s)
explore some different trade-offs on
[76:32] (4592.64s)
a whiteboard, but then you build it and
[76:34] (4594.56s)
then you move on and then you might go
[76:36] (4596.56s)
back and fix it. You might have to
[76:37] (4597.92s)
maintain it. You're going to learn about
[76:39] (4599.36s)
it. But what you don't have is
[76:42] (4602.08s)
this ideal situation where let's
[76:43] (4603.60s)
say you have like two or three different
[76:44] (4604.80s)
teams building different things and then
[76:46] (4606.48s)
you compare it because in in you know in
[76:48] (4608.32s)
the industry or in tech companies you
[76:49] (4609.76s)
just don't have that luxury. And even if
[76:52] (4612.64s)
it it's it's a bit wasteful. So you
[76:54] (4614.80s)
never really have the situation where
[76:56] (4616.24s)
you can repeat something. And my sense
[76:58] (4618.40s)
was that with your class you actually
[77:00] (4620.40s)
could do this. You could, you know, have a
[77:02] (4622.72s)
group of students go through it. Well,
[77:04] (4624.08s)
actually, in the class, the same
[77:06] (4626.40s)
problem, people solve it in different ways.
[77:08] (4628.80s)
You can now see the differences. They
[77:10] (4630.32s)
can see the differences. And I'm
[77:12] (4632.72s)
wondering if it's just a thing where
[77:14] (4634.40s)
maybe this setup needs to happen
[77:17] (4637.36s)
at some level of either academia or a
[77:19] (4639.68s)
nonprofit or somewhere where it is
[77:22] (4642.32s)
doable. I'm not sure, because I have
[77:24] (4644.96s)
been thinking about this. This makes
[77:26] (4646.96s)
this book really special: your
[77:28] (4648.48s)
observation was not just, like a lot of
[77:30] (4650.32s)
software architecture BS, "oh,
[77:32] (4652.00s)
here's stuff that I found from working
[77:34] (4654.08s)
on these projects, which are all one-offs,
[77:35] (4655.76s)
but I learned these things." You're
[77:37] (4657.68s)
saying, "I've seen this thing work better
[77:40] (4660.72s)
across the students, and those who did it
[77:44] (4664.72s)
had better outcomes
[77:46] (4666.08s)
repeatedly." Yeah, that's something
[77:48] (4668.56s)
we can do in academia, I think, better
[77:50] (4670.48s)
than in industry, because in industry you
[77:52] (4672.08s)
don't have time. Yeah. For the revision.
[77:54] (4674.80s)
And you're right. One of the things
[77:56] (4676.16s)
about the class I think that is really
[77:57] (4677.60s)
good is there are nine teams, nine
[78:00] (4680.08s)
two-person teams in the course. They all
[78:02] (4682.40s)
do the same
[78:03] (4683.96s)
projects and then they review each
[78:06] (4686.16s)
other's projects. So during the code
[78:08] (4688.44s)
reviews, each student will read not the
[78:11] (4691.84s)
whole project, but a big chunk of two
[78:13] (4693.44s)
other projects and then review them in
[78:15] (4695.84s)
class. And so your
[78:18] (4698.40s)
partner is reviewing two different
[78:19] (4699.76s)
projects also. So between the two of
[78:21] (4701.44s)
you, you've seen basically half of the
[78:23] (4703.04s)
rest of the work in class. And I think
[78:25] (4705.68s)
that's also a really great source of
[78:27] (4707.28s)
learning from students. They really
[78:28] (4708.32s)
enjoy looking at each other's code and
[78:30] (4710.24s)
thinking about it. And then, after
[78:31] (4711.92s)
the first project they ask me, they say,
[78:33] (4713.84s)
"Am I allowed to use ideas from this
[78:36] (4716.32s)
other project when I revise my project?"
[78:38] (4718.48s)
Because I mean, normally in classes, you
[78:40] (4720.00s)
can't use anybody else's ideas in your
[78:41] (4721.36s)
work. And I say, "Oh, absolutely, you
[78:43] (4723.60s)
are encouraged to steal and cannibalize
[78:46] (4726.24s)
all the best ideas." So that's
[78:48] (4728.64s)
something where I think
[78:50] (4730.88s)
actually, doing it in academia, we can
[78:52] (4732.88s)
probably do a better job than in
[78:54] (4734.56s)
industry. In industry you don't have enough time;
[78:56] (4736.08s)
you know, you can't let a
[78:58] (4738.00s)
person spend, whatever it is, 15 hours a
[79:00] (4740.80s)
week for 10 weeks going through an
[79:02] (4742.80s)
exercise like this. You have too much
[79:04] (4744.16s)
other stuff for them to do,
[79:06] (4746.80s)
but the outcome has been really
[79:08] (4748.72s)
nice. So as closing, what are one
[79:10] (4750.72s)
or two books you'd recommend for
[79:12] (4752.16s)
software engineers to get better at
[79:13] (4753.76s)
their craft? Obviously we have
[79:15] (4755.92s)
your book, but outside of that.
[79:18] (4758.96s)
So, unfortunately, there's not a lot of
[79:21] (4761.04s)
other stuff out there that I really
[79:22] (4762.56s)
like. Um, there are a couple of things.
[79:25] (4765.28s)
Unfortunately, I don't remember the
[79:26] (4766.96s)
names of them, but if people go to my
[79:29] (4769.84s)
homepage on the web and look, there's a
[79:32] (4772.24s)
link from there that will take you to
[79:33] (4773.36s)
the software design book and there's a
[79:35] (4775.36s)
short web page there and that has some
[79:37] (4777.52s)
other recommendations at the bottom
[79:38] (4778.88s)
that people could look at. We're going
[79:40] (4780.64s)
to link this in the show notes
[79:42] (4782.28s)
below. And how could listeners be
[79:44] (4784.88s)
helpful to you or help you? Well, I'm
[79:47] (4787.36s)
always interested in constructive
[79:49] (4789.44s)
criticism. You know, I don't claim to
[79:51] (4791.68s)
have all of the answers. I have my
[79:53] (4793.44s)
experiences and things I think I've
[79:55] (4795.12s)
learned, but we're not done with
[79:58] (4798.24s)
software design yet. I'm sure there's a
[80:00] (4800.00s)
lot more to learn about that. So, I'd
[80:02] (4802.24s)
love to hear if people have ideas or if
[80:04] (4804.08s)
you think something I say is wrong, I'd
[80:06] (4806.24s)
like to hear about it. I'd love to hear
[80:08] (4808.80s)
what you think is wrong and why you
[80:10] (4810.48s)
think that's wrong. The main thing, you
[80:12] (4812.56s)
know, both in the class and in the book,
[80:14] (4814.48s)
what I'm hoping to do is to
[80:17] (4817.20s)
encourage more awareness and
[80:19] (4819.76s)
consciousness about software design in
[80:21] (4821.52s)
the developer community, get people
[80:23] (4823.04s)
thinking and talking about it. And if we
[80:25] (4825.92s)
do that, I'm hoping that we can raise
[80:27] (4827.92s)
the overall level of design in the
[80:29] (4829.92s)
software that we build. Yeah. And
[80:32] (4832.24s)
I'm thinking, especially now that
[80:34] (4834.16s)
we're going to see more code generated
[80:36] (4836.00s)
just by nature, as you mentioned,
[80:38] (4838.24s)
design will be more important at all
[80:40] (4840.48s)
levels, not just at the senior
[80:42] (4842.96s)
engineer level, but as you're
[80:44] (4844.64s)
creating your own thing. So hopefully we
[80:46] (4846.24s)
we'll see more of this. Well John this
[80:48] (4848.24s)
was a really nice and really
[80:50] (4850.16s)
interesting discussion and thank you so
[80:51] (4851.60s)
much for spending the time on this.
[80:53] (4853.44s)
Thank you for inviting me. I really
[80:54] (4854.64s)
enjoyed the discussion as well. I hope
[80:56] (4856.40s)
you enjoyed this conversation as much as
[80:58] (4858.00s)
I did. Thanks very much to John for
[80:59] (4859.92s)
sitting down with us. If you've not read
[81:01] (4861.76s)
his book, A Philosophy of Software
[81:03] (4863.44s)
Design, I can very much recommend it.
[81:05] (4865.36s)
For more in-depth reading on software
[81:06] (4866.96s)
design, design docs and related topics,
[81:09] (4869.04s)
check out The Pragmatic Engineer deep
[81:10] (4870.64s)
dives which are linked in the show notes
[81:12] (4872.36s)
below. If you enjoy this podcast, please
[81:14] (4874.88s)
do subscribe on your favorite podcast
[81:16] (4876.48s)
platform and on YouTube. A special thank
[81:18] (4878.72s)
you if you also leave a rating. Thanks
[81:20] (4880.80s)
and see you in the next one.