davidwalker / David Walker

Huffduffed (36)

  1. Being a Generalist Versus a Specialist With David Epstein | Larry Wilmore: Black on the Air

    Larry Wilmore speaks with author David Epstein about his forthcoming book 'Range: How Generalists Triumph in a Specialized World.' They talk about the merits of being a specialist versus a generalist, the culture around trying to be the "best," and the idea of deliberate amateurs.

    Host: Larry Wilmore

    Guest: David Epstein

    https://art19.com/shows/larry-wilmore/episodes/935e43e0-43c8-4deb-93cd-befa0901aaba

    —Huffduffed by davidwalker

  2. Leanpub: Publish Early, Publish Often

    Len: Hi, I'm Len Epp from Leanpub, and in this Leanpub Frontmatter Podcast, I'll be interviewing Zach Tellman.

    Based in Oakland, Zach is a software engineer and consultant who works on projects where he takes on the roles of newly-hired principal systems engineer or architect, or where he provides recurring guidance and mentorship to software engineering staff. He has also released a number of open source libraries and is a popular conference speaker.

    Zach is the author of the Leanpub book Elements of Clojure, which is meant for readers who know and use Clojure and want to use it more effectively - and to give teams a shared vocabulary for their Clojure-related discussions.

    You can read Zach's blog at ideolalia.com, and follow him on Twitter @ztellman, and you can check out the dedicated website for his book at elementsofclojure.com.

    In this interview, we're going to talk about Zach's background and career, his professional interests, and his book. And at the end, we'll talk a little bit about his experience writing it.

    So, thank you Zach for being on the Frontmatter Podcast.

    Zach: Thanks for having me.

    Len: I always like to start these interviews by asking people for their origin story, so I was wondering if you could talk a little bit about how you grew up, and how you first became interested in computers and software?

    Zach: I grew up in the Bay Area. I haven't escaped very far.

    I played around with computers when I was younger, and I started to get more seriously into the question of how to actually create software on my own when I was in high school. I decided that I really wanted to major in computer science, and so I went down to Southern California, studied computer science, and around my third year, I very nearly dropped out, because there was something in software that really resonated within me - that made me excited, that made me want to dig deeper into it. But the coursework that I was doing didn't evoke that in me.

    I remember I was doing a course on compilers, where they were doing this very tedious recursive descent parser, where you had to just write all these things that were not hard, but very particular. And I realized that if this was what my job was going to be, I was going to hate my job. So I just stopped going, and started to look around and understand what else was there that might more reliably sort of evoke this sort of resonance.

    Through that, I came to the point where I had effectively a minor in philosophy and creative writing, before I realized that, in everything that I looked at, there were pieces of it that really interested me, and a great many other pieces that didn't. I tried to be as responsible as I could in evaluating all of my options, and it seemed like there were the most paths, the most varied paths towards success, in software. To succeed as a writer, to succeed as a philosopher was a very, very narrow and difficult path to really successfully go down.

    So I went into my first job sort of tentatively, not really quite sure what to expect. And it turns out that it was closer to really what I had done on my own, than to what I did in my courses. I kept on pushing off in that direction. But there had been something that I'd noticed, which was that I would have these moments of sort of recognition in something, where something - as I said, it resonated. It was a recognition of something in myself - for a moment, outside of myself.

    I had this with some things in software, with some philosophers, with some writers. I would try to explain them to other people. I would get very excited and I would start trying to articulate this thing that really spoke to me. And 90% of the time I would completely fail. Because whatever it was, either it didn't mirror something inside the person I was trying to explain it to, or I was just very bad at explaining it.

    And in software - this intuition I had, this aesthetic reaction I had to it - was a pretty effective guide for how to create software. But the two problems were that it wasn't always effective. Sometimes my intuition would mislead me, but also - again, I was really unable to articulate effectively to somebody else why it was that I thought that we should go and do X instead of Y.

    As a junior engineer, in my first job, the issue was that I would say emphatically, "We should do this and not this other thing." And people would ask me why, and I wouldn't be able to say. And we wouldn't end up doing that.

    Later in my career, I would say this, and someone that I was mentoring or supervising would ask me why, and I wouldn't be able to explain it either. They would do it, but I wasn't able to actually give any reasonable explanation for what it was that I felt very strongly was the right thing to do.

    And so through my career, about four years in, I found a language called Clojure, which, again, spoke to me. There was a recognition of something that reflected something in how I thought about software. I went very deep on it, and I was able to get jobs and work with it, and create open source software. But still, there was this unanswered question in my own mind, which was, "Why?" Why this, why not this other thing?

    Clojure being a relatively new language, when I started it, people would often expect me to play the advocate for that language, or ask how I had chosen to use that language, or any number of other things. And my best, most concise answer was, "It just fits my brain."

    Len: I want to take the opportunity to ask you about that - for people who are listening who might not be aware of what Clojure is, could you explain a little bit about it and where it came from, and then I'm really interested in hearing what it was that you found a reflection of yourself in.

    Zach: Clojure is a language in the Lisp family. The Lisp languages started back in the fifties and have been this sideshow within the whole pantheon of languages, where there are many languages that were seen as very practical, seen as very workmanlike. And Lisp is just off to the side, often in academia, sometimes in the industry.

    The best way that I had to explain to people what it is that makes Lisp interesting, is that if you've ever diagrammed a sentence in school - where you have the sentence and that gets broken apart into the noun phrase and verb phrase and on and on - that represents the grammar. And this grammar is a separate structure from the text that you're typing in. There is this parsing that occurs when you first type this thing in. And the way that the language thinks about what you've typed in, and the way that you think about what you typed, are different. They're separated by that transformation from the plain text to this parse tree.

    In Lisp, however, they're not. There is actually a very close one-to-one relationship between what you're typing in and how the language itself thinks about what you've typed in.

    What that means is that you get to interact much more closely with the innards of the language, and build your own domain-specific language atop the core of the language, and be able to create a surface area that fits your problem very closely. The language can rise up to meet the problem that you're trying to solve.
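
    A minimal sketch, not from the interview, of what that one-to-one relationship looks like in practice: a Clojure expression is itself a list that the language can read and inspect, and a macro can reshape such lists into a small domain-specific construct. The `unless` macro below is a hypothetical example name, not part of core Clojure.

        ;; Code is data: a quoted expression is just a list, not yet evaluated.
        (def expr '(+ 1 2 3))
        (first expr)   ; => + (the symbol at the head of the list)
        (eval expr)    ; => 6 (hand the list back to the language to run)

        ;; Because programs are ordinary lists, a macro can transform them
        ;; before evaluation - a tiny domain-specific construct of our own.
        (defmacro unless [test then else]
          `(if (not ~test) ~then ~else))

        (unless false :ran :skipped)   ; => :ran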

    This is not unique to Clojure. Clojure is, again, part of this long lineage.

    But what makes Clojure interesting is that it builds atop Java, which means that it is able to build atop a very large ecosystem, which has built up over the last, I don't know, 20-odd years. Due to its proximity to Java, it's a more palatable Lisp - people are able to leverage their pre-existing understanding. People are able to sell it more effectively to their bosses. People are able to slip it in, unbeknownst to the people around them, into projects that are already using Java or another language on the JVM.

    It also came out around the same time as a number of essays by a man named Paul Graham, who had popularized Lisp. This created a mini renaissance where people were suddenly very interested in this old far-off language. Clojure rode that wave very effectively.

    That's the short answer. There are many more involved ones, but I think that this is not necessarily the forum to get too deep into that.

    Len: We'll be coming around to other aspects of it later, in particular related to the philosophy of language and the way you talk about things in your book and in your talks as well.

    But before we do that, I wanted to circle back to the question of computer science itself, and studying it at university. One of the, at this point, official themes of this podcast - since so many people that I interview are in software - is, if you were starting out now, and you were looking for a career in software, would you formally study computer science at university?

    Zach: I think I would. But I don't think that I could justify that as a practical decision on my part. I find the academic study of computer science to be satisfying. I didn't find the applications of it to be very interesting. I thought the concepts were very appealing. I've talked to a number of people who have asked me this question, and I think that the environment in which this decision is being made has changed quite a bit since I made it. You learn a lot of things, and you cover a lot of topics. And most of them end up being irrelevant.

    This is true of pretty much any formal, general education. Most of the things that you learn, don't end up being the thing that you actually spend your life doing. The reason that you do it is because you can't predict ahead of time which is the thing that you're going to be doing. And being exposed to all these things opens up potential paths that you didn't even know existed before.

    Within computer science, it's not as broad as all the possible other fields of study that you could pursue, but it is a pretty broad field. And there are a lot of very deep holes that you can burrow into. So I think it's somewhat valuable. Because otherwise, what you do and what you specialize in is dictated more by your first job or your first series of jobs. You'll mostly fall backwards into what your domain expertise is. What the four-year program gives you is a little bit of perspective, and maybe an ability to go and choose what it is you'd like to get deep on - to make that a deliberate rather than an accidental choice.

    Len: One of the benefits as well of getting a four-year formal degree at university is that, at least in the North American model, it gives you the opportunity to study other subjects - which is something that you already mentioned that you took advantage of with philosophy. I wanted to ask you about that. What were your favorite courses in philosophy? What philosophy did you find a reflection of yourself in?

    Zach: By far I think my favorite philosopher is Kierkegaard. His doctoral thesis, The Concept of Irony, is probably my favorite piece of philosophical writing. He is playful and plays with, I think, a self-negation - this rarefication of his own ideas to the point where they cease to mean anything. Often I think he's actually taught in intro to Buddhism for this reason.

    I think that just his whole attitude and his approach to the anchoring that he does, again - spoke to me.

    Within the book though, the stuff that I built upon is the early work in the analytic school. I think that that's another thing which I enjoy. Though, it veers into this absolutism in terms of talking about what is and isn't interesting based on what is and isn't possible to have an absolute conclusion about. It's not willing to treat the exploration itself as something which is worthwhile, even if you don't end up getting anywhere. I mean, it was birthed by a bunch of logicians and mathematicians.

    In the book I talk about why that mentality is a poor fit for software. I think it is a poor fit for how I approach these sorts of things. I think that there are definitely interesting results from the stuff that they explore. But I don't think that the perspective that they bring to the world as a whole is quite as in line with my own.

    Len: I'm really looking forward to discussing the latter subject later on when we're getting more specifically to this subject of your book.

    I did want to say - since my first encounter of your philosophical interests was through your talks about the book, and through the book itself, I was a little bit surprised to hear you say Kierkegaard was your favorite philosopher. My own responses to philosophy when I was studying it are actually quite similar to yours.

    There's a certain aridity, and I guess I would say what I take as a naiveté in the analytic or even positivist approach to the world. Even before you get to any specific philosophy, it's the style of mind that I just find unappealing, even though the subjects, the actual things that are discussed, can be fascinating and meaningful.

    For someone like me, someone like Kierkegaard - who, for example, had a complex philosophical project in which he wrote under different pseudonyms, in order to coherently present different perspectives - is much more interesting, and much more appealing - getting at the heart of the matter in a way that syllogisms about the Barber of Venice or whatever it was, are just not quite as compelling.

    So philosophy was effectively a minor for you?

    Zach: I don't think I actually submitted my request that it get put on my diploma, but I qualified. And likewise, English or Creative Writing. I forget exactly what the program was. But that also put me in contact with the works of Italo Calvino, and Deleuze and Guattari, A Thousand Plateaus. And Donald Barthelme, though that doesn't have really any impact on the book here that we're discussing. But he's I think one of my favorite short form writers.

    There was a period of time where I very much wanted to be a writer. I wanted to find a way to support my writing doing software. Unfortunately what I found was, at the end of a full day of writing code, that exhausted the same part of my brain that would be used to write prose. I have friends who are in bands and do software. Apparently that's something that they're able to transition from much more cleanly. I'm pretty envious of that actually. But I really haven't written any form of fiction since I started working full-time as an engineer, which is an enduring regret of mine, but at this point I think I have just gone far enough in that direction, and I have to accept it.

    Len: It's really interesting, I've never heard anyone put it quite that way before. One thing I like to shock my non-technical friends with is that programmers are writers. That's what they do, they write. They're writing all the time. Essentially, instead of writing something along the lines of fiction, they're writing arguments.

    But nonetheless, it is writing. You go off into your own mind sometimes. The work is invisible. Sometimes just sitting there with your head in your hands, you're doing harder work than you do when you're typing. But it's all very much in your head, and it's specifically in letters and numbers and words and things like that. So I can see that connection between the type of work being the same, being pretty compelling.

    Before I interrupted your first answer, when you were discussing your career, you were talking about moving on into more senior roles. And one thing - I looked you up on LinkedIn when I was researching for this interview. I've talked to people who have worked at a lot of different, famous companies. You happen to have worked for Fitbit, and I wanted to ask you a little bit about what the company culture is like there. I've spoken to people about Google, where I know you spent some time. I'd like to just know a little bit about Fitbit.

    Zach: I have nothing bad to say about the culture at Fitbit. I spent a fairly short period of time there, when they had just gone public. I came in to work on a very particular project. And during that time - I joined Fitbit because I was actually hoping to finance my work on the book and my work on some other projects by working at Fitbit for a bit, and then leaving. Unfortunately, during my tenure at Fitbit, the stock went down by 85%. So I joke that I tried to sell out, and apparently I'm not very good at that.

    But despite that, I think that out of all of the public tech companies that I've worked at, or talked to people who worked at or anything, it's probably in the top 5% of healthy cultures. There isn't a focus on tech for tech's sake. There is a very clear understanding of what the technology enables.

    There also is a nice thing, which is not maybe part of the culture so much - it is just a structural aspect of the business plan - which is, because you're selling people fairly expensive pieces of hardware, their data isn't something that you are incentivized to sell on to other people. And that's very hard to find. Especially for me, where my specialty is in building high-throughput, low-latency systems. Most people hear that and they think, "Oh, you work on ads." Which I have, and I didn't enjoy it as an industry or as a type of product to build. Fitbit was nice in that it allowed me to flex those muscles towards a different end, basically.

    I did want to say, just with respect to what you're saying in terms of programmers being writers. I actually have often said the same thing in a form of being provocative, where I say, there are two kinds of programmers: those who think that programming is fundamentally math, and those who think that programming is fundamentally literature. Where I define literature similar to what Borges says in Tlön, Uqbar, Orbis Tertius, where he describes the culture that says, "Any time that you describe the universe, you are subjugating all other aspects to one aspect." You're basically pulling out something and eliding all the others. You're hiding them from view.

    And it's a game. It's an attempt to go and make people look at things from a perspective which is maybe familiar, maybe unfamiliar. But it is willfully blinding yourself to something, in order to make this one particular thing much more clear.

    Which is not what math purports to do. Math purports to be a completely correct perspective on things. It doesn't talk about the things that are ignored. It doesn't even have any formal representation there.

    This actually gets very much to the heart of the book, which we can circle back to later on perhaps: do we see software as something which is fundamentally concerned with itself, or fundamentally concerned with the environment that it's in? Do we acknowledge that there are parts of the environment that are not in our software, but that nonetheless we have to know about and care about?

    Len: Why don't we talk about that right now? That's a really fascinating aspect of your work. We can get more specifically to the book later, but on this subject - for example, one way into it is - you talk in a number of places about the idea that someone brings part of themself from things they've previously done, to new things that they're doing. This can be as mundane as like if you get a new job you bring the culture of the last company you were in, with you.

    I think you may have mentioned this already, but the people who birthed programming were physicists, and they brought something from physics to their approach to software. I wanted to give you an opportunity to explain a little bit about that.

    Just very briefly, what I partly took away from it was the idea that physics has a constraint, which is the external world, to which it has to hew. Whereas the environment that a software model is interacting with is itself constantly changing.

    An analogy would be, imagine trying to do experimental science in a universe in which the laws of nature were always changing. Experimental science works perfectly in a universe where the laws of nature don't change. So, I can do an experiment on a water molecule here, or a billion miles away, or a million years from now, and I'll get the same result. But that's a very different model from the way you think people should optimally approach software.

    Zach: Right. I mean this, actually - you can draw a through line between our earlier conversation about the analytic school, and the physics of the mid-20th century. I think that the early to mid-20th century is a fascinating period in terms of almost every intellectual pursuit. Because all of a sudden things just started aligning and making sense. We understood things, and we understood them well enough that we could go and build things that took advantage of these phenomena.

    We went very quickly from this very crude chemical understanding of how matter works, to, say, the transistor. And then we went from the single transistor to billions of transistors, very quickly. That one is actually a little bit less interesting. That's almost mechanical. But the ability to go and make that first transistor, to build atop all the understanding of these very amorphous quantum effects that were discussed there, and to actually be able to do that, is amazing.

    I think that there was a period of time where people had this positive view that was very hard to falsify. Things kept on getting clearer. If you go and draw that trajectory, it was a straight shot, or maybe even growing exponentially. There was no clear indication that it would ever taper off. Maybe we'd just continue to clarify the world and our place in it, to ever more dizzying heights.

    If you look at the analytic school, this was the first time they were playing with any attempt to go and make math something which has a firm foundation. The Principia Mathematica by Russell. And Frege, with his predicate logic. And you keep on going on from there.

    It is, as you said, arid. Especially from our current perspective, it seems filled with this very bizarre hubris, and this almost inhuman focus on taking people out of the equation. But I think that it's very understandable in the context in which it existed. It has value. Like the attempt to go and build these taxonomies - even if we can now look to them and say, "Oh well, here's what they were ignoring," these are really admirable projects people are undertaking.

    If you go and you follow that through, from the early 20th century to the mid-century, the computer arrives. Now, this is a steam engine for the intellectual progress that was being made. Maybe the only real constraint over our ability to math things was the inability to do enough math fast enough and accurately enough. And now we have this thing that does it for us.

    If you look at the early projects with the computer, which are inevitably tied up in defense spending and a whole lot of other things - and that's a tricky thing to disentangle - but all these people who were simultaneously talking about how to use a computer to track radar blips, were also talking about how computers were going to become this engine of exploration, this intellectual territory that had until now been off limits to us.

    This idea that we could go and we could just apply this very simple logic, and apply it very quickly and flawlessly, and this would somehow like make things clearer than they had been ever before - there is a physics aspect to that. Or maybe just more generally, this logical, mathematical perspective on it. I think that - again, in the context which existed - it makes perfect sense.

    However, I think that if you go and then you follow that through to the latter half of the 20th century - which is people's attempts to actually realize this vision that they had - at best it's been a mixed bag. I think that generally you can say that their expectations were completely unfulfilled.

    Len: It's really interesting you mentioned that. Thanks for that really great introduction to the place of the computer in intellectual history.

    One word you used more than once was "clarity," and versions of it. One of the really interesting aspects of analytic philosophy, speaking broadly, was an epistemological association with styles of language. I'm not putting it very well, because it's been a long time since I've spoken about this, but essentially what they came to see was that unclear language was a sign of unclear thinking.

    And so they had this - I think - perverse association of a style arbitrarily considered to be clear with a correct way of thinking. Not to go too much into the weeds, but this is where you end up with a continental divide in philosophy, between people who write in supposedly clear styles, and people who write in obscure styles. It seems to me that there's a relationship between what we're calling in this conversation a physics mindset and wanting to narrow down the range of reference, and somehow link that with a stable reality, in a way that doesn't necessarily match what needs to be done in software, in an environment that's always changing, where more things are always new.

    For example, you write about how software design is an intractable problem, which for someone with the mindset we're describing - whether it's in analytic philosophy or in, let's say physics - the idea of intractable problems is very troubling. Because, what does that say about the underlying reality? I was wondering if you could talk a little bit focusing on that specific idea that software design represents an intractable problem. Why is that?

    Zach: To wave my hands very broadly around this - I mean, anyone who's listening to this and hasn't read the book, I recommend that you do, because I'm sure that this is going to be presented in a better order in the book than we are here in this interview - but in general, you can go and you can think about physics. Getting back to that idea of something that's trying to pull every relevant aspect of the world inside of a model. That works for physics, because physics is able to oftentimes reduce itself to - what does one frictionless sphere do to another frictionless sphere? It's able to invent a problem wherein all of the aspects are completely defined, and then hope that that can get bootstrapped up into a reasonable prediction of what the real world would do.

    With software, we don't really have that luxury, because we are dealing with problems that don't lend themselves to that artificial simplification. We can't go and pull every relevant aspect of the world within our software. We're up against our own mental limitations very, very quickly, well before we even get anywhere near representing the world in our software.

    Then it becomes a question, again - it's a literary question, which is, what do we get to ignore, and/or what can we ignore? What are the very few things that we should not ignore? The problem with that, is that if you do that perfectly, if you do the Antoine de Saint Exupéry, "There is nothing left to take away" - that is beautiful, maybe? But also incredibly fragile, because your judgement there was entirely based upon the context in which you previously existed, or currently exist. And as the world around you changes, as it inevitably does, that invalidates this perfect minimal set of things that you've decided to represent.

    The fact that we don't do that is illustrated by the fact that we are building our software atop decades-old pieces of infrastructure, which were built to solve decades-old problems, atop decades-old hardware. There's a lot of weird little cruft in there that is commented out, or paths that are rarely taken. Occasionally, people try to tidy it up. But we don't have the luxury of being completely minimal in any of these cases. We try to be as minimal as possible, because if we let that go too far, then it gets out of control, and all of a sudden we lose our thread. We lose our ability to actually orient ourselves within this thing that we've built.

    One of the things I talk about in the book is that there are actually two different ways you can approach that. You, the person who built it - are you still keeping track of what you've built? Certainly, it's very easy to get past that. But then also, if someone new comes along, can they follow you to where you've arrived? Or have you basically left everyone in the dust?

    I think that it depends on who your audience is. If you're writing a doctoral thesis, you only need to bring a handful of people along for the ride, and they're mostly down that path. If you're trying to bring a layperson down it, then you have to be a lot more parsimonious in terms of what you choose to acknowledge and what you don't.

    All of these combine into this problem where - maybe "intractable" - I mean, I've used that word, and I think it's a good word, but the real point here is that what we don't have - which is a thing that we want, because it pokes at the pleasure center of the engineer brain - is a single scale or value that says, "Here's how good my software is." Then I get to twist something or tweak it a bit, and say, "Is it better now, or is it worse?" We don't have that. We will never have that. That is an unattainable goal. I think that that is something which people like to treat as a temporary lamentable state of affairs, as opposed to an intrinsic part of what we do.

    I spend maybe a little bit too much of my book trying to drive that home, because I think that that idea that we are just living in the prelude to a golden age of software is holding us back.

    Len: And if it's holding us back, how would you characterize what it's holding us back from?

    Zach: Well, if you go to a four-year program, there's a chance to get exposed to a very broad range of ideas. In the current curriculum for a four-year Computer Science program, there is a very narrow focus. There's some mathematics. There's another closely related branch of mathematics, which calls itself Computer Science.

    I took one ethics course when I was there in my four-year program, which talked about the handful of times software has killed somebody and said, "Don't do that." It didn't really, I think, deal with any issue which even approaches the complexity of the trolley problem in terms of software. Which means it wasn't much of a course, frankly. And that's about it.

    There's not really any attempt to contextualize software in the world, or to understand the world and how the software that currently exists, or will exist within our lifetimes, will shape it. How these things interact with each other. I think that if you acknowledge that the tools of mathematics are not sufficient, then you start to reach into the humanities, and you can maybe pull them in.

    Maybe you can have people who don't know that they need to be curious, but when they're in their four-year program, are forced to be. They're exposed to these sorts of things, because we acknowledge that this is actually very close to the core questions of software, and require a grounding in the wider set of questions and this broader intellectual tradition.

    Len: That reminds me, I'd like to ask you about abstractions and this relationship between the model that one's building in a piece of software and the world. But before that I have just a brief anecdote that's partly related to what you're talking about, where there's a certain desire to achieve a finality or single solution.

    I was interviewing Jerry Weinberg a few months ago now, and he talked about how the first computer he ever encountered was himself. He was there for the ride when all of this got going. He was there in the early days at IBM, when IBM was becoming a computing company.

    And he has this great anecdote, which I'll get slightly wrong, about the executives realizing that there was going to be this thing called programming that was going to be happening. They wanted it standardized, they wanted it tied off with a nice little bow forever, so that there would be no more thinking that had to be done along those lines. He didn't say this exactly, but effectively, in their ideal world, there's one programming language, and there's one set of practices, and it's final and it doesn't change. It becomes effectively automated.

    You were reminding me of that. There's people who naturally seem to be driven towards that type of situation, as a solution to problems. And then there's the one you're describing, which can be contrasted to that, where there's ever-evolving contexts that are inherently beyond our capacity to fully capture - either in ourselves or in any model that we build.

    On that note, yeah I was wondering if you could talk a little bit about your interesting idea of abstractions and how you talk about it in your talks and in the book?

    Zach: The commonly given definition of abstraction in the context of Computer Science - the definition is broadly mirrored in a lot of other fields, but it varies depending on what the field is - comes from a 1972 paper by Tony Hoare called Proof of Correctness of Data Representations. Notice that in this title it's talking about mathematical proofs.

    He says that there is this thing called a model, which is contained within the abstraction, which is, broadly, a representation of what's outside of it. This thing is populated by its environment. It's meant to reflect a facet of its environment. And then this model has an interface, and the interface and the model have this flexible relationship, because the interface is a very simplified version of what's going on inside of it. It exposes certain semantic details, but hides most of the others.

    The nice thing about that is that now we can play shell games where we keep the interface, but we change the underlying model, depending on our needs. We can go and we can build software atop the interface. And on either side, there's this flexibility, this ability for us to go and change this.
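
    A minimal sketch of that shell game, with assumed names that come from neither Hoare's paper nor the book: in Clojure, a protocol can stand in for the interface, and each record implementing it is a model that can be swapped without touching the code written against the interface.

        ;; The interface: what callers are allowed to know.
        (defprotocol KeyValueStore
          (put-val [store k v])
          (get-val [store k]))

        ;; One model: an in-memory map.
        (defrecord InMemoryStore [m]
          KeyValueStore
          (put-val [_ k v] (->InMemoryStore (assoc m k v)))
          (get-val [_ k] (get m k)))

        ;; Another model behind the same interface: a wrapper that logs every
        ;; read, standing in for a different underlying representation.
        (defrecord LoggingStore [inner]
          KeyValueStore
          (put-val [_ k v] (->LoggingStore (put-val inner k v)))
          (get-val [_ k] (do (println "read" k) (get-val inner k))))

        ;; Code written against the interface doesn't care which model it gets.
        (defn stored-name [store]
          (get-val (put-val store :name "Zach") :name))

        (stored-name (->InMemoryStore {}))                   ; => "Zach"
        (stored-name (->LoggingStore (->InMemoryStore {})))  ; prints "read :name", => "Zach"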

    But what the paper doesn't discuss at all is, what's going on outside of the abstraction? What we're trying to do in this paper is construct a proof that this is a correct abstraction, and so this relationship between the model and the environment - this is not spoken of.

    This is the focus of my book - the problem with this. The problem, even with the idea of correctness as a property that we have within reach, is that it is an immutable property. And that is also, as you said, what physics is looking for. It's this immutable truth. What is the ultimate, unchanging truth that we just have to go and re-contextualize within an environment? Fundamentally, once we learn this, then we have it.

    This speaks very much to the - I don't know how many physicists that you've spoken with, especially when they're first learning it, but you can choose to find it an amusing perspective, which is, physics is at the core of everything. So chemistry, that's just physics. And biology, that's just chemistry. And psychology, that's just biology. And so really I'm at the end point in all of this, and you're all just a bunch of frictionless spheres bouncing off each other.

    In the most reductive sense, maybe even that's true? I think it's not according to the most recent models. But basically that's an uninteresting perspective. It's an uninteresting conclusion. Because the question is not - is this something where, if we had infinite time, could we actually go and calculate what goes on in this clockwork universe?

    What is a useful thing that we can know? What is a heuristic that we can use here to actually interact with the world around us, as the world is changing around us? Because we are not able, even with the aid of computers, to respond in real time to these sorts of things in any effective first-principles way.

    I like to say that mathematics is concerned with being self-consistent. This thing does what we expect it to do. But correctness implies that there's a property which survives being dropped into any context. I think that that's provably, obviously false.

    A better question is, what makes a useful abstraction? What is an abstraction which is useful in the subset of environments we expect it to be dropped into? That's really what the book focuses on a lot, which is, how do we build useful software, and how do we judge whether a piece of software is useful? If we judge it to be not useful, what are some reasonable steps we can take to fix that?

    The problem with this, is that I'm trying to go and talk about software in general. And of course there is no general software. I don't want to go and over-fit. So I end up talking a lot about these generic concepts, and then rely on the reader to add water and be able to actually turn this into a meaningful framework that they can use to think about or talk about their software.

    Frankly, it is still an ongoing project. I don't know if I've hit the mark there. But that's the problem - abstraction is useful, and it is necessary, because otherwise all you're doing is just recounting an anecdote about your life. But you're asking more of the people that you're talking to, the more abstract you become. I think the book, as it's written, demands a lot from the reader. But I just don't know another way to accomplish my goal without doing that.

    Len: I really liked the distinction that you drew between correctness and self-consistency. I wanted to ask you, is that related to your thoughts on the difference between software engineering and civil engineering?

    People often go, "Why can't software work like bridges?" Where most of the time, hopefully all the time, you can solve problems in civil engineering in a way that you can't solve problems in software engineering.

    Zach: I think that this question, it's asked a lot. At this point, it's very irritating to me, because I think it's akin to asking, "Why don't you build the plane out of the black box?" Fundamentally, we don't build software the way that we build bridges, because we don't build software that solves a single, fixed problem that is there for all time.

    In the book, in a footnote, I say this is a bad metaphor. If you're looking for a better physical metaphor - and all physical metaphors are flawed, because you never run out of space in software; in software, if you build the ideal home, building it once means that everyone gets to use it; it's the way in which these things deal with scarcity - you're probably just misleading yourself by trying to make a metaphor.

    But city design is at least a closer thing. Because the needs of a city, and the needs of the inhabitants of a city are constantly changing. Notably, if you go and give everything that the inhabitants of the city want, they're just going to ask for more. They're going to invent new things that they need to be happy with their lives. That constantly moving goal, that threshold of sufficiency is the fundamental property about software that we need to go and acknowledge and maneuver with respect to.

    There's some interesting things about city design. But again, it's an imperfect metaphor for all the reasons that I mentioned, and probably a lot more that I haven't even thought of. So, use it with caution, I guess?

    Len: Some of the thoughts you're trying to convey in the book are inherently difficult to convey. I don't want readers or potential readers to be left with the idea that your writing is not entertaining, because it is.

    That reminds me of an example I came across first in your talk, where you talk about the idea that possession doesn't imply understanding, and you very colorfully go from King Arthur pulling the sword from the stone, to Baudrillard, to machine learning. One of the interesting things about your work is that you have all these different things to draw on that one normally doesn't see drawn upon together. Can you walk us through that little sequence of steps from King Arthur to machine learning?

    Zach: Sure. There's a really common scenario when you're building software, where you become very familiar with some tool, and become very familiar with its shortcomings. And learning the shortcomings of a tool is a long and fairly laborious process, because you need to go and use it in a bunch of different situations and get a feel for where it is and isn't effective. But notably, if you go and you look at, say, an open source library, there are very few open source libraries that will say front and center, "Here is what this library is bad at. Here's what it's not meant to be used for."

    What it will do is try to pitch itself as this all-singing, all-dancing thing that will solve all of your problems. And if not, that's just because I haven't come out with "Version N+1" yet. Again, this speaks to our unwillingness to acknowledge that abstractions are contextual things and are not meant to solve all problems. But it's also, I think, just that culturally it's hard for us to go and say, "I built this thing. I spent a lot of time on it, and here's why it sucks." That's a difficult thing for any creator to do.

    This means that if we go and we use a thing and become very familiar with its shortcomings, and then someone comes out with a new thing, we look at it, and we see the exterior of that thing - the "read me" of that thing - which says, "Here's what I do," without acknowledging any of those weaknesses. All we see are the good parts.

    What we're comparing now is this very apples and oranges situation where we see through the one thing. We see through to all of its flaws. With the other thing, all we see is its nice, shiny exterior, and we're like, "Oh, well this is better. We should use this."

    In fact, this is almost a clichéd example of what a junior engineer will do. They'll jump on the new thing, because they hate the old thing. And of course that just means that later they'll hate the new thing and look to the new new thing.

    The point that I make is that this is a very common pattern, where what we focus on is not our understanding of a thing. Understanding is not a property that we think is important when judging something. It's the possession that matters. And so, I say that, if we look at King Arthur, King Arthur was able to pull the sword from the stone because he had the right bloodline, because he was Kingly. He was wise, he was brave.

    But, at least in the context of the Arthurian period, that was totally subjective. The only objective fact here is that he holds a sword in his hand. And that's what we focus on. In a very real way, the sword is what makes him kingly, it's what makes him brave. It's what makes him have the right bloodline.

    Another example I give in the book is this concept of the philosopher's stone. You might think that this is meant to be some allegory. Every masterful alchemist goes and builds a philosopher's stone. But in the actual literature, it's meant to be this physical thing - it's red, it's heavier than gold. It can be ground down. It can heal all sickness, it can extend life.

    The implication of that is, if I go and have the philosopher's stone in my pocket, and it falls out and someone else picks it up, now they have all the power of a master alchemist. They don't need to understand all the things that went into the making of it. It's just the MacGuffin. It's the thing.

    I think this might be somewhat more common in the Western canon than it is in other cultures. But it's all the same, I think - a very human thing, to focus on the external aspects of it, to focus on the possession of something as the most important part. That's the slightly meandering version of what you were talking about there. I do mention Simulacra and Simulation by Baudrillard, as -

    Len: Which people who are listening who may not have read it, will recognize from its brief appearance in The Matrix.

    Zach: That's right. Yeah. The broader point that I try to make here, is that these questions, we talk to ourselves and we say, software, it's new. Computers they're half a century old. We're still getting our bearings; we have a lot of stuff that we have to go and think about. And we refuse to acknowledge the broader intellectual tradition we sit within.

    The fact is that the question that we're asking - how do we deal with something which is too big to fit in our head? - is a millennia-old question. There are so many different answers to that, and so many perspectives on that. So many things which we might read and might recognize as a distorted reflection of what we do on a daily basis. But most people who study software will never come into contact with that unless someone shoves it in their face.

    I think that the only real antidote to that is really the conference model, where people go up and give talks and give a small little glimpse into a different path of study that they could've taken that's digestible within 40 minutes or so. I think that that's good, but it's not a solution. That's a band-aid.

    Len: Your invocation of Simulacra and Simulation drew together a couple of interesting things for me about this idea of possession not implying understanding, necessarily. One of the fascinating things about that book, as a historical artifact, is that it was about the first instance of reality TV, I think? What Baudrillard is partly talking about is how the representation of something - and this is crude and wrong - but the representation of something can make it real in a way.

    It also reminded me of one of my favorite examples of that same type of thing, which is that people often assume that a businessperson understands the economy because they possess a business, because they're presented as being associated with money and its accumulation. And they even assume about themselves that they understand this thing that no one credibly claims to understand, which is the economy. Because it's too big, it's too big for us.

    But as you've invoked, we all have this very deep drive to have things be containable and understandable. And the idea that you can do something actually really easy, like start a business - I didn't say succeed at it, but starting one and owning one is not hard - and then you can present yourself, even to yourself, as understanding this giant thing. It's just very tempting to have these substitutions set up, because they make us feel better about ourselves and the world, when things can be beyond us.

    Zach: Right. There's a lot of commentary we could veer off into - the fact that, I think, in the gold rush, people who found a successful gold mine here in San Francisco were not then told, "You seem to understand so much about money." And yet, we seem very willing to ask that same question of someone who's started a start-up, and in many cases was just in the right place at the right time with their particular idea.

    But I think that, specifically with respect to software, this focus on this idea of possession is especially dangerous, given the current fascination with machine learning, which is something that you mentioned that I didn't actually go into - which is: we build this thing which has these emergent properties. And we think that because we defined some aspect of how it was created, our understanding somehow carries through to all the consequences of our decisions - that by defining the seed, we understand the plant or the forest or all the other consequences of this one action. I think that if you go and you talk it through, people would be like, "Oh, well no, that's not actually what I think." We don't poke our own hot-air balloons of self-regard there very often. We're not wired to do that.

    I think that we're starting to see this now with self-driving cars, which is, frankly, I think a little bit terrifying. The engineers that I know who are knowledgeable about these things - more knowledgeable in most cases than I am about the specific techniques that are being used - are split 50/50 between horror and glee, in terms of the fact that there's so much new stuff to figure out, but also that we're just pushing forward without first pausing to figure it out.

    We can look back to the early- to mid-20th century where, in the Manhattan Project, they had a ball of uranium - a little sphere that was in the entry way to one of their buildings, and people would rub it for good luck. It was a little bit warm to the touch. And most of the physicists predictably died of cancer, at a relatively young age.

    Because they didn't understand everything. And some of the factors there probably wouldn't have been understood, even if they had tried, without decades of actual study. And they didn't have decades. So this was a sacrifice that they made. There's that same hubris, I think, at play there. It worries the hell out of me.

    Len: That's interesting. I don't want to go down - I mean I do actually want to go down this road, but we'll time-box it for just a couple of minutes.

    This is actually a preoccupation of mine in particular. And one thing - my brother and I have a joke that there is an infinite number of ways you can divide the world up into two types of people.

    And I'm definitely on the side of - a million people are dying a year in car accidents. If there is a problem with us not knowing what's going on inside the black box of machine learning, that's a problem that I'm concerned about. But it's inversely proportional to my concern about what it is about us that lets us live with this current situation as though it's okay. Which is just a way of talking about an entirely different dimension of a problem.

    Zach: I don't have a good answer to that. This gets back to the thing where we understand very deeply the shortcomings of the current model, and understand only the potential of the new one. Maybe you could say, "Well, it couldn't be much worse?" Which is the antithesis to "the devil you know."

    But I think that it's hard. Any decision we make there is pretty much, by definition, not a rational one. I think that also as accidents occur, as they just did recently, we can go and we can re-evaluate the different approaches here. But it's more worrisome to me, because there is this veneer of mathematical certainty smeared across the top of this enormous opaque thing that's just accreted. And that's not the right way to be thinking about that.

    If we looked at that with clear eyes, we would still do exactly what we're doing right now. But the way that we're understanding this, the visceral understanding we have of these models that we're creating, is not, I think, accurate.

    Len: You mentioned piercing the hot air balloon of self-regard, and that reminded me of something you wrote about narrative fallacy and open source. I know you've created some open source libraries, and that's an important part of what you've done. I wanted to ask you about that, that specific question - what is the narrative fallacy in open source, and what's your rather pointed opinion about that?

    Zach: I spent a lot of time thinking about that, because I've spent a lot of my life doing open source work. I think that for a while, I was doing it without really having articulated to myself why it was a thing that I felt like I ought to do. It took me some time to reach any level of clarity on that.

    So, there is this very well-defined narrative, which is from this really odious guy named Eric Raymond, who reflects the worst of the early parts of the libertarian Internet.

    But he's famous, and he's famous because he basically built this narrative wherein he was the archetypal hero. Which is to say, someone who had a problem that they were trying to solve, and he inserted himself into this vast network of people who had similar problems. In this emergent way, they solved the problem in a better way than any individual person could have. He coined this term "Linus's Law," which was meant to be about Linux, which is - I'm going to get the exact text wrong, but it's, "With enough eyes, every bug is shallow." The idea is basically that the law of large numbers makes it possible for us to write better software than any one expert could ever do.

    And he's not wrong, but the narrative fallacy - in terms of how I said this - is that it presumes that basically because the Linux kernel was created, because this enormous community of people built around this, that that was either inevitable or even desirable to most people who create open source.

    You could go and talk about any creative endeavor. Anyone who's writing a little bit of fiction, or writing whatever, wants to create the great American novel. Anyone who's writing an essay wants to become a public intellectual. I mean, these sorts of outcomes, they exist - but they're not the common outcome. To assume that basically there is a strict hierarchy, where some people just got further up that single ladder than other people, is absurd, if you stop to think about it even for a moment.

    Towards the end of my stint of being very focused on these sorts of open source projects, I started to get a little bit burned out. I realized that the maintenance of the project, the construction of this community around it, wasn't what was motivating me. What was motivating me was that this was an exercise in understanding something. And once I understood it, it wasn't that I would run away necessarily - but it lost a lot of the allure for me.

    In some cases, I would learn enough and I would have this horribly incomplete thing, and it would never even see the light of day. In some cases, it would be something where, in order for me to understand whether or not I had built something that was useful, I had to see if people used it - whether or not it fit their needs - and then use that to update my own understanding.

    I think a lot of my insights, such as they are, about software came from this period of my career, where I was extremely focused on building these abstractions - building these interfaces that were meant to be these sorts of general solutions to these problems - and then just seeing if people came to me and said, "This doesn't solve this particular case," that you weren't even aware was a thing before.

    I would have to fold that in and figure out how to extend what I had built to fit whatever it was that they were trying to do. But that's it. I wanted to learn. And it was a fundamentally selfish, greedy motivation there.

    This essay I wrote, which is called Standing in the Shadow of Giants, was basically me just talking about the fact that I think that most people who do open source work are not trying to go and build a log cabin and then wait for a city to grow around them. I think most of them want to be on the frontier. And that frontier's a moving target, which means that as people start to show up, they inevitably get pushed further out.

    This is absolutely a thing that you can observe in any open source community. The fact is that, I think, we understand this as part of the social mechanics, but we still hold up this one narrative that was posited in 1998 as the one true thing that open source projects are aiming for. I think that we need to be more honest with ourselves. I think that we need to not just have this implicit understanding of what goes on in there, but actually start to create a vocabulary for this.

    Because if we have a shorthand for it, then people can actually have meaningful conversations. This gets back to the question of the questionable merits of the analytic school. Because I lean on that pretty heavily in some parts of the book. They provide a taxonomy. The taxonomy's not perfect. It purposefully ignores a lot of nuance there. But a reductive taxonomy is better than none at all. Because without that, we're just reduced to making grunts and pointing at things.

    Len: That's a really great segue into my next question, but I'm going to pre-empt it with an observation about that essay. I really enjoyed the way you set it up with the image of - you don't put it in quite these terms - the association that people have historically drawn between advancement on the American frontier and advancement in, let's call it, civilization. You have this great little joke where the only problem was that when civilization arrived on the frontier, the frontiersmen kept moving. Or some proportion of them just kept moving, because that civilization was the last thing they wanted. That's why they were on the frontier in the first place.

    Zach: Right. I think that that's the thing. We say, "Oh wow, these people they were such pioneers. They so clearly wanted the American way of life to go westward." And well, no, they wanted to escape it. They wanted to get away from this structure of civilization. And maybe at some point they had enough property that they wanted someone to come along and protect it for them. That's not to say that people's understanding of what they want might not change over the course of their lives. But to say that basically there is this collective urge towards the creation of this enormous, complex, intertwined thing - is ridiculous.

    We go and say that the end was prefigured by everything that came before - that everyone knew where they were going and was trying to get there - as opposed to saying it was just an accidental result of all these things that people did for reasons that are diverse, and sometimes antithetical to the actual outcome.

    Len: I don't actually know what I might be stepping in with this, because I've been in the startup space for a while now, but I came from a non-technical background. I remember the jarring moment when I was reading Eric Raymond's The Cathedral and the Bazaar, which I was thoroughly enjoying, and then he referred to Ayn Rand as a philosopher, and I got a little bit of puke in the back of my throat. But at the same time I knew there was some intellectual background he was coming from that I don't share. And when you mentioned just now that he was part of the libertarian side of the discussion, that actually settled a long-time question I'd had that I didn't have an answer to. So thanks for that.

    Zach: If you're actually curious about this, he was the maintainer of this thing called "The Jargon File" for a very long time, which was his attempt at defining "hacker culture," or "hacker personality," or whatever. It was basically his way of saying the ideal archetype is him: "It's me, hi. I am the archetypal hacker." He's not a bad writer, and he is not completely devoid of valid points, but, I mean, speaking of hot air balloons full of self-regard, that's him.

    It's frustrating, because I feel like the whole narrative that he built around his heroic journey in the open source community hijacked a lot of the conversation in a very unproductive way, and hijacked a lot of people who were first starting out. People will go and emulate him, for the same reason that people will go and read the biographies of famous physicists or scientists or whomever. People will try to make the early points of their lives coincide with the early points of their hero's life, even though that doesn't actually control the outcome in the least. You can admire some small aspect of what someone has done, but when people go and try to emulate them, it's counterproductive, at best.

    Len: On the subject of productive emulation, moving back to your book. You had a really interesting approach when you started thinking about writing your first chapter, as I understand it, which was: "I want to write about naming in software development, and I don't necessarily want to start entirely anew. What if there's a framework out there for naming and for understanding language that I can draw upon?"

    And as you mentioned just a few moments ago, you found a framework like that in analytic philosophy. Specifically, you talk about Frege's concept of a sense. I wanted to give listeners an example of how useful this aspect of the philosophy of language - obscure to most people - can actually be when you're doing something very pragmatic that involves other people and building a shared vocabulary: namely, naming things in software. Could you talk a little bit about that? About reference and sense and things like that?

    Zach: The classic way that sense is presented in the philosophical discussion of names: you have a thing which is a sign, which is the textual representation of the name, and you have the referent, which is the thing to which it refers. There were a lot of different takes on this. Plato, in Cratylus, says names actually reflect your inner character. Now if you have a name like something out of Dickens - I can't even think of a good Dickens character name right now, but -

    Len: Mr Krook.

    Zach: I mean, yeah, let's go with the obvious ones. Those are cratylic names - names which are meant to establish who a character is before you even know them. And that obviously isn't true of real names. So then the philosophical consensus was that any sign can point to any referent; it's just an arbitrary thing that we agree upon. Dartmouth is not necessarily at the mouth of the Dart River. If the Dart River moved, you wouldn't have to rename Dartmouth. This text happens to mean this thing.

    What that means is that you can have multiple signs that point to the same referent, and these signs just become synonyms for each other. But what Frege pointed out was that in ancient Greece they had these two heavenly bodies, the morning star and the evening star, seen at the beginning and at the end of the night. Both happened to be Venus, but they didn't know that until a fair bit later.

    And so he said: well, we could construct a sentence like "Homer, who was in ancient Greece, thought the morning star was the morning star," which is tautological, obviously true. But "Homer thought the morning star was the evening star" is not. These sentences are different, and therefore these different signs that point to the same thing are not just synonyms for each other. They differ in terms of their sense, which is how they refer to that thing.

    By the way, this is a pretty big oversimplification of a very nuanced subject. I should note that the names chapter is available for free. If anyone wants to go into this, you should just go and download the sample chapter.

    In software you can draw an analogy with something they use in philosophy: counterfactuals. A common one is, "What if Nixon wasn't Nixon?" They talked about this a lot during the 70s, actually. They said, "What if Nixon weren't President? Would Nixon still be Nixon?"

    We don't have that. We don't have alternate universes in which our code base is different. But we do have this constantly changing set of implementations. These things are hidden beneath the interface. They change, and their nature changes. And yet we still use the same word to refer to them. So then the question becomes, "Okay well, if we go and we have a list, and the list is a linked list or the list is a sorted list or the list is an array or whatever - is it still a list?" Right? What is the thing that holds it together? What allows us to go and talk about these things interchangeably?
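
    A minimal Clojure sketch of that question, with purely illustrative names (none of this is from the book or the interview): three concrete representations that a team might all casually call "the id list," and one function written against the shared sequence abstraction that survives every one of them.

        ;; Three concrete representations of "the id list" - a linked list,
        ;; a vector, and a sorted set. The implementation differs; the name doesn't.
        (def ids-as-list   '(3 1 2))
        (def ids-as-vector [3 1 2])
        (def ids-as-sorted (sorted-set 3 1 2))

        ;; Written against the sequence abstraction, this works for all three,
        ;; which is what lets us keep calling each of them "the list".
        (defn sum-ids [ids]
          (reduce + 0 ids))

        (sum-ids ids-as-list)   ;=> 6
        (sum-ids ids-as-vector) ;=> 6
        (sum-ids ids-as-sorted) ;=> 6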

    In my mapping of this philosophical concept onto software, the sense is the essential quality - the through line that remains valid even as our software changes. And so if we talk about, say, a unique identifier - often people use a UUID, which is just a very large random number that is extremely unlikely to have a collision - we start to ask: "Okay, is our ID represented as a number of this particular size, or is it just some arbitrary string? What are the things that we know to be true now, versus the things that we believe will be true forever - or will be true for so long that if it ever does change, we'll just change what we call it?"
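
    As a small, hypothetical Clojure illustration of that split - the function name and values here are invented, not taken from the book - the only property callers are allowed to rely on is uniqueness, not the fact that today's implementation happens to produce a UUID rendered as a string.

        (import 'java.util.UUID)

        ;; The sense of `new-id`: "returns a unique identifier".
        ;; The incidental detail: today that identifier happens to be a UUID
        ;; rendered as a string. Callers should treat it as opaque, so that
        ;; switching to another scheme later doesn't break them.
        (defn new-id []
          (str (UUID/randomUUID)))

        (new-id) ;=> e.g. "9b2f6c3e-..." - an opaque, unique string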

    That's difficult, because as software implementers we have to cleave our understanding of something down the middle. We understand both the externally visible semantics and what it is underneath the covers. And when we're talking about it, we can't allow our understanding of the innards to leak outside, because then we are enshrining these incidental implementation details as permanent features of the thing.

    The sense is not something that our software enforces for us, because it's captured not just by the code, but by how we talk about the code and how we allow that code to change over time. There are type systems, but there's no type system that constrains what the software can become. Fundamentally, this sense exists in our heads and nowhere else. And so - this again gets back to the intractable thing - we are the stewards of not just the code, but of what the code becomes. Therefore the onus to make these things change in a reasonable way is entirely on us; it can't just be on our tools. That's probably clearer when you read it written down, but that's the gist of it.
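
    Read in Clojure terms, that might look like the following sketch (again illustrative, not an excerpt from the book): the docstring can state the sense, but nothing in the language stops a future change from quietly violating it.

        ;; The docstring records the sense - the promise that is supposed to
        ;; outlive any particular implementation.
        (defn lookup-user
          "Returns the user map for `id`, or nil if no such user exists.
          Callers may rely on that contract, not on where the data lives."
          [id]
          ;; Today: an in-memory map. Tomorrow it might be a database call;
          ;; the name and the docstring carry the sense across that change.
          (get {"a1" {:name "Ada"}} id))

        ;; Nothing mechanical enforces the docstring: a later edit that throws
        ;; on a missing user instead of returning nil still compiles - only the
        ;; people who remember the sense will notice that it changed.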

    Len: I can't resist the

    —Huffduffed by davidwalker

  3. Launch. A Startup Documentary.

    Follow a tech startup on its journey from nascent idea to embryonic product to triumphant launch.

    Launch follows Rob Walling and Derrick Reimer as they bring email marketing startup Drip from its first days of inception through the agonizing months of writing code, talking to customers, and the never-ending uncertainty of whether anyone will ever use what they are building.

    From the honeymoon period of “green field” development to the sleepless nights of database failures, Launch captures the often underplayed, but very real angst of building a startup.

    Compiled and edited from dozens of conversations captured over a 9-month period, this 2-hour audio documentary serves as a dose of reality to the often mythical media portrayal of the startup journey.

    http://www.startupstoriespodcast.com/

    —Huffduffed by davidwalker
