
  1. Elements of Clojure with Zach Tellman · The REPL

    Elements of Clojure

    Ideolalia

    Bifurcan comparison to other data structures

    Jorge Luis Borges

    Standing in the Shadow of Giants

    CLJ-1517 - unrolled small vectors

    CLJ-1415 - Keyword cache cleanup incurs linear scan of cache

    Open Source is Not About You

    Semantic Machines

    Daniel:

    00:00:00

    Hello, welcome to the REPL, a podcast diving into Clojure programs and libraries. This week, I’m talking about Clojure with Zach Tellman, the creator of Manifold and Elements of Clojure, a recent book about Clojure. Welcome to the show, Zach.

    Zach:

    00:00:13

    Thanks for having me.

    Daniel:

    00:00:14

    Yeah, it’s great to have you on. So, I think that in preparation for this interview, I was thinking a little bit about people in the Clojure community and your impact on Clojure. And I would say that you’d probably be the top 10 at least, if not top five people who’ve had impact on Clojure programmers. And that’s, I guess, most Clojure programs running today would have some of your code somewhere in it. Do you think that would be a fair assessment?

    Zach:

    00:00:42

    I think so. I’ve never been sure if people use my libraries because they’re the right libraries to use, or because they think they’re kind of neat, or because the name is just more memorable than the other library. Yeah, I think that they’ve propagated pretty far into the community at this point.

    Daniel:

    00:01:00

    Yeah. And so, you’ve been working with Clojure for 10 years, maybe more by now?

    Zach:

    00:01:06

    Yeah, that sounds about right. I started in … It would’ve been late 2008, early 2009 I think.

    Daniel:

    00:01:14

    Right, cool. And so, you’ve kind of covered off your intro to Clojure pretty well in talks and other interviews, so I don’t want to kind of rehash your whole Clojure origin story. But I guess maybe you would be able to just sort of give us a brief overview of why Clojure, how you got here, and then maybe start with some of the original libraries that you worked on?

    Zach:

    00:01:36

    I started using Clojure during my first job, where I was working with C# to write desktop software for Windows. I realized a few years in that this was not something I wanted to be doing for the rest of my career. And so, I started looking at alternative languages. I was looking at Ruby because I was in SoMa in San Francisco. GitHub had just been founded like down the street, it was very much in the air. I looked at OCaml, I looked at Erlang. And I looked at Clojure, and Clojure was on the list because I was working at the time with Tom Faulhaber, who wrote clojure.pprint among other things. And he was a fan of Lisp from back in the day with Common Lisp. And he really thought Clojure was something worth looking at.

    Zach:

    00:02:24

    And the absolutely absurd sort of test I did for each of these languages was, I tried to write something with OpenGL. Because in school I had focused on graphics and computational geometry, and I felt like I kind of missed that and wanted to get back into that. And so, I played around with Ruby. I played around with OCaml, and both of those bindings weren’t very good. Erlang didn’t even have them, so that was a non-starter. But, Clojure, when I started playing around with it, I was able to just take LWJGL, which is the Lightweight Java Game Library, which provides extremely literal bindings over the OpenGL spec. Literally it just has a bunch of static classes which correspond to the different OpenGL versions. So, there’s GL11, GL12, GL13, and you have to import the static methods from the correct one. It’s actually really tedious.

    Zach:

    00:03:21

    But as I was learning Clojure and trying to go and learn how this library worked at the same time, I found that there is this really interesting semantic compression I was getting. I could go and I could say, “Actually, I don’t care what class this is in. I’ll just use macro-time reflection to figure out which of the classes this should be. Because I know there’s something named this somewhere, so just find it.” And also OpenGL has a lot of scoped operators where you have to explicitly enter and exit some sort of scope. And of course with macros it’s very easy to go and just say, “Enter this at the top, exit this at the bottom within some sort of try-finally.” And so, it was actually weirdly a very good way to get familiar with the benefits of using Clojure. At least as a way of interfacing with the Java library ecosystem.
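
    A minimal sketch of the scoped-operator pattern Zach describes here: enter the scope at the top, exit it at the bottom, inside a try/finally. This is not Penumbra’s actual API; the macro name and the draw-textured-quad call are hypothetical, and it assumes LWJGL’s GL11 class is available.

    ```clojure
    (import '(org.lwjgl.opengl GL11))

    (defmacro with-enabled
      "Enables an OpenGL capability for the duration of body, then disables it,
       even if the body throws."
      [capability & body]
      `(do
         (GL11/glEnable ~capability)
         (try
           ~@body
           (finally
             (GL11/glDisable ~capability)))))

    ;; Hypothetical usage:
    ;; (with-enabled GL11/GL_TEXTURE_2D
    ;;   (draw-textured-quad))
    ```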

    Zach:

    00:04:11

    And so, based on that I created my very first open source library in any language, which was called Penumbra, which was a wrapper for OpenGL. And it was actually a wrapper for an older version of OpenGL which is called immediate mode, where you go and, for each frame, you make a call for each vertex that you want to draw. And this is very inefficient and not used by any serious game engine anywhere. But it is a very easy, fun way to go and experiment with it. That’s what I did. I just played around with it and came up with little graphical demos and figured out how that should work with Clojure.

    Zach:

    00:04:48

    And towards the end of it I was actually trying to create something that transpiled Clojure into GLSL, the GL shading language, which is the code that executes on the GPU. To do this you had to do type inference and stuff because GLSL is effectively like C99 with like a few extra operators. It worked-ish, but basically I was the only one who understood what sorts of programs would properly transpile and which ones wouldn’t. And so, that also became my first introduction to the fact that, if you write enough Clojure and you build enough macros and enough compile time logic, it becomes an opaque tool for anyone but yourself. So, that was a fairly fully featured introduction to the good and bad parts of Lisps and Clojure more specifically.

    Daniel:

    00:05:41

    Nice, and I think probably a feature of your work would be your macros. You’ve written a lot of macros, your code is macro heavy, which sounds like it’s a negative thing, which I’m not saying it is, but-

    Zach:

    00:05:52

    I mean it might be. I’m willing to accept that if that’s how you’re going to put it, so.

    Daniel:

    00:05:59

    No, I just think it’s definitely a feature of your work and you’ve probably written a lot more open source macros than many people have, I would guess. And probably have a mature take on macros by now, I would imagine.

    Zach:

    00:06:11

    I don’t know. I think that the way that I try to approach things, the way I try to approach learning things specifically, is to try to figure out where things break down. What’s the boundary of this thing? Where does it become this absurd thing as opposed to a useful application of some concept? A lot of my open source libraries like ones that I’ve actually released and ones that just never really quite made it off the ground, were me trying to understand like, “Where is this sensible and where is this me doing this for the sake of doing it?”

    Zach:

    00:06:48

    And a great example of that is my catch-all utility library called Potemkin, which is actually I think the second library I built, or rather released. Because I had this idea for how namespaces should work in Clojure. Because, again, OpenGL has this huge surface area to cover and so I wanted to be able to have a lot of these operators exposed in some places for my own use. And then I wanted to take a subset of those and lift them up into a different namespace for public consumption. And so, I created this macro called import-vars, which I thought at the time was just an insanely clever idea. I was very proud of it.

    Zach:

    00:07:30

    But I think that it also spoke to a problem I was seeing, which was that Clojure doesn’t really have an out-of-the-box opinion as to how you should structure namespaces. The only limitation of what goes in the namespace is, you can’t have two vars that have the same name. And if you take that to its logical conclusion, you basically get Clojure core, which is thousands of vars, none of which collide with each other. But there’s no relationship between them other than the fact that they are just built-ins to the language. And if it were something that were actively developed, I would think that most developers, maybe not Rich, but most developers would find that very ungainly and very hard to navigate. And then I’d want to go and put, like, seq-related functions in their own namespace and special-form-related stuff into their own namespace and everything.

    Zach:

    00:08:17

    And then just be able to say, “Actually, all of these should get imported and surfaced into this Clojure core thing.” And so, it’s sort of decoupling how your code is organized for your purposes, and how your code is exposed to the consumers of your code. Having those be separate seemed good to me at the moment, and I still think it actually seems pretty valid as I explain it right now. But the rest of the community did not agree. In fact, I think that this was like the first time that someone just like expressed a general, “Ew,” sort of like, “That’s a gross thing you just made there,” sort of reaction, which is not the last time that that’s happened certainly. But, it was the first time that someone just had a very strong negative aesthetic reaction to this idea that I had.
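
    As a concrete picture of the idea, Potemkin’s import-vars lets a public namespace re-export vars that live elsewhere, so how the code is organized internally stays decoupled from how it is exposed. The internal namespace names below are hypothetical; only the import-vars form itself is Potemkin’s.

    ```clojure
    (ns my.lib
      (:require [potemkin :refer [import-vars]]))

    ;; Surface selected vars from (hypothetical) internal namespaces as part of
    ;; the public my.lib namespace.
    (import-vars
      [my.lib.impl.parser
       parse]
      [my.lib.impl.render
       render
       render-to-string])
    ```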

    Zach:

    00:09:01

    Honestly, that was actually really interesting to me and motivated me to do this further, because I had this question in my head of like, what is good design? Software design has always been something that has really interested me, because it seems like there is a difference between something that is good and something that is bad. Certainly in day-to-day conversation when we’re collaborating on code, people have these aesthetic reactions. But to really understand, to predict how people will respond to this thing is hard. And so, being able to have this test bed, which was the Clojure community, and be able to put something out there and say, “What do you think? Do you hate it? Do you love it?” And just see how people respond to it, what parts people sort of take and run with, what parts they’re just confused by, was actually genuinely exciting to me. I felt this was a way to answer these questions much more directly than just writing some code at work.

    Zach:

    00:09:54

    That was really what drove me to go and build more open source libraries, was the fact that I could get feedback, sometimes explicit and sometimes implicit through just people choosing to use or not use the thing that I have built. Of course, it’s not objective, because once you become established in the community, people use it not because they’ve carefully considered all the alternatives or something like that. They use it because there’s a brand associated with that or whatever. It’s more complex than I think I’m making it out to be, but still I think that it was an opportunity for me to learn about software design much more quickly than I would if I were just heads down coding through the workday and letting it go at the end of the day.

    Daniel:

    00:10:34

    Nice. I’ve heard lots of people talk about why they contribute to open source, and why they create open source libraries. But I’ve never heard anyone talk about that aspect of understanding good design, at least not as clearly as you have.

    Zach:

    00:10:46

    Well, I think that people are motivated to do open source for a wide variety of reasons. I mean this will, I assume, come up later in the conversation, but this is something that, I don’t think if you had asked me when I started doing this, “Why are you doing this?” I would have had as articulate an answer to that question. I think that at the time it was just weirdly compelling for reasons I couldn’t quite say. In the same way that Clojure as a language is weirdly compelling to me for reasons I couldn’t quite say. The answer I would give when I was just starting out and I was telling people about this cool, new language that I was using, they would say, “Well, cool, pitch me on it.” The best I could come up with was just, “It fits my brain, it fits the way that I think and maybe it will fit yours too if you check it out.” It’s far from the most winning elevator pitch I think, but it’s hard. It’s hard to, I think, be really clear about, “Why am I having an aesthetic reaction to this thing?”

    Zach:

    00:11:42

    It’s undeniable that I was and that other people have had this reaction to Clojure. To really break it down I think is a much more complex process, I’m not even quite sure that I’ve fully done it at this point.

    Daniel:

    00:11:55

    Yeah, maybe … diving back into your timeline there, after Potemkin the other long-running Clojure library that I think many people will be familiar with is Aleph, which is a … Would you still call it a Netty wrapper? Wrapper sounds quite diminutive.

    Zach:

    00:12:12

    It’s interesting, so, to start from the beginning, Aleph started in … I want to say 2010.

    Daniel:

    00:12:19

    Yeah.

    Zach:

    00:12:20

    I believe around July. I remember because I wrote it over a long 4th of July weekend, that was when that happened. The impetus for that was that, I had gone to a Clojure meetup and people were talking idly about what would an async Ring look like? And it’s important to remember that in 2010, the new hotness was Node.js. This had just come out I think less than a year prior. It was taking the world by storm, everyone was really excited to async all the things.

    Zach:

    00:12:54

    And I think that there was a sense that Clojure as another newcomer on the stage needed to have an answer to Node.js. What was our community’s thing that was going to be able to tap into the same excitement and be able to use it to grow our community as well? I didn’t have a very good answer to it, and it’s worth remembering at this point, I was doing front end … or really I guess desktop development, and my background was in graphics. I hadn’t done systems development of any real sort before. But it seemed like an interesting problem, and other people that I was talking to there who were more experienced with this problem than I was felt it was difficult and hard to navigate. And so, I thought I’d just play around with it a little bit.

    Zach:

    00:13:41

    And so, I found Netty, which was like the Java async option. And I wrote just enough code in Clojure to expose enough Netty that you could stand up an HTTP server. That was over the course of a couple of days, and I tested. I curled it once to make sure that it returned “Hello, World!”, and that was the extent of my testing. And then I just was like, “Hey, here’s the thing.” I think I posted it on the Clojure mailing list. I haven’t looked at this announcement for a while, but I think I was pretty clear. This is just like me playing around with what does async Clojure look like?
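
    For context, a “Hello, World!” server with present-day Aleph looks roughly like this; the original 2010 version had a different, Lamina-based API, so treat this as a sketch of the idea rather than the code that was announced back then.

    ```clojure
    (require '[aleph.http :as http])

    (defn handler [req]
      {:status  200
       :headers {"content-type" "text/plain"}
       :body    "Hello, World!"})

    ;; Starts a Netty-backed HTTP server on port 8080.
    (http/start-server handler {:port 8080})
    ```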

    Zach:

    00:14:16

    And someone posted it on Hacker News, and David Nolen ran a benchmark, which I had not bothered to do up until that point, and said, “It’s faster than Node.js,” which is not an apples-to-apples comparison at all, for tons of reasons. For instance, Node.js is single-threaded, and he was on an eight-core machine or something like that. It was an absurd comparison. But both the announcement and the benchmark made it to the top of Hacker News for a day. I had my little moment in the sun, and it was absurd on some level because literally I had just written a “Hello, World!” demo of how one could interact with Netty and Clojure.

    Zach:

    00:14:57

    But what that did prove to me is that there was an interest in that space, and a much more avid interest than there was in OpenGL, which is where I’d been putting all of my time and effort up until that point. And so, I thought, “Well, if people like this and people are interested in this, maybe I ought to think about this some more.” And so, I started to tinker with it and think about, “Okay, what are the right ways to deal with asynchrony?” and other sorts of things like that. And at some point all of those questions, which were largely orthogonal to Netty specifically, got pulled out into a library called Lamina, which was dealing with kind of data-flow streams. Now, it wasn’t really a queuing library, because none of the things there had back pressure or any of the things you’d assume that a queuing library ought to have. It left that as a, “At your end you should be paying attention to when things come out the other end.” And, “If there’s too much then stop sending stuff in.”

    Zach:

    00:15:58

    And that just was reflective of, again, my lack of experience there. These are not things that I realized were important to have. On the basis of that, on the basis of me just really brazenly trying to solve problems I had no business or experience trying to solve, people gave me a lot of attention. And I got a job offer out of that to go work on Clojure full time. And so, that was very beneficial to me, and I think that also is very reflective of how I’ve treated a lot of open source libraries, which is a chance for me to go and learn about something I didn’t know very much about before, with the idea that if I’m doing it in public, it’s going to be extremely embarrassing if I get it wrong, so I better not get it wrong. So, I better think about it pretty hard and put the time in to make sure that it’s at least not embarrassingly wrong, which again sometimes it is. That’s a lot of the motivation for me, is that I feel like I learn better in public, I guess, or learn more quickly at least.

    Daniel:

    00:16:59

    You’re working in public, but also what are your thoughts, how do you feel about working with other people? Not just showing your work, but also contributing with others or having others contribute?

    Zach:

    00:17:10

    I mean I’ve done a little bit of it, but I have to confess that a lot of what I’ve worked on … I mean, certainly Aleph by now is a collaborative project. At this point I’m getting a lot of contributions from Alexei [inaudible 00:17:25] and he is, at this point, basically the maintainer in all but name of that library. And I’ve been talking to him a little bit about whether or not he would like to make that a little bit more formal.

    Zach:

    00:17:34

    For some of the other ones that I’ve worked on, I think that occasionally someone will just come in with a PR where it’s clear that they just must have spent days digging into the innards of something and come up with the exact two line change that needs to go and fix the problem. I’m always incredibly surprised and impressed when somebody does that. But whether it’s just kind of like the code seems a little bit weirder than what people are used to or they just don’t feel like they’re up to the challenge of understanding it … I don’t know what it is. I have not successfully created many projects that people are comfortable going in and contributing to. I think that Aleph is the one that is the exception to that rule, basically.

    Daniel:

    00:18:16

    Right, I guess maybe following on from Aleph and Lamina, another asynchronous streaming library would be Manifold, which looks like it builds on some of the things you learned from Lamina.

    Zach:

    00:18:28

    Yeah, basically. It was a chance to do a clean sweep. So, the exact order of operations here is I wrote Lamina, Lamina was a kitchen sink for all the ideas I had about asynchronous everything. Had a ton of macros. Had a ton of really complex stuff in there. Then core.async came out, and core.async was a different overall approach. But the thing that it really had over Lamina is that it was incredibly simple, which is not to say the implementation was simple, but the API that it had come up with was very direct. It had a handful of operators you had to learn. It had a couple of very big caveats in terms of the way it did [inaudible 00:19:06] rewriting, like not being able to go into functions inside of a go block, which I think is still the case. But other than that I think it just was a smaller conceptual surface area for someone to have to learn.
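
    The caveat being referred to, assuming it is the one usually cited: core.async’s go macro only rewrites parking operations that appear lexically inside the go body, so a <! buried inside a nested function fails at runtime.

    ```clojure
    (require '[clojure.core.async :refer [go chan <!]])

    (def ch (chan))

    ;; Fine: <! is lexically inside the go block, so it parks.
    (go (println "got" (<! ch)))

    ;; Not fine: the go macro can't see through the fn boundary, so this <!
    ;; throws at runtime with "<! used not in (go ...) block".
    (go (let [take-it (fn [] (<! ch))]
          (take-it)))
    ```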

    Zach:

    00:19:21

    And I was impressed by that and I certainly wasn’t upset that someone had not taken Lamina and just been like, “This is clearly the way to go.” Because it’s just a ridiculously big sprawling mess. But I had concerns when I looked at it that they were thinking like, “Oh, well, this is just how Clojure’s going to do asynchronous stuff from now on.” Because it had a very tight coupling between the way that it dealt with an event that hadn’t occurred yet, or data that we haven’t received over a channel yet, and the execution model. Like when does the code that consumes those things run? Notably it had a fixed-size thread pool that all that stuff had to run on. And that seemed like a reasonable decision you could make if you were writing an application, but I think a very limiting choice to make if you’re writing a library. Because a library doesn’t get to dictate what execution model the code that is consuming that library ought to run on. I think that that’s not the right sort of separation of concerns there.

    Zach:

    00:20:21

    And beyond that, I think that core.async is an entirely separate way of thinking about lazy or eventual consumption of data which doesn’t play nicely necessarily with, for instance, [inaudible 00:20:32] or with Java queues or with a bunch of other sorts of things that are all playing in the same space, all mutually incompatible with each other. And so, my thought was, “Let’s go and take the intersection at the center of this Venn diagram of all these things and be something that can go and bridge the gaps between all of them, can convey data between them.” And also provide something that is a reasonable, unopinionated set of abstractions that you could use in a library, because it’s very easy to go and turn the Manifold representation into any of the others. And a manifold is just a thing that goes … like sits between a bunch of pipes or conduits or something like this and connects them to each other. It’s just the neutral party there. It’s Switzerland in the asynchronous territories.
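
    A small sketch of that bridging role, assuming Manifold’s stream namespace and its core.async integration behave as documented: a channel can be connected to a Manifold stream, which then speaks Manifold’s own put!/take! protocol.

    ```clojure
    (require '[manifold.stream :as s]
             '[clojure.core.async :as a])

    (def ch   (a/chan))
    (def strm (s/stream))

    ;; Treat the core.async channel as a source and pipe it into the stream.
    (s/connect ch strm)

    (a/>!! ch :hello)
    @(s/take! strm)   ;; => :hello
    ```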

    Zach:

    00:21:18

    And that was the motivating factor. It was also just that I felt like Lamina was something where I’d made so many mistakes that I needed to go and just start over. But that was the idea. And so, I wrote that and then I rewrote Aleph on top of that. I think that core.async still is a much more widely used library in terms of the Clojure ecosystem. But Manifold, I think, has a smaller group of fairly avid fans. And I think that people will occasionally reach out to let me know that they’ve used it in one way or another often on a fairly central piece of their infrastructure. And that’s always really gratifying to hear.

    Daniel:

    00:21:54

    Yeah. I remember when core.async came out, and for a few years afterwards, if a library provided an asynchronous API it would be a core.async API. And that’s maybe … I’m not sure if maybe I’m just paying less attention or it no longer surprises me. But I don’t feel like I see that so much anymore; maybe people are doing less asynchronous stuff, just because it’s already been written or they delegate. It just seems to be less common that core.async is the API for new libraries.

    Zach:

    00:22:30

    I think that’s true. And you could ascribe a lot of reasons to that. I think one of which is just that asynchronous is less cool than it used to be. And so, having that be a necessary component of your API is no longer seen as a requirement. I also think that core.async just hasn’t seen a lot of uptake on the server side of things. There are absolutely counterexamples of that. But I think where core.async has seen a lot of use and I think provides the most value is in ClojureScript. Like in the front end.

    Zach:

    00:22:59

    And that also comes back to … in that case it’s not going and imposing its own execution model, because JavaScript has its own execution model that is non-negotiable. So, I think that in that case, some of the downsides articulated just frankly don’t exist. And also there’s fewer things that can do what it does. And so, I think that that probably wasn’t how it was conceived of at the time but I think that ClojureScript was core.async’s killer app, or possibly the other way around.

    Daniel:

    00:23:29

    Yeah. That’s an interesting point; we tend to consider the Clojure side of it so much. I’ve done a lot of work with re-frame, which doesn’t tend to use core.async so much, it has its own queuing model and asynchronous execution. But I know certainly many other ClojureScript applications that don’t use re-frame, and probably some that do use re-frame, use quite a bit of core.async.

    Zach:

    00:23:51

    And I think most of the wrappers for making an HTTP call or doing WebSocket communication, whatever, they all use core.async because that is a reasonable way to go and expose that in that ecosystem I think.

    Daniel:

    00:24:04

    Yeah. So, there’s other smaller libraries you’ve written. One that I’ve come back to … I’ve used it over the years and still use it today is byte-streams, which is just a very useful thing. Especially when you don’t necessarily … I know it’s not meant to be a core performance tool by any means, but especially when you don’t really care about the transformations and you just want it … so, for people who are not aware, byte-streams is a utility knife for byte representations. Is that the tagline?

    Zach:

    00:24:39

    I called it a Rosetta Stone for byte representation. The idea is that … there are many things in Java or in Clojure that represent a collection of ordered bytes. So, a byte array is the most obvious, but a byte buffer is one that got introduced in Java 1.4 and is weirdly incompatible in some ways, or some APIs won’t accept one versus the other. And then you have strings and character sequences which are clearly bytes with some additional metadata atop them. But you want to be able to convert from one to the other. And then when you start getting into Clojure specifically, you have things where it’s like, “Well, what if it’s a sequence of byte containers? What if it’s a sequence of strings? What if it’s a core.async channel of strings? Or a Manifold stream of strings or byte arrays or what have you?”

    Zach:

    00:25:35

    And all of these are isomorphic to each other in that they contain the same core information but all of the APIs expect them to look like a very particular type of representation. There is nothing that will go and just take whatever you give it and find a way to go and make it into what it needs.

    Zach:

    00:25:56

    And in fairness, that’s not what you really want in an API. An API should be strict in terms of what it accepts. Because otherwise the performance characteristics there are unknowable. But you as the application writer, as the person who’s gluing together these strict APIs, you don’t want to think overly much about how to convert this. So, the idea was that I would come up with a bunch of these little piecewise conversions. Like, how do you turn a sequence of byte buffers into a byte buffer? How do you turn a byte buffer into an array? How do you turn an array into a string?

    Zach:

    00:26:27

    And so, if you go and give it something which is a sequence of byte buffers and say, “I’d like this to be a string with a UTF-8 encoding,” it’ll go and just compose together the stepwise transformations and poof, you have a string. And because it’s a graph of type conversions and each of them has a cost associated with it, like, “How much copying of memory are we doing here?”, it can find the minimal path.

    Zach:

    00:26:50

    So, and then once it finds the minimal path between point A and B it’ll [inaudible 00:26:55] that so that it’s not having to go and do that search repeatedly. And so, there are some constants here. Certainly, there’s overhead of the initial graph traversal. There’s the overhead of the [inaudible 00:27:07] functionals, all that sort of stuff. And so, if you just really care about performance this is not what you should be using. But if all you really want to do is just take data that’s in some shape and turn it to data that’s some other shape without thinking about it too much, then it’s a very useful tool.
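
    A sketch of what that looks like in practice with byte-streams; the convert, to-string, and to-byte-array calls are the library’s, the sample data is made up, and UTF-8 is (as far as I recall) the default encoding.

    ```clojure
    (require '[byte-streams :as bs])

    ;; A sequence of ByteBuffers...
    (def bufs [(java.nio.ByteBuffer/wrap (.getBytes "hello " "UTF-8"))
               (java.nio.ByteBuffer/wrap (.getBytes "world" "UTF-8"))])

    ;; ...asked to become a String: byte-streams composes the cheapest chain of
    ;; piecewise conversions it knows about.
    (bs/convert bufs String)          ;; => "hello world"

    ;; Common endpoints also have shorthand helpers:
    (bs/to-string (.getBytes "hello" "UTF-8"))
    (bs/to-byte-array "hello")
    ```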

    Zach:

    00:27:21

    And yeah, I think that that’s a very helpful piece and is used extensively inside Aleph to turn from Netty’s own peculiar byte containers into other sorts of things. And the nice thing about this is that it is an extensible graph. You can go and create an edge between the existing graph and some other representation you might come up with and now you get that transitive transformability into all these other things for free.
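
    The extension point, from memory of the byte-streams README, is def-conversion, which adds an edge to the graph; the Keyword-to-String edge here is purely illustrative, and the exact cost-metadata syntax is my recollection rather than gospel.

    ```clojure
    (require '[byte-streams :refer [def-conversion]])

    ;; Teach byte-streams how to turn a Keyword into a String; everything already
    ;; reachable from String now becomes reachable from Keyword transitively.
    (def-conversion ^{:cost 1} [clojure.lang.Keyword String]
      [k options]
      (name k))
    ```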

    Daniel:

    00:27:44

    Nice. Like you said, a more conventional way that this might have been written in Clojure land would be to use perhaps multimethods or some other implementation, writing that [inaudible 00:27:57] this-to-this is this transformation, but that wouldn’t have been quite so extensible as what you’ve come up with with the graph.

    Zach:

    00:28:05

    Right. And in fairness, I wrote little util namespaces that would do piecewise transformations like you described a number of times before I finally broke down and tried to generalize this. Because I try not to turn to the most absurd way to go and solve the problem immediately. I try to keep myself a little bit honest there. But if you’re doing systems programming in Clojure, it just keeps on coming up. It just keeps on coming up that you have to go and do this because you’re getting bytes over the wire but they’re actually like [inaudible 00:28:37] so, you have to go and do all these other sorts of things.

    Zach:

    00:28:39

    And either you just create this memory palace that has all of the conversions just sitting in it, or you create this ever-growing utility namespace, or you just try to create something which is an extensible version of that utility namespace. And so yeah, I think that that’s a library that I still get a lot of use out of. And so, I think that that’s probably one of my more successful open source experiments.

    Daniel:

    00:29:06

    Right. And the other thing that you’re pretty well known for is your work on data structures, high-performance, functional data structures. And you’ve worked on quite a few of them over the years and most recently with the …

    Zach:

    00:29:22

    It’s bifurcan, I think is what you’re searching for.

    Daniel:

    00:29:24

    Yes. Yes, yes. That’s the word. I didn’t know that was the pronunciation.

    Zach:

    00:29:28

    Yeah, so it’s actually … so, circling back to Aleph, I have two different libraries that are named for a Jorge Luis Borges story. He was this Argentine writer from the first half of the 20th century who was a librarian. But he wrote a lot of these little short stories and other essays about infinities. How things become absurd once they’ve hit their limit of infinity. And so, The Aleph is a story about a guy who discovers that if he walks into his wine cellar and stares just beneath the 12th step into the cellar, he sees a point from which he can see all points, which he calls the Aleph, because the aleph is the notation for infinity.

    Zach:

    00:30:14

    And obviously it’s a completely ridiculous premise, but he plays around with it. And he has a very playful tone in a lot of his stories. And the idea of a networking library being the point from which you can see all points seemed apropos at the time. And so, that’s where that came from. And bifurcan is from another story of his called The Garden of Forking Paths. Bifurcan means broadly, “It forks,” I guess, in Spanish. It bifurcates. That’s one about this branching narrative where there are many paths through the story that are being explored. Some people actually call it the first narrative or literary example of hypertext.

    Zach:

    00:31:00

    I think there were actually a couple of people who have tried to go and rewrite the story as a hypertext navigable narrative. And the reason that I called it that was … so, Clojure, of course, was I think very much at the forefront of so-called persistent or immutable or functional data structures. Take your pick as to what you call them. I’ve settled on functional because immutable implies nothing can change. And persistent implies, to a fairly large portion of the software community, that it’s persisted to disk. So, I think functional is maybe the best thing, which is that you call a function and you get a new value back. There’s a functional semantics associated with the API.

    Zach:

    00:31:43

    And so, Clojure uses the terms persistent and transient to talk about data structures which do allow for this pure functional semantics versus this mutable functional semantics. Like you give it a data structure, it still returns a new data structure, but it reserves the right to go and mutate that data structure in the process. And the use of transient is a little bit of a weird one. Because if you go and look at the literature around data structures, they actually prefer ephemeral. Like persistent and ephemeral are antonyms to each other. I don’t know. I am extremely fussy about nomenclature, as people who have read my book may be aware.

    Zach:

    00:32:23

    And neither of … the idea of persistent versus ephemeral, these feel like things that you talk about, again, with storage devices. Like main memory is ephemeral memory. And it’s the sort of thing where it feels like the wrong analogy to me, basically. And so, the one that I settled on was this idea of thinking about the data flow. So, if we’re going and we have a data structure … typically where you use transient data structures is we have an empty data structure and we want to fill it with a bunch of stuff. So, we go and we take this empty data structure and we [inaudible 00:33:00] a value and then we [inaudible 00:33:01] a thousand more values.

    Zach:

    00:33:02

    And each time we’re going and effectively discarding the previous version of that data structure. We don’t care about it anymore. We only care about the most recent. And in that case we have this linear data flow where each time you’re not holding on to the previous reference. You only care about the new one. And that previous value only exists to go and feed into this accumulated data structure that we’re building. In my mind that’s a linear chain of that data structure flowing through those method calls. In the cases where we actually want it to be “persistent,” where we want it to have true immutable semantics, is where that chain, that linear chain, forks, where it bifurcates. Where now two people need to be able to own this data structure. And we don’t know what each of them is going to do with it.
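
    Stock Clojure expresses exactly that accumulate-then-share pattern with transient and persistent!, which is the workflow he is mapping onto “linear” and “forked”:

    ```clojure
    ;; Build up in "linear" fashion (each step discards the previous reference),
    ;; then fork it at the moment it may acquire more than one owner.
    (defn fill [n]
      (persistent!
        (reduce conj! (transient []) (range n))))

    (fill 5)  ;; => [0 1 2 3 4]
    ```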

    Zach:

    00:33:47

    And so, my terminology, which is entirely of my own invention, and I think this is a bad habit, to not go and honor what the industry calls it or [inaudible 00:33:56] calls it, but in all those cases I think it is sufficiently niche and confusing that I could justify this, is I called it a linear data structure, which is one where we assume linear data flow and allow for mutation, and a forked data structure, which is one where there are multiple owners, or presumed to be multiple owners. And therefore we need the more classic structural sharing and partial copying and all that other stuff.

    Zach:

    00:34:23

    And so, bifurcan means forked or, “It forks.” And so, it seemed an appropriate name. Also, of course, all these data structures under the covers are trees. And so, it felt like it had a slight dual meaning, at least that I found amusing. And that’s really the ultimate measure whether I like a name is, “Does it amuse me?” So, that’s what I went with.

    Daniel:

    00:34:46

    Yeah. I had to think a little bit about that; the linear name was not immediately obvious to me. But yeah.

    Zach:

    00:34:53

    And I think that you could very rightly quibble with that. But it’s something where I was writing it as … it is a Java library. It is aimed at Java programmers, because I feel like Clojure has a lot of really interesting ideas and even though its core library is largely written in Java, those APIs were never meant for public consumption. And there are a few people who have gone and taken that and cleaned it up and changed the hashing and equality semantics back to the standard Java variants and then just exposed it as a library. There’s one called Paguro, I think … P-A-G-U-R-O, that does this.

    Zach:

    00:35:30

    It’s fine but it’s a little weird. And it will seem weird to anyone who doesn’t understand the lineage of that code and understand like, “Oh, well that’s what it’s called in Clojure.” And so, the idea was, if we just wipe the slate clean, don’t worry about the context, because we’re trying to go and sell this to people who do not have this built-in communal understanding of, “Here’s why Clojure’s data structures are great, here’s what persistent means, here’s what transient means.” If you assume none of that, then I think that you can be a little more free with the terminology and hopefully linear and forked make a certain amount of sense. But it’s entirely possible that it doesn’t or a more standard term would’ve been better in that case.

    Daniel:

    00:36:11

    And so, this maybe isn’t necessarily the most important measure of a data structure, but the performance of these data structures is extremely competitive, often with mutable Java, and it’s going to be a lot faster in many cases than Clojure’s built-in collections.

    Zach:

    00:36:28

    Yeah. So, Clojure, unfortunately, suffers from its equality semantics, and I’m not going to go and litigate whether those were the right choices. But the fact that it, for instance, says longs and bignums that represent the same value are equal. Which is not true in Java. You can’t go and check that a 1 and a 1N are the same thing in Java; if you call .equals() on them, that will return false. And in Clojure that was seen as something that was worth preserving. In part, I think, because Clojure having automatic overflow promotion was seen as a huge value add for the language, which I think has not necessarily proven out. But, again, a rich numeric stack is one of the things you expect to find in a Lisp.
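
    Concretely, at the REPL:

    ```clojure
    (= 1 1N)                ;; => true  (Clojure's equality unifies longs and bignums)
    (.equals 1 1N)          ;; => false (Java's equals does not)
    (= (hash 1) (hash 1N))  ;; => true  (so Clojure needs its own, costlier hashing)
    ```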

    Zach:

    00:37:17

    So, for that reason though, creating hashes and checking equality is significantly more expensive. And largely because it’s just large enough that it can’t be easily inlined. And so, doing simple things in Clojure, like adding a key to a map, which invokes all those equality semantics, just costs more. So, you compare it to Java, you compare it to Scala, you compare it to any of those sorts of things and Clojure is just marginally but measurably slower.

    Zach:

    00:37:44

    The additional thing that I do in bifurcan, though, that is probably cheating in the eyes of anyone else whose libraries I’m comparing this with, is I say, “Well, we want to be able to switch between this mutable and immutable representation,” like this linear and forked. But there are cases where we never care about forking the data structure. And this is actually where a mutable data structure is fine, where using a Java HashMap is fine: it’s just local to some scope, you’re using it as a little accumulator. No one else will ever see it, no one else will ever write to it. Therefore, “Why are we bothering with immutability in the first place?” And so, I said, “I’m going to write variants of my data structures which share the same API but are just permanently mutable, or rather, if you want to make them immutable it’ll create a little wrapper around it that will make it so you can’t write to it directly anymore, and we’ll just keep track of which keys have been added and removed atop this base data structure.”

    Zach:

    00:38:41

    And this is legitimately cheating if you’re going and just saying, “Who’s written the highest-performance tree-based data structures?” But I think it speaks to, “What are the actual workflows that people are using their data structures for?” There are a great deal that don’t require this kind of behavior at all. But we want to have the opportunity to, if we need to go and now take this data structure and pass it off to somebody else, to make it something which has that functionality. And so, by saying I can instantiate this map, this IMap, this generic map, with either something which is permanently mutable or flexibly mutable, and having that not change all the downstream code, not change the implications, not change the semantics in a meaningful way, I think is useful. Or hopefully is useful.
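
    Roughly what that looks like from Clojure, assuming bifurcan’s Map and LinearMap classes and their put, get, linear, and forked methods work as I remember them:

    ```clojure
    (import '(io.lacuna.bifurcan Map LinearMap))

    ;; Same API whether the map is temporarily mutable (.linear), immutable
    ;; (.forked), or permanently mutable (LinearMap).
    (def m (-> (Map.)
               .linear            ;; mutate in place while accumulating
               (.put "a" 1)
               (.put "b" 2)
               .forked))          ;; safe to hand to multiple owners from here

    (.get m "a" nil)   ;; => 1
    (.size m)          ;; => 2

    ;; The permanently mutable variant answers to the same calls:
    (def lm (.put (LinearMap.) "a" 1))
    ```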

    Zach:

    00:39:32

    So, in that case, it’s pretty easy to be competitive with Java, because I’m just writing another mutable data structure. But the key difference here is that it has a functional API, one where you go and you pass in a collection and the thing you want to do to that collection, and it passes back a new collection. Or at least passes back a collection which might be the same thing. And so, you don’t have to think super hard about what are the semantics of this thing, except in the sort of, “Is it in this moment a mutable or immutable data structure?”

    Daniel:

    00:40:06

    Right. Another data structure improvement you worked on was the unrolled tuples. Both as a library and in a patched Clojure, if I’m remembering correctly. So, that ultimately didn’t make it into Clojure; I wondered if you had any thoughts on that. Anything you wanted to talk about in relation to that?

    Zach:

    00:40:28

    Yeah. This actually came out of some work I was doing on byte-streams. Because in byte-streams you’re going and you’re saying, “Hey, I want to look up what is the fastest path between this type and this type for conversions.” And it turns out that in Clojure, doing that lookup is in some cases as expensive as simply doing the conversion. Because some of that stuff is very optimized, like going and turning a string into an array of bytes or something like that. That takes about 100 nanoseconds. And the lookup to find out how to do that also took 100 nanoseconds.

    Zach:

    00:41:01

    And so, I started looking at why that was, and the reason was that I was doing a lookup where I was instantiating a vector of the from type and the to type, and then doing the lookup. And that was just slow because the tuples had to be instantiated in the way where it’s like, “I’m taking an arbitrarily sized vector and then adding two things.” And then going and calculating a hash on that was a little bit slower. And so, there were a few things that were just small little losses of performance that were adding up to enough that now byte-streams was a measurably slower way to go and do this conversion.

    Zach:

    00:41:35

    And so, my bright idea for how to fix this was, “Well, if we know that it’s just going to be a two-vector, this is going to only ever contain two things, why not create a special two-vector? And for that matter, why not make a special one-vector and zero-vector and two and three and so on?” In my case, up to six, which was a fairly arbitrarily chosen thing but I just got sick of going and trying to deal with that stuff.
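
    The motivating lookup, sketched with keywords standing in for the real conversion functions: every call allocates and hashes a fresh two-element vector just to use as a key, which is the cost the unrolled tuples (and, I believe, the standalone clj-tuple library that came out of this) were meant to shave off.

    ```clojure
    (def conversions
      {[:bytes  :string] :bytes->string
       [:string :bytes]  :string->bytes})

    (defn lookup [from to]
      ;; allocates and hashes a fresh two-element vector on every call
      (get conversions [from to]))

    (lookup :bytes :string)  ;; => :bytes->string

    ;; With clj-tuple on the classpath, the key could be an unrolled tuple instead:
    ;; (require '[clj-tuple :refer [tuple]])
    ;; (get conversions (tuple from to))
    ```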

    Zach:

    00:41:59

    And so, I first wrote this as a macro-generated thing. And it worked pretty well. At least worked well for the use case I was coming up with; that two-tuple, or two-vector, lookup became measurably faster. And so, I talked about this at a conference and Rich was there and I was talking to him over lunch and I said, “Would you have any interest in putting this into Clojure?” And he said, “Yeah, sure. As long as you write it in Java.” Because Clojure data structures are written in Java, that’s just how it is. And I was like, “Okay. Well, I wrote this whole thing using macros, I’ll get back to you on that. I don’t know if I … ” because even if I were to just go and take an agonizingly long day and type a bunch, I would probably make lots of little mistakes, copy-paste errors, all the other stuff.

    Zach:

    00:42:48

    And so, I let that hang there for a while. Probably eight or nine months. Until, for a hackathon when I was working for Factual, I decided, “Let’s give this a shot.” And the way that I decided to do that was, “I’m going to write Java that generates … ” I’m sorry, “I’m going to write Clojure,” rather, “that generates the Java for this.” The way I did that is I basically took some code from Eclipse that did Java indentation. I used that as basically a syntax check. So, “I’m going to go and create a big blob of Java that has no newlines in it. And then I’m going to go and pass it into this formatter and if that’s correct, then I’m going to assume it’s reasonably well-formed.” Maybe not semantically correct, but that’s something that you can test generatively. So, that’s pretty straightforward to go and do once you have the Java all written out and compiled.

    Zach:

    00:43:36

    And so, it was a total hack. The kind of hack that I find really kind of pleasing in a perverse way. And so, I had that and then I circled back and I said, “Hey, I’ve got this. I’ve got thousands of lines of Java that I’ve generated. Do you want this in Clojure? Yes? No?” And the response was tentatively positive. Because anytime someone comes to you with a PR which is just enormous, you want to go and say, “Yeah. Okay. Well, maybe.” Certainly you don’t want to just give a, “Yes.”

    Zach:

    00:44:07

    And I said, “Look, I just want to make sure that I’m spending time towards some productive end, so just let me know. I could also do the same thing for maps if you like.” Because we could have a map of one, map of two. And in fact, Clojure has two different types of maps. It has the hash map and the array map, where the array map is just a flat list that you linearly scan. There’s no actual attempt to go and hash-locate anything. And for any map smaller than eight elements it will go and use that approach, because that was deemed to be a more efficient approach overall.
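
    You can see that split at the REPL (current Clojure behavior; the exact promotion threshold aside):

    ```clojure
    (type {:a 1 :b 2})
    ;; => clojure.lang.PersistentArrayMap  (small maps are a flat, linearly scanned array)

    (type (into {} (map (fn [i] [i i]) (range 9))))
    ;; => clojure.lang.PersistentHashMap   (past the threshold it promotes to a hash map)
    ```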

    Zach:

    00:44:43

    And so, this had some prior art to it and so they said, “Yeah, sure. Go ahead. That’ll be helpful just for comparative purposes.” So, I wrote that. And this all took place over about 18 months. I was chipping away at this just whenever the mood took me. There was no one who was willing to commit on the other side to like, “Yes. Let’s go and test this.” But I did it, I wrote some benchmarks. And I’d been pushing on this for a little bit. And then at that point finally Rich entered the conversation. Because the contribution process is that there are some gatekeepers, Stu or Alex or whomever, who are going and making sure that the PR is sufficiently vetted. At which point Rich will come in and consider it. In this case mostly for the first time.

    Zach:

    00:45:31

    And so, he looked at it and he said, “That’s an awful lot of code. I think I can do this … I can go and get the same effect by doing less.” And so, he wrote up a much smaller thing that unrolled in a much less aggressive way and said, “If we’re going to do this, this is what I’m going to use.” And I was, I think at the time, pretty upset about that. Because it felt to me like, if all I was doing was writing a proof of concept, why all of the attempts to go and polish this and make this a very complete and production-ready PR … like if all it was, was just, “Here’s a thing that Rich might want to write someday.”

    Zach:

    00:46:10

    And I still think that that was a reasonable reason to be upset about this. And I think that this is something that people [inaudible 00:46:18] before. The reason that I can’t go and really hold a grudge about it is because of what happened once Rich ran it. He put it into Clojure proper, which I had not done yet. I’d only used it for a couple of cases like this two-tuple lookup. And then a couple of other tests that I had run on some code. But I hadn’t gone and taken a version of Clojure with these new data structures jammed in and seen what happened. He found that it wasn’t actually faster on the whole, because having seven different classes that implement vector makes that actually less efficient in terms of dispatch. It’s what’s called megamorphic dispatch, where Java can no longer do clever things in terms of being able to figure out which implementation it should route to when you go and call conj, for instance.

    Zach:

    00:47:00

    And this is not something that I had tested in any way. And, to be fair, I’ve still not seen Rich’s benchmarks for any of that stuff.

    Daniel:

    00:47:09

    Yeah, I was going to ask about that.

    Zach:

    00:47:11

    Yeah. I have not. He just said, “This is slow. I think it’s because of megamorphic dispatch.” And that parses; that is a thing that I think is quite possible. I have no idea what he was testing on, I have no idea what his methodology was. It is genuinely something that I had not thought of [inaudible 00:47:27]. My enthusiasm had pushed me, and I had not stopped to consider that side of things. And so, I’m happy to go and say that that was my bad. It was a less good idea than I thought it was. If it had turned out to be exactly as good an idea as I thought it was, and then my implementation had not made it into Clojure proper, I think I would’ve probably held a little bit of frustration there still, because it’s a little bit weird to be trying to contribute and then finding that actually you’re just providing a general sketch of what will at one point be in the code. Because I think that there’s a pride that you derive from saying, “I like Clojure, I use Clojure. Clojure is in part code that I’ve written.”

    Zach:

    00:48:09

    That last part is, at least to a certain kind of person in the open source community, a really key part of what motivates them and makes them feel like Clojure’s ongoing success is something that they are very invested in. At the very least I feel like I’m one of those people, and I’ve known other people who I think get frustrated with Clojure for the same reason. But I will say, very explicitly, in this case for the unrolled tuples, that is not something that I harbor any great frustration or resentment about. Because, it turns out, it was not nearly as good of an idea as I thought it was, or at the very least, there’s a plausible reason for why it wasn’t.

    Daniel:

    00:48:45

    Right. Yeah, that’s good. I don’t think I had all of that context all in one place and one conversation, I’d picked up different bits and pieces.

    Zach:

    00:48:54

    Yeah. I don’t think … I never did a writeup of it. Possibly that was a bad idea, because I’ve seen people use that whole experience as an example of, “Here’s why the Cognitect contribution process is no good.” And again, there is an alternate version of this thing where that actually, I think, is a legitimate point to make. But in this case it turns out that my idea was not well suited to be in the core language, [inaudible 00:49:20] only be good as a standalone library, because if you’re using it for something where you don’t have many different sizes of vectors or something like this, it is legitimately faster. But it is not well suited for the general-purpose Clojure implementation because of that.

    Zach:

    00:49:35

    So, I think that it’s not something that people should use as the shining example of why people are getting frustrated with Clojure’s contribution process.

    Daniel:

    00:49:45

    Sure. But there’s probably a few things we could take from that process. One was, I guess, the expectations, or the communication about expectations, where it seems like there was perhaps a mismatch between what you thought you were doing, what the likely outcome or response was going to be, and then what it actually turned out to be. Those didn’t seem to be aligned.

    Zach:

    00:50:09

    Yeah. Well … So, I’d seen … I’d been working with Kyle Kingsbury, who’s better known as Aphyr, around that same time. And he had gone through a similar process where keyword interning in Clojure was fairly slow for reasons that were not intrinsic to how keywords worked. But if you went and you tried to convert a string into a keyword it would take a long time, to the point where when parsing JSON, where you wanted all these keys to be keyword-ized, the major computational cost there was just turning strings into keywords.

    Zach:

    00:50:45

    And so, he went through a similar exercise where he came up with a big … or not even a very large PR, but a 20-line change or something like that. Went through all the hoops in terms of demonstrating that this is indeed faster, there are no regressions, etc. And then in the end Rich took his PR and rewrote it. So, then Rich was like, “Well, thanks for the recommendations as to how I could go and fix this.” I, having seen that play out, thought I was being very clever by checking in periodically saying, “You still want me to do this, right? This is still a thing that you want?”

    Zach:

    00:51:16

    And I was assured along the way like, “Yes, yes. This is good. This is great.” What I assumed, I guess, was that when someone who was at Cognitect told me that, it was on the basis of some sort of conversation they were having. That it was a collective assurance as opposed to a personal assurance from Stu Holloway or something like that. And it turns out that it wasn’t. And looking back I can’t point to anything that made me reliably infer that this was Cognitect as an entity giving me this assurance. When in fact, basically what it was, was that someone was saying, “Yeah. I’m pretty sure Rich will like this when he takes the time to look at it.” And then Rich took the time to look at it and didn’t like it.

    Zach:

    00:51:55

    And so, I think that the assumptions that I had going in were wrong. And I think that it’s interesting because there was a little bit more recent drama with Clojure, which we can talk about if you would like to. But basically I was going and voicing some of my frustrations. Which, again, are not because my data structures didn’t make it into Clojure, but because I see people who want to make Clojure something that they feel some degree of ownership over are being turned away, basically. And from that they lose a lot of their motivation to continue to invest in the community and end up going elsewhere. Some of them more loudly than others.

    Zach:

    00:52:37

    So, Chas Emerick, for instance, has largely vanished. He’s writing Haskell these days. And he wrote a book. He contributed a ton to the Clojure community and then one day he just stopped showing up. And I can’t speak for him and all of his reasons but I think that he has articulated to me that he’s definitely seen a shift in terms of how people were encouraged to go and help shape Clojure as a collaborative process versus this very top-down autocratic process. And it’s undeniable that that has changed. The ns macro in Clojure was not created by Rich. It was created by Steve Gilardi.

    Zach:

    00:53:18

    Originally you were just encouraged to go and put a bunch of imports and requires and whatever as the prelude to your thing. There wasn’t a single ns macro that did all of those things. And try to imagine someone today coming up with a different way to do namespace declarations in Clojure. Try to imagine someone going and saying, “I’ve got a great new idea for the ergonomics of Clojure.” It wouldn’t even make it off of the initial post. People would just be like, “Yeah. Sorry. This is never going to happen.” And, in fairness, there’s this concept in neurophysiology called plasticity, which is basically how quickly does your brain reshape itself in response to incoming stimuli? And children have extremely plastic brains. Adults have much less plasticity in their brains, which is probably good. Because when you’re a child you’re changing a lot. You’re going through all these things. You want to reach this level of maturity and stability. You don’t want to go and shake things up all the time just because you can.

    Zach:

    00:54:21

    And so, I’m not saying, “Why aren’t we able to go and rewrite Clojure from release to release,” or something like that. But I think it’s fair to say that there has been a change, and that there was a time when Clojure was a more collaborative process. To pretend that it has never been that, which I think is sometimes a talking point that comes up, is false. To say that it shouldn’t be that is fair, and I think that is a defensible stance, though not necessarily one that I agree with. But what some people say, that “It’s always just been Rich’s thing, and there’s never been external input, there have never been meaningful changes to how the language is written by people who are not working for Cognitect or are not Rich himself,” isn’t true. It’s just that the time when that was a reasonable expectation about how the language was maintained has passed.

    Daniel:

    00:55:11

    Yeah. And I think either of those approaches is a valid one to take. But probably my frustration, or my feelings about it, was that this new model or this new intention wasn’t explained particularly clearly. And maybe it wasn’t even consciously understood by Cognitect as they were doing it; it was just a natural shift. But it’s frustrating to see people new to Clojure get excited, come up with some ideas, see some possible improvements, and then hit the brick wall and not necessarily understand why, or what’s going on. They come to Clojure with, “Clojure’s an open source project,” and they have a bunch of assumptions about how that works. And until recently there was no document being extremely clear about, “No. This is a very different project. And we don’t work the same as other projects.”

    Daniel:

    00:56:08

    And that’s, again, as we’ve [inaudible 00:56:10] in Rich’s most recent post, he had no obligation to explain himself, but it certainly would’ve saved a lot of time and energy and frustration on a lot of [inaudible 00:56:21].

    Zach:

    00:56:21

    Certainly. And I should say, like you say, it hasn’t been written up anywhere official. The only written record of Clojure’s contribution process that approaches an honest, straightforward articulation is in a gist on GitHub, and there’s a follow-up conversation in the comments of that gist. It’s not on Clojure.org, it’s not like … this is not something that I think is discoverable by people who are coming to Clojure. So, I think that there’s still work that could be done there, unless I’m wrong and there has been some change on Clojure.org without me noticing.

    Zach:

    00:56:54

    But I think that talking about it in terms of incompatible, unspoken assumptions is exactly right. And something that came up repeatedly was that Evan Czaplicki, the creator of the Elm language, gave a really great talk at Strange Loop last year called The Hard Parts of Open Source. In it he talked about how what’s hard about open source is not the technology, it’s not the technological decisions, it’s the people and navigating those conversations. And in that he brought up the “By whose authority?” post, better known as the Clojure post; “By whose authority?” is the first sentence of that post, and it doesn’t get much better from there.

    Zach:

    00:57:36

    And people talk about entitlement in open source, and I think it is undeniably a deeply entitled post. And it’s not one that I like, and it’s not one that I’m very happy with, because I think it poisoned the well for having a more constructive and meaningful conversation, one where what’s being said by the community isn’t very easily dismissed as just more of that, basically. And that’s very frustrating to me. But I think that there is … a real point was being made in Evan’s talk, which is not, “People shouldn’t be mean to open source creators.” I mean, that is a point that he makes, and there’s a point to that. But it’s not just, “You should shut up and be grateful.” What he’s saying is that people don’t state their assumptions when they go and make an assertion that something is true or ought to be true.

    Zach:

    00:58:28

    People are going and predicating what they’re saying on something. They have very strong opinions, but what is left unstated is the assumption that gives birth to that very strongly held view. And I actually talk about this a little bit in my book, not about open source stewardship, but, like I say, if you say that software is over-engineered, that’s not an intrinsic property of the software, it’s a property of where you expect that software to be used. Something which needs to … a piece of hardware that needs to survive cosmic rays, if it’s not going into space or some other place where that’s a problem, then yeah, it’s over-engineered, it probably has more complexity or more cost than it needs to. But again, that’s not an intrinsic property of the thing, it’s a property of where we put the thing.

    Zach:

    00:59:11

    And so, similarly, “What can we reasonably, as a community, expect from someone who is the creator and ongoing steward of a language?” is not, I think, something that we can talk about from first principles. Or at least it’s not most interesting to talk about from first principles, because the only first principle that’s really available is, “It’s his language …”

    —Huffduffed by cemerick

  2. Episode 41 - It’s a Chicken War Outside, No Man is Safe From

    On this inspiring installation of the Unnamed Foamie and Buckets Podcast:

    • Insert topic

    • And some other stuff

    Huge shout to our Sponsor @fatkiddeals on twitter! Best stuff. Crazy deals. Enough said.

    Thank you to Spencer Gill! A genuine music man.

    Connect with us on Instagram:

    @mrfoamersimpson @young_buckets

    Legally Binding Disclaimer: Foamie is not a ginger and will never be a ginger. He looks like Jon Snow, and has a great voice. Show some respek.

    ===
    Original video: https://soundcloud.com/foamieandbuckets/41a/s-5kXBi
    Downloaded by http://huffduff-video.snarfed.org/ on Thu, 19 Sep 2019 01:45:05 GMT Available for 30 days after download

    —Huffduffed by cemerick

  3. D*** Ain’t Worth Nothing!

    • Alexis Sky & Trouble

    • Bizarre Role Play Request 

    • Jay Z’s New Deal With The NFL

    • Return of Power

    • And more!

     

    Call/Text 901-308-4320 and leave us a voicemail/text for our next episode!

    Follow us now on twitter! @RealThinkersPod, @JustCallMeDa

    https://realthinkerspodcast.podbean.com/e/d-aint-worth-nothing/

    —Huffduffed by cemerick

  4. Episode 38 - The Ol’ Switcharoo

    On this inspiring installation of the Unnamed Foamie and Buckets Podcast:

    • 7 Degrees of Kevin Bacon

    • Top 50 Rappers List

    • Italian Food Mt. Rushmore

    • Home Alone vs Sandlot vs Goonies

    • And some other stuff

    Huge shout to our Sponsor @fatkiddeals on twitter! Best stuff. Crazy deals. Enough said.

    Thank you to Spencer Gill! A genuine music man.

    Connect with us on Instagram:

    @mrfoamersimpson @young_buckets

    Legally Binding Disclaimer: Foamie is not a ginger and will never be a ginger. He looks like Jon Snow, and has a great voice. Show some respek.

    ===
    Original video: https://soundcloud.com/foamieandbuckets/episode-38-the-ol-switcharoo/s-PSicc
    Downloaded by http://huffduff-video.snarfed.org/ on Mon, 12 Aug 2019 14:26:39 GMT Available for 30 days after download

    —Huffduffed by cemerick

  5. Episode 35 - New York City Invented Everything

    On this inspiring installation of the Unnamed Foamie and Buckets Podcast:

    • Chance the Rapper

    • Mullets

    • New York City

    • Bank Robbery Movies/Outfits

    • More Movie Talk

    • The Yankees

    • Last Chance U

    • Theranos

    • And some other stuff

    Huge shout to our Sponsor @fatkiddeals on twitter! Best stuff. Crazy deals. Enough said.

    Thank you to Spencer Gill! A genuine music man.

    Connect with us on Instagram:

    @mrfoamersimpson @young_buckets

    Legally Binding Disclaimer: Foamie is not a ginger and will never be a ginger. He looks like Jon Snow, and has a great voice. Show some respek.

    ===
    Original video: https://soundcloud.com/foamieandbuckets/episode-35-new-york-city-invented-everything/s-f4RIa
    Downloaded by http://huffduff-video.snarfed.org/ on Mon, 12 Aug 2019 14:26:23 GMT Available for 30 days after download

    —Huffduffed by cemerick

  6. Podcast 31 - #KawhiWatch is Over, Zion & “Sauces”

    On this inspiring installation of the Unnamed Foamie and Buckets Podcast:

    • #KawhiWatch

    • "Sauces"

    • Spicy Takes

    • And some other stuff

    Huge shout to our Sponsor @fatkiddeals on twitter! Best stuff. Crazy deals. Enough said.

    Thank you to Spencer Gill! A genuine music man.

    Connect with us on Instagram:

    @mrfoamersimpson @young_buckets

    Legally Binding Disclaimer: Foamie is not a ginger and will never be a ginger. He looks like Jon Snow, and has a great voice. Show some respect.

    ===
    Original video: https://soundcloud.com/foamieandbuckets/podcast-31-kawhi-the-clippers-sauces/s-61aTf
    Downloaded by http://huffduff-video.snarfed.org/ on Mon, 12 Aug 2019 14:26:02 GMT Available for 30 days after download

    —Huffduffed by cemerick

  7. The New Trade Paradigm, with Ryan Cooper (PODCAST 08-08-2019)

    Listen now (60 min) | I welcome back Ryan Cooper (@ryanlcooper), national correspondent for The Week magazine and co-host of the exciting new Left Anchor podcast (you can also check them out on Twitter). Ryan wrote a piece last week (his full archive is here) looking at the alternative to the ossified “free trade” consensus that’s emerging on the left in the Democratic primary, and he’ll walk us through the development of that consensus and why it’s so ripe to be challenged now.

    https://fx.substack.com/p/the-new-trade-paradigm-with-ryan?token=eyJ1c2VyX2lkIjoyNDIyNTA0LCJwb3N0X2lkIjoxMDIwMjAsIl8iOiJUUnNlUiIsImlhdCI6MTU2NTI4ODg1NiwiZXhwIjoxNTY1MjkyNDU2LCJpc3MiOiJwdWItNjQ3OSIsInN1YiI6InBvc3QtcmVhY3Rpb24ifQ.TT1-z5lBFOTPiKQEH0IikHJwVASENQX2oAziaxp8MW8

    —Huffduffed by cemerick

  8. Deciphering the White Power Movement | On the Media | WNYC Studios

    Historian Kathleen Belew says that the key to understanding massacres like the one in El Paso is to understand the movement that they come from.

    https://www.wnycstudios.org/story/deciphering-white-power-movement

    —Huffduffed by cemerick

  9. 39 - Mathematical Foundations for the Activity of Programming - Cyrus Omar | Future of Coding

    Are you looking for the real computer revolution? Join the club! Future of Coding is a community, podcast, and research project organized by Steve Krouse.

    https://futureofcoding.org/episodes/039#transcript

    —Huffduffed by cemerick

  10. Nerd Rage! The Great Debates: Something Lawful: The Character Alignment Show

    The only podcast where chaos is never neutral — let’s get ready to RAGE!

    With JPol, Tirumari Jothi, Erik Krasner, and Wonder Dave

    https://nerdragepodcast.com/74

    —Huffduffed by cemerick

Page 1 of 19