paulroub / Paul

There are two people in paulroub’s collective.

Huffduffed (510)

  1. Bill Withers | Maximum Fun

    Tune in for a rare interview with the soul legend Bill Withers. He retired from the music industry in the mid-1980s and, with a few exceptions…

    https://maximumfun.org/episodes/bullseye-with-jesse-thorn/bullseye-jesse-thorn-bill-withers/

    —Huffduffed by paulroub

  2. By Ken Levine: EP165: The Birth Of The Simpsons

    Ken interviews Jay Kogen, one of the original writers of THE SIMPSONS. He’s worked on many great shows and won an Emmy for a script of FRASIER. In part-one of this two-parter, Jay talks about breaking in, learning from Garry Shandling, and the early days of THE SIMPSONS.

    http://kenlevine.blogspot.com/2020/03/ep165-birth-of-simpsons.html

    —Huffduffed by paulroub

  3. Hiring Is Not Hazing

    Why are manhole covers round? How many golf balls can fit in a 747? Why are job interviews so terrible? In this episode, Aja Hammerly, a developer advocate at Google, talks about the drawbacks of common tech interview techniques like whiteboard coding and trivia questions, and shares her tips for improving the process by making it about discovering the candidate’s best qualities.

    —Huffduffed by paulroub

  4. Hanselminutes Technology Podcast - Fresh Air and Fresh Perspectives for Developers - test && commit || revert with Kent Beck

    The Hanselminutes Podcast by Scott Hanselman - Fresh Air for Developers - Deep tech talk from an inclusive perspective

    https://www.hanselminutes.com/663/test-commit-revert-with-kent-beck

    —Huffduffed by paulroub

  5. The One About Process – Rands in Repose

    In this episode, we talk about process being documented culture. How do we keep all the plates spinning? Who are good plate spinners? Why are there plates in the first place? This and more is answered in this episode of The Important Thing.

    Enjoy it now or download for later. Here’s a handy feed or

    https://randsinrepose.com/archives/the-one-about-process/

    —Huffduffed by paulroub

  6. Spec · Developer Tea · Three Things I Wish I Learned Earlier in My Career As A Software Developer

    Podcasts and resources to help designers and developers level up

    https://spec.fm/podcasts/developer-tea/298247

    —Huffduffed by paulroub

  7. Tim Harford — Article — Cautionary Tales Ep 3 – LaLa Land: Galileo’s Warning

    Cautionary Tales Ep 3 – LaLa Land: Galileo’s Warning

    http://timharford.com/2019/11/cautionary-tales-ep-3-lala-land-galileos-warning/

    —Huffduffed by paulroub

  8. Spec · Developer Tea · Breaking Your Own Rules Accelerates Bad Habit Formation

    Podcasts and resources to help designers and developers level up

    https://spec.fm/podcasts/developer-tea/317362

    —Huffduffed by paulroub

  9. Episode 33: The Measure of a Mac with Special Guest Merlin Mann — The Icon Garden

    Episode 33: The Measure of a Mac with Special Guest Merlin Mann

    Won’t you Listen to what the Man says?

    For our thirty-third regular show, we are joined by very special guest Merlin Mann.

    You may know Merlin as the productivity guy behind 43 Folders, or from such podcasts as Reconcilable Differences, Back to Work, Do By Friday or You Look Nice Today.

    Merlin stops by to talk about his history with Macs, and he and Jake geek out on some HomeKit stuff, and in our main topic we get into a great discussion of Apple’s tug-of-war between its long-time and newer customers and what that means going forward.

    Can Apple sustain its growth without selling out?

    Originally recorded on February 20, 2020

    Show Notes:

    Merlin on Twitter

    You Look Nice Today

    Reconcilable Differences

    Back to Work

    Do By Friday

    Home+ 4

    HomeRun

    Follow Us:

    Our show is produced by Nathan Mace (antiwraith on Ars) and edited by Dan Vincent. Our theme song is composed by the wondrous Space Vixen, who you can find on twitter @SpaceVixenMusic, with their music at spacevixen.bandcamp.com.

    Merlin can be found on twitter @hotdogsladies or on your podcast player of choice on Reconcilable Differences, Back to Work, or Do By Friday.

    You can find Dan on Twitter @kefkafloyd. Against his better judgement, Jake is still on twitter @jakepugh. You can leave feedback by tweeting @icongardenshow or posting on the Icon Garden thread at the Mac Ach forum by going to http://bit.ly/tigsthread. You can subscribe to our show on your podcast platform of choice, or visit our website at icongardenshow.com.

    Next Time in the Garden:

    Special Guest Stephen Hackett

    https://www.icongardenshow.com/home/2020/3/4/episode-33-the-measure-of-a-mac-with-special-guest-merlin-mann

    —Huffduffed by paulroub

  10. Whiteboard Confessional: How Cluster SSH Almost Got Me Fired - Last Week in AWS

    About Corey Quinn: Over the course of my career, I’ve worn many different hats in the tech world: systems administrator, systems engineer, director of technical operations, and director of DevOps, to name a few. Today, I’m a cloud economist at The Duckbill Group, the author of the weekly Last Week in AWS newsletter, and the host of two podcasts: Screaming in the Cloud and, you guessed it, AWS Morning Brief, which you’re about to listen to.

    Links:

    CHAOSSEARCH.io

    Cluster SSH GitHub repository

    AWS Systems Manager Session Manager

    EC2 Instance Connect

    Transcript:

    Corey: On this show, I talk an awful lot about architectural patterns that are horrifying. Let’s instead talk for a moment about something that isn’t horrifying. CHAOSSEARCH. Architecturally, they do things right. They provide a log analytics solution that separates out your storage from your compute. The data lives inside of your S3 buckets, and you can access it using APIs you’ve come to know and tolerate, through a series of containers that live next to that S3 storage. Rather than replicating massive clusters that you have to care and feed for yourself, you now get to focus on just storing data, treating it like you normally would other S3 data and not replicating it, storing it on expensive disks in triplicate, and fundamentally not having to deal with the pains of running other log analytics infrastructure. Check them out today at CHAOSSEARCH.io.

    So, once upon a time, way back in the mists of antiquity, was a year called 2006. This is before many folks listening to this podcast were involved in technology. And I admit as well that it is also several decades after other folks listening to this podcast got involved in technology. But that’s not the point of this story. It was my first real job working in anything resembling a production-style environment. I’d dabbled before this, doing Windows desktop-style support and playing around with small business servers for running Windows-style environments. And then I decided there wasn’t much of a future in technology and spent some time as a technical recruiter, then spent a little bit more time working in a sales role, which I was disturbingly good at, but I was selling tape drives to people. But that’s not the interesting part of the story. What is, is that I somehow managed to luck my way into a job interview at a university, helping to run their Linux and Unix systems. Cool. Turns out that interviewing is a skill like any other. The technical reviewer was out sick that day, and they really liked both the confidence of my answers and my personality. That’s two mistakes right there. One: my personality is exactly what you would expect it to be. And two: hiring the person who sounds the most confident is exactly what you don’t want to do. It also tends to lend credence to people who look exactly like me.

    In the first few months of that role, I converted some systems over to FreeBSD, which is like Linux, except it’s not Linux. It’s a Unix, and it’s far older, derived from the Berkeley Software Distribution. Managing a bunch of those systems at scale was a challenge. Now understand, in this era scale meant something radically different than it does today. I had somewhere between 12 and 15 nodes that I had to care about. Some were mail servers. Some were NTP servers, of all things. Utility boxes here and there, the omnipresent web servers that we all dealt with, the Cacti box whose primary job was to get compromised and serve as an attack vector for the rest of the environment, etcetera. This was a university. Mistakes didn’t necessarily mean the same thing there as they would in revenue-generating engineering activities. I was also young, foolish, and the statute of limitations has almost certainly expired by now.

    So, running the same command on every box was annoying. This was in the days before configuration management was really a thing. BCFG2 was out there and incredibly complex.

    And CFEngine was also out there, which required an awful lot of in-depth arcane knowledge that I frankly didn’t have. Remember, I bluffed my way into this job and was learning on the fly. So I did a little digging and, lo and behold, I found a tool that solved my problems, called ClusterSSH. And oh, was it a cluster. The way that this worked was that you gave it a list of hosts, it would spin up an xterm window on your screen for every host you gave it, and suddenly you were logged into all of those boxes at once, typing the same keystrokes into each of them. Great. If this is making you cringe already, it probably should, because this is not a great architectural pattern. But here we are, we’re telling this story, so you probably know how that worked out.

    One of the intricacies of FreeBSD is how it decides which services to start on boot. With Red Hat-derived systems, before the dark times of systemd, you could run things like chkconfig, that’s C-H-K, the word config, and then you could give it a service and tell it to turn on or off at certain run levels. This is how you would tell it to, for example, start the webserver when you boot; otherwise, you reboot the system, the webserver does not start, and you wonder why TCP now terminates on the ground. On FreeBSD, this was all controlled via a single file, /etc/rc.conf. That file controlled which services were allowed to start, as well as which services were going to be started automatically on boot, generally as a boolean value assigned to the particular service name.

    Well, I was trying to do something, probably, I want to say, NTP-related, but don’t quote me on that, where I wanted to enable a certain service to start on all of the systems at once. So I typed a command, specifically echoing the exact string that I wanted in quotes, so it would be quoted appropriately, and then with a right angle bracket, to that file, /etc/rc.conf, and then I pressed enter. Now, for those who are unaware of Unix-isms and how things work in the shell, a single right angle bracket means overwrite this file; two angle brackets mean append to the end of this file. I was trying to get the second one, and instead, I wound up getting the first. So suddenly, I had just rewritten all of those files across every server. Great plan, huh? Well, I realized what I’d done as soon as I checked my work to validate that the system had taken the update appropriately. It had not; it had taken something horrifying instead. What happened next? Great question.

    But first, in the late 19th and early 20th centuries, democracy flourished around the world. This was good for most folks, but terrible for the log analytics industry, because there was now a severe shortage of princesses to kidnap for ransom to pay for their ridiculous implementations. It doesn’t have to be that way. Consider CHAOSSEARCH. The data lives in your S3 buckets in your AWS accounts, and we know what that costs. You don’t have to deal with running massive piles of infrastructure to be able to query that log data with APIs you’ve come to know and tolerate, and they’re just good people to work with. Reach out to CHAOSSEARCH.io. And my thanks to them for sponsoring this incredibly depressing podcast.

    So, I’m suddenly staring at a whole bunch of systems that now have a corrupted configuration. Fortunately, this hadn’t taken anything down, at the moment. And it wouldn’t until one of these systems was restarted.
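
    For readers who want to see that single-versus-double angle bracket distinction in shell terms, here is a minimal sketch. It is an illustration rather than the exact command from the episode, and the ntpd_enable line simply stands in for whichever service was actually being enabled:

        # ">>" appends to the file (what was intended): the new line is added at the end
        echo 'ntpd_enable="YES"' >> /etc/rc.conf

        # ">" overwrites the file (what actually ran): rc.conf is truncated first,
        # so afterwards it contains only this one line
        echo 'ntpd_enable="YES"' > /etc/rc.conf

        # A safer habit: keep a copy before touching the file, then check the result
        cp /etc/rc.conf /etc/rc.conf.bak
        echo 'ntpd_enable="YES"' >> /etc/rc.conf
        tail -n 3 /etc/rc.conf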

    Now, these are Unix boxes, so they don’t tend to get restarted all that often. But it had to be fixed, and immediately, because one, power outages always happen when you least expect them to, and two, leaving a landmine like that for someone else is what we call a career-limiting move in almost every shop, even a university, which is not typically known as a place that’s easy to get fired from. But I could’ve managed it if I’d left that lying around.

    So the trick that I found to fixing all of this was logging into every one of those boxes by hand, taking a look at which services were currently running on each, and then reconstructing what that file should have looked like, which was just an absolute treasure and a joy. Now, hang on a second, why didn’t I restore from the backups that were being taken of these systems? What part of “first Unix admin job” are you not hearing? Backups were a thing that was on my list to get to eventually. You get really interested in backups right after you really needed to have backups that were working. Also, it turns out backups are super easy. It’s restores that are difficult, and if you can’t restore, you don’t really have a backup.

    So at the end of going through all of those nodes one by one, over the course of about four hours, I’d managed to successfully reconstruct each of their files. Then what I wound up doing was very carefully restarting each one in sequence during a maintenance window later that afternoon, and validating, once I got in, that they continued to do the things that they had been doing. I would compare what was currently running as a process versus what had been running before I restarted them. Suddenly, I was very diligent about taking backups and about keeping an eye on what exactly was running on a particular box. By the time I got through that rotation, a) I was a lot more careful, and b) everything had been restored, and there was no customer-facing impact.

    Now, all of that’s a very long story. But what does it have to do with the Whiteboard Confessional? What was the architectural problem here? The problem, fundamentally, was that I was managing a fleet of systems, even a small one, effectively by hand. And this sort of mistake is incredibly common when you run the wrong command on the wrong box. There was no programmatic element to it, and there was no rollback strategy at all. There are a lot of different directions this could have gone. For instance, I could have echoed that command first, just from a safety perspective, and validated what it did. I could have backed up the files before making a change to them. I could have tested this on a single machine instead of the entire production fleet. But most relevantly to the architectural discussion here, I could have not used freakin’ ClusterSSH.

    The problem, of course, is that instead of having a declarative state where you define what your system should look like, you’re saying “run this arbitrary command,” through what’s known as an imperative style of configuration management. This pattern continues to exist today across a wide variety of different systems and environments. If you take a look at what Ansible does under the hood, this is functionally what it does (what any config management system does): it runs a series of commands and drops files in place to make sure a system looks a certain way.
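
    A minimal sketch of that before-and-after check, under the assumption of illustrative file names under /root; the exact commands are not from the episode:

        # Before the maintenance window: capture what is currently running on this host
        ps -ax -o command > /root/ps-before.txt

        # Restart the box, then capture the process list again and compare the two
        ps -ax -o command > /root/ps-after.txt
        diff /root/ps-before.txt /root/ps-after.txt

        # On newer FreeBSD releases, list the services rc.conf will start at boot
        service -e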

    If you’re just telling it to go ahead and run a particular command, like “create a user,” then every time that command runs, it’s going to create a new user, and you wind up with a whole bunch of users that don’t belong there and don’t need to exist: thousands upon thousands of users on a system, one for every time the configuration management system has run. That’s how you open bank accounts at Wells Fargo, not how you intelligently manage systems at significant scale. So making sure that the systems doing your configuration management understand the concept of idempotence is absolutely critical. The idea is that I should be able to run the same thing multiple times and not wind up destroying or duplicating anything, or going around in circles, in any meaningful way. That is the big lesson of configuration management.

    And today, systems that AWS offers, like AWS Systems Manager Session Manager, can have this same problem. The same goes for EC2 Instance Connect. You can run a whole bunch of scripts and one-liners on a variety of nodes, but you’ve got to make sure that you test those things. You’ve got to make sure that there’s a rollback. You have to test on a subset of things, or you’ll find yourself recording embarrassing podcasts like this one, years later, once the statute of limitations has expired. No one is born knowing this, and none of these things are intuitively obvious, until the second time. Remember, when you don’t get what you want, you get experience instead, and experience builds character. I am Cloud Economist Corey Quinn, and I am a character.

    Thank you for listening to this episode of the AWS Morning Brief Whiteboard Confessional. Please leave a five-star review on iTunes if you’ve enjoyed it. If you didn’t, please leave a five-star review on iTunes via a script that continues to write a five-star review on iTunes every time you run it.

    Announcer: This has been a HumblePod production. Stay humble.
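
    A minimal sketch of that imperative-versus-idempotent distinction, using FreeBSD’s pw tool and a hypothetical "deploy" user as the example; neither the user name nor the commands come from the episode:

        # Imperative: blindly "create a user". Run it twice and the second run errors out
        # (or, with less careful tooling, piles up duplicate state).
        pw useradd deploy -m

        # Idempotent: describe the desired end state and only act if it is not already true,
        # so the script can run any number of times safely.
        if ! pw usershow deploy >/dev/null 2>&1; then
            pw useradd deploy -m
        fi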

    https://www.lastweekinaws.com/podcast/aws-morning-brief/whiteboard-confessional-how-cluster-ssh-almost-got-me-fired/

    Tagged with devops

    —Huffduffed by paulroub

Page 1 of 51