
Simulation Blind Spots

Once upon a time I sort of won an argument with Sandi Metz. By “sort of won” I mean she didn’t lose; I was wrong and she was right, and it was a perfect example of the Dunning-Kruger effect. She had a very simple rule, and I was certain that the simple rule was flawed. I persisted in my argument long enough, and she persisted in teaching me, and eventually she led me to a place where I could see the beautiful interplay of all the complicated things I had been seeing, and then she showed me one other new idea I had never considered… and it made the whole complicated interplay become very simple. So when I say I won, I mean it in the sense that she was right and I learned an amazing thing I didn’t know before, and I sort of count that as the best kind of winning.

Whenever we’re doing some problem-solving activity, there’s a thing we do where we run a mental simulation of the situation or problem to see if it works or if we can predict problems with the solution. There’s a name for this, and I’ll ask the real psychologists to correct me if I get this wrong, but I believe it’s called mental simulation. Now, if you go off and google “mental simulation” you’re going to find a bunch of stuff about Folk Psychology and mind-reading phenomena, and none of that is what I’m talking about. I’m talking about the construction and use of mental models to simulate a problem and explore solutions. Anyway, my point is that we do it all the time and it’s usually a wonderful thing.

But there’s an interesting problem with it: there are times when we lack any understanding of some of the fundamental building blocks necessary for the simulation. I’m not talking about lacking all the parts; in fact if you didn’t lack some of the parts you probably wouldn’t be building the simulation in the first place. Usually we have most of the parts, and the missing ones are pretty obvious because we can’t complete the simulation or solve the problem, and this tells us immediately that we need to keep working on the problem. No, I’m talking about times when we lack a fundamental building block of the simulation, and this problem is really interesting because it forms a blind spot: we actually are able to complete the simulation (or so we think) and arrive at an outcome, and there’s no way to know that our simulation is completely flawed.

…except there is. I have found two ways to identify these.

The first one is pretty obvious and so is the solution, but for some reason I often refuse to acknowledge or accept it. Remember a couple weeks ago when I said “Hey, I’ll brb and I’m gonna write another blog post tomorrow”? I totally had this mental simulation of how I was going to blog every day for a while to get back into the rhythm of things. And then life happened, just like it’s happened over and over, throughout the history of blogging, to everyone. The solution here is pretty simple, but not that easy: I have to accept that my mental model, however interesting, does not accurately reflect reality. I could write a whole string of blog posts about this, but for now suffice it to say that this is much more easily said than done.

The second method has a much more exciting solution, but it’s also much harder to detect without assistance. Because the solution is so much more effective, however, it is far more interesting to me. The second situation arises when somebody says “Hey, you should try doing it this way,” and you run a mental simulation and identify several problems with it, and you decide “Nah, that way is dumb.”

On a normal day, what happens is this: the person suggesting I try a different approach is less familiar with the problem I’m working on. They make their suggestion, and my mental simulation identifies several problems with their approach. I jump to the conclusion that their mental model is flawed, lacking fundamental pieces, and probably suffering from the aforementioned first blind spot of trusting the model when the model and the experience disagree.

But… what happens if the person I’m talking to suggests that I try a different approach, and I have reason to believe that their mental model does work for them? It’s really counterintuitive, but I’ve learned to trust that they might actually have a better mental model than mine, but that there are fundamental missing parts to my version of their model. The solution is tricky, but fun: you have to crawl inside the other person’s head for a while and try to understand what tradeoffs they’re making. For instance, I came to ruby via C and C++, where the language is statically typed and I had learned to put a lot of trust in the compiler. Coming to ruby (eventually; it’s more accurate to say coming to perl and then python and then ruby) I could no longer trust my compiler because there wasn’t one. Oh! The problems I foresaw with this approach! Why would you abandon the safety of your compiler! And yet here were all these really smart people getting real work done. What did they know that I didn’t? Well, a lot of things, but the big one was this concept called “unit testing”.

When I was arguing–okay, okay, we were cheerfully discussing–with Sandi Metz, I didn’t understand how you could write a bunch of complicated, breakable code in a private method and not test it. Sandi taught me a bunch of different ways of seeing private methods, primarily that they’re changeable and not a good place to put a firm contract, and if you need to test them to get them working right, go ahead… but delete the test afterwards because once you get them working the test is just dead weight that will slow a maintainer down.
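Sandi’s advice can be sketched in code. The class and numbers below are my own invented example, not hers: the permanent test pins down only the public contract, and the private method stays free to change.

```ruby
# A hypothetical Invoice with a "complicated, breakable" private method.
# The firm contract is #total; #discounted_rate is an implementation
# detail we can rename or restructure without touching any test.
class Invoice
  def initialize(subtotal, loyal_years)
    @subtotal = subtotal
    @loyal_years = loyal_years
  end

  # Public contract: the only thing worth a permanent test.
  def total
    (@subtotal * (1 - discounted_rate)).round(2)
  end

  private

  # Complicated, breakable... and free to change at any time.
  # Test it temporarily if you must, then delete that test.
  def discounted_rate
    [@loyal_years * 0.01, 0.10].min
  end
end
```

Calling `Invoice.new(200.0, 3).total` exercises `discounted_rate` indirectly; if the private method breaks, the public test fails, which is all the safety net a maintainer needs.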

This week I’m playing around with a programming idea that I am sure will not work, except that some programmers I deeply respect swear by it. I can’t wait to crawl inside their head and find out how they make it work.

The Blog Is Dead, Long Live The Blog

So! It took getting my domain name sniped to make me realize how badly I’d let things slip here.

My registrar had a problem processing my credit card, and I thought I got it cleared up and forgot about it, but what really happened is the domain lapsed and got picked up by a sniper. The domain is now being squatted by someone who wants $500 to (probably) sell it back to me. I could budget for that, and would do so, if I could be certain the sniper would actually give me back my domain name, but after some reflection I’ve decided there’s a silver lining here. I’m relaunching my blog, and there’s no better opportunity to rebrand. Losing my domain name sort of makes the rebranding decision for me.


Hello and welcome to Why, Dave, Why?!?, which for all the renaming and rebranding will probably be about largely the same old silliness. But I might be just a titch more unrepentant about a few things.

TL;DR if you have links or an RSS feed tied to, please update them to, or, failing that, at least to, which WordPress has kindly kept reserved for me.

What’s ahead for this blog? Well, to be honest, about the only thing I can promise is that there IS an “ahead” from here. I have plenty of post ideas rattling around in my head. In the past few years I’ve learned as much about programming as I have in the ten years prior. So that’s two posts right there. 😉

I’ll be cleaning out old blog posts and updating things to reference the new domain name, and maybe the style of the blog itself, who knows. For now, let’s see how many of you have your RSS feed pointed to WordPress. I’ll publish this now, 6:15am MDT on Tuesday, October 10, 2017, but I won’t make an official announcement until tomorrow once I’ve had a chance to clear some dead brush off the site. Comment below if you saw this before the announcement. Oh, but just because I’m not announcing it yet, feel free to spread the word–I’d love to tweet about this tomorrow and find out that some of you were already in the know.

At any rate, welcome back, me! Let’s have some fun together.

Announcement: Daily Posts This Week

Hi Gang!

I’ve been trying to write the “medicine” followup to Loyalty and Layoffs for two weeks now, and it just keeps getting bigger. I have too many points I want to make for a single post.

So… stay tuned: I’m going to challenge myself to post every day this week–and this post doesn’t count! I’ll break the Medicine post up into three separate posts for Monday, Wednesday, and Friday. Later today I’ll put up the most important and urgent piece of medicine: how to build and how to use a professional network. Wednesday I’ll talk about emotional and psychological boundaries and how certain myths we have about them play right into the bad kind of loyalty. And Friday will be the “juicy gossip” post: I’m going to talk about office politics.

That leaves Tuesday and Thursday free to mix things up a bit. On Tuesday I’m going to talk about how wrong my friend Rodney is, and why that’s a good thing, because he’s awesome. And on Thursday… oh what the heck, let’s actually talk about programming computers for a bit. I’ve been writing a lot of Heart and Mind posts, but not many Code posts. I need to clear that up a bit.

So here’s to this week: may I make my challenge, and may the posts I write be worth your time. See you soon! 🙂

Running SimpleCov when COVERAGE=true

I use this trick every time I start a new project, which is just often enough to go look it up but not often enough to commit it to memory, so here it is in blog form. Put this at the top of spec/spec_helper.rb:

    if ENV['COVERAGE']
      require 'simplecov'
      SimpleCov.start 'rails'
    end

Of course if you’re writing a non-rails app omit the ‘rails’ argument. If you’re not using RSpec, put it in test/minitest_helper.rb. (And, of course, if you’re using Test::Unit, stop using Test::Unit and switch to MiniTest!)

Now when you run your specs or tests with ‘rspec spec’ or ‘rake test’, they’ll run normally, without coverage. To run them and get a coverage report, just run

    COVERAGE=true rspec spec

or

    COVERAGE=true rake test

And there you go. If memory serves me correctly, credit for this trick is due to Jake Mallory, from a project we worked on together last year.

Leaver’s Law

Leaver’s Law: “Everything the system does FOR you, the system also does TO you.”

I first heard this term coined by Don Leaver, one of the best “crusty C hackers” I’ve ever met. He got his start grinding out high-performance unix code, and these days he’s writing (if you can believe it) high-performance Windows code. (Seriously, the man is terrifying. His idea of a good time is making Windows completely surrender one of its CPUs and control of the USB bus so he can process signal data from a mission-critical device without fear of the operating system “taking the app out to lunch while my data puddles on the floor”.) As he modernized from simpler to more luxurious operating systems, this was his lament: that luxury is merely the upside of complexity, and when you gotta get crap done, complexity is the downside of luxury.

I have found that this law is not limited to operating systems, but can be applied to just about anything. I have quoted this law (usually with a curse) at everything from web frameworks to the automatic timer on my car’s headlights.

Leaver’s Law. Now you know.

SVN Users: Why You Should Switch To Git

Recently a coworker of mine told me he was happy with SVN, and had been for years. Why should he and his team switch to git if they were productive and happy? I posted this to our internal message board, but I think the answer is broad enough to merit posting here on my blog. Enjoy. Which vcs do you use, and why do you like it? Are there any ex-git users out there who prefer something else?

Just my $0.02, but I hear this concern from satisfied svn users a lot. I used to be one myself. There is a compelling answer, but unfortunately I don’t know how to articulate it. Almost without exception, every svn user I have seen switch to git has slapped their forehead and said, “My goodness, why didn’t you tell me the world wasn’t flat?!?”

I think the problem is threefold. First, git was very hard to use when it first came out, which turned a lot of people off. Second, it was kind of a hipster trendy thing, which turned even more people off. But most importantly, every advantage that git provides over svn is something that svn users have learned to live without, and so when you say “git can do this”, svn users say “Yeah, but we don’t need or use that.”

You need those things. They will make you happy. Take it on faith until you begin to enjoy the fruits yourself. 🙂 Git offers a ton of incredible things over svn. I’ll mention just my top three favorites.

First, you can branch in git, and you don’t do that in svn. I know what you’re thinking: you CAN branch in svn. That’s not what I said. I said you DON’T. Because it’s such a pain to do and merging is such a nightmare, I’ve only ever met one team that used branching heavily in svn. They were a company with 500+ developers, however, and had IT staff on hand full-time to enforce the engineering discipline to keep their branches under control, and once a week the dev team stopped and had a “merge day” when branches were folded back into the mainline. In contrast, git’s merging tools are so freakishly powerful that branching becomes nearly a zero-cost operation. In the past week, I have created or worked in not less than ten different branches across three projects. Each feature, each bugfix, isolated in its own branch. All of the code is changed and updated, and pushed up to the server. Some of the branches were merged immediately, some are still awaiting QA testing before they can be deployed. So that’s feature number one: Git makes branching and merging so easy that you’ll use it all the time.
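The whole branch-per-change workflow fits in a handful of commands. Here’s a self-contained sketch you can run in a scratch directory (the repo, file, and branch names are invented for illustration):

```shell
# Create a throwaway repo so the example is self-contained.
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email dev@example.com && git config user.name Dev
echo 'v1' > app.txt
git add app.txt && git commit -q -m 'initial commit'
git branch -M master             # normalize the branch name for the example

# Each bugfix gets its own branch: a near-zero-cost operation.
git checkout -q -b bugfix-timeout
echo 'fix' >> app.txt
git commit -q -am 'fix the timeout bug'

# Merge it back whenever QA says go.
git checkout -q master
git merge -q bugfix-timeout
git branch -q -d bugfix-timeout  # the branch has served its purpose
```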

Second, and this is a huge implication: because branching and merging are so easy, you no longer have this problem where everybody is syncing and merging with trunk, and every feature change gets deployed to production as soon as you finish it. You might be tempted to lump this in with my first point, but as somebody who occasionally gets dragged back into svn from git, this is totally a separate concern. In svn, you can’t do exploratory branches easily, so you don’t do them. With git, you can fork a branch, make some changes, forget about the branch, go back and work in master (git’s word for “trunk”) for a month, then come back to your exploratory branch and type “rebase” and it will MOVE your changes forward in time, updating the trunk and then “playing your changes back” over the new trunk, making it as though you had forked yesterday instead of a month ago. If you’ve ever made a bugfix and then had to hold off pushing your commit because QA was still testing trunk for a deploy, you need to switch to git.
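Here’s that month-old-exploratory-branch scenario as runnable commands, compressed to one commit per side (all names invented):

```shell
# Throwaway repo for the example.
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email dev@example.com && git config user.name Dev
echo 'base' > notes.txt
git add notes.txt && git commit -q -m 'initial commit'
git branch -M master

# Fork an exploratory branch, make a change, forget about it.
git checkout -q -b exploratory
echo 'wild idea' > idea.txt
git add idea.txt && git commit -q -m 'exploratory change'

# Meanwhile, a month of work lands on master.
git checkout -q master
echo 'month of work' >> notes.txt
git commit -q -am 'mainline work'

# Come back and replay the exploratory change onto the new master,
# as though it had been forked today.
git checkout -q exploratory
git rebase -q master
```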

Thirdly, git is distributed. Everybody gets the obvious implication of this, that you could be pushing your code to multiple servers. And big deal, right? You could be backing up your svn repo just as easily. But everybody misses the subtle implications of this, which are earth-shattering: one, what you call your sandbox, git considers to be just another repo. Which means you can be on a plane with no internet access, and you can check out old revisions, commit code to a feature branch, fix a bug in master, and start two new exploratory branches, all without being connected to the main repo. What svn calls a commit, git calls a push, and it syncs your “local” repo with the remote one. (What git calls a commit is just storing a change from your sandbox to your local repo database to be pushed later.) And two, because you have a full copy of the repo in your sandbox, you can play amazing games with the commit history. Checked in a file you shouldn’t have? Go back into your repo’s history and remove it from the commit stream before you push it to the server. Wrote the wrong bug number on your checkin? Amend your commit message. Pulled down latest code only to discover that 12 files are in conflict and you just want the version from two days ago? You can jump over to that commit and grab them.
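Two of those history games, sketched as runnable commands (the bug numbers and file names are invented):

```shell
# Throwaway repo for the example.
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email dev@example.com && git config user.name Dev
echo 'hello' > readme.txt
git add readme.txt && git commit -q -m 'fix bug #123'   # oops, wrong number

# Wrote the wrong bug number? Amend the commit message before pushing.
git commit -q --amend -m 'fix bug #456'

# Need a file as it was in an older commit? Grab just that file.
echo 'v2' > readme.txt
git commit -q -am 'second version'
git checkout HEAD~1 -- readme.txt      # readme.txt is 'hello' again
```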

That’s my $0.02, which I guess on a per-word basis appears to be quite the bargain. Sorry. TL;DR: git takes your version control game to a whole new level that you didn’t even know existed. If you’re happy with svn, you don’t NEED to use git. But if you want to STAY happy with svn, trust me: don’t ever switch. You WON’T be able to go back.

(Well, actually, you will. git has a svn emulation module that lets you have a git repo locally and push commits to a svn server. It still has the problem of “the dev team are all committing to trunk”, but features 1 and 3, of branching and distributing, still shine through. It makes working with subversion… bearable.)

I Accidentally Spoke at MWRC 2011

I found out the night before the conference that one more speaker had canceled than the conference had backup speakers in reserve. I found this out because I was rooming with the conference organizer, Mike Moore, and BOY did he call in the favor of not asking me to split the hotel cost!

So, I spoke for about half an hour about Monkeypatching Your Brain. All while dressed in a Hilton Hotel bathrobe:

Photo by Jeremy Nicoll of Smashing Shots

It’s currently up here: I am the second speaker, about 38 minutes in. It’s the livestream from the conference and I don’t know how long the link will be valid. Confreaks is in the process of making “proper” videos of the conference. I’ll post a link once those are up.

Git: How to Merge a New Pull Request

I originally wrote this as Git: How to Merge a Remote Fork on LiveJournal, but Google refuses to index my LJ, so I couldn’t find it when I needed it. I’m reposting this here for my own future benefit.

James Britt just updated the documentation to TourBus for me, and he pushed the changes to his own fork. Here’s how I pulled it into my own repository without forcibly overwriting my own work until I was ready to merge it. James’ fork is at

Comments are welcome, especially if you know a better way to do it.

In English: Add a remote to my local repository, for James’ remote. Fetch his changes. Diff his changes against master. Assuming approval of his changes, check out master, merge James’ master, and push it.

In Bash:

$ cd tourbus
$ git checkout master
$ git remote add jamesbritt git://
$ git fetch jamesbritt
$ git diff jamesbritt/master
$ git merge jamesbritt/master
$ git push origin master

[Edit: Thanks to Markus Prinz for pointing out the needless tracking branch. I have removed it.]

[Edit: Markus also points out that you can skip the git fetch jamesbritt step if you use -f in the preceding step: git remote add -f jamesbritt git://]

If you need to make changes to get their stuff working before merging and you don’t want to do it on master, you should make your own story branch for it:

$ cd tourbus
$ git checkout master
$ git remote add jamesbritt git://
$ git fetch jamesbritt
$ git diff jamesbritt/master # eep, weird changes seen
$ git checkout -b fixmerge # I'm on master, so this will branch from there
$ git merge jamesbritt/master # now we have a master + jamesbritt/master merge. Make changes as needed, then
$ git checkout master
$ git merge fixmerge
$ git push origin master

The Surprising Truth About Cellular Automata and You

Cellular automata are a way of modeling complex systems by focusing on simple, independent entities called automatons. These automatons are not simple by design but rather by definition: because they are each a tiny, tiny part of a massive system, and because the number of possible interactions between these automatons explodes with each automaton added to the system, there is no possible way that a single automaton can understand the complexity of its own system.

Here are some of the things a cellular automaton DOES know:

  • It knows what it wants. An automaton is independent and has goals of its own that it wishes to satisfy.
  • It interacts with the world. This varies depending on the model of the system and the goals of the automaton, but often an automaton will consume resources, attempt to obtain more, and may expel waste.
  • It interacts with other automata. I originally wrote “its neighbors” but “neighbor” is too restricted a term. An automaton may communicate with automata at great distances, or may receive summaries of the behavior of groups of automata, in much the same way that you or I can send an email or watch the news.
  • It has a staggering amount of hubris. The automaton cannot possibly understand the system in which it lies, but in spite of this limitation, it will attempt to interact with the world and other automata and attempt to achieve its goals.

The complexities of the system are beautiful and staggering. Readers of “The Land of Lisp” will recognize this example: consider a world in which resources are scarce everywhere except in a tight jungle area, where resources are very rich. Now consider an automaton which has only one choice in life: to move straight ahead or turn. The automaton consumes a resource if it finds one, and it will die if it goes for too many turns without finding a resource. Each automaton will have a random tendency towards turning or going straight. Now let’s say that every so often, the surviving automata create offspring with tendencies similar to their own.

In even this simple an example, the automata will VERY quickly–in just a few dozen generations–specialize into two different species. One species adapts to living in the jungle. It almost always turns, trying to keep itself from wandering away from the jungle. The other species almost always goes straight, trying to cover as much distance as possible to find resources in the desert.
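For the curious, here’s a minimal sketch of that world in Ruby. This is my own stripped-down version, not the book’s; the numbers and names are invented, and the Land of Lisp original is richer.

```ruby
WIDTH = HEIGHT = 100
JUNGLE = (45..55)                         # the small, resource-rich region
DIRS = [[0, 1], [1, 0], [0, -1], [-1, 0]]

Animal = Struct.new(:x, :y, :dir, :energy, :turn_prob)

def step(world, animals, rng)
  # Food appears: often in the jungle, rarely in the desert.
  10.times { world[[rng.rand(JUNGLE), rng.rand(JUNGLE)]] = true }
  world[[rng.rand(WIDTH), rng.rand(HEIGHT)]] = true

  animals.each do |a|
    a.dir = (a.dir + 1) % 4 if rng.rand < a.turn_prob  # the one choice in life
    dx, dy = DIRS[a.dir]
    a.x = (a.x + dx) % WIDTH
    a.y = (a.y + dy) % HEIGHT
    a.energy += 40 if world.delete([a.x, a.y])         # eat if food is here
    a.energy -= 1
  end
  animals.reject! { |a| a.energy <= 0 }                # starvation

  # Reproduction: offspring inherit a slightly mutated turning tendency.
  animals.dup.each do |a|
    next unless a.energy > 100
    a.energy /= 2
    child = a.dup
    child.turn_prob = (a.turn_prob + (rng.rand - 0.5) * 0.1).clamp(0.0, 1.0)
    animals << child
  end
end

rng = Random.new(42)
world = {}
animals = Array.new(20) do
  Animal.new(rng.rand(WIDTH), rng.rand(HEIGHT), rng.rand(4), 80, rng.rand)
end
500.times { step(world, animals, rng) }
```

Run it a few times (it’s stochastic) and sort the survivors by turn_prob, and you should see the jungle-dweller/desert-strider split start to emerge.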

With regard to the title of this post, that’s “The Surprising Truth About Cellular Automata”. What about the “And You” part?

Take a deep breath, and reread the rules of cellular automata. Now turn your head and look out the window. Stare off into the distance and think for a long time. Now say these words aloud:

“This is why I don’t understand politics or economics.”

Of course, the hubris rule still applies. Just because we don’t understand it, doesn’t mean we aren’t going to try and it definitely doesn’t mean we’re not going to manipulate it. But for me, this is a perfectly adequate explanation for why economic theory makes so much sense on paper and yet the world economy can be in the tank. It explains why I get offended when someone reminds me to unit test my code, even while my unit tests reveal so many defects in my code.

The scale of complexity explains why terrorist bombings and economic collapse don’t necessarily have to be the work of conspiracies, and hubris explains why I would much rather believe that they are. Conversely, hubris explains why campaigning politicians look good denouncing the ails of the system, and complexity explains why incumbent politicians look incompetent trying to fix them.

App-a-Day Challenge, Day Screw It

Yeah, so, you know that app-a-day challenge thing? So, about that….

It’s like this, folks: I thought I was putting my mad skillz on the line; it turned out I was putting my madness on display.

I got my butt handed to me by this challenge in two equal portions. First, real life interrupted in a big way. One of my clients called me to tell me his airport software was being audited by the FAA in 24 hours–and there was a bug in my product that needed fixing. It sounded like a reasonably small tweak, so I agreed to handle it, thinking I’d even have time to ship my app afterwards. Turns out it was a redesign of the entire access control system which took me about 20 hours. I delivered it around 9am the next day and then went to bed and slept through a SECOND day of not shipping an app.

Fair humility where it’s due, however: the other equal part of me getting served was that there really are just so many moving parts to an Android application that it’s insane. I’ll blame the tools and documentation here, hopefully convincingly, but the argument will stand that lack of experience is why I could not quickly overcome tool and documentation issues. But here’s the argument anyway: The Android development environment is epic shit. Not epic THE shit, just epic SHIT. Like somebody’s been feeding their mastodon too much fiber.

High-ceremony languages require high-tooling support. Low-ceremony languages require high-documentation support. For example, C# is the highest-ceremony language I’ve ever worked with–followed closely by Java. C#, however, is supported by Developer Studio, which is the best freaking IDE ever made. Seriously, I don’t like Microsoft on principle, and I don’t care for any of their languages professionally anymore. But I used their tools for over a decade and I’ve never seen their match. Don’t even bother posting a knee-jerk comment in response about this unless you know what you’re talking about, because you don’t, so just shut up. Oh, and if your knee-jerk comment was to say that Eclipse is pretty good, you also need to fuck off. And then get help. It’s called Stockholm Syndrome. Look it up (after you fuck off).

Ahem. Moving on.

Low-ceremony languages need high-documentation support. All you really need there is a good text editor (which means emacs, since I’m still in a mood to start shit) and off you go. You have the freedom to write expressive code that is readable enough that you don’t NEED all the tooling support. (Detractors of languages like Lisp and Ruby will say that you also have the freedom to write awful code. When you’re older, and learn big words like economics and sustainability, you’ll understand. For now, just understand that there are a lot of freedoms which one learns quickly to not exercise if one wants to keep one’s job.)

So anyway, yeah. Didn’t mean for this to turn into a rant, but Java is very high-ceremony, Android’s documentation is still rather in its infancy, and the tool support is Eclipse, of which my opinion has not changed in 4 years.

Will I still code in Android? Yes. The documentation will get better, and my experience with knowing how to fix Eclipse’s brain damage (syntax errors that are fixed but won’t go away until you clean the project or restart eclipse, etc) will get better. I’ll probably even stop complaining about it at some point.

But yeah. For now… let’s just say that the experiment ended with “findings” rather than “results”.