Once upon a time I sort of won an argument with Sandi Metz. By “sort of won” I mean she didn’t lose; I was wrong and she was right, and it was a perfect example of the Dunning-Kruger effect. She had a very simple rule, and I was certain that the simple rule was flawed. I persisted in my argument long enough, and she persisted in teaching me, and eventually she led me to a place where I could see the beautiful interplay of all the complicated things I had been seeing, and then she showed me one other new idea I had never considered… and it made the whole complicated interplay become very simple. So when I say I won, I mean it in the sense that she was right and I learned an amazing thing I didn’t know before, and I sort of count that as the best kind of winning.
Whenever we’re doing some problem-solving activity, there’s a thing we do where we run a mental simulation of the situation or problem to see if it works or if we can predict problems with the solution. There’s a name for this, and I’ll ask the real psychologists to correct me if I get this wrong, but I believe it’s called mental simulation. Now, if you go off and google “mental simulation” you’re going to find a bunch of stuff about Folk Psychology and mind-reading phenomena, and none of that is what I’m talking about. I’m talking about the construction and use of mental models to simulate a problem and explore solutions. Anyway, my point is that we do it all the time and it’s usually a wonderful thing.
But there’s an interesting problem with it: there are times when we lack any understanding of some of the fundamental building blocks necessary for the simulation. I’m not talking about lacking all the parts; in fact if you didn’t lack some of the parts you probably wouldn’t be building the simulation in the first place. Usually we have most of the parts, and the missing ones are pretty obvious because we can’t complete the simulation or solve the problem, and this tells us immediately that we need to keep working on the problem. No, I’m talking about times when we lack a fundamental building block of the simulation, and this problem is really interesting because it forms a blind spot: we actually are able to complete the simulation (or so we think) and arrive at an outcome, and there’s no way to know that our simulation is completely flawed.
…except there is. I have found two ways to identify these blind spots.
The first one is pretty obvious and so is the solution, but for some reason I often refuse to acknowledge or accept it. Remember a couple weeks ago when I said “Hey, I’ll brb and I’m gonna write another blog post tomorrow?” I totally had this mental simulation of how I was going to blog every day for a while to get back into the rhythm of things. And then life happened, just like it has happened to me over and over, and to everyone else in the history of blogging. The solution here is pretty simple, but not that easy: I have to accept that my mental model, however interesting, does not accurately reflect reality. I could write a whole string of blog posts about this, but for now suffice it to say that this is much more easily said than done.
The second kind is much harder to detect without assistance, but its solution is much more exciting, and because that solution is so much more effective, it is far more interesting to me. It arises when somebody says “Hey, you should try doing it this way,” and you run a mental simulation and identify several problems with it, and you decide “Nah, that way is dumb.”
On a normal day, what happens is this: the person suggesting I try a different approach is less familiar with the problem I’m working on. They make their suggestion, and my mental simulation identifies several problems with their approach. I jump to the conclusion that their mental model is flawed, lacking fundamental pieces, and probably suffering from the aforementioned first blind spot of trusting the model when the model and the experience disagree.
But… what happens if the person I’m talking to suggests that I try a different approach, and I have reason to believe that their mental model does work for them? It’s really counterintuitive, but I’ve learned to trust that they might actually have a better mental model than me, and that there are fundamental pieces missing from my version of their model. The solution is tricky, but fun: you have to crawl inside the other person’s head for a while and try to understand what tradeoffs they’re making. For instance, I came to ruby via C and C++, where the language is statically typed and I had learned to put a lot of trust in the compiler. Coming to ruby (eventually; it’s more accurate to say coming to perl and then python and then ruby) I could no longer trust my compiler because there wasn’t one. Oh! The problems I foresaw with this approach! Why would you abandon the safety of your compiler! And yet here were all these really smart people getting real work done. What did they know that I didn’t? Well, a lot of things, but the big one was this concept called “unit testing”.
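Here’s a tiny sketch of what that tradeoff looks like in practice. It’s my own made-up example (a hypothetical Temperature class exercised with minitest), not anything from that era of my career: with no compiler standing guard, the test is the thing that blows up when the wrong kind of object shows up at a boundary.

```ruby
require "minitest/autorun"

# Hypothetical class, for illustration only.
class Temperature
  def initialize(celsius)
    @celsius = celsius
  end

  def to_fahrenheit
    @celsius * 9.0 / 5.0 + 32.0
  end
end

class TemperatureTest < Minitest::Test
  def test_converts_celsius_to_fahrenheit
    # There is no compile-time type check here; actually running the code
    # is what catches a wrong argument. Construct Temperature with a String
    # instead of a number and this test fails loudly, doing at run time the
    # job the C++ compiler used to do at build time.
    assert_in_delta 212.0, Temperature.new(100).to_fahrenheit
  end
end
```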
When I was arguing–okay, okay, we were cheerfully discussing–with Sandi Metz, I didn’t understand how you could write a bunch of complicated, breakable code in a private method and not test it. Sandi taught me a bunch of different ways of seeing private methods, primarily that they’re changeable and not a good place to put a firm contract, and if you need to test them to get them working right, go ahead… but delete the test afterwards because once you get them working the test is just dead weight that will slow a maintainer down.
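To make that rule concrete, here’s a hedged little sketch in its shape; the Gearbox class is mine, invented for this post, not something from our conversation. The permanent test exercises the public contract; a throwaway test that pokes at the private helper (with send, say) is fine while you get it working, and then it goes in the bin.

```ruby
require "minitest/autorun"

# Invented example class: one public method backed by a private helper.
class Gearbox
  def initialize(chainring, cog)
    @chainring = chainring
    @cog = cog
  end

  # Public contract: stable, and worth a permanent test.
  def gear_inches(wheel_diameter)
    ratio * wheel_diameter
  end

  private

  # Private detail: free to change, so no permanent test pins it down.
  def ratio
    @chainring / @cog.to_f
  end
end

class GearboxTest < Minitest::Test
  def test_gear_inches_combines_ratio_and_wheel_size
    assert_in_delta 52.0, Gearbox.new(52, 26).gear_inches(26)
  end

  # While getting #ratio working I might temporarily have written:
  #   assert_in_delta 2.0, Gearbox.new(52, 26).send(:ratio)
  # Per Sandi's rule, that test gets deleted once #ratio behaves.
end
```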
This week I’m playing around with a programming idea that I am sure will not work, except that some programmers I deeply respect swear by it. I can’t wait to crawl inside their heads and find out how they make it work.