Software and Chicken Entrails

I had a collection of epiphanies today about Informational Software.

“Informational Software” is a term I use to describe software that helps you understand and make decisions about information. It is not a product and does not make your business money, but it can be used to help you understand your business and therefore, in theory, help you make money. For example, analytics software does not make you money, but you can use it to understand your traffic and hopefully to then minimize your costs and maximize your revenues. (Note that if you are selling an analytics package, this definition is still true, because in that case the software is a product, not an information tool*.)

Informational Software comes in two types: software that interprets trends and data and makes decisions for the user, and software that collects and reveals data so the user can make their own decisions. Expert systems are an example of the former; analytics packages are an example of the latter.

Let’s call this mess of data the chicken entrails. We want to read them and predict the future, right? Sure we do. That’s what chicken entrails are for.

Okay, enough definition. Here are the epiphanies:

First: If your users are untrained and the data is simple, your system can advise the user and/or make decisions for them. If they cannot read the entrails, or the entrails are too simple to bother the entrail readers, do not let/make them read the entrails.

Second: (Pay attention, this is the important bit) If your users are highly trained and the data is very complex, do not attempt to interpret the entrails for them. Just show them the entrails. Highly trained users who have asked you for information software do not want you to do their job, they want a tool to help them do their job better.

Third: My instinct is always to write software that interprets entrails, regardless of the complexity and regardless of the user’s knowledge. I need to learn to stop and figure out what is really needed.

Fourth: Interpreting complex data is really, really hard. On a small, knock-it-out project, it is almost certainly doomed to fail. Now reread the second epiphany: on small, knock-it-out projects, it is completely unnecessary.

Right now I’m working on software that reveals entrails to some truly arcane masters of entrail reading. They have become masters because the information system currently available to them is literally designed to protect the data from their eyes. I have found two pieces of data, correlated them, and put them into a report, and they think I’m a super genius. Not because my software is smart, but because it is smart enough to be dumb.

I really like epiphanies where I suddenly realize that it is not necessary to be doomed to failure.

* And if you’re smart enough to reason “but what if you’re using your analytics tool to analyze the sales of your analytics tool”, I congratulate you on your cleverness. Can you also see the flaw in this reasoning?

The Surprising Truth About Cellular Automata and You

Cellular automata are a way of modeling complex systems by focusing on simple, independent entities called automata. These automata are not simple by design but rather by definition: because each one is a tiny, tiny part of a massive system, and because the number of possible interactions between automata explodes combinatorially with each automaton added to the system, there is no way a single automaton can possibly understand the complexity of its own system.

Here are some of the things a cellular automaton DOES know:

  • It knows what it wants. An automaton is independent and has goals of its own that it wishes to satisfy.
  • It interacts with the world. This varies depending on the model of the system and the goals of the automaton, but often an automaton will consume resources, attempt to obtain more, and may expel waste.
  • It interacts with other automata. I originally wrote “its neighbors” but “neighbor” is too restricted a term. An automaton may communicate with automata at great distances, or may receive summaries of the behavior of groups of automata, in much the same way that you or I can send an email or watch the news.
  • It has a staggering amount of hubris. The automaton cannot possibly understand the system in which it lives, but in spite of this limitation, it will interact with the world and other automata and attempt to achieve its goals anyway.

The complexities of the system are beautiful and staggering. Readers of “The Land of Lisp” will recognize this example: consider a world in which resources are scarce everywhere except in a tight jungle area, where resources are very rich. Now consider an automaton which has only one choice in life: to move straight ahead or to turn. The automaton consumes a resource if it finds one, and it will die if it goes too long without finding one. Each automaton has a random tendency towards turning or going straight. Now let’s say that every so often, the surviving automata create offspring with tendencies similar to their own.
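
Here is a minimal sketch of that world in Python. The grid size, energy numbers, and mutation step are arbitrary choices of mine for illustration, not the values from The Land of Lisp:

    import random

    WIDTH, HEIGHT = 80, 30
    JUNGLE = (35, 12, 10, 6)            # x, y, width, height of the rich patch
    DIRS = [(1, 0), (0, 1), (-1, 0), (0, -1)]

    class Animal:
        def __init__(self, x, y, p_turn):
            self.x, self.y = x, y
            self.d = random.randrange(4)   # current heading
            self.p_turn = p_turn           # inherited tendency to turn
            self.energy = 60               # dies when this reaches zero

        def step(self, plants):
            # The automaton's one choice in life: turn, or go straight.
            if random.random() < self.p_turn:
                self.d = random.randrange(4)
            dx, dy = DIRS[self.d]
            self.x = (self.x + dx) % WIDTH     # the world wraps around
            self.y = (self.y + dy) % HEIGHT
            self.energy -= 1
            if (self.x, self.y) in plants:     # eat any resource it lands on
                plants.discard((self.x, self.y))
                self.energy += 40

        def maybe_reproduce(self):
            if self.energy < 120:
                return None
            self.energy //= 2                  # split energy with the child
            gene = min(1.0, max(0.0, self.p_turn + random.uniform(-0.05, 0.05)))
            child = Animal(self.x, self.y, gene)
            child.energy = self.energy
            return child

    def grow_plants(plants):
        # One plant sprouts anywhere; one sprouts in the jungle. Since the
        # jungle is tiny, its resources are far denser than the desert's.
        jx, jy, jw, jh = JUNGLE
        plants.add((random.randrange(jx, jx + jw), random.randrange(jy, jy + jh)))
        plants.add((random.randrange(WIDTH), random.randrange(HEIGHT)))

    def simulate(steps=50_000):
        plants = set()
        animals = [Animal(WIDTH // 2, HEIGHT // 2, random.random())
                   for _ in range(10)]
        for _ in range(steps):
            grow_plants(plants)
            for a in animals:
                a.step(plants)
            babies = [b for a in animals if (b := a.maybe_reproduce())]
            animals = [a for a in animals if a.energy > 0] + babies
        return sorted(round(a.p_turn, 2) for a in animals)

    if __name__ == "__main__":
        print(simulate())   # tends toward two clusters: low and high p_turn

Note that no animal gets a map of the world; each one carries a single gene and reacts only to the square it happens to be standing on. That constraint is the whole point.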

Even in this simple an example, the automata will VERY quickly–in just a few dozen generations–specialize into two different species. One species adapts to living in the jungle: it almost always turns, trying to keep itself from wandering away from the jungle. The other species almost always goes straight, trying to cover as much distance as possible to find resources in the desert.

With regard to the title of this post, that’s “The Surprising Truth About Cellular Automata”. What about the “And You” part?

Take a deep breath, and reread the rules of cellular automata. Now turn your head and look out the window. Stare off into the distance and think for a long time. Now say these words aloud:

“This is why I don’t understand politics or economics.”

Of course, the hubris rule still applies. Just because we don’t understand it doesn’t mean we aren’t going to try, and it definitely doesn’t mean we’re not going to try to manipulate it. But for me, this is a perfectly adequate explanation for why economic theory makes so much sense on paper and yet the world economy can still be in the tank. It explains why I get offended when someone reminds me to unit test my code, even as my unit tests reveal so many defects in my code.

The scale of complexity explains why terrorist bombings and economic collapse don’t necessarily have to be the work of conspiracies, and hubris explains why I would much rather believe that they are. Conversely, hubris explains why campaigning politicians look good denouncing the ills of the system, and complexity explains why incumbent politicians look incompetent trying to fix them.