When not to optimize

Software development has some consistent goals. We want the software to be as fast as possible, as small as possible, to work as well as possible, and to be done as soon as possible. Naturally, there tend to be trade-offs between these goals: the fastest algorithm may require more storage space; the algorithm that requires the least space may be complicated and prone to errors; extensive testing may delay the release. We decide which goals take priority and what values are acceptable for each. But how do we choose?

In some cases, optimizing for speed and size makes a lot of sense. If a loop will execute ten thousand times, a one-millisecond delay in each iteration is likely to be unacceptable. If there are ten million items to be stored, requiring an extra byte for each item can make a difference, even with today’s cheap storage. And even when we’re not under tight constraints, all else being equal, we’d like the best performance we can get.

In some cases, though, it’s better not to optimize. I’m not talking about avoiding premature optimization – I’m talking about not optimizing at all. For example, suppose you have an object that will be instantiated a few dozen times (with each instance being stored on disk), and there are two ways you can code it. One requires storing an additional string value; the other simply performs some calculations (perhaps a dozen lines of code) each time the object is loaded. The time to do this calculation will be unnoticeable. Should the value be included or not?
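To make the trade-off concrete, here’s a minimal sketch of the two options in Python. The class and field names (and the idea that the derived value is a display name) are hypothetical, just for illustration; the point is only the difference between persisting the extra string and recomputing it on every load.

    import json

    # Option 1: store the derived value alongside the rest of the object.
    class StoredLabel:
        def __init__(self, first_name, last_name):
            self.first_name = first_name
            self.last_name = last_name
            # The "extra" string, computed once and persisted with the object.
            self.display_name = f"{last_name}, {first_name}"

        def save(self, path):
            with open(path, "w") as f:
                json.dump(self.__dict__, f)

        @classmethod
        def load(cls, path):
            with open(path) as f:
                data = json.load(f)
            obj = cls(data["first_name"], data["last_name"])
            obj.display_name = data["display_name"]
            return obj

    # Option 2: don't store it; recompute the value each time it's needed.
    class ComputedLabel:
        def __init__(self, first_name, last_name):
            self.first_name = first_name
            self.last_name = last_name

        @property
        def display_name(self):
            # Imagine a dozen lines of formatting rules here instead of one.
            return f"{self.last_name}, {self.first_name}"

        def save(self, path):
            with open(path, "w") as f:
                json.dump(self.__dict__, f)

        @classmethod
        def load(cls, path):
            with open(path) as f:
                data = json.load(f)
            return cls(data["first_name"], data["last_name"])

Either version works, and at a few dozen objects neither the extra field nor the recomputation will ever show up in a profile.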

At first glance we might say, sure, the calculation time is insignificant, so we might as well do it. On the other hand, since the object will only be instantiated a few dozen times, the extra space requirement is also insignificant; it simply makes no difference to performance whether we store the extra item or not.

Does that mean it doesn’t matter which decision we make? In this case, no. We also have to factor in the complexity of the calculation: not to the computer, but to the programmer. Adding extra code makes it that much more likely that a bug will slip in somewhere; given two options where neither has a performance benefit, we should opt for the simpler one. In this case, then, I would choose to store the extra item, not to save the calculation time for the computer, but to save the mental energy of the programmer by making the code that much simpler.

The importance of downtime

As I write this, it’s the day before I return to work after a sabbatical. I’ve spent the last two weeks in Europe, exploring castles in England and munching crepes and escargot in France. What I have not done is anything related to work.

[Image: a cat relaxing. By Kreuzschnabel (own work), via Wikimedia Commons]

In fact, before I left home I did two things: I removed the SIM card from my phone (I got a temporary one in Europe to provide a data plan there), and I shut off my work email so I wouldn’t get any messages from work even when connected to Wi-Fi. In my last team meeting before I left, I made it clear that I would be unreachable while I was gone.

Over the last five years, one thing I’ve noticed is that even when I’m on vacation – whether I’m visiting my family in Colorado or an art museum in Chicago – I still end up answering work emails, which means some of my attention is still on work. It might not take much time to respond to a few emails, but how much can you relax when you’re still thinking about the job?

Tomorrow, I’ll have many emails to respond to. Tomorrow, people will need my input on many things, and I’ll be busy all day. Today? When I finish this blog post, I’ll dive into a Pluralsight course without worrying about what I need to do tomorrow. For two weeks I’ve freed my mind from work, and today I am relaxed.