Things I Wish I’d Known When I Started Programming

I’ve been programming for more than a few years now, but sometimes I like to look back at things I wrote when I was much younger and reflect on how much I’ve learned since then. Of course, no one expects you to start out knowing everything, but there are a few things I wish I’d known when I started, or that I’d learned a bit sooner than I did. So, just for fun, I thought I’d come up with a list of a few of them.

Use Source Code/Revision Control

Like a lot of young developers, when I first started writing code I didn’t use any sort of source code/revision control system (just zipping up the project files every so often doesn’t count). But once I did, I realized how wrong I’d been – having a complete history of all changes is just sooooooooo helpful, even without any of the other features (branches, merging, etc.).

Now, there is a bit of a learning curve with any revision control system, and when you’re just starting out that learning curve can be a bit steep, especially on top of all the other stuff you’re learning at the same time – but it is so worth the time and effort. Being able to undo changes, or make different changes to the same code (via branches), and just in general having a nice, detailed history of what you’ve done and what’s changed over time is simply invaluable. This is especially true if you’re part of a multi-programmer team, but it’s just as true if you’re working by yourself.

For anything beyond 5-minute throw-away projects, use source code revision control!
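If you want a concrete starting point, here’s what the basic day-to-day workflow looks like in Git, for example (a minimal sketch – Subversion and the others have close equivalents, and your actual workflow will vary):

    git init                    # create a new repository in the current directory
    git add .                   # stage the current project files
    git commit -m "Initial import of the project"
    # ...make some changes...
    git commit -am "Fix off-by-one error in the pagination code"
    git log                     # the detailed history you'll thank yourself for later
    git checkout -b experiment  # try something risky on a separate branch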

Write Comments So That You’ll Understand Them Months or Years Later

Writing comments is like writing a log entry that you’ll inevitably need to review much, much, much later – so taking the time to write it clearly is well worth it. Countless times I’ve run across old comments of mine and wondered “what was I thinking?”

Same goes for commit messages – brevity may be the soul of wit, but not when it comes to commit messages! You shouldn’t need to do a diff on a particular commit just to figure out what you did; the commit message should at least give you a general idea.

On the flip side, though, you don’t want to be writing novels in your comments or commit messages, either – so write enough to be understood, but don’t be needlessly wordy.
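For example (a contrived Python sketch, but the pattern shows up in every language): the first comment just restates the code, while the second records the why that future-you won’t remember:

    # Bad: restates what the code already says
    # multiply by 1024
    size = size * 1024

    # Good: records the "why" that the code itself can't express
    # The legacy import API reports sizes in KiB, but everything
    # downstream expects bytes -- convert once, here at the boundary.
    size_in_bytes = size_in_kib * 1024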

Automate the Build & Deployment

It doesn’t matter how many times you’ve gone through the process of compiling or deploying a build, someday you will forget a step. And it’s also just so nice to be able to kick off a complete build & deploy with a single click and then go grab a cup of coffee or something, instead of having to sit there and go through all the steps manually. So take the time to write a script to automate the process – it doesn’t need to be super-complicated, just something to do the work for you. Believe me, it’s worth it.
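Even something this simple goes a long way (a minimal Python sketch – the solution file, test runner, and deploy script names are placeholders for whatever your project actually uses):

    #!/usr/bin/env python
    """One-click build & deploy: runs each step and stops at the first failure."""
    import subprocess
    import sys

    STEPS = [
        ["msbuild", "MyApp.sln", "/p:Configuration=Release"],  # compile everything
        ["python", "run_tests.py"],                            # hypothetical test runner
        ["python", "deploy_to_server.py"],                     # hypothetical deploy step
    ]

    for step in STEPS:
        print("Running: " + " ".join(step))
        if subprocess.call(step) != 0:
            sys.exit("Build step failed: " + " ".join(step))

    print("Build & deploy complete -- go grab that coffee.")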

No Fix or Change Is Ever Too Small to Test

Also known as “TEST EVERYTHING,” this one’s a hard rule to stick to, especially for really small things like changing a typo in some text – but you still have to do it. It might seem like a tiny change that couldn’t possibly cause any problems anywhere else – but code is complicated, and complexity breeds bugs in the weirdest ways. Maybe that little text change you made causes a display issue at higher DPI settings, or when using a different font – you won’t know until you test it.
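Automated tests are what make this rule bearable in practice. Even a check as trivial as this one (a minimal Python sketch with made-up names) costs seconds to run and catches the “impossible” regressions:

    import unittest

    def greeting(name):
        # The "tiny" change under test: fixed the typo "Welcom" -> "Welcome"
        return "Welcome, " + name + "!"

    class GreetingTests(unittest.TestCase):
        def test_greeting_text(self):
            # Pins the exact text, so a later "harmless" edit can't silently break it
            self.assertEqual(greeting("Ada"), "Welcome, Ada!")

        def test_empty_name(self):
            # The edge case a quick manual check would probably skip
            self.assertEqual(greeting(""), "Welcome, !")

    if __name__ == "__main__":
        unittest.main()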

If You Find a Problem, No Matter How Small, Put It in the Bug Tracker!

When making a fix – even for a small, seemingly insignificant problem – if it’s in production code (or in a released beta build), you’d better be writing up a bug report for it. Just making a comment in your SVN commit log isn’t enough – you need a proper report that can be searched and found and referenced, because someday, maybe years from now, you’ll need to know how or why you made this change.

(I will make an exception here for “fixing a typo” text-only changes, but only just.)

Always Write a Spec (But Don’t Write a Novel)

Just as fixing (seemingly) simple bugs can have unexpected consequences, designing and adding simple features or changes can turn out to be unexpectedly complicated. Writing a spec – even if it’s just a quick outline on paper or on a whiteboard – can help immensely in heading off these unexpected complexities. (Or, at the very least, you’ll know there are complexities and can design around them.) Good software should always be designed first, then coded – and writing a specification is part of the design step (if not the entirety of it).

At the same time, though, you don’t need to write a novel’s worth of specifications, even for complicated changes or features. If the spec is too long (or goes into too much detail), then no one will read it – yourself included. If your spec really has to be that long, then maybe the feature or change you’re planning should be broken down into smaller parts first. Finding the right balance between length and specificity is a skill that takes practice to get right, but it’s one that’s worth learning.


These are just some of the things I wish I’d known when I started programming all those years ago – if you have things you wish you’d known, feel free to share them in the comments!

The Night Watch (Programmer Humor)

Ran across this article/story recently (it’s a PDF link) and had to share some of the more entertaining little excerpts from it – though you might need to be a programmer/sysadmin in order to appreciate the humor in some of these…

Systems people discover bugs by waking up and discovering that their first-born children are missing and “ETIMEDOUT” has been written in blood on the wall.


This is a sorrow that lingers, because a 2 byte read is the only thing that both Republicans and Democrats agree is wrong.


When it’s 3 A.M., and you’ve been debugging for 12 hours, and you encounter a virtual static friend protected volatile templated function pointer, you want to go into hibernation and awake as a werewolf and then find the people who wrote the C++ standard and bring ruin to the things that they love.


One time I tried to create a list<map<int>>, and my syntax errors caused the dead to walk among the living. Such things are clearly unfortunate.

Honestly I can never get enough of good programmer humor like this.

Know Your Code

Q: As a .NET programmer, why do you care about being familiar with the Win32 API?

A: Because the .NET framework is just another abstraction, and I like to think that I’m a good programmer – and good programmers know that all abstractions are leaky.
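The same point holds in any high-level environment, not just .NET. In Python, for instance, the standard library hides Win32 from you right up until it doesn’t – and knowing the layer underneath lets you reach down to it directly (a minimal sketch, Windows-only, calling the real MessageBoxW function in user32.dll via ctypes):

    import ctypes

    MB_ICONINFORMATION = 0x40  # standard Win32 constant

    # Call the Win32 API directly, bypassing the higher-level abstractions
    ctypes.windll.user32.MessageBoxW(
        None,                         # no owner window
        "Hello from the Win32 API",   # message text
        "Leaky Abstractions",         # window title
        MB_ICONINFORMATION,
    )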

Treating our Legal Code like Computer Code

Looking at some pending legislation through the eyes of a computer programmer… and finding it… wanting.

I’ve posted before about the idea of treating our legal system (legal code) like a computer system (computer code):

Our legal code is almost entirely like an entire operating system written in undocumented Perl.

  1. There are no hints as to what any part of it is supposed to do and it is written in a language that to most people looks like line noise.
  2. Every significant patch is applied by adding an additional Perl module that overrides an existing method in an existing module, replacing all of the code in that method with a complete new copy of the method that is almost identical to the old one but adds or removes a backslash in a single regular expression.
  3. The entire core logic was written in a crunch session by a bunch of geeks locked in a room together and forced to design it by committee.
  4. The application was a rewrite of another application that never really worked well in the first place.
  5. Every function name is chosen explicitly to provoke an emotional response in the developer, e.g. thisFunctionSucks() or callMeNow().

Although that was somewhat tongue-in-cheek, there was a certain grain of truth to it.

It seems that I’m not the only one to think this – and indeed, someone has taken the idea even further by applying systems design principles to the new health care reform legislation that the US Congress is working on at the moment.

Bruce F. Webster writes:

On the occasions where I have reviewed the actual text of major legislation, I have been struck by the parallels between legislation and software, particularly in terms of the pitfalls and issues with architecture, design, implementation, testing, and deployment. Some of the tradeoffs are even the same, such as trading off the risk of “analysis paralysis” (never moving beyond the research and analysis phase) and the risks of unintended consequences from rushing ill-formed software into production. Yet another similarity is that both software and legislation tend to leverage off of, interact with, call upon, extend, and/or replace existing software and legislation. Finally, the more complex a given system or piece of legislation is, the less likely that it will achieve the original intent.

He then goes on to talk about some “design flaws” in HR 3200 – otherwise known as the “America’s Affordable Health Choices Act of 2009.” (Brings to mind point #5 from the “Legal System as a Perl OS” quote from above, doesn’t it?)

Bruce then goes on to make a point which is basically the same as point #2 in the “Legal System as a Perl OS” quote above:

Much of HR 3200 makes piecemeal modifications to existing legislation, often with little explanation as to intent and consequences.

Or to put it another way, entire sections of HR 3200 do nothing other than override some existing legislation in some incredibly small way, which will (presumably?) have huge (and in all likelihood, unintended and unforeseen) effects – much like how adding or removing a single backslash from a regular expression can have huge (and often unintended and unforeseen) effects on its pattern-matching behavior.
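To see just how little it takes (a quick Python illustration, with a made-up snippet of text): a single backslash is the difference between matching digits and matching the literal letter “d”:

    import re

    text = "Section 1402 is amended..."

    print(re.findall(r"\d+", text))  # ['1402'] -- \d+ matches runs of digits
    print(re.findall(r"d+", text))   # ['d', 'd'] -- d+ matches the literal letter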

Bruce’s entire article (it’s the first of a 3-part series – as I write this, only parts 1 and 2 are done) is well worth reading – and in fact I highly recommend it, even for non-programmers.

Of course, if you ask me, I really think all legislators should be required to take a programming course or two – because, as I’ve said before (in my “A Programmer’s Perspective on Politics” article), laws are effectively the “operating system” of our society… and right now, the people writing our society’s “operating system” don’t seem to be particularly good programmers!!

Why Does Software Break?

An essay on “Why Software Breaks” that touches on the complexity of software and of the computer systems on which it is built – a complexity that is inherent to their flexibility, and therefore can never really be reduced or removed.

It’s only natural to wonder why, after all this time and our collective experience, we still produce buggy, brittle software that breaks and crashes. It’s also only natural to point at “software engineers,” then at the other kinds of engineers – the people who build bridges, skyscrapers, cars, planes, etc., whose creations work for years and don’t (generally) break down and crash – and ask, “why can’t we do the same thing with software?”

To answer that question, it’s important to make a distinction between the physical world of bridges, skyscrapers, planes, and such, and the “thought-stuff” world of software.

While software is, to use the words of Frederick Brooks in The Mythical Man-Month, made purely of insubstantial “thought-stuff,” it is, ultimately, made by man – and as man is fallible, so too are the things that he creates. (After all, some bridges fall down, some skyscrapers collapse/leak/shake in the wind, and some planes crash.)

There’s also the “layer” aspect to keep in mind – software may be “thought-stuff,” but it doesn’t exist purely in a vacuum. It relies upon the perfect function of millions (or billions) of tiny, often microscopic physical components, which have been engineered with great specificity and tight tolerances. A few cosmic rays (or a clumsy user pulling out a cord) can screw up the perfect balance of all these components in unimaginable ways – sort of like pulling out the main support for a bridge, or blowing out the tire of a car. (Or, perhaps like having a few large birds fly into the engine of a plane!) When these sorts of things happen, the system – be it bridge, plane, car, or computer – fails, often spectacularly.

So, it’s less accurate to think of a computer system (hardware and software together) as being like a bridge, and more accurate to think of it as being like a giant clockwork mechanism – a huge Rube Goldberg-type device – with hundreds of finely inter-meshing gears and sprockets. If just one gear pops out of place, or one sprocket cracks a tooth, the system stops working properly – perhaps just a little bit, or perhaps so much so that more gears are forced out of place, and more sprockets are broken, until the entire thing collapses in a pile of ruin.

To carry the bridge metaphor in the other direction (as it were), it might be more accurate to think of a computer system as being like a bridge that not only functions like a bridge (gets people from one side to the other), but also functions as a musical instrument capable of producing classical, jazz, and electronic/techno music; predicts the weather; washes your clothes; generates electrical power; can be quickly reconfigured into a skyscraper home for people or a hospital, as needed; can float up and down the river to a new crossing (dynamically lengthening or shortening itself as it goes, of course); and can also fly, carrying everyone on it to a new river, with new road signs that instantly match the new language and traffic patterns of the new location. It also has to do all this without disturbing the environment around it, while simultaneously accepting any impact its environment puts on it, even if such impact might cause it to function in a manner contrary to the one for which it was designed.

If you were to try to build a physical bridge to do all of these things, it would probably break in much the same ways that software does.

To use a different analogy, consider the difference between a typewriter (a machine designed to do just one thing – type words) and a computer. No one would argue that the computer is a more reliable typing instrument – after all, the typewriter is fairly simple, and because it is designed to do just one thing, it can do it well. Also, when the typewriter fails, the cause is generally immediately apparent (e.g., out of ink ribbon) and can easily be understood – and fixed – by the user.

On the other hand, the computer – while on the surface much the same as the typewriter (a keyboard on which you type words) – is infinitely more flexible. There is an almost infinite number of other things that the computer could do in addition to typing: it could play music, calculate your taxes, control millions of tiny light-producing elements to display an interactive 3D environment (or a photo of your dog), talk to you using a synthesized voice, control complex machining equipment, participate in a global network, and almost anything else you could imagine.

When you consider that, it’s no wonder that computers have so many ways in which they can break. It’s exactly because they are so flexible that they are so fragile at times – their flexibility is their greatest strength and, at the same time, their greatest weakness. Because they are so generalized, getting them to do any one specific thing involves a lot of rebuilding of concepts (we call them “metaphors” in the world of software) before any useful work can get done, never mind actually taking care of the main task at hand.

In the end, software breaks because it (and the computers on which it runs) are general-purpose machines that we ask to do an enormous number of things (some of them contrary to one another!). Even though we might only be asking for something simple on the surface (e.g., typing a few words onto the screen), in reality there are innumerable hidden complexities involved in getting a general-purpose machine to do something so specific (and, we would hope, do it well) – so it’s only natural that there will be errors, both human-induced and artifacts of the system itself.

In other words, software breaks because computers are fantastically flexible general-purpose machines that, by their very nature, require complexity in order to do anything specific – and no layers of abstraction, big-M Methodologies, frameworks, or whatever else we come up with are going to change that simple and immutable fact.