As programmers, we are in a perpetual state of learning. Technology moves so fast that new languages and techniques are developed faster than we can learn them. So I always find it interesting to reflect on what the past has taught us, and which changes have been for the better or worse. People often say that if you don't look at code you wrote 6 months ago and feel sick, then you haven't grown as a programmer. While this is true, there are many larger truths and lessons that can be learned from a long career in programming.
This is the exact topic of a post I read recently by John Graham-Cumming. Granted, John has been programming for over 30 years – my measly 7 years of serious programming (and 5 of messing around in VB before that) don't stack up to the wealth of knowledge and experience that 30 can give you. However, I feel some of his sentiments are somewhat misplaced, depending on the context of the programming that you're doing.
Not all programmers have the luxury of being honest with themselves while developing something. John uses the "classic example" of inserting calls or printf statements that magically make bugs and crashes disappear, and argues that programmers should endeavour to understand what's going on with their code. He encourages rigor in development, but it's not always possible to be rigorous. Does that make you a bad programmer? Does lack of rigor make you bad at your craft?
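To make the "classic example" concrete: the act of observing a program really can change its behavior. A deterministic Python analogue (my own illustration, not John's example) is a debug print that consumes a one-shot iterator, so the diagnostic itself alters the result it was meant to inspect:

```python
def total(values):
    """Sum an iterable of numbers."""
    return sum(values)

# Without any diagnostics, the function behaves as intended.
clean = iter([1, 2, 3])
print(total(clean))            # 6

# Now insert a "harmless" debug print to inspect the input first.
buggy = iter([1, 2, 3])
print("debug:", list(buggy))   # list() silently consumes the iterator
print(total(buggy))            # 0 -- observing the data changed the result
```

A printf hiding a memory or timing bug in C works on the same principle, just less predictably: the diagnostic perturbs the very state you're trying to observe.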
As programmers we often work with large, unknown, poorly documented libraries and code bases. There is so much undocumented behavior in so many languages and libraries that we simply can't endeavour to understand it all. A couple of great examples come to mind – one minor and one major. A colleague of mine was recently integrating a library that queries a proprietary database. The format for matching a string in the query was totally undocumented – it required single quote marks around the string itself when the string was added to the query builder. Discovering this was exactly the kind of "trial and error" or "shotgun" development strategy that John is against. In an ideal world, every API would be documented and we could easily understand every interaction that every library has with our application. But in reality, we run into issues all the time, often as a result of poor documentation or implementations that we have very little control over, and there is only so much rigor that can be applied.
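A sketch of the quirk my colleague hit, using a hypothetical `QueryBuilder` class of my own invention (the real library and its API are proprietary and not named in this post):

```python
class QueryBuilder:
    """Hypothetical stand-in for the undocumented proprietary library."""

    def __init__(self):
        self.clauses = []

    def match(self, field, value):
        # Undocumented behavior: the value is pasted into the query as a
        # raw token, so a bare string is treated as an identifier rather
        # than a string literal.
        self.clauses.append(f"{field} = {value}")
        return self

    def build(self):
        return " AND ".join(self.clauses)

# What you'd naturally write -- and what silently matches nothing:
print(QueryBuilder().match("name", "Alice").build())
# name = Alice

# What trial and error eventually reveals the engine expects:
print(QueryBuilder().match("name", "'Alice'").build())
# name = 'Alice'
```

Nothing in the API's signature hints at the difference; only shotgun experimentation against the live database surfaces it.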
A second and more significant example was a fix I recently integrated into the game we're currently working on. We're using a 3rd party game engine that's very broad, and many parts of the engine are unfamiliar to us. This particular fix ended up causing a crash in a completely unrelated area – code that I hadn't touched and didn't really have any idea what it did. This matches the "classic example" John describes in his post. Is it realistic for me to endeavour to completely understand code that I haven't written, that I'll likely never need to look at again, when I can fix it with a one-line code change, while working towards a deadline in a production environment? If you think the answer is yes, where do you propose we draw the line?
Many programmers advocate for the same thing John does in this respect. But I have to question this advice in a practical sense, and often wonder what scope and scale of projects these developers have worked on in order for them to truly believe the advice they are giving. When working with codebases that are hundreds of thousands, or even millions, of lines of code, honesty cannot always be the best policy. The best policy often has to be doing what needs to be done for a product to ship, even if that means being dishonest with yourself. Ending up with a great product seems like the best policy to me – even if it's at odds with being honest with yourself.