Red Flag

A red flag. It’s a warning. An alert. An indication of danger. A notification that something is amiss. There are red flags in the code we work on and the processes we follow. But do we see them? I missed a red flag recently. It happened like this:

I had this curious bug I was trying to fix. The behavior suggested that it was most likely corrupted or uninitialized memory. That’s what intuition born of experience was telling me, anyway: randomly timed incorrect behavior in code that was processing a static stream of data. The input data was constant from one run to the next, the bits flowing through the code always the same, but the end result varied pretty much randomly in where and when it failed.

This suggested to me that we were processing someone else’s data or uninitialized data (which is really just someone else’s data from within the same process).

This body of C++ code was unfamiliar to me, so I picked the brains of a coworker who had been around a while. While we discussed the bug I found myself looking over his shoulder as he scrolled through some of the code in question, and he commented on a variable assignment that wasn’t used later in the function.

It was one of those pfft moments. “Been there, done that, seen it a million times.” A thoughtless assignment statement that someone typed in but then lost their train of thought. It looked something like this:

void fn()
{
    size_t cbBase;
    void* pvData;

    if (get_value("base", &cbBase, &pvData))
    {
        store_data("base", cbBase, pvData);

        size_t cbExtended;
        void* pvDataExtended;

        if (get_value("extended", &cbExtended, &pvDataExtended))
        {
            store_data("extended", cbExtended, pvDataExtended);
            cbBase = cbExtended;
        }
    }
}

And quickly we moved on to discuss what might really be wrong with the code. And that quickly I’d dismissed the red flag.

In a world where most of the code I interact with is not my own, where dozens of changes wrought by numerous hands accumulate over a period of years, can I really pass off a small, unexplained assignment like the one above as an innocuous error? Any moderately complex code base will transmogrify over the years. The initial errors may indeed be simple coding mistakes that we wish had been caught in code review, but over time source code changes not randomly but with specific intent. And with any luck you have both bug reports and a source code revision system you can rely on to find that intent.

The red flag, of course, was the meaningless assignment statement. More than a day later, as I waded through diffs of check-ins from ages past, I ran across the rationale for the assignment. In a previous check-in an attempt had been made to correct some bad behavior. An earlier version of the code looked more like this:

void fn()
{
    size_t cbBase;
    void* pvData;

    if (get_value("base", &cbBase, &pvData))
    {
        store_data("base", cbBase, pvData);

        size_t cbExtended;
        void* pvDataExtended;

        if (get_value("extended", &cbExtended, &pvDataExtended))
        {
            store_data("extended", cbExtended, pvDataExtended);
            cbBase = cbExtended;
        }

        if (cbBase < MINIMUM_EXPECTED_DATA_SIZE)
        {
            backfill_missing_extended_data();
        }
    }
}

Ah, the unexplained assignment had been orphaned by that previous check-in. In an effort to correct a particular problem a developer had removed the code that used the value but left the now-pointless assignment behind. Interestingly, and partly because I like a tidy ending, the bug that developer was fixing was strongly related to the bug I was pursuing. The original author’s intent for the assignment, it turns out, was probably not

    cbBase = cbExtended;

but

    cbBase += cbExtended;

I reintroduced the missing code and patched up the assignment to find that, very conveniently, my bug was fixed as well. In the end, yes, it was incorrectly initialized data. It just wasn’t where I expected to look.
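
For the curious, the repaired function looked roughly like the sketch below. It uses the same simplified names as the snippets above rather than the literal production code; the essential changes are the accumulating assignment and the reinstated size check.

void fn()
{
    size_t cbBase;
    void* pvData;

    if (get_value("base", &cbBase, &pvData))
    {
        store_data("base", cbBase, pvData);

        size_t cbExtended;
        void* pvDataExtended;

        if (get_value("extended", &cbExtended, &pvDataExtended))
        {
            store_data("extended", cbExtended, pvDataExtended);
            // Accumulate rather than overwrite: cbBase now reflects the
            // total amount of data stored so far.
            cbBase += cbExtended;
        }

        // The check removed by the earlier fix, reinstated: if the combined
        // data falls short of the expected size, backfill the missing
        // extended data.
        if (cbBase < MINIMUM_EXPECTED_DATA_SIZE)
        {
            backfill_missing_extended_data();
        }
    }
}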

Funny thing, those red flags. They’re hard to see. Where have you seen them lately? (Or not?)


On Writing Specs

[This post is reproduced from a previous blog of mine, originally published August 20, 2005. I'm moving it here for good measure.]

I recently re-read Joel Spolsky’s series on Painless Functional Specifications. It’s a few years old now, but is still a pleasant reminder of what I think are the best reasons to write specs:

  1. Writing a spec forces us to think about what the software does. We have the opportunity to declare what the software does, how it does it, and why it does it. Should we need to change the design, it is preferable to do so up front, while we’re spending cycles thinking about these things. Why? Because changing the design before construction begins is much less expensive than changing actual code, whether it be during the construction of the product, during testing, or after release. The easiest, cheapest, and certainly funnest time to fix design errors is before we’ve spent time and money constructing software based on a faulty idea of what the product does.
  2. Writing a spec provides us with a means of communicating about and refining what the product should be. We can share this document with product management, development, quality assurance, and even marketing. With input from all of the above, we can come to an agreement that the software described in the specification is, all at once, a) useful to the intended users, b) saleable to those users, c) feasible to construct, and d) testable. In order to satisfy these goals we can discuss the spec and adjust it accordingly. Change and compromise are expected.
  3. Writing a spec gives us a recorded document from which we can derive a list of things to do while we construct the product. We could attach dates to the items in that list, do some resource balancing, and call it a schedule, if we’re the sort who create schedules. Managing software construction and testing tends to be much easier when we have a list of tasks that need to be completed in order to deliver the product.

I realize that not everybody believes in doing some or all “design” work up front, but my experience tells me that every project of any complexity needs some sort of spec before the construction work starts in earnest. Call it what you will and record it in whatever form you wish, but do yourself the favor of writing a functional specification.

Code Never Written?

“Code Never Written” is a mindset. It stems from an off-the-cuff “rule” I first spouted all too long ago:

Code never written never needs maintenance.

Of course, code never written never needs

  • the approval of the architectural oversight committee
  • design artifacts
  • peer code review
  • security review
  • functional testing
  • debugging
  • performance testing
  • optimization
  • documentation
  • user education
  • patching
  • or retirement.

Not to mention that it never brings grief to its author or to the hapless victims of his code. You can see what an advantage it is not to code something, if that’s a viable option.

“Code Never Written” is not a design methodology. It’s perfectly acceptable to find, in your effort to produce a useful or even saleable body of software, that some code must be written. There’s no shame in that.