Debug Documentation: Signal vs. Noise Edition

Debugging an issue can generate a lot of content. This technique helps you keep track of and find the parts that really matter.

Debugging a problem, especially a logic error, can generate a lot of content:

  • Screenshots
  • Code samples
  • Client emails
  • Developer commentary

Inevitably, whenever you have a lot of content, you also have a lot of noise.  Scrolling through pages of content can make it difficult to spot the most important part of the case: the expected behavior versus the actual behavior.  This technique makes both things stand out in a clear, easily identifiable way.

It all comes down to two words: GOOD vs. BAD.

Nature of the Problem

My team and I use FogBugz to record, document, and resolve bugs.  

It doesn't really matter which bug tracking database you use (GitHub, FogBugz, Microsoft Access, Jira, etc.) so long as you are using one.  Don't take my word for it; it's question number 4 on the venerable Joel Test:

4. Do you have a bug database?
... If you are developing code, even on a team of one, without an organized database listing all known bugs in the code, you are going to ship low quality code. ...

Most commercial bug tracking databases make it easy to include all manner of content in the case file.  And when you're working on a particularly thorny issue, that content can grow quickly.  It can grow so fast that the actual and expected behavior can easily get lost.

Ideally, the actual and expected behavior will be documented at the very beginning of the case.  But in some instances (again, especially with logic errors), a big part of the problem is trying to either:

  • A.) Figure out what the expected behavior should be (i.e., the GOOD)
  • B.) Reliably reproduce the actual behavior (i.e., the BAD)

Because we don't operate in an ideal world, either or both of the above may not reveal themselves until well into your case's documentation.  

Signal vs. Noise

This is a classic signal vs. noise problem:

Signal: information that conveys meaning
Noise: items of no value that obscure useful information

In this example, the "signal" is limited to:

  • GOOD: The expected behavior
  • BAD: The actual behavior

The "noise" is everything else:

It's a bit unfair to say that the "noise" in this case is "of no value."  

After all, that documentation is solid gold when an issue re-occurs.  Or when you are consulting with a colleague on a problem and you need to get up to speed quickly on where they're at.  Or when your colleague is on vacation (or no longer works at the company) and you are tasked with figuring out why their "fix" actually caused other problems.

But when it comes to focusing on the problem at hand, all those other comments only distract from what you really want to know:

  • What is the expected behavior? Good
  • What is the actual behavior? Bad

GOOD vs. BAD

As I'm sure you've figured out by now, the technique is little more than using the words "GOOD" or "BAD" to identify the expected behavior or the actual behavior, respectively.

A couple notes:

  • The word "GOOD" or "BAD" must be in ALL CAPS so that it stands out
  • If possible, use bold formatting for "GOOD" and "BAD"
  • But don't use italics, as that makes "GOOD" and "BAD" blend in
  • Put GOOD and BAD at the far left of the line, where they are easiest to scan for
  • Make "GOOD" and "BAD" part of a header if possible

With these tips in mind, scroll back through the rest of the article looking for all the instances of "good" and "bad."  Which ones stand out?  Which ones blend in?
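
The same convention carries over into the code samples you attach to a case.  Here is a minimal sketch in Python (the function, the bug, and the numbers are all hypothetical) of a repro script annotated with GOOD and BAD:

  # Minimal repro script attached to the case (hypothetical example).
  # GOOD: calculate_total([19.99, 0.01]) should return 20.00
  # BAD:  calculate_total([19.99, 0.01]) currently returns 19.99

  def calculate_total(line_items):
      # Simplified stand-in for the real calculation under investigation.
      total = 0.0
      for amount in line_items[:-1]:  # Off-by-one: the last line item gets dropped.
          total += amount
      return round(total, 2)

  if __name__ == "__main__":
      print("GOOD (expected): 20.00")
      print("BAD  (actual):  ", calculate_total([19.99, 0.01]))

Whoever picks up the case can run the script and see the GOOD and BAD values side by side, no archaeology required.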

GOOD and BAD and Nothing Else

Finally, I strongly recommend you specifically use the words "GOOD" and "BAD."

Not "CORRECT" and "INCORRECT."  

Not "ACTUAL" and "EXPECTED."  

Not "PASS" and "FAIL."

Don't use different words for different situations.  

Don't worry about whether "GOOD" and "BAD" make grammatical sense where you are using them.  

In fact, don't think about it at all.

Just use "GOOD" and "BAD" because:

  1. Decision fatigue is real
  2. Naming is hard
  3. Consistency matters
  4. Programmers are lazy
  5. Keep it simple, stupid

You'll thank me later.


External references

The Joel Test: 12 Steps to Better Code

Cover image created with Microsoft Designer

All original code samples by Mike Wolfe are licensed under CC BY 4.0