How to Tell the Technical Abilities of a User Without Asking

How do you assess the technical abilities of someone who might not even know how to accurately assess themselves? By making the garden weed itself.

"Everybody lies."
- Dr. Gregory House

TV doctor Gregory House shares his cynical credo with a group of medical students:

"It's a basic truth of the human condition, that everybody lies.  The only variable is about what."

House refers to this line regularly throughout the series to make the point that you can never trust what someone tells you, especially when it comes to their medical history.

Now, I'm not a medical doctor.  Or any kind of doctor.  I don't play one on TV.  I didn't even stay at a Holiday Inn Express last night.

But I have seen a similar "everyone lies" dynamic play out within my own area of expertise: self-assessed technical abilities.

Lies, Damn Lies, and the Dunning-Kruger Effect

To be clear, most people are not really lying when it comes to self-assessing their technical abilities.

Rather, they tend to fall victim to a form of the Dunning-Kruger effect:

"The Dunning–Kruger effect is defined as the tendency of people with low ability in a specific area to give overly positive assessments of this ability. ...
But some theorists do not restrict it to the bias of people with low skill, also discussing the reverse effect, i.e., the tendency of highly skilled people to underestimate their abilities relative to the abilities of others."

What this means in practice is that if you ask a user whether they have the ability to perform some technically challenging task, their answer may be useless.

The answer (whether positive or negative) could be useless for any number of reasons, including:

  • Misplaced over-confidence in their own abilities (the Dunning-Kruger effect)
  • Misunderstanding of the scope or nature of the challenge
  • Misplaced under-confidence in their own abilities (the reverse Dunning-Kruger effect)

What's more, the presence of the first reason can further exacerbate the second.

So, if you can't trust a user to answer such a question honestly, how can you gather that information another way?

To take the advice of Freakonomics authors Steven Levitt and Stephen Dubner, you make the garden weed itself.

David Lee Roth and the Infamous M&Ms Clause

In the Freakonomics podcast episode, "What Do King Solomon and David Lee Roth Have in Common?" host Stephen Dubner interviews a raft of individuals who worked for and with the rock band Van Halen during their (potentially literally) ground-breaking live shows.  

The audio and full transcript of the episode are available at the link above.  However, this Quora answer from Brian Hepler provides a good synopsis of the Van Halen story.  Here's an excerpt from Hepler's answer:

One of the rumors about Van Halen in particular was that the band demanded that there be a bowl of M&M candies on the refreshment table with no brown M&Ms in it and this was written into the contract that stadiums had to sign in order to book the band. Sounds like a ridiculous request, right?

... Van Halen at the time had the largest traveling road show in the USA. ... There were serious practical considerations as to whether or not a venue could support a Van Halen concert. So the contract was written to include electrical & structural requirements. The thing is, a lot of venues would not live up to their end of the contract and the consequences could be severe. ...

Van Halen management put the clause about the M&Ms into the middle of their contract, surrounded by other technical requirements. When the band & crew arrived at a venue, they would look for the M&Ms. If the M&Ms were not there, or they had the accursed brown M&Ms present, the band had a big red flag that someone had decided to skimp on their responsibilities. And if the venue was not willing to live up [to] the M&M clause, then there was a good chance that there was another clause - one that mattered - that was also given little consideration.

That is the self-weeding garden approach: rather than go through a long process where you have to verify every little detail with another party, devise a way for the other party to self-identify that they are a problem.

Making the Garden Weed Itself

Let's bring this back to the software development world.

Like many Access developers, we provide business consulting in addition to pure software development.  Part of that responsibility is understanding when it makes sense to program an automated solution to a task, and when it makes sense to let the client take care of the process manually instead. Depending on the task, that decision can hinge on the client's own technical abilities.

For example, let's say we have a prepackaged Access application.  

Older versions of the application use a Microsoft Access backend, while newer versions use a SQL Server backend.  Migrating from one backend to the other is relatively straightforward, though it does involve many steps that require careful attention to detail.  It's more complicated than complex: there are lots of steps, but each one is predictable.
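To give a flavor of the kind of detail work involved, here is a rough sketch of just one of those steps: after the data has been moved to SQL Server, the Access front-end's linked tables need to be repointed at the new backend.  This is a hypothetical example, not the actual migration procedure; the driver, server, and database names are placeholders you would replace with your own.

```vba
' Hypothetical post-migration step: relink the front-end's ODBC-linked
' tables to the new SQL Server backend using a DSN-less connection string.
' Server and database names below are placeholders.
Sub RelinkTablesToSqlServer()
    Const ConnStr As String = _
        "ODBC;Driver={ODBC Driver 17 for SQL Server};" & _
        "Server=MyServer;Database=MyDatabase;Trusted_Connection=Yes;"

    Dim tdf As DAO.TableDef
    For Each tdf In CurrentDb.TableDefs
        ' Only touch tables that are already ODBC links
        If tdf.Connect Like "ODBC*" Then
            tdf.Connect = ConnStr
            tdf.RefreshLink
        End If
    Next tdf
End Sub
```

And that's only the relinking; the data itself would typically be moved first with a tool like SSMA, followed by testing each form, query, and report against the new backend.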

It's the sort of thing that requires a minimum level of technical competency.

That raises an obvious question: how do you accurately gauge whether a particular user possesses this minimum competency?  You could ask them, but we've already established that can be a dicey proposition.  Plus, you risk coming off as condescending or patronizing.  

A better solution? Use technical jargon to let the garden weed itself.

When "Use Plain English" Is Counterproductive

As business-consulting software developers, we must constantly remind ourselves to use plain English when communicating with our clients.

That's a critically important concept.  I like to remind my clients that while I may be the subject matter expert when it comes to software development, they are the subject matter experts when it comes to their businesses.  If I can't speak to them about software in terms they understand, they won't be able to make good decisions for their businesses.

But if I'm trying to assess whether they can accomplish some technical task, using plain English can be counterproductive.  

It may lure them into thinking the technical task is easier than it will actually be.  Or they may realize they are about to get in over their heads, but be too embarrassed or prideful to admit it, so they fake their understanding of the problem.  Or they may misunderstand the nature of the task altogether, and you'll have no way to know.

However, if you use task-specific technical jargon, you will easily identify those who are not up to the task, without offending those who are.

What This Looks Like in Practice

If I were describing the process of migrating from an Access backend to a SQL Server backend to a typical client, I might use the term "programming tools":

We will be using a combination of programming tools to convert your data from an Access file to a SQL Server database.

However, if I were trying to assess whether a particular user could do the same task themselves, I might word an email to them like this:

You will need to use a combination of SSMA and SSMS to complete the task.

If the response I get to that email is, "What is SSMS?" then I know the user is probably in over their head, and I can proceed with tactfully (but firmly) recommending that maybe they sit this one out.

Cover image generated by DALL-E-3.

All original code samples by Mike Wolfe are licensed under CC BY 4.0