ChatGPT: Genius or Fraud?
There's an ongoing debate over whether developers can trust the current crop of LLMs. The problem is people are asking the wrong question.
One of the truisms of software development is that code is harder to read than it is to write.
Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?
– Brian Kernighan, "The Elements of Programming Style, Second Edition"
The opposite is true for ChatGPT and other LLMs.
ChatGPT: Part Mentor, Part Intern
With the sudden rise of AI coding "copilots", there's an ongoing debate over whether developers can trust the current crop of LLMs.
That debate fundamentally misses the point, though. Whether LLMs can be trusted is too broad a question. The better question to ask is what LLMs can be trusted to do. And, at least with ChatGPT 3.5, the answer is clear.
ChatGPT is better at reading code than writing it.
ChatGPT, the Mentor
Have some code that you are struggling to understand? Feed it to ChatGPT and ask it for an explanation.
ChatGPT is great at this sort of task because explaining code requires no creativity, only pattern recognition. And pattern recognition is the bread and butter of an LLM.
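For example, here is the kind of terse snippet you might paste in along with the prompt "What does this function do?" (a hypothetical example; the function and variable names are arbitrary):

```python
# A deliberately terse function you might ask ChatGPT to explain.
def f(xs):
    return [x for i, x in enumerate(xs) if x not in xs[:i]]
```

ChatGPT will typically spot the pattern right away: the function removes duplicates from a list while preserving the original order of the elements.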
ChatGPT, the Intern
Have some code that you already know how to write but don't want to type out by hand? Ask ChatGPT to write it instead.
The big caveat is that you must know what right looks like. Depending on the task, ChatGPT will get you 80-90% of the way there. If it's a programming task that requires writing a lot of boilerplate code, such as building a class module with property getters and setters (sketched below), it can save you a fair bit of time.
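To make that concrete, here is a minimal sketch of the kind of boilerplate you might delegate. "Class module" suggests VBA, but the same idea in Python looks like this (the Person class and its fields are hypothetical):

```python
# Boilerplate worth delegating to ChatGPT: private fields
# exposed through property getters and setters.
class Person:
    def __init__(self, name: str, age: int):
        self._name = name
        self._age = age

    @property
    def name(self) -> str:
        return self._name

    @name.setter
    def name(self, value: str) -> None:
        self._name = value

    @property
    def age(self) -> int:
        return self._age

    @age.setter
    def age(self, value: int) -> None:
        # A simple guard: getters and setters earn their keep
        # when they enforce invariants like this one.
        if value < 0:
            raise ValueError("age cannot be negative")
        self._age = value
```

None of this is hard to write; it's just tedious. Describe the fields you want, check the output against what you know right looks like, and supply the last 10-20% yourself.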
But, please, don't ask it to do your thinking for you. It's not there yet.
Cover image created with Microsoft Designer