Into the modern coder cockpit: my take on GitHub Copilot

Despite covering some neat ChatGPT feats a few months ago, I didn't take the next logical step of trying Copilot, GitHub's AI coding assistant. My interest was sparked last week, however, when a coworker of mine shared a screenshot of an AI-generated test suite. The prompt was reasonably simple and the generated test cases were relevant: enough to make me want to sign up for Copilot's free trial and contribute to the advent of the singularity.
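To give a flavor of what impressed me, here is a hypothetical reconstruction of that scenario (the function, the test names, and the cases are my own invention, not my coworker's actual screenshot):

```python
import unittest

# A small utility to be tested...
def slugify(title: str) -> str:
    """Turn a title into a URL-friendly slug."""
    return "-".join(title.lower().split())

# ...and a one-line comment serving as the prompt:
# Tests for slugify
class TestSlugify(unittest.TestCase):
    # Cases like these are representative of what Copilot proposes,
    # not a verbatim capture of its output.
    def test_lowercases_input(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("a   b"), "a-b")

    def test_empty_string(self):
        self.assertEqual(slugify(""), "")

if __name__ == "__main__":
    unittest.main()
```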

I will avoid covering in detail what Copilot can or can't do: others have already beaten me to it by a long shot. The interesting part to me is how I can come to terms with this new technology. In this post, I will also use GitHub Copilot as a benchmark to infer what an "AI for coding" can do today, as I think it represents the state of the art in the field.

The real role of AI in programming jobs

Many questions spontaneously come to mind when thinking about AI in the software domain. Is it useful? Is it dangerous? Will it take our jobs? The most important thing is not to corner ourselves into a limiting mindset, letting our emotions forge some weird narrative around AI - either as neo-Luddites or as utopian techno-enthusiasts. Instead, I find other types of questions more interesting, and much easier to answer: questions that don't call for a "yes" or "no" reply, but usually start with "what", "when", and "where". For instance: how can I integrate Copilot into my day-to-day work? What tasks does it facilitate, and what personal skills does it demand?

The value of AI in our work is mostly clear to software engineers. Still, I understand that people from other walks of life may grossly misjudge the possibilities, especially as many content creators and journalists like to attract eyeballs by painting catastrophic scenarios (or by promising that building software is now accessible to anyone). Yes, it's true that tools based on GPT-3 - and, more recently, GPT-4 - can write code, and yes, the generated code will often compile and do what is expected of it. However, this doesn't necessarily represent an immediate threat to my job security, since writing code isn't what software engineers really do. Let me elaborate.

Our job is not about typing code into a text editor. We are not stenographers. The job is about deciding what to code. In fact, whenever I am designing software and finally start typing something into my IDE, I already hold in my mind what the next snippet of code will look like. At this point, the AI is just connecting the dots, instantly generating the method or data structure I intended, minus the involuntary typos and other obvious mistakes. Even if AI could contribute more than 80% of some code base, providing the appropriate prompt (the initial 20%) is essential, as it carries the information Copilot needs to operate correctly. And you won't get that initial prompt without an engineer behind the keyboard.
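As a concrete (made-up) illustration of what "connecting the dots" means: often all I really type is a descriptive name, signature, and docstring, and the body that follows is the kind of completion the AI fills in, matching the intent already encoded in those names and types:

```python
from datetime import date, timedelta

# What the engineer types: the name, signature, and docstring.
# What the AI completes: a body consistent with that intent.
def business_days_between(start: date, end: date) -> int:
    """Count weekdays (Mon-Fri) in the half-open range [start, end)."""
    days = 0
    current = start
    while current < end:
        if current.weekday() < 5:  # 0-4 map to Monday..Friday
            days += 1
        current += timedelta(days=1)
    return days
```

Deciding that the range is half-open, that weekends don't count, and that the function should exist at all: that is the engineering part, and it lives entirely in the prompt.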

Copilot... or Autopilot?

I believe a good comparison can be made with airplane autopilots. An AFCS (Automatic Flight Control System) can perform a variety of flight operations on its own: adjusting the flight path, approaching an airport, or even landing. Still, I wouldn't dream of jumping on a commercial flight without any human supervision. The fact that the computer is able to control the plane doesn't mean the pilots sit idle on the flight deck. There is still plenty to do: checking the weather ahead, communicating with air traffic control, planning alternate routes, and, most importantly, supervising the autopilot and taking the reins if need be. In that sense, pilots are still the ones flying the plane.

In a similar vein, AI doesn't develop software; engineers do. AI is a tool. Engineers are trained to streamline their workflow through automation, but they always monitor what their tools are doing. If a tool misbehaves, the engineer jumps in and redresses the situation. This is the mindset I bring to Copilot: I will gladly delegate the menial tasks to it but, in the meantime, I will stay vigilant about the output it produces.

Conclusion

As you may know, the model underlying Copilot is OpenAI's Codex, a descendant of GPT-3 that predicts the most likely sequence of tokens to generate next. While the generated code is usually syntactically correct, the tool has no real notion of what the specs are or whether the produced code is valid. A possible countermeasure is helping Copilot with some prompt engineering: for instance, writing a comment stating what the next snippet should do is a powerful technique to guide the AI. Better yet, sometimes the comment itself can be autocompleted after typing the first few words.
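In practice, the technique looks something like this (the comment is what I type; the function below is representative of the kind of completion Copilot proposes, with names of my own invention rather than a verbatim capture):

```python
import re

# Parse a duration string like "1h30m" or "45s" into total seconds.
def parse_duration(text: str) -> int:
    units = {"h": 3600, "m": 60, "s": 1}
    total = 0
    for value, unit in re.findall(r"(\d+)([hms])", text):
        total += int(value) * units[unit]
    return total
```

The more specific the comment (units, edge cases, expected types), the closer the first suggestion tends to land to what I had in mind.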

However, it must be said that, even under the most ideal conditions, Copilot doesn't always offer a helpful suggestion, or any suggestion at all, for that matter. While Copilot is pretty amazing, I believe it is still a highly imperfect coding tool. For now, I haven't decided whether to fully incorporate it into my toolset: the hit-or-miss nature of its suggestions can sometimes be a hindrance. In any case, I'd suggest everyone get their feet wet and at least try it once: think of it as a utility like your syntax highlighter or your favorite IDE's autocompletion feature.

What's your opinion on GitHub Copilot? Have you incorporated it into your daily job, or was it a hard pass for you? Don't be shy and let me know in the comments!