Coding has never been an easy task, but advances in technology have made it more manageable. The introduction of GitHub Copilot has brought a significant change in the way we code.
“GitHub Copilot is an AI pair programmer that helps you write code faster and with less work. It draws context from comments and code to suggest individual lines and whole functions instantly. GitHub Copilot is powered by OpenAI Codex, a generative pretrained language model created by OpenAI.”
This is according to the GitHub website. It seems like a perfect tool for everyone who codes, but… can we use it with full trust in its intelligence and power? I mean it’s artificial intelligence, but it’s still intelligence, right?
I started using Copilot a few weeks ago when the company I code for wanted to try it out. The list of benefits presented seemed endless, so I had no objections to using it. I was genuinely curious about it and couldn't wait to try it out.
Finally, I gained access to Copilot through my organisation, and all I needed to do was add an extension to Webstorm. I was so excited to start using it and thought to myself, “Now my work life is going to be so much easier!”
"With great power comes great responsibility” is a phrase that many people associate with the Spider-Man comic book series. As you may know, it was famously attributed to Peter Parker's Uncle Ben. However, I have come to realise that this statement is applicable not only to those with great spider skills but also to those who have access to powerful tools like GitHub Copilot.
At first, I wanted to explore how this tool functions and what its capabilities are. Additionally, Copilot has been a topic of discussion for the past few months, so I wanted to form my own opinion on it. Alright then, let's get started!
The first thing that immediately caught my attention was the auto-completion feature. Here's my attempt at creating an example array of odd numbers.
Oh, I almost forgot to mention that the pieces of text with the blue-ish colour are hints provided by Copilot.
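Since screenshots don't translate well to text, here is a reconstruction of that example. I typed the variable name, and the array literal is the kind of suggestion Copilot produced (the exact values here are illustrative):

```typescript
// I typed "const oddNumbers" and Copilot suggested the rest:
const oddNumbers: number[] = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19];

// Follow-up transformations were completed just as smoothly:
const doubled = oddNumbers.map((n) => n * 2);
const underTen = oddNumbers.filter((n) => n < 10);
```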
As you can see, the process was seamless. In many other examples where I tested Copilot's auto-completion, it worked quite effectively, including filtering and mapping arrays.
Auto-completion can be incredibly helpful for documentation purposes as well. For instance, if you have a component and start typing comments just above the type of a prop, the auto-complete feature can effortlessly add the remainder of your comment based on the context in which the type is used. This can save a significant amount of time when it comes to ensuring that the code is well-documented for others to understand and use.
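To make the documentation scenario concrete, here is a small sketch of what I mean. The component and prop names are made up for illustration; the doc comments above each prop are the part Copilot completes from context:

```typescript
// Start typing "/** The text..." above a prop and Copilot
// finishes the comment based on how the prop is used.
interface ButtonProps {
  /** The text displayed inside the button. */
  label: string;
  /** Called when the user clicks the button. */
  onClick: () => void;
}

const saveButtonProps: ButtonProps = {
  label: "Save",
  onClick: () => {},
};
```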
Auto-completion is undoubtedly an amazing feature, but we can also add regular comments that Copilot will interpret. Building on our first example with the odd numbers, let's mix things up a bit.
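Here is a reconstruction of that comment-driven workflow. The comment is what I typed; the function body is the sort of suggestion Copilot produced (rebuilt from memory, so treat it as a sketch rather than a transcript):

```typescript
// sum all odd numbers between 1 and 100
function sumOddNumbers(): number {
  let sum = 0;
  for (let i = 1; i <= 100; i += 2) {
    sum += i;
  }
  return sum;
}
```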
The previous example was relatively straightforward, so let's try something a bit more challenging. How about fetching data, filtering it, and displaying it in a table? Let's give it a go.
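A reconstruction of the sort of code Copilot generated for this task, shown here as plain TypeScript rather than the React component from my screenshots. The endpoint URL and the `User` shape are placeholders, not a real API:

```typescript
interface User {
  id: number;
  name: string;
  isActive: boolean;
}

// Fetch users from a (hypothetical) endpoint and keep only active ones.
async function fetchActiveUsers(url: string): Promise<User[]> {
  const response = await fetch(url);
  const users: User[] = await response.json();
  return users.filter((user) => user.isActive);
}

// Render the filtered rows as a simple HTML table string.
function renderTable(users: User[]): string {
  const rows = users
    .map((u) => `<tr><td>${u.id}</td><td>${u.name}</td></tr>`)
    .join("");
  return `<table><thead><tr><th>ID</th><th>Name</th></tr></thead><tbody>${rows}</tbody></table>`;
}
```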
Consider how long it would have taken you to write the code from scratch. With Copilot, we got a result in just about 5 seconds! Of course, it's not perfect, and adding types is necessary, but it's an excellent starting point for future work.
To be honest, I believe Copilot's ability to assist with writing tests is its best feature. First, let's create a function that will enable us to sort objects based on their values, and then we'll write some unit tests for it.
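Here is a sketch of that helper, followed by the style of checks Copilot suggested. The original tests were Jest-style; I've written them as plain assertions so the snippet is self-contained with no test runner assumed:

```typescript
// Sort an array of objects by the value under a given key,
// without mutating the input array.
function sortByKey<T>(items: T[], key: keyof T): T[] {
  return [...items].sort((a, b) =>
    a[key] < b[key] ? -1 : a[key] > b[key] ? 1 : 0
  );
}

const people = [
  { name: "Carol", age: 35 },
  { name: "Alice", age: 28 },
  { name: "Bob", age: 31 },
];

const byAge = sortByKey(people, "age");
// sorted ascending by age
console.assert(byAge[0].name === "Alice");
// original array left untouched
console.assert(people[0].name === "Carol");
```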
What just happened? We completed this task (I admit, writing unit tests can be as tedious as cleaning my apartment) in a matter of seconds. This made me think, "Wow, this tool is simply perfect."
After allowing the initial excitement to settle down, I spent a few more weeks working with the tool and came to a realisation. It's not a "perfect" tool in every sense. What I have in mind are not only some technical issues but also the concept of the tool itself, which might be unsettling for certain groups of people.
AI is not perfect. That's a fact. It keeps moving toward perfection, but it's not quite there yet. While using Copilot on a complex project with thousands of files, I encountered some problems.
First of all, context can be an issue. Sometimes when working with a complex file containing hundreds of lines of code, Copilot's autocomplete suggestions can be overwhelming. The tool constantly tries to use the current file, directory, or project context you're working in. If you’re a quick tab clicker, this might cause some frustration.
Why? If you're used to your editor's basic autocomplete, you can imagine how often you'll now accept Copilot's suggestions out of habit. Unless you strictly specify what you have in mind, the generated code might not be accurate. In these situations, reaching for "Undo" becomes necessary, which can be annoying.
Secondly, there are instances where Copilot's hints can be irrelevant and unhelpful. This is the opposite of the issue mentioned above, but it's worth noting. Here's an example:
I attempted to import a component and received a hint that it was from 'react-bootstrap', a package that is not included in my project. I'm perplexed. What’s going on?!
That's not the only issue with the hints. Sometimes they just don't work. For instance, I tried to retrieve weather data from an API, but all I got were the first two lines of code. After that, nothing happened.
The Hard Work
Consider how many hours you've spent scouring Stack Overflow in search of anything that might aid in resolving a programming issue. Being able to search for information is essential in the programming profession. Now, imagine if this was never necessary and you began using Copilot from day one of learning to code.
In my opinion, this could limit your ability to learn how to locate valuable information independently. With Copilot, anything can be presented to you with minimal effort. However, doesn't the struggle of searching for solutions contribute to developing better programmers?
None, or almost none, of the teams working on projects are made up entirely of very experienced developers. All of us have been junior developers before, all of us were on our very first project at some point in our careers, and all of us used to make dozens of mistakes while trying to get better at coding. From my perspective, using Copilot as a junior developer is very tricky.
One major concern I have is related to over-confidence. Some junior developers may be tempted to trust Copilot blindly without question. This can lead to problems, such as importing components from outside packages that are not necessary for the project, as demonstrated by the "import" example I mentioned earlier. Even though we are already using a different UI library in the project, someone may still think that they should install the new package, and this can cause further issues.
Finally, a common issue with using Copilot as a junior developer is not understanding what the generated code actually does. Some developers may accept the prompts generated by Copilot without giving them a second thought, which can lead to issues in the code. If someone later asks why a particular piece of code is structured in a certain way during the review process, the person who implemented it may not know the answer.
As developers, we must be accountable for our work and understand what we are doing at all times. A great analogy that comes to mind is when I recruit new candidates for our company. Many of them are excellent at using React but struggle with basic JS questions. That's what happens when we aim to speed up development without first mastering the basics.
On the flip side, we do have a plethora of experienced developers in the industry. In my opinion, Copilot is an excellent aid when it comes to dealing with repetitive tasks. There inevitably comes a time when we must perform a task that is relatively simple but time-consuming or requires a lot of boilerplate code. The more tedious coding we do, the less satisfaction we derive from it.
Overall, Copilot is a powerful tool that can take care of your code chores and allow you to concentrate on tackling new, challenging problems instead of rewriting the same CRUD repeatedly.
In addition, we can also learn from it. Let's suppose you've implemented a function in a certain way, and you'd like to test yourself against Copilot. Perhaps it will write the same functionality but with better readability or performance, and you'll use it instead. Why not?
Seniors' Fifth Wheel
When you have years of experience in the industry and a wide range of knowledge, it's reasonable to assume that your duties may shift away from spending a lot of time writing code. However, if you do end up working on code, you may not always expect straightforward tasks.
The higher the abstraction level of your work, the more challenges Copilot may encounter. If you're working on a unique part of the codebase, Copilot could potentially be more of a distraction than a help. In such cases, there's a high likelihood that you may end up disabling the tool as it will offer suggestions that may not be applicable in the context of your work.
While Copilot can certainly offer valuable insights and suggestions, it may not always be able to understand the nuances and complexities of codebases or abstract work. Ultimately, the usefulness of Copilot will depend on a range of factors, including the individual's specific role, experience level, and the context in which they are working.
The final topic that I wanted to discuss is the legal aspect, specifically from a developer's standpoint. AI tools such as Copilot can be incredibly beneficial, but they also come with certain drawbacks. For instance, the code that you are working on may be used to train the AI model, which some companies may find acceptable while others may not.
It is important to check with your company before using Copilot on a particular project. If you work on multiple projects, you need to be mindful of whether it is okay to use AI assistants on each one, and if not, remember to disable the tool. While it may be a hassle, it is crucial to do so in order to comply with legal requirements.
GitHub's Copilot is an immensely powerful tool, and when it works correctly, it is unlike anything we've had before. In the midst of the AI craze, it's critical to remain vigilant and avoid the pitfalls that are easy to overlook. This article reflects only my personal experience and cannot be regarded as the only source of truth. I believe that anyone using AI tools should take a moment to reflect.
Do I enjoy using it? Is it necessary for me to use it? Am I still enjoying what I'm doing? Do I like where it's heading? Especially now that the next generation of Copilot, Copilot X, is on the way.
As Uncle Ben said in Spider-Man, "With great power comes great responsibility", and that applies to our self-development, our projects, and our future in the context of using artificial intelligence.