Computers can write their own code. So are programmers now obsolete?

At university, I studied engineering, like the majority of my peers, and there were times when I needed to write computer programs to perform particular kinds of computation. I learned from the experience that I was not a natural hacker. These pieces of utilitarian software were written in Fortran, Algol, and Pascal, languages now regarded as the programming equivalent of Latin. The programs I produced were clumsy and inefficient, and more skilful programmers would look at them and roll their eyes, much as Rory McIlroy might if forced to play a round of golf with an 18-handicapper. But they did the job, and in that sense they were “good enough for government work,” as the renowned computer scientist Roger Needham used to put it. And the experience left me with a lifelong admiration for programmers who can write elegant, efficient code. Anyone who thinks programming is easy has never tried it.

All of this explains why I sat up last year when it emerged that Codex, a descendant of GPT-3 (the large neural network trained on vast troves of text gathered from the web that can generate plausible English), could write simple apps, i.e., short computer programs involving buttons, text-input fields, and colors, by remixing snippets of code it had been fed. You might, for example, ask it to write code to perform a basic task such as “create a snowfall on a black backdrop,” and it would do so in JavaScript. In no time, software businesses such as SourceAI were attempting to capitalize on this new programming tool.
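The article doesn't reproduce any of the generated code, but a minimal sketch of the sort of JavaScript such a prompt might yield could look like this (the flake count, sizes, and speeds here are arbitrary choices of mine, not anything Codex actually produced):

```javascript
// Hypothetical sketch of the kind of program a prompt like
// "create a snowfall on a black backdrop" might yield; not actual Codex output.
const canvas = document.createElement('canvas');
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;
document.body.appendChild(canvas);
const ctx = canvas.getContext('2d');

// A couple of hundred flakes with random positions, sizes, and fall speeds.
const flakes = Array.from({ length: 200 }, () => ({
  x: Math.random() * canvas.width,
  y: Math.random() * canvas.height,
  r: 1 + Math.random() * 3,
  speed: 0.5 + Math.random() * 2,
}));

function draw() {
  // Black backdrop.
  ctx.fillStyle = 'black';
  ctx.fillRect(0, 0, canvas.width, canvas.height);

  // White flakes drifting downward, wrapping back to the top.
  ctx.fillStyle = 'white';
  for (const f of flakes) {
    ctx.beginPath();
    ctx.arc(f.x, f.y, f.r, 0, Math.PI * 2);
    ctx.fill();
    f.y += f.speed;
    if (f.y > canvas.height) f.y = -f.r;
  }
  requestAnimationFrame(draw);
}

draw();
```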

This was remarkable, novel, and perhaps useful in some contexts, but it was really just picking low-hanging fruit. Apps are simple programs that perform tasks which can be described concisely in plain English, so all the system has to do is search its vast bank of computer code for a match that will do the job. No genuine inference or reasoning is required.

This is where DeepMind, the London-based AI company, got interested. DeepMind is best known for AlphaGo, the system that beat the world champion at Go, and AlphaFold, a machine-learning system that appears to be better than humans at predicting protein structures. It recently revealed AlphaCode, a new programming engine capable of outperforming many human coders.

Overall, AlphaCode performed at the level of the median contestant in Codeforces competitions.

In typical DeepMind fashion, the firm decided to test its system on 10 challenges hosted on Codeforces, a platform that runs global competitive programming contests. Although these challenges are not typical of a programmer’s day-to-day work, the ability to solve the problems they pose creatively is a good indicator of programming aptitude. AlphaCode is the first machine-learning system that can compete with humans in this arena.

This is how it works: contestants are given five to ten problems framed in natural language and have three hours to write programs that creatively solve as many of them as possible. That is far more demanding than the app-generation trick described above. For each problem, participants must read and digest a natural-language description (spanning several paragraphs) that sets out the narrative background; a description of the desired solution, which they must carefully parse and understand; a specification of the required input and output format; and one or more example input/output pairs. They must then write an efficient program that solves the problem. Finally, they have to run it.
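To make the mechanics concrete, here is a minimal, invented sketch of what a submission of this shape looks like in JavaScript (run under Node.js). The task it solves, summing two integers per test case, is a placeholder of mine, far simpler than a real Codeforces problem and not drawn from DeepMind’s evaluation:

```javascript
// Hypothetical illustration of the shape of a competitive-programming
// submission: read the specified input format from stdin, solve each
// test case, and print the answers to stdout.
// Input format (invented for this sketch):
//   line 1: t, the number of test cases
//   next t lines: two integers a and b
// Output: one line per test case containing a + b.
const lines = require('fs').readFileSync(0, 'utf8').split('\n');

const t = parseInt(lines[0], 10);
const out = [];
for (let i = 1; i <= t; i++) {
  const [a, b] = lines[i].trim().split(/\s+/).map(Number);
  out.push(a + b);
}
console.log(out.join('\n'));
```

Real contest problems bury the required algorithm inside a story several paragraphs long; the hard part is everything that happens before code like this gets written.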

The main stage – getting from problem description to solution – is what makes the competition such a demanding test for a machine, because it requires understanding and reasoning about the problem, plus a deep knowledge of a wide range of algorithms and data structures. The Codeforces contests are also designed so that it is not possible to solve problems by taking shortcuts, such as replaying previously seen answers or trying every remotely related technique. To do well, you have to be creative.

How did AlphaCode fare? Pretty well, is the answer. According to DeepMind, it performed “at the level of the median competitor. Although this result is far from winning contests, it marks a significant jump in AI problem-solving skills, and we hope that our achievements will encourage the competitive programming community.”
