Helpful at all costs - ChatGPT and the end of the software engineer

Johannes Stiehler

The world has lost its head over the launch of ChatGPT. I understand why: the power of this new LLM is impressive, and it has something magical, almost human, about it, especially for non-natives of AI land.

Inevitably, there is a flood of posts and articles about ChatGPT. For the most part, as usual with AI launches, they offer either wholehearted applause for the "new era" or dire warnings about the end of one or more professions.

In the case of ChatGPT, this time it is not the turn of stock photographers (DALL-E has already taken care of them), but of software developers.

Who needs them when ChatGPT is just as good at writing programs as it is at writing poetry?

The assumption that current LLMs could be a threat to the future of software developers sounds so baseless to me that I'm afraid I have to add one more article to the huge pile about ChatGPT.

First of all: software development, perhaps surprisingly, does not consist of churning out snippets of code that each accomplish one clearly defined task ("sort this list alphabetically"), but of, for instance, combining many such pieces of code into useful modules and service interfaces. I'm not aware of anyone even attempting to present that task to a conversational AI.

So the only actual question is: To what extent can ChatGPT or its predecessors (good old GPT-3 can already generate code) and successors (Sparrow will surely be able to do this as well) at least unburden and accelerate software developers in their daily work?

At first glance and for simple examples, this actually seems to work to a certain degree:

How to decode html entities in plain javascript?
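The original screenshot of the answer is not reproduced here, but the kind of working solution this question typically yields looks roughly like the following sketch (illustrative, not ChatGPT's verbatim output; the entity map is deliberately not exhaustive):

```javascript
// Decode common named and numeric HTML entities in plain JavaScript.
// The map below covers only a handful of named entities for illustration.
const NAMED_ENTITIES = { amp: "&", lt: "<", gt: ">", quot: '"', apos: "'", nbsp: "\u00a0" };

function decodeHtmlEntities(text) {
  return text.replace(/&(#x?[0-9a-fA-F]+|\w+);/g, (match, entity) => {
    if (entity[0] === "#") {
      // Numeric entity: decimal (&#62;) or hexadecimal (&#x3e;).
      const code = entity[1] === "x"
        ? parseInt(entity.slice(2), 16)
        : parseInt(entity.slice(1), 10);
      return String.fromCodePoint(code);
    }
    // Named entity: leave the original text untouched if unknown.
    return NAMED_ENTITIES[entity] !== undefined ? NAMED_ENTITIES[entity] : match;
  });
}

console.log(decodeHtmlEntities("Fish &amp; Chips &#62; Salad")); // Fish & Chips > Salad
```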

In this case, the fact that you can have ongoing conversations with ChatGPT is helpful in refining the solution:

How about server applications in node.js?

Let's recall at this point that ChatGPT distills the essence of countless training examples (including many such software snippets) and, for any given input, outputs the most probable "continuation" based on them. This works very well for the JavaScript question mentioned above.

But ChatGPT is only trained to generate plausible and probable output, not "true" statements. In its mission to help the user, it can't be bothered with such banalities as correctness. The only thing that matters is the probability of each token.
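To caricature what "most probable continuation" means: candidate tokens are ranked purely by model probability, and nothing in that process checks whether the chosen token makes the resulting statement true. (The token strings and probabilities below are invented for illustration.)

```javascript
// Greedy decoding in miniature: simply take the single most probable
// continuation. Correctness plays no role in the selection.
function pickNextToken(candidates) {
  return candidates.reduce((best, c) => (c.prob > best.prob ? c : best));
}

const candidates = [
  { token: "ARRAY_TO_STRING", prob: 0.44 }, // plausible-sounding, wrong here
  { token: "CAST", prob: 0.31 },
  { token: "I don't know", prob: 0.02 },    // admitting ignorance is improbable
];

console.log(pickNextToken(candidates).token); // ARRAY_TO_STRING
```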

If we move a bit away from the problems for which presumably large amounts of examples were included in the training, the air becomes thin and so does the answer:

How to cast an integer array to a string array in bigquery?

This answer is wrong for two reasons:

  1. ARRAY_TO_STRING does not accept integer arrays, so the proposed solution is not valid BigQuery SQL.
  2. Even if it worked, it would not create an array as requested, but a single string value (by joining the array elements using a delimiter).
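For the record, a query that actually does what was asked does exist: cast each element individually and re-aggregate the results into an array. A sketch in BigQuery standard SQL (the column and table names are placeholders):

```sql
-- Cast an INT64 array to a STRING array, element by element.
SELECT ARRAY(SELECT CAST(x AS STRING) FROM UNNEST(int_array) AS x) AS string_array
FROM my_table;  -- "int_array" and "my_table" are invented names
```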

Obviously, the question was interpreted correctly, but in its eagerness to help, the AI would rather generate a plausible-looking lie than give no answer at all.

Let's try to make the question a little more precise, since that worked so well with the JavaScript problem:

I don't want a single string but a string array as output.

I won't bore readers with an explanation of what this code snippet really does (unlike the one above, it is at least valid BigQuery SQL). In any case, it doesn't do the right thing; it does something rather useless while consuming a lot of resources.

ChatGPT firmly insists on producing false solutions throughout the conversation in order to be helpful. It reminds me a bit of people you ask for directions who don't dare admit they don't know the area. Instead, they point you in an arbitrary direction, because it would be embarrassing to admit that they don't know their way around.

So here's my conclusion, which as so often differs from that of many other "experts": No, ChatGPT will not replace software developers, or even junior programmers. Maybe it will save some entry-level developers a trip or two to Stack Overflow. But even then, you have to ask yourself whether trying out and eliminating such wrong solutions doesn't cost more time on average than the right answers save.

Johannes Stiehler
Co-Founder, NEOMO GmbH
Johannes has spent his entire professional career working on software solutions that process, enrich and surface textual information.

