5 reasons why ChatGPT is (often) not the solution

Johannes Stiehler
Technology
#TextAI
#LargeLanguageModels
#ChatGPT
#Autocomplete
Cover Image for 5 reasons why ChatGPT is (often) not the solution

Following the release of GPT-4, uncritical fans and AI doomsday prophets are lashing out at each other on social media with renewed enthusiasm.
High time to take a balanced look at the technology and its viability for consumer applications.

The following are some terms that may not be clear to everyone:

ChatGPT: A variant of GPT designed for conversational, multi-step interactions with users.

GPT: "Generative Pre-Trained Transformer", a Large Language Model that can generate text output and can be used for many use cases without additional training.

(Large) Language Model: A digital representation of one or more natural languages. Essentially, its function is to predict the most plausible continuation of an input text.

Deep Learning: An approach to machine learning that combines many previously separate processes (e.g., feature engineering and input data abstractions) and "learns along" with them. Neural networks, especially transformer architectures are currently the most successful approach to this.

These terms contain each other: ChatGPT is a GPT is a Large Language Model is a Deep Learning application.

1. ChatGPT is anti-UX

An online presence lives on user guidance.

ChatGPT conversations are guided by the user.

It's as simple as that.

As described in other posts, ChatGPT is primarily "helpful at any cost". One might even call it subservient to a certain extent.

How is that a problem?

For my online offering, let's say a news portal, I want to keep control of the user experience. I want to use technologies that on the one hand allow visitors to satisfy needs quickly, but on the other hand also allow me to steer user behavior in certain directions.

Example: an autocomplete functionality saves the user typing and mental effort (How do you spell "Volodymyr Zelenskyy" again?), but at the same time allows to promote certain products or articles.

The operator of the offer has control over where he wants to direct streams of visitors.

A chat AI, on the other hand, is notoriously difficult to restrict at all, it can be completely controlled by the user, and in the case of ChatGPT, even to completely devious content. OpenAI does provide some windy guardrails to restrict hate speech and other naughtiness. But there is no way for integrators to make ChatGPT restrict its conversations to certain topics.

2. GPT is still full of bias and hallucinations - who is assuming responsibility?

Large Language Models have at their core only one purpose:

To find the most plausible continuation available for an input.

"Plausible" in this case means in particular statistically derivable from the training data (mostly web pages and Wikipedia). These continuations are always impressive - for example when ChatGPT constructs a whole essay from a simple prompt - but sometimes also ideologically questionable (e.g. racist) or factually wrong.

That is, outputs from such models must always be critically reviewed before being presented as "truth". In some cases this can be imposed on the user (e.g. when a legal AI creates a draft contract on behalf of an attorney), but in many cases this is not possible, especially not in B2C.

3. Chat is one of the slowest forms of user interaction and GPT is even slower

For a long time, people have been trying to sell chat interfaces as a great form of user interaction. Unlike ChatGPT (see point 1), it is true that many ChatBots can also be configured by the operator to provide real user guidance.

However, this does not change the fact that most users would rather click than type, look than read, and spend as little time as possible searching for information.
None of these preferences are catered to by a ChatBot.

Especially regarding the last point, ChatGPT is a new low: I haven't waited this long for single words in a long time.

This is especially drastically evident with Bing Sydney:
Instead of search -> click, the interaction via ChatBot is ideally type -> wait -> read -> click.

The path to the first search result is significantly longer, and in many cases the summary text generated by the AI is no consolation.

4. Large Language Models are data mongers

Until recently, OpenAI reserved the right to use all user data to improve their services, specifically to train new models. I think most GPT enthusiasts were unaware of this while feeding private and corporate data into interactions with GPT APIs.

This policy has now been changed to an explicit opt-in model, but copyright and privacy remain critical issues around such tools.

After all, Dall-E and ChatGPT are trained on data available on the web whose creators have not given any consent for this. The outputs produced are always indirect derivatives of this training data. Copyright attribution is only one of many problems that arise from this.

5 Deep Learning is a Black Box without Access

In many applications it is essential to make decisions made by software traceable. This is only possible if outputs can be derived directly from inputs in a relatively simple way.

I remember well trying to explain to lawyers in an electronic evidence session what a "Support Vector Machine" is and how it came to mark certain documents as relevant to the proceedings and not others.

In comparison, multilayer neural networks, such as those used for large language models, are several orders of magnitude more complex. Of course, both technologies are "machine learning," to the same extent that a bicycle and a space shuttle are both means of transportation.

For some applications, this is an absolute deal breaker.

Following the release of GPT-4, posts on LinkedIn and Twitter are moving back toward the usual hype recipe:

  • a cup of uncritical jubilation
  • a pinch of considered criticism
  • a teaspoon of doomsday fantasies and other horror scenarios
  • a bucket of self-promotion without any real reference to the subject matter

In more critical posts, the discussion primarily revolves around such things as bias and hallucination, i.e., the fact that I can get any Large Language Model to give both ethically questionable and factually incorrect information.

But let's assume these problems didn't exist. Would ChatGPT then be the magic bullet for any website? Certainly not, because ChatGPT and similar tools by their very nature are almost impossible to integrate meaningfully into an active user experience, produce results that in any case still need to be checked and approved for lies and tendencies by a human user, and do all this without transparency or regard for intellectual property.

Johannes Stiehler
CO-Founder NEOMO GmbH
Johannes has spent his entire professional career working on software solutions that process, enrich and surface textual information.

There's more where this came from!

Subscribe to our newsletter

If you want to disconnect from the Twitter madness and LinkedIn bubble but still want our content, we are honoured and we got you covered: Our Newsletter will keep you posted on all that is noteworthy.

Please use the form below to subscribe.

NEOMO GmbH ("NEOMO") is committed to protecting and respecting your privacy, and we'll only use your personal information to administer your account and to provide the products and services you requested from us.

In order to provide you the content requested, we need to store and process your personal data. If you consent to us storing your personal data for this purpose, please tick the checkbox below.

You may unsubscribe from these communications at any time. For more information on how to unsubscribe, our privacy practices, and how we are committed to protecting and respecting your privacy, please review our Privacy Policy.

Follow us for insights, updates and random rants!

Whenever new content is available or something noteworthy is happening in the industry, we've got you covered.

Follow us on LinkedIn and Twitter to get the news and on YouTube for moving pictures.

Sharing is caring

If you like what we have to contribute, please help us get the word out by activating your own network.

More blog posts

Image

ChatGPT "knows" nothing

Language models are notoriously struggling to recall facts reliably. Unfortunately, they also almost never answer "I don't know". The burden of distinguishing between hallucination and truth is therefore entirely on the user. This effectively means that this user must verify the information from the language model - by simultaneously obtaining the fact they are looking for from another, reliable source. LLMs are therefore more than useless as knowledge repositories.

Image

Rundify - read, understand, verify

Digital technology has overloaded people with information, but technology can also help them to turn this flood into a source of knowledge. Large language models can - if used correctly - be a building block for this. Our "rundify" tool shows what something like this could look like.

Image

ChatGPT and the oil spill

Like with deep learning before, data remains important in the context of large language models. But this time around, since somebody else trained the foundation model, it is impossible to tell what data is really in there. Since lack of data causes hallucinations etc. this ignorance has pretty severe consequences.