Despite the hype surrounding advanced Large Language Models like GPT-4, there is yet to be any evidence that these kinds of AI can think. In this post, I’m exploring how chatbots are built and trained, and why a little technical understanding goes a long way.
Knowing even the broad strokes of how large language models are constructed helps to understand why they “hallucinate” (I prefer fabricate) information, why the output isn’t grounded in any real truth, and why, ultimately, they don’t make “sense”.
Inspired by opening remarks from Emily M. Bender, I’m using the term “sense” in a very literal way here: chatbots are incapable of producing meaning. Chatbots produce words – very convincing, structured words – and we attribute meaning to them. As I wrote in an earlier post, the language that we ascribe to chatbots tends to anthropomorphise them but it’s important to recognise that these things are basically algorithms sifting large piles of data.
There are many sophisticated processes involved in developing and training large language models, and I’m only going to focus on a few of them here in brief. I’m also not a data scientist or machine learning engineer, so if I flub any of the descriptions feel free to send me a comment and tell me about it. There is also a great graphic article exploring these concepts here.
I’m going to focus on the following areas:

Tokenisation
Weights and probability
Transformer architecture and attention
Reinforcement learning from human feedback
Tokenisation

Tokenisation is an important early process in the development of Large Language Models (LLMs) like GPT-4. It involves breaking down input text into smaller units, known as tokens, which the model can easily process. Tokens are not just words but can also include parts of words or even punctuation marks. This process allows the model to analyse and understand the text at a granular level, enabling more accurate interpretations and responses.
Methods of tokenisation are designed to strike a balance between handling common and rare words: common words are kept whole, while less common words are split into smaller, more frequently occurring subword units. This approach reduces the model’s vocabulary size and helps it manage a wide variety of words, including new or rare ones, more efficiently.
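To make this concrete, here’s a small sketch using OpenAI’s open-source tiktoken library (the tokeniser used by GPT-4-era models); the example sentence is my own, and the printed IDs are just whatever integers the tokeniser assigns:

```python
# A sketch of GPT-style tokenisation using OpenAI's tiktoken library
# (pip install tiktoken). "cl100k_base" is the encoding used by
# GPT-4-era models; the sentence is just an example.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Chatbots don't make sense, they make words."
token_ids = enc.encode(text)   # a list of integers, one per token
print(token_ids)

# Map each ID back to the text fragment it represents. Common words
# stay whole; rarer words get split into subword pieces.
for tid in token_ids:
    print(tid, repr(enc.decode([tid])))
```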
Once tokenised, these tokens are then converted into numerical representations, generally referred to as embeddings. These embeddings capture semantic and syntactic information about the tokens, allowing the model to understand the context and meaning of the input text. The sequence of embeddings is then fed into the neural network of the LLM, where it undergoes complex layers of processing.
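As a rough sketch of that step, here’s how token IDs become embedding vectors in PyTorch. The vocabulary size, embedding dimension, and token IDs below are illustrative placeholders, not values from any real model:

```python
# A sketch of the token-to-embedding step in PyTorch. The sizes and
# token IDs are illustrative, not taken from any actual model.
import torch
import torch.nn as nn

vocab_size, embed_dim = 50_000, 768         # hypothetical values
embedding = nn.Embedding(vocab_size, embed_dim)

token_ids = torch.tensor([[101, 2054, 3]])  # a made-up token sequence
vectors = embedding(token_ids)              # one 768-dim vector per token
print(vectors.shape)                        # torch.Size([1, 3, 768])
```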
Weights and probability

Weights in neural networks are loosely modelled on the neural connections in the human brain. In a neural network, each neuron is connected to others via these weights, which effectively dictate the strength and direction of the influence each neuron has on the others. When the network is being trained, these weights are continuously adjusted. This process, which is driven by algorithms like backpropagation and optimisation techniques like gradient descent, is fundamental to the network’s ability to learn from data. The final set of weights, after training, essentially holds all of the knowledge the network has acquired about the ways words (or tokens) fit together, enabling it to make predictions or decisions based on new input data.
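Here’s a deliberately tiny sketch of that training loop in PyTorch: a single weight is nudged by backpropagation and gradient descent until it learns the toy relationship y = 2x. LLM training follows essentially the same recipe, just with billions of weights and vastly more data:

```python
# A toy training loop: one weight learns y = 2x via backpropagation
# and gradient descent.
import torch
import torch.nn as nn

model = nn.Linear(1, 1)   # a single weight and bias
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.tensor([[1.0], [2.0], [3.0]])
y = torch.tensor([[2.0], [4.0], [6.0]])

for _ in range(500):
    optimiser.zero_grad()
    loss = loss_fn(model(x), y)   # how wrong are the current weights?
    loss.backward()               # backpropagation computes the gradients
    optimiser.step()              # gradient descent nudges the weights

print(model.weight.item())        # approaches 2.0 as training proceeds
```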
The network generates a probability distribution to indicate the likelihood of various possible outcomes: in an LLM’s case, how likely each token in its vocabulary is to come next. This probabilistic approach allows the network to express uncertainty and make more nuanced decisions.
During the training phase, probabilities also play a crucial role in guiding the learning process, helping the network improve its accuracy over time. In generative models like LLMs, probabilities determine the sequence of words or characters generated, making the text output appear more natural and contextually relevant.
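In code, that final step looks something like this: raw scores (logits) from the network are squashed into a probability distribution, and the next token is sampled from it. The four-word vocabulary here is a made-up stand-in for the tens of thousands of tokens a real model chooses between:

```python
# Turning raw network scores (logits) into a probability distribution
# over a made-up four-token vocabulary, then sampling the next token.
import torch

vocab = ["cat", "sat", "mat", "ran"]          # hypothetical vocabulary
logits = torch.tensor([2.0, 0.5, 1.0, -1.0])  # invented raw scores

probs = torch.softmax(logits, dim=0)  # roughly [0.61, 0.14, 0.22, 0.03]
next_id = torch.multinomial(probs, num_samples=1).item()
print(vocab[next_id])                 # usually "cat", but not every time
```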
Transformer architecture and attention

The transformer architecture, first introduced in the paper Attention Is All You Need in 2017, revolutionised the field of natural language processing. The architecture consists of two primary components: the encoder and the decoder, each comprising multiple identical layers. A distinctive feature of the transformer is its ability to process all parts of the input data in parallel, a stark contrast to the sequential processing in earlier models.
A core part of the transformer is the attention mechanism, specifically the multi-head self-attention mechanism in its layers. The attention mechanism allows the model to focus on different parts of the input sequence when producing each part of the output sequence, enabling it to capture complex dependencies and contextual relationships within the data. Transformers also use positional encodings to maintain the order of the input data, as they lack the inherent sequential processing capability of RNNs. This feature ensures that the model is aware of the position of each word in a sentence, which is crucial for understanding language structure and meaning.
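Stripped of the multi-head machinery, the core attention calculation is only a few lines. This sketch uses random vectors and illustrative sizes rather than anything from a real model:

```python
# Scaled dot-product attention, the heart of the transformer.
# Sizes are illustrative and the vectors are random placeholders.
import math
import torch

seq_len, d_model = 4, 8            # four tokens, eight dimensions each
q = torch.randn(seq_len, d_model)  # queries: what each token looks for
k = torch.randn(seq_len, d_model)  # keys: what each token offers
v = torch.randn(seq_len, d_model)  # values: the content to mix together

scores = q @ k.T / math.sqrt(d_model)    # relevance of every token pair
weights = torch.softmax(scores, dim=-1)  # each row sums to 1
output = weights @ v                     # context-aware representations
print(output.shape)                      # torch.Size([4, 8])
```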
Reinforcement learning from human feedback

In Reinforcement Learning from Human Feedback (RLHF), human feedback is used to create a more nuanced and contextually appropriate reward signal. For example, in the case of language models, human evaluators might rate or rank the outputs of the model in various scenarios, indicating preferences for more accurate, coherent, or contextually appropriate responses. This feedback is then used to adjust the reward function that guides the model’s learning process (and, ultimately, the weights discussed earlier).
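A common way to turn those human rankings into a training signal is a pairwise preference loss: a reward model is nudged to score the preferred response higher than the rejected one. The sketch below is a bare-bones illustration of that loss only; in practice the reward model is itself a large network, and its scores then drive further fine-tuning of the chatbot:

```python
# A bare-bones pairwise preference loss, of the kind used to train
# RLHF reward models. The "responses" are random placeholder vectors
# and the scorer is a single linear layer standing in for a large network.
import torch
import torch.nn as nn
import torch.nn.functional as F

reward_model = nn.Linear(16, 1)   # toy stand-in for the reward model

response_a = torch.randn(1, 16)   # response a human evaluator preferred
response_b = torch.randn(1, 16)   # the response they rejected

optimiser = torch.optim.SGD(reward_model.parameters(), lr=0.01)
for _ in range(100):
    optimiser.zero_grad()
    r_a, r_b = reward_model(response_a), reward_model(response_b)
    # Loss shrinks as the preferred response scores higher than the other
    loss = -F.logsigmoid(r_a - r_b).mean()
    loss.backward()
    optimiser.step()
```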
Integrating human feedback allows the model to align more closely with human values, preferences, and expectations, which is particularly important for applications involving complex human interactions. This method can significantly improve the performance and reliability of AI systems in tasks like conversation, content generation, and decision-making, ensuring that their outputs are not just technically accurate but also contextually and socially appropriate.
RLHF also addresses some of the limitations of purely automated reinforcement learning, such as the risk of developing unintended or undesirable behaviours. By incorporating human judgment directly into the learning process, RLHF helps shape AI behaviours that are more aligned with ethical standards and societal norms. As a result, RLHF is becoming an increasingly important tool in the development of user-centric, responsible AI systems.
Why these things matter

There’s a whole lot of technical discussion in this article, and like I said at the start I’m not a computer scientist or a developer, so much of it is beyond me. The reason I’ve written this article, though, is not to provide a how-to manual for constructing an LLM: it’s to highlight that chatbots don’t make sense, they make words.
Even a broad understanding of the concepts outlined above helps to explain why LLMs fabricate information (“hallucinations”), why they don’t work like search engines, and how AI developers justify apparent copyright infringements such as the recent Books3 dataset scandal. For example:
Tokenisation: Understanding tokenisation clarifies that chatbots process text at a very mechanical level, breaking down sentences into smaller units without any innate comprehension. This process is more about data handling than understanding, contributing to the idea that chatbots, including LLMs, do not truly ‘understand’ or make sense of the text in the way humans do. They simply manipulate these tokens based on learned patterns, which can sometimes lead to coherent outputs but never amounts to true comprehension.
Weights and Probability: The concept of weights and probability in neural networks highlights that the output of chatbots is largely a result of mathematical calculations and probabilities rather than genuine understanding. The weights in the network adjust how information flows based on training data, but this doesn’t equate to the model ‘making sense’ of the information. Instead, it’s generating outputs based on statistical likelihoods, which can often seem sensible but are not grounded in true understanding or reasoning.
Transformer Architecture and Attention: The transformer architecture, particularly the attention mechanism, allows chatbots to generate contextually relevant text by focusing on different parts of the input. However, this doesn’t mean the chatbot comprehends the text. It’s programmed to mimic patterns of human language, not to genuinely understand or make sense of it. This mechanism can produce text that appears meaningful but is essentially a sophisticated pattern-matching process.
Reinforcement Learning from Human Feedback (RLHF): RLHF involves adjusting the model’s outputs based on human feedback, aiming for responses that align more with human expectations. While this can improve the relevance and appropriateness of the outputs, it doesn’t impart the ability to understand or make sense of content. The chatbot is still operating within the confines of its programming and training, adjusting to feedback in a way that’s more about optimisation than genuine comprehension.

I hope this brief overview of some of the core concepts of Large Language Models has helped you to understand why chatbots don’t make sense: they make words. It’s the starting point for some excellent philosophical discussions on words, meaning, and consciousness, which I’m sure I’ll be writing more about in the future.
If you’d like to get in touch to discuss generative AI, please use the form below: