CDP for personalised customer journeys and effective campaigns across all channels.
Optimise search results on the website to provide a better shopping experience.
Webcare, messaging, social media publishing and monitoring in one clear and concise tool.
Although our products are easy to use, we offer a wide range of services to help you succeed even more in using our software.
The verb “generate” means “to bring forth”. Generative AI is an umbrella term for all AI applications that allow you to generate or bring forth different outputs. For marketers, this usually involves text, music, images, video and programming code.
The best-known generative AI application is, of course, ChatGPT. But what do we mean by generative AI and ChatGPT, what are the benefits for companies and how do they apply it? Also: how do Large Language Models work, what about privacy and what actually is “prompt engineering”?
For marketers, generative AI is a great tool. If you need to quickly create a social post or a content block in an email, generative AI will help you with that right away. And if you want to create or edit an image, generative AI makes a great assistant too.
In addition to assisting marketers with creating texts and images, generative AI can get involved in a lot of other marketing tasks. Think of applying better personalisation techniques, developing campaigns, creating content strategies and working out customer analytics.
However, an AI machine does not perform these tasks automatically. You will have to give it a command. To use the jargon, you will have to do prompt engineering. This is a very particular skill: describing accurately what you want the AI to do. If you’ve ever been told that a bit of content “needs more pizzazz” or “I’ll know what I want when I see it”, you’ll understand the need to get prompts just right!
Generative AI is the collective term for all AI applications that allow you to create content – in any form. ChatGPT is one of them. Most of us use ChatGPT primarily to produce texts. Another well-known AI application that is often used as a text generator is Google Gemini. In this paper, you will mainly read about ChatGPT.
OpenAI – the organisation that markets ChatGPT – had no more than 10 employees in 2015. Ten years later, there are around 3,500. Fun detail for all marketers: OpenAI’s marketing department consists of 87 people. Where will those marketers be in 10 years? They’re clearly onto something: ChatGPT had as many as 123 million daily users at the end of 2024.
Some things to consider when getting started with generative AI and ChatGPT:
Context dependence is easily explained with the following example:
Scenario 1: Johnny is walking down the street with his broken bicycle. The tyre on his bicycle is flat. He walks to the bike shop on the high street, swings open the store door and says to the bike mechanic, ‘There’s a big hole in my bike tyre. I don’t know how it happened.’
Scenario 2: Johnny walks across the schoolyard with his broken bicycle. The tyre on his bicycle is flat. He walks far too late into Mrs. Thompson’s classroom. She looks at him questioningly, and then Johnny says, ‘There is a big hole in my bicycle tyre. I don’t know how it happened.’
In scenario 1, the bike mechanic may interpret Johnny’s comment as a request for repairs. After all, Johnny is in a bicycle store. In scenario 2, Mrs Thompson is unlikely to perceive Johnny’s remark as a request for repair but rather as an excuse for being late.
In other words, the same words can be interpreted differently depending on the context in which they are used. The same thing happens in ChatGPT:
ChatGPT “chooses” a context itself if you don’t specifically enter it. When it comes to this simple example, you can start a chat yourself and help ChatGPT with the correct context. But if you ask ChatGPT to help you with something you know less about, then it is much more difficult to understand how you should interpret the given answer.
This shouldn’t be a surprise: if you want to create good prompts, you have to provide clear context yourself. That applies not only to texts but also to any other output you want.
Consider the following example:
Knowing that Valentina Tereshkova was the first woman in space, we would treat these two questions as roughly the same. If you’re asked, “Who was the first woman in space?”, you’ll say “Valentina Tereshkova”. If you are asked, “Who is Valentina Tereshkova?” (perhaps at a pub quiz), you’re more likely to say “the first woman in space” than to start talking about her teenage employment at a textile mill.
ChatGPT, however, treats the two questions differently. Remember: if you want to write good prompts, you need to be as specific as possible or use follow-up questions to help ChatGPT home in on what you need.
The better your input or prompt, the better ChatGPT answers. GIGO (Garbage In, Garbage Out) is an important maxim to remember.
For a good prompt, it is important to give ChatGPT enough context and be specific enough in your question. You can do this with PULI, a made-up acronym (from the original Dutch terms) that stands for Personas, Exclusions, Length, and Inspiration.
In the example of Johnny and his broken bike, it would help ChatGPT a lot if you indicated that you expect a response from the bike shop or the teacher. A common way to do this is to start your query with “Acting as a…”. It can also be handy to explicitly state what you don’t want. If you indicate in your prompt that you don’t want a repair schedule, you won’t get one.
Provide inspiration and specify the length of the response
Especially as a marketer, you often work with a certain tone of voice. Examples can undoubtedly be found on your website. You can always include this information (the URL) in your prompt for inspiration. And help ChatGPT by saying how many words you want.
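As an illustration, here is a minimal Python sketch of how such a prompt could be assembled from these components. The function and parameter names are our own invention for this example; they are not part of any Spotler or OpenAI API.

```python
def build_prompt(task, persona=None, exclusions=None,
                 max_words=None, inspiration_url=None):
    """Assemble a prompt from persona, exclusions, length and inspiration."""
    parts = []
    if persona:
        parts.append(f"Acting as a {persona}.")
    parts.append(task)
    if exclusions:
        parts.append("Do not include: " + ", ".join(exclusions) + ".")
    if max_words:
        parts.append(f"Keep the answer under {max_words} words.")
    if inspiration_url:
        parts.append(f"Match the tone of voice used on {inspiration_url}.")
    return " ".join(parts)

prompt = build_prompt(
    task="Write a short reply to a customer whose bicycle tyre is flat.",
    persona="bike shop mechanic",
    exclusions=["a repair schedule"],
    max_words=50,
    inspiration_url="https://example.com",
)
print(prompt)
```

Structuring prompts this way makes it easy to keep persona, exclusions, length and inspiration consistent across campaigns.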
Many organisations that have incorporated generative AI into their business processes help their customers with prompting. Spotler does the same. In the editors of our software, which includes a generative AI module, we help users by providing key prompting components by default.
You no longer have to remember to add things like tone-of-voice, translations, URL input and the maximum number of words to a prompt yourself. You can set these via convenient menus, and the AI module will do the rest.
To give you an idea of well-known AI tools, below we have listed 20 organisations known in marketing as generative AI software providers.
Many companies already offer some form of generative AI in their services. Spotler is one of them. We offer generative AI in Spotler Mail+, Spotler Engage, and our chatbot technology, Spotler Aigent. AI in chatbots is also known as conversational AI.
Our AI chatbot builder works with URL input, allowing us to automate up to 40% of all customer queries. However, you can also set up your own dialogues. In addition, our chatbot can handle customer data well, which is useful for support questions.
In the editors of Spotler Mail+ and Spotler Engage, we’ve standardised key prompting components. And you don’t have to build the prompts yourself, either. All components are easy to set up via convenient menus. And it’s even possible to have our AI module write a complete social post for you.
Spotler is not the only provider of AI tooling. Every marketer should also visit the following 20 sites:
Artificial intelligence is making an impact in virtually all professions and industries. For example, oncologists use AI to recognise patterns in cancer cells, the transportation industry experiments with self-driving cars, and government agencies use AI to control crowds and traffic. But you also use AI when you unlock your smartphone with facial recognition. Or what about Google Translate? It is no exaggeration to say that artificial intelligence can be found in all sorts of places – both business and personal.
When we focus on AI and our industry (software and SaaS), we often distinguish between generative AI, predictive AI, conversational AI, and assistive AI. We have written a separate guide to predictive AI because we also deploy this AI technology in our software.
Looking specifically at generative AI, there are two key differences from all other forms of AI. First, generative AI offers more than just data analysis. Second, the output of a prompt is always unique content: generative AI allows you to create content.
Many applications of generative AI are only possible with the deployment of a Large Language Model (LLM). If you have ChatGPT create subject lines for your email, if you create summaries of texts by entering URLs, or if you deploy ChatGPT for translation work, this is only possible because ChatGPT works with an LLM.
The definition for the connoisseur: ChatGPT is a chatbot service powered by OpenAI’s GPT backend. The Generative Pre-Trained Transformer (GPT) is based on a Large Language Model (LLM) consisting of the following four components: transformer architecture, tokens, a context window and a neural network.
Okay, not exactly the most accessible definition. One thing is clear, though: the beating heart of ChatGPT is a Large Language Model. But what exactly is it? What is an LLM? Below is a brief and simplified explanation of the main components of an LLM, plus some links to more in-depth information.
An LLM is not a dictionary where individual words are stored. An LLM is a text generator. In other words, an LLM produces texts by deriving statistical relationships from a huge mountain of data, and that data is, in turn, text. To understand all this, it is useful to imagine an LLM as a roadmap. That way, we can also explain all its components:
Step 1 | Gathering information
Step 2 | Applying tokens and embeddings
Step 3 | Giving weight to text expressions
Step 4 | Making predictions
Step 5 | Generating output
This is where the first L of LLM comes in: Large. Output generation succeeds only if the model is trained on a large amount of source material. Think about web texts, public forums, Wikipedia, eBooks, and news sites. You’ll be glad to know that ChatGPT does not copy or store this source material. The model uses the data only for training purposes. An LLM accesses this training data through crawling.
One well-known organisation that offers this crawling technique is Common Crawl, a small nonprofit that provided much of the training data for the early versions of ChatGPT. Its website states: “The Common Crawl corpus contains petabytes of data, regularly collected since 2008.”
Of course, Common Crawl is not the only organisation offering this kind of dataset; Microsoft, for example, offers similar data. However, many organisations do not want their copyright infringed by AI. For that reason, the New York Times has filed a lawsuit against organisations such as Microsoft and OpenAI. The discussion around access to content is still ongoing.
Spotler wants its customers to be able to take advantage of the latest technological developments. For this reason, when deploying generative AI, we collaborate with OpenAI. Every marketer who sees for the first time how amazingly fast and easy it is to adjust the tone-of-voice of a content block, who gets instant suggestions on how to set up a social post for Facebook, for example, and who translates a text with the push of a button, is enthusiastic. But what about privacy?
In our email and social editors, we ensure that all data is first fully anonymised before it is sent to OpenAI. OpenAI also does not store or deploy the data for purposes other than the capabilities we provide through our AI module. Information remains secure and complies with privacy regulations.
Spotler has a crystal-clear agreement with OpenAI for data processing. Again, data is not stored or used for model training. In addition, you can add an extra layer of security. As soon as the chatbot detects sensitive information such as ID numbers or bank details, users are instructed to reformulate it. Interactions thus remain secure.
From step 2 onwards, we are in the LM of LLM: Language Model. All data is cut up into pieces. By data, here, we mean text. The texts available to the LLM are chopped up into pieces, or tokens, as they are called in AI. Each token is then assigned a unique number, its token ID. If you like, you can try this out for yourself with a tokenizer.
To give a completely arbitrary example:
The text ‘The King of the Netherlands is called Willem-Alexander’ consists of nine tokens, or the following sequence of numbers (token IDs): [1923, 148872, 1164, 16760, 109217, 121853, 9406, 3179, 9330]. As language users, we would say the sentence consists of eight words (or nine, if you count Willem-Alexander as two). So, tokens are more than just words: they can also be parts of words or even punctuation marks. It is important to remember that not every tokenizer works the same way; there is no single linguistic standard for breaking language down into tokens.
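To make the idea of tokens concrete, here is a toy tokenizer in Python with a hand-made vocabulary. The splitting rule and the token IDs are invented for illustration; real tokenizers (such as OpenAI’s tiktoken) use learned merge rules and vocabularies of tens of thousands of entries.

```python
# Toy vocabulary: note that "Willem-Alexander" is split into sub-word pieces,
# so the sentence yields more tokens than words.
VOCAB = {
    "The": 0, "King": 1, "of": 2, "the": 3, "Netherlands": 4,
    "is": 5, "called": 6, "Willem": 7, "-": 8, "Alexander": 9,
}

def tokenize(text):
    """Split on spaces and hyphens, then look up each piece's token ID."""
    pieces = []
    for word in text.split():
        if "-" in word:
            left, right = word.split("-", 1)
            pieces += [left, "-", right]
        else:
            pieces.append(word)
    return [VOCAB[p] for p in pieces]

print(tokenize("The King of the Netherlands is called Willem-Alexander"))
# → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]: ten tokens for an eight-word sentence
```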
An LLM cannot read as we do but tries to put the right sets of numbers in sequence. And because this huge network resembles the signal transmission between axons and dendrites in the human brain, it is also called a neural network. The question then is: how does an LLM know how to make the right choices within that huge network? Answer: via word embeddings.
A brief explanation: the words king and queen often occur together and also have a clear relationship, just like the relationship between the words husband and wife. Or the relationship between words like modest, finicky, paltry, small, void and summary: they all have something to do with the meaning of small. Words are embedded in a certain meaning as well as in a certain usage. For example, it is easy to imagine that the word Trump often goes together with the word president and with Kamala Harris.
The above relationship is expressed with vector representations in a multidimensional space. Now, if your math bug is triggered, learn all about the Word2vec technique.
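The idea can be sketched in a few lines of Python. The three-dimensional vectors below are made up for illustration; real embeddings have hundreds or thousands of dimensions and are learned from data, and the similarity measure shown (cosine similarity) is one common way of comparing them.

```python
import math

# Invented toy embeddings; real models learn these from data.
emb = {
    "king":   [0.9, 0.8, 0.1],
    "queen":  [0.85, 0.82, 0.15],
    "paltry": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction, 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine(emb["king"], emb["queen"]))   # close to 1: related words
print(cosine(emb["king"], emb["paltry"]))  # much lower: unrelated words
```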
Remember: LLMs “learn” the relationship between words through the way we use them in the large amount of data on which the LLM is trained. Suppose we ask something as simple as ‘What is the capital of the Netherlands?’ The model will first have to recognise this set of words.
Less logical would be: ‘What capital the is Netherlands of?’ The model attaches a different weight to this, because we do not type it that way and because it does not follow from all the text examples the model has already been trained on. In AI, those weights are called parameters.
For the enthusiasts: background article on parameters in ChatGPT.
GPT-3 used 175 billion parameters, and GPT-4 is estimated to have advanced to trillions of parameters. A trillion is a number with 12 zeros.
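To give a feel for where such numbers come from: a single fully connected layer already contains one weight per input-output pair plus one bias per output. A quick sketch (the layer width of 12,288 is the figure commonly cited for GPT-3’s hidden size and is used here purely for illustration):

```python
def linear_layer_params(n_in, n_out):
    """Parameters in one fully connected layer: weights plus biases."""
    return n_in * n_out + n_out

# One square layer of width 12,288:
print(linear_layer_params(12288, 12288))  # 151007232: over 150 million in a single layer
```

Stack dozens of such layers, plus attention and embedding matrices, and the parameter count quickly reaches the billions.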
In addition, the given input (the question) still needs to be linked to the desired output (the answer). In an LLM, this is made possible by a transformer. And because ChatGPT’s LLM has already been trained on a large amount of data, the transformer is also called Pre-Trained; any further tuning on top of that is known as post-training.
With its trillions of parameters (as of October 2024), ChatGPT’s LLM can predict the end of a sentence when it is given the first half, something we, as human language processors, can do just fine, too.
The question from Step 2, ‘How does an LLM know how to make the right choice from that vast neural network?’, does not just have word embeddings as its answer. The model also trains itself by recognising fixed patterns in language.
All languages have examples of these fixed patterns. In English, if you see ‘The cat sat on a …’, you know that the word mat is likely to follow.
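This ‘predict what usually follows’ principle can be sketched with a tiny bigram model in Python: count which word follows which in some training text, then predict the most frequent follower. Real LLMs use neural networks over tokens rather than word counts, but the underlying idea is the same.

```python
from collections import Counter, defaultdict

training_text = (
    "the cat sat on a mat . the cat sat on a mat . "
    "the dog sat on a rug ."
)

# Count, for every word, which words follow it and how often.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word."""
    return follows[word].most_common(1)[0][0]

print(predict_next("a"))  # → mat ("mat" follows "a" twice, "rug" only once)
```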
ChatGPT’s LLM did not have geography lessons from Mr Rob in Year 5. Yet an LLM knows the answer to the question, “What is the capital of the Netherlands?” An LLM is able to do this because the token IDs (the number strings) of the individual words have similar embeddings, the set parameters recognise a coherent sentence, and the pre-training eventually collectively generates an answer.
By the way, the LLM has no idea whether the output generated is correct. The LLM can, however, “learn”, or more precisely: produce different output. This is because an LLM has a digital memory, known in AI as a context window. The LLM can bring up previous interactions and take them into account when giving answers. Or, in AI jargon: when generating output, an LLM takes into account all tokens within its context window.
ChatGPT-4’s context window is 16K tokens in size, roughly 12,000 words of English text. This means that you can easily supply this entire article as context. For a little more interpretation, read the article on CheatSheet.
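A simplified sketch of what this means in practice: when a conversation grows beyond the window, the oldest content falls out of scope. The naive word count below stands in for real token counting, and the function is our own illustration, not how OpenAI actually implements it.

```python
def fit_to_window(messages, window=16):
    """Keep the most recent messages whose combined length fits the window.

    Here 'length' is a naive word count; real systems count tokens.
    """
    kept, used = [], 0
    for msg in reversed(messages):
        length = len(msg.split())
        if used + length > window:
            break
        kept.append(msg)
        used += length
    return list(reversed(kept))

chat = [
    "Hello, can you help me write an email subject line?",  # 10 words
    "Make it about our spring sale",                        # 6 words
    "Keep it under eight words please",                     # 6 words
]
print(fit_to_window(chat, window=16))
# Only the last two messages still fit; the first has fallen out of the window
```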
The writer of this guide is a linguist. I have been interested in language models since my college years. I can set up and work out the brief explanation you just read about an LLM by myself. But if I want to add a section on the benefits of generative AI to that, I’m perfectly capable of asking ChatGPT for help. Which is exactly what I did. Here’s what it said:
Generative AI offers businesses many benefits, ranging from improved efficiency to new creative opportunities. To list them:
Generative AI tools such as ChatGPT or DALL-E can produce texts, images, or videos quickly, allowing companies to respond to market trends faster. In addition, generative AI helps with repetitive tasks such as generating customer service answers to frequently asked questions.
Generative AI can come up with ideas for slogans, social media content or blog topics. It’s a great help when you want to brainstorm on topics. You can also use generative AI to create content for a variety of formats. From videos to infographics, AI enables versatile output.
Generative AI is widely used in chatbots and virtual assistants. It allows you to have natural conversations with customers, solve problems and provide information, 24/7. Furthermore, there is multichannel support: AI can speak to customers consistently across different platforms.
AI can automatically translate and adapt content to different cultures, allowing companies to reach international markets. In this way, smaller companies, in particular, can use AI to stay competitive by producing high-quality content without large budgets.
Using AI for tasks such as copywriting or design reduces the need for large teams or outside agencies. Furthermore, it allows you to optimise ad budgets. In fact, with AI-generated A/B testing and improved targeting, you can increase ad returns.
Of course, the section on the benefits of generative AI is not a direct copy of ChatGPT’s generated output. It has been adapted to fit the writing style and structure of this guide. And so it is with all new technologies. We improve our work with them and adapt them to our needs and conditions. An organisation like Spotler does exactly that.
Generative AI offers our customers a powerful way to work more efficiently, be more creative, and better meet their own customers’ needs. By deploying this technology intelligently, our clients can not only cut costs but also strengthen their competitive position and achieve growth in new markets.