If you ask most marketing or communications professionals how they ended up in this industry, you’ll inevitably hear this phrase at some point in the conversation: “I hate math.”
Irony upon irony, marketing and communications are undergoing a massive transition in how they operate due to, you guessed it (or maybe you didn’t), math.
I recently came across a claim that AI, especially generative AI like ChatGPT, thinks and creates.
But AI doesn’t “think,” based on what I know about it. It uses math to find patterns in data and make predictions based on the patterns. And it’s one of the reasons that, while I appreciate its ability to speed workflows, I also have a lot of questions about it.
AI is sophisticated math (I mean, really sophisticated).
For this blog, when I talk about AI, I’m talking about the Large Language Models (LLMs) that most of us are now using (or should be using) in our everyday workflows, like Claude, Gemini, ChatGPT, and Grammarly, to name a few.
Large Language Models are AI systems trained on massive datasets to understand and generate text that resembles human language. And when I say large, it’s mind-boggling.
If you have the patience and curiosity to explore advanced mathematics, check out 3Blue1Brown on YouTube. In “Large Language Models explained briefly,” Grant Sanderson, a rockstar in multivariable-calculus education circles, put it this way: “For a standard human to read the amount of text that was used to train GPT-3, if they read non-stop, 24-7, it would take over 2600 years. Larger models since then train on much, much more.”
Wait, that’s a reading analogy? What does this have to do with math?
Computers store information in bits – ones and zeroes. All the text fed into Large Language Models to train them has been broken down into smaller units called tokens, which can be words, parts of words, or individual characters. Then, every token is assigned a number (or vector) that the computer can process. Somewhere on the backend, a massive lookup table correlates every token with a specific numerical representation. So, a query you input to ChatGPT will look like 14 9 29 34 to the computer. (That is vastly oversimplified, but you get the gist.)
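The token-to-number lookup described above can be sketched in a few lines of Python. To be clear, this is a toy illustration – the vocabulary and ID numbers here are invented, and real tokenizers have vocabularies of tens of thousands of sub-word pieces:

```python
# Toy lookup table mapping each token to a number.
# Real tokenizers use tens of thousands of sub-word tokens;
# these words and IDs are made up for illustration.
vocab = {"how": 14, "do": 9, "dogs": 29, "bark": 34}

def tokenize(text):
    """Split on spaces and look up each token's ID."""
    return [vocab[token] for token in text.lower().split()]

print(tokenize("How do dogs bark"))  # [14, 9, 29, 34]
```

That list of numbers – not the words themselves – is what the model actually processes.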
Next, all these numbers are run through the nodes of a neural network. The number of nodes depends on what the AI is designed for. You can think of these nodes like neurons in the human brain. Each layer of the neural network helps the AI understand context, sentiment, and the meaning of the text. During training, the AI also adjusts the numbers attached to each node. For example, if the AI was asked to write a sentence about dogs and it wrote one about carrots, someone would tell it that it was wrong, and the AI would adjust those numbers so it picks words associated with dogs the next time. (Again, agonizingly oversimplified.)
The AI then takes all these numbers in its network of nodes, does a bunch of linear algebra – particularly matrix and vector math – and produces new sets of numbers. Again, the math is staggering. According to 3Blue1Brown, even if you could perform one billion additions and multiplications every second, it would still take over 100 million years to complete all the operations involved in training the world’s largest language models. WTF!?
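To give a feel for what one slice of that linear algebra looks like, here’s a minimal sketch of a single neural-network layer: multiply the inputs by a weight matrix, add a bias, and apply a simple nonlinearity. Every number below is made up for illustration – real models do this across billions of learned weights and many layers:

```python
# One neural-network layer, stripped to its essentials.
# The weights and biases would normally be learned during
# training; here they are invented for illustration.
def layer(inputs, weights, biases):
    outputs = []
    for row, bias in zip(weights, biases):
        # Dot product of one weight row with the input vector, plus bias
        total = sum(w * x for w, x in zip(row, inputs)) + bias
        outputs.append(max(0.0, total))  # ReLU: negatives become 0
    return outputs

x = [1.0, 2.0]                   # numbers from the tokenization step
W = [[0.5, -0.25], [1.0, 0.75]]  # made-up "learned" weights
b = [0.1, -0.2]                  # made-up biases

print(layer(x, W, b))  # [0.1, 2.3]
```

Stack dozens of layers like this, each with millions of weights, and you get the scale Sanderson is describing.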
Once all the math is done, the new sets of numbers generated by the AI represent probabilities – how likely each word is to come next in the response to the given prompt.
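That last step – turning the network’s final scores into word probabilities – is typically done with a function called softmax, after which the most likely word wins. Here’s a toy sketch; the candidate words and their scores are invented:

```python
import math

# Softmax turns raw scores ("logits") into probabilities
# that sum to 1. The candidates and scores are made up.
def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

candidates = ["bark", "carrot", "meow"]
logits = [4.0, 0.5, 1.0]  # invented scores from the network

probs = softmax(logits)
best = candidates[probs.index(max(probs))]
print(best)  # bark
```

So “fancy word guessing” is literal: the model computes a probability for every candidate word and picks from the most likely ones.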
So, to sum it up, AI is not thinking. It’s using math to do some really fancy word guessing.
I think you can argue that AI is creating in a sense. If you rearrange words enough times, you’ll get content that has never been created before. But it’s not thinking, at least not in the way I define thinking – using one’s mind to consider or reason about something.
Bringing an AI Expert to the Conversation
So, here’s where I’d like to introduce someone far more intelligent than myself, who has been following the development and trajectory of AI since around 2006: Aaron Perkins, M.S., CISSP. (That stands for Master of Science and Certified Information Systems Security Professional. I had to look it up.)
Aaron, is my conclusion about AI correct? It doesn’t think.
I’ll start by saying that I cannot confirm the veracity of your claim about my being more intelligent. I’ve seen your work, and there’s A LOT I can learn from you!
To your question about whether AI can think – in short, no, though the answer itself is nuanced.
You see this clearly in base models like OpenAI’s GPT-4o, where the model is doing exactly what you explained earlier: predicting what it understands to be the next most logical response to the user’s query.
As we learned pretty quickly though, when the LLMs are trained on everyday conversations (like those on Reddit), you can get some wildly inaccurate responses. Spend a few minutes on Reddit, and you’ll see what I mean. Redditors are rigorous about demanding receipts (i.e., requiring sources), and they (we) are also hilariously brutal with sarcasm and satirical responses.
Other models, such as OpenAI’s o3 (yes, they should hire a marketer to name their models better), do simulate thinking – or at least appear to. They spend time considering and selecting a response, iterating through the various possibilities in the user’s query and finally replying with what they ‘believe’ to be the most appropriate answer.
If AI can’t think, how much content creation should we outsource to it?
For now, I wouldn’t recommend outsourcing content creation to an AI tool without human oversight. While AI is hallucinating less and less, you still want a human in the loop.
(Hallucinating, by the way, is a nice way of saying that AI makes stuff up.)
A better way to think about AI — and in this case, generative AI — is to think about it like a digital assistant. Rather than trying to determine, “What can I outsource to AI?” a better approach is to ask, “What are the things I find difficult to do? What problems do I need to solve?”
By starting with the business problem and working forward from there, you will come up with a far more robust solution. Maybe it’s AI. Maybe it’s a new hire. Maybe it’s a vacation.
Many teams are working backwards by starting with AI and trying to figure out what they can use it for. Working forward from the business problem will not only lead you to a better solution, but also give you a more solid foundation for when you do implement AI and how to get the most out of it.
At some point, if we use AI to create most of our content and it isn’t fed new inputs, won’t we keep recycling the same thoughts over and over again?
In a vacuum, yes, this would be the case. In practice, though, these models are getting regular updates, with new models and new capabilities arriving at what feels like a breakneck pace.
The humans exploring this technology will also keep getting better at using it. We’re learning to write better prompts and to build custom AI tools and assistants, so our results continue to improve.
To your point though, there is a limited amount of information these AIs can train on, which is a very real concern in the industry right now.
For those new to AI, I would recommend focusing more on the problems you’re trying to solve, and if AI comes out on top as the best solution for your problem, then invest in getting better at using the tool – better prompting, fine-tuning, etc.
I refused to use AI to create this blog because I wanted to learn. For me, I knew I’d only start to grasp the concepts if I dug into the topic myself and forced myself to watch YouTube videos and listen to presentations that were waaaaaayyyyy over my head. How would you have used AI to write this blog, and how would it have improved it?
Haha, this is a good question. Using generative AI (I use several tools on a regular basis for different use cases) has completely changed my workflow, especially with how I go about creating content.
For a blog like this, if I didn’t understand the topic, and I wanted to understand it at a level at which I could write about it, I would start by explaining to the AI what I needed to do. (THE PROBLEM)
Then, I would give the AI contextual information that would help the AI create a blog I would likely be happier with (FRAMING).
Once I put all of that into my prompt, I would require the AI to ask any clarifying questions before getting started. (UNDERSTANDING OF TASK)
I’m torn on the value of having the AI draft an outline versus the content itself. For blogs, I usually just outline what I want to say; if my prompt was more free-flowing thoughts than a structured outline, I’ll have the AI respond with an outline in a compelling, logical flow. (STRUCTURING)
Once I am satisfied the AI understands what to do, I’ll have it draft the content. (OUTPUTS)
After that, I’ll review the blog to ensure it meets my standards of quality, that it will resonate with my audience, and that it’s written in whatever style and voice this particular piece calls for. (VERIFICATION)
Aside from the verification, which can take some time depending on the length of the blog, all of that can happen in minutes – and for me, that’s the most direct way the blog ends up ‘better’ than what I could have developed myself.
So, what is the next step for marketing and comms folks like me? I certainly don’t want to bury my head in the sand just because I don’t fully understand AI. But I also don’t want to become so over-reliant on it that I check my brain at the door.
Play. Just play. Stop overanalyzing and just play with it.
Think about the last time you had an incredible amount of fun. Not the last time you had fun working. But the last time you had FUN! What were you feeling? What were you doing? What made it fun?
Chances are, you weren’t thinking about all the things you didn’t understand about whatever it was you were doing at the time. You were just having fun with it.
For creatives, playing with AI is the best way to learn what it can do, what you can accomplish with it, and how to integrate it into your workflow.
To take it up a notch, pick a problem you’re trying to solve…and then play with AI tools to see if they can solve that problem. Maybe you’re stuck on messaging for a difficult client, or you need fresh angles for a campaign that feels stale, or you’re dreading writing another quarterly report. Don’t approach it like “I need to learn AI first, then apply it.” Just throw your real problem at it and see what happens.
This approach does two things: it keeps your brain fully engaged because you’re evaluating everything through the lens of your actual challenge, and it reveals something crucial about AI – it’s incredibly good at generating raw material and variations, but it still needs your professional judgment for strategy, brand voice, and knowing what will actually resonate with your audiences.
The sweet spot isn’t choosing between your expertise and AI capabilities. It’s discovering how they amplify each other when you’re genuinely curious and experimental rather than trying to master some abstract concept of “AI.”
Wrapping Up and Taking Aaron’s Advice
So, instead of getting bogged down in the intricacies of its mathematical underpinnings, and whether it “thinks” or not, my best route forward is to play with these tools. Experiment, challenge them with real-world problems, and discover how this sophisticated math can amplify creativity and efficiency.
In that spirit, I’ve already signed up to join Aaron’s hands-on AI Bootcamp on September 2. I’m excited to learn about the differences between ChatGPT, Claude, and Gemini, and to build custom GPTs that will tailor these tools to be more helpful in my daily routines.
Hopefully, the future of AI in marketing and communications isn’t about replacing human ingenuity but rather augmenting it. As Aaron insightfully suggests, viewing AI as a digital assistant rather than a magic bullet for content creation empowers us to leverage its strengths, like generating raw material and variations at lightning speed, while retaining our critical human oversight for strategy, brand voice, and audience resonance.
