Marketers should use ChatGPT and other generative AIs thoughtfully.
To paraphrase Jeff Goldblum in “Jurassic Park”: don’t be so preoccupied with whether you can write search-optimized marketing copy with AI that you never stop to ask whether you should.
Generative artificial intelligence programs have come a long way, with tools such as ChatGPT now capable of writing a convincing facsimile of human thought. The computers predict which word is most likely to come next in a sentence, based on a “reading” of thousands of websites and other bodies of human-generated text.
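At its simplest, that next-word prediction can be sketched as a toy bigram model that just counts which word tends to follow which. This is an illustrative simplification, not how ChatGPT actually works – real systems use neural networks trained on vastly more text – and the corpus here is invented for the example:

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for the "thousands of websites"
# a real model reads during training.
corpus = (
    "the cat sat on the mat . "
    "the cat ate . "
    "the cat ran . "
    "the dog slept ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen immediately after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" - the most frequent follower of "the"
print(predict_next("sat"))  # "on"
```

Chaining such predictions word by word produces fluent-looking text, which is why the output reads like human writing even though no understanding is involved.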
At first glance, replacing your human content developers with ChatGPT might seem like a pretty sweet deal. The overhead costs obviously would go down significantly, and content generated by bots can cater directly to search engines, which are also bots.
Furthermore, embracing AI would mean your organization would be entering on the ground floor of a burgeoning technology. The AI renaissance is just starting, according to the American Association of Advertising Agencies’ Look Ahead report.
“From the intriguing saplings of AI’s potential in 2022, we’ve woken up in a forest of solid tools creating astounding work, dramatic images, engaging copy and video,” the 4As say. “In the next few years, there will be a hurricane of new AI-driven work, use cases and creators as software such as Stable Diffusion, MidJourney, DALL-E and so many more tools become commonplace and in the hands of just about anybody.”
However, the actual writing of a piece of content – the only thing that AI can do – represents only the tail end of the creative process. Everything up to that point remains firmly in the domain of a human copywriter sitting at a keyboard. The thought leaders at Deloitte say as much in their 2023 Tech Trends Report.
“Even the most sophisticated AI applications today can’t match humans when it comes to purely creative tasks such as conceptualization, and we’re still a long way off from AI tools that can unseat humans in jobs in these areas,” the report says. “A smart approach to bringing in new AI tools is to position them as assistants, not competitors.”
Deloitte also emphasizes that work connected to AI will come with serious baggage until society as a whole can fully trust the computer programs and what they produce. The pervasiveness of AI platforms still reeks of novelty. It’s difficult to explain how they work, it’s difficult to rely on them, and there’s always the possibility that AI vocabulary could veer toward dangerous territory.
Harmful human inputs have arguably been an issue with AI since its public debut. Seven years ago this month, Microsoft was forced to shut down its Twitter-based Tay AI chatbot after just 16 operational hours. Simply because they could, internet trolls exploited the way Tay learned new ways to talk from her conversations, feeding her horrible messages that she then reflected in her own tweets.
You’d think the technology would have progressed since then, but just weeks ago the chatbot embedded in the latest version of Microsoft’s Bing turned nasty mid-conversation with a reporter. This problematic bot was not the final public version – it was a soft launch for a select group of journalists and testers – but the incident suggests that Microsoft didn’t anticipate the bot being engaged in conversation for several hours at a time.
Granted, chatbots are different from generative AI bots like ChatGPT. However, there is always a risk that an AI with no human motivations could insert problematic, awkward, or incorrect text into copy – a higher risk than a human copywriter with bills to pay. For now, AI should support, not supersede, human writers in marketing.