Navigating AI in content: finding a balance between innovation and authenticity

We’ve written about artificial intelligence (AI) a fair few times now. We’ve wondered whether robots will ever replace writers, explored the need for authenticity in a world saturated with AI-generated content, and updated you on Google’s latest generative AI announcements. Since it’s clear that AI is here to stay, as content marketers we need to keep exploring its influence on our industry. 

But there is a point we keep returning to: how to balance authenticity with the need to embrace innovation.

With smartphones and social media breaking through at the beginning of the last decade, decision makers in teams from journalism to marketing saw opportunities to cut costs by relying more heavily on the “free” content and images doing the rounds on then-Twitter and Facebook. However, a demand for quality and reliability from the public forced a balance. 

Now, amid the rapid evolution of AI-generated content, public concern about AI and misinformation is once again encouraging us to reflect on the future of these technologies.

Trust issues – the UK public on AI-generated content and misinformation

A recent YouGov survey conducted among more than 2,000 UK adults sheds light on the growing public anxiety about the authenticity of online content and the potential for misinformation. According to the survey, nearly three-quarters (73%) of respondents say they are worried about AI-generated content, while 76% are concerned about manipulated photos and videos. It’s clear that while AI content raises eyebrows, the public is more suspicious of digitally altered visuals. And with good reason – deepfakes and altered videos have already been used to manipulate public opinion, causing widespread distrust.

Interestingly, the survey reveals a socioeconomic divide in perceptions. Those in the higher (ABC1) social grades are significantly more likely to see AI-generated content (70% vs. 62% of C2DEs) and digitally altered content (79% vs. 69% of C2DEs) as major contributors to misinformation. This could reflect a heightened awareness among higher social grades of the nuances of digital content.

The YouGov article also touches upon “labelling”, a proposed solution which sees AI-generated content marked as such. The survey says opinions are split, with half of respondents believing it could help reduce misinformation, while 29% are sceptical. This mirrors sentiments about digitally altered content, where 50% think labels might be useful but 29% disagree. But here’s the kicker – nearly half (48%) of those surveyed wouldn’t trust the labels on AI-generated content, compared to just 19% who would.

And how do people react when they do encounter AI-labelled content on social media? Perhaps surprisingly, 42% said they “wouldn’t take any immediate action”, which suggests a certain level of neutrality – but 27% said they would block or unfollow the account. Unsurprisingly, the survey reveals a generational divide with differing levels of acceptance, with younger users saying they’d be more likely to engage with AI-labelled posts.

Our take: balancing AI and authenticity

We make no secret of it at Pod. We recognise the incredible potential and power of AI tools like ChatGPT and Google Gemini. We’ve always maintained that where these tools can improve our output and practices, we won’t be afraid to use them. For example, we’ve found them helpful in areas such as summarising text, suggesting options for alternative phrasing, generating ideas, and streamlining processes.

But we’ve also consistently written about our belief in authenticity, the human touch, and bringing our and our clients’ expertise into our content. We think well-crafted content needs to incorporate human nuances that AI still doesn’t offer – such as your tone of voice, anecdotes and firsthand expertise. This helps our clients to differentiate themselves, engage audiences and, crucially, to build trust.

And when it comes to trust, there are two sides of the coin to consider. As a marketing agency, we need to keep nurturing the trust our clients have in our content. This is why we believe in upholding the highest standards of transparency, meaning that if AI were to be used at any stage of the content creation process, our clients should be fully informed of the extent to which it was involved.

The second side of the coin is that our clients should be transparent with their audiences, too. The survey results suggest that labelling content as “AI-generated” will reduce trust in their marketing content. However, being transparent about the extent to which AI was involved – for example, by specifying whether it was used to speed up research or to edit the final text – could have the opposite effect. It could generate trust for the very fact that the company is being honest about its content creation process.

Last but not least, it’s important to remember that when it comes to trust, security is also critical. Sensitive information should never be shared with AI-based tools. AI systems are still prone to breaches and misuse, which is why ensuring data security and confidentiality is non-negotiable in today’s digital landscape.

The future: embracing AI with caution

Looking ahead, it’s clear that AI-generated content is here to stay. But with Google’s March 2024 search update targeting AI-generated “copycat content”, the landscape is set to change. Google aims to reduce unhelpful, low-quality content in search results by 45%, pushing for higher standards and originality.

So, will there be a backlash against AI-generated content? Possibly. As public awareness and scepticism grow, the demand for authentic, high-quality content will likely increase, and marketers will need to adapt. 

While AI offers exciting possibilities, it’s essential to strike a balance – embracing the efficiency that tools like ChatGPT and Google Gemini provide, but without compromising on authenticity and originality. 

At Pod, we’ll always be on the side of trustworthy, engaging and genuinely human content. We think it makes our output better and we know it’s what our clients want from us. The future of content marketing lies in our ability to blend authentic storytelling with the possibilities offered by rapidly improving technology, and we’ll keep working to strike the right balance in each and every piece of content we write.

William Tomaney

An all-round wordsmith, Will knows what makes a great story. He’s passionate about the difference good copy can make to any campaign and cherishes the opportunity to raise awareness of important issues. If you've got something important to say, he’s got what it takes to get the word out there.
