Can AI Tools Worsen the Problem of Mis/Disinformation?


It’s no secret that OpenAI’s ChatGPT – and to a lesser extent, Google’s Bard – has taken a proverbial stranglehold on the tech industry, with many experts questioning the impact these tools could have in the future.

However, there is a more fundamental problem that must be addressed first – the plethora of online misinformation, disinformation and ‘fake news’, which has spread like wildfire since the 2020 US Presidential election and the Covid-19 pandemic.

Misinformation on social media presents very real and far-reaching dangers. Regulators (such as Ofcom in the UK and the FCC in the US) were rather slow to address widespread misinformation, much less implement solutions and policies to curb the easy consumption of such discourse.

Naturally, over the course of 2020 and beyond, dangerous and ill-informed dialogue from all sides of the political, social, economic and cultural spectrums made its way onto websites.

The problem therein is that AI tools have no ability to detect whether the information they scrape from the web, and then repeat, is rooted in fact or conjecture. ChatGPT, Bard and others simply try to provide a definitive answer to the user’s enquiry; they don’t stop to verify whether a particular source has an agenda.

Without any clear filter in place, generative AI programmes could simply be adding fuel to the fire, producing responses that, if copied verbatim and used in an official capacity, could open the door to legal or disciplinary action from a business’s regulators or governing bodies. This is why it is absolutely crucial that these tools are not used without proper consideration for authenticity and validity.

While ChatGPT carries a disclaimer that reads: “ChatGPT may produce inaccurate information about people, places, or facts”, and Bard, more colloquially, says: “I have limitations and won’t always get it right, but your feedback will help me improve”, these warnings are not enough to prevent users from accepting misinformation as genuine.

Luckily, there does seem to be some progress towards overcoming AI-generated misinformation, in the form of new tools that recognise anomalies and false narratives. Bard’s notorious factual error during its first public demo wiped roughly $100 billion off Google’s market value, so it’s fair to say Google wants to avoid making the same mistake again. Until then, we as users have to proceed with caution.

The ability to access content and information so rapidly warrants its own discussion. But if there’s one takeaway here, it’s that users must be careful about taking AI-generated copy at face value, as it may adopt specific narratives. Without any filters to guide us, we could unwittingly be spreading false and misinformed content to ever more people, and only worsening the problem.

At Artemis Marketing, we specialise in all aspects of digital marketing and content creation. We take content quality and delivery very seriously, so if you are interested in learning more, please get in touch with our experienced team today.