AI might not dream, but it does hallucinate: What wrong answers can tell you about choosing the right LLM

Sacha Cody, chief operating officer at BrandComms.AI, Forethought, addresses the pressing issue of AI “hallucinations”.

Who knew the Wachowskis’ dystopian sci-fi The Matrix would actually set the scene for the AI landscape some 25 years on?

As AI becomes an ever-present part of our daily lives, questioning and scepticism of its abilities – and more recently, whether it all exists in a bubble – have become commonplace. But perhaps the most pressing issue is the phenomenon of AI “hallucinations”.

These are more than just the same cat passing by twice: they are instances where AI models generate outputs that sound plausible but are, at best, incorrect. At worst, they are downright nonsensical. Understanding and mitigating AI hallucinations is crucial for ensuring the reliability and credibility of a brand or business’s AI-generated content.
