AI might not dream, but it does hallucinate: What wrong answers can tell you about choosing the right LLM
Sacha Cody, chief operating officer at BrandComms.AI, Forethought, addresses the pressing issue of AI “hallucinations”.
Who knew the Wachowskis’ dystopian sci-fi The Matrix would actually set the scene for the AI landscape some 25 years on?
As AI becomes an ever-present part of our daily lives, questioning and scepticism of its ability – and more recently, whether it all exists in a bubble – have become commonplace. But perhaps the most pressing issue is the phenomenon of AI “hallucinations.”
These are more than just the same cat passing by twice: they are instances where AI models generate outputs that sound plausible but are, at best, incorrect and, at worst, downright nonsensical. Understanding and mitigating AI hallucinations is crucial for ensuring the reliability and credibility of a brand or business’s AI-generated content.