Why trust actors but not AI?
In this guest post, Made This managing director Vinne Schifferstein Vidal examines a double standard around authenticity when it comes to AI.
We have never revealed the extent to which "real" advertising is inauthentic (Midjourney)
In countless conversations with industry leaders and clients, one theme keeps surfacing: transparency — especially when it comes to using artificial intelligence in our creative work.
Clients often feel compelled to disclose AI use, fearing a backlash that could tarnish their brand’s authenticity. Many are comfortable using AI-generated elements for backgrounds, textures, or product mockups. But replacing real people? That’s still often seen as crossing a line.
But here’s the uncomfortable question: why?
Audiences have never been told that the smiling woman on the billboard had every blemish, freckle, and wrinkle retouched. No one ever disclosed that the idyllic sunset behind a car in a TVC was extended with CGI or, more recently, generative fill in Photoshop. Designers and creatives have been using post-production tools like Facetune, Clone Stamp, After Effects, and now generative AI for years — without a whisper of explanation. And no one cared or questioned it.