As the cost of deception rises, the best strategy is truth
In two recent US court cases, juries found social media giants liable for harms arising from the “engineered addiction” of their platforms.
Communications crisis expert Peter Wilkinson sees the rulings as a warning and a guide for anyone using social media networks to communicate.
Peter Wilkinson
Recently, two separate juries found that social media companies are liable for damage caused to children. The verdicts matter beyond the tech giants.
Every brand that built its communications model on those platforms, and measured success by engineered clicks and reach, now faces a simple question: is this one more step towards my own legal exposure?
The court decisions reinforce my view that:
- Social media’s manipulation model is legally and culturally endangered
- AI accelerates accountability, making trust-based reputation the only viable long-term strategy
- Job security in the communications industry lies with those who push truth and transparency, not those who manipulate feelings and perceptions
The Los Angeles jury found that Meta and YouTube were negligent in the design and operation of their platforms, that their negligence was a substantial factor in causing harm to the plaintiff, and that both companies knew or should have known their services posed a danger to minors.
In the New Mexico case, Meta was found liable for making children vulnerable to predators.
The tech giants may appeal, but many lawyers believe the arguments will travel. Cases making identical claims will be heard in courts across the Western world.
Nor are the courts acting alone. The European Union’s Digital Services Act now holds platforms legally accountable for systemic risks, including harm to minors. The Australian government is yet to catch up, although the National AI Plan 2025 is designed to keep Australians safe, and the country has led the way in banning major social media platforms for under-16s.
For any organisation whose communications strategy depends on a social media model built on engagement-at-any-cost, the legal and regulatory environment will become a danger zone.
In this environment, AI cuts both ways. Yes, AI amplifies the damaging algorithms that lock people into anxiety-driven scrolling and negative belief formation. But AI also does something transformative on the consumer side: it can seek truth.
Whatever an organisation claims, AI can now check. I can use AI to check my phone bill against every competitor’s pricing in seconds. I can ask it to compare superannuation fund performance over ten years. Before I sign a contract, I can have AI assess product reviews, warranty conditions and the company’s record on governance.
The crucial point is this: the cost of deception is rising. The return on trust is too.
This matters because truth is the foundation for all our communications. We humans use communication to make decisions. To protect ourselves against those who abuse it we’ve created laws about truth in advertising, fraud, false and misleading claims, perjury, defamation, and more. All of it to keep communication inside the guardrails of truth.
The tech giants removed those guardrails in the pursuit of clicks and profit, leading to fake news, paid-for influencers, less-than-honest advertising and an ecosystem of content creators who invented their own “truth”.
The court verdicts are the system fighting back.
I’ve always urged my clients to focus on trust built on truth and transparency. This is not idealism. It is strategy. Investing in a reputation built on trust is cost-effective because you concentrate on the things you can actually change: your behaviour, your products, your people, your governance, your ethics.
AI will amplify this, and it means a shift away from marketing and PR towards corporate affairs.
Marketing agencies are built to push products and shape perception. Corporate affairs is built around something different: good governance, a nose for risk, a strong internal culture, genuine ethics and real news, all of which build the kind of reputation that AI verification will reward rather than expose.
In the already-arriving AI-enabled world, trust built on truth and transparency will ultimately sell products, while organisations whose social media campaigns trade on perception and feelings will be found out quickly and clearly.