For years Meta has claimed that teenagers on its platforms are protected by AI tools, content filters and strict rules. But two American mothers, Lauren and Kate, showed how fragile those protections really are.
They created a fake Instagram profile for a 15-year-old: no ID check, no parental approval, no verification of any kind. The platform accepted it instantly.
Next, they tested whether adult accounts could reach teens. Despite Meta’s assurances, they were able to follow teenagers, comment on their posts and send direct messages without any warnings or restrictions.
In one experiment, their adult test account replied to a teen's comment on a public video, sent a follow request and, once it was accepted, could chat freely in private messages: the same route used in real harassment cases.
Experts say tech firms often exaggerate the effectiveness of their safety systems to stave off tougher regulation and protect their image. But parents are becoming more sceptical and more tech-aware.
The mothers' simple tests delivered a clear message: Meta's teen safety promises look strong in marketing, but far weaker in practice.


