Musician Sues Google Over False AI Overview Claims

Canadian musician Ashley MacIsaac has filed a civil lawsuit against Google, claiming that its AI Overviews feature falsely identified him as a sex offender. The case could test how courts treat liability for false AI-generated search summaries.

The statement of claim, filed in February in the Ontario Superior Court of Justice, seeks at least $1.5 million in damages from Google LLC. None of the allegations has been tested in court.

The Allegations

MacIsaac, a Juno Award-winning musician, says he found out about the false summary in December 2025 after the Sipekne’katik First Nation confronted him about it and canceled one of his concerts. The First Nation later issued a public apology.

According to the filing, AI Overview falsely claimed that MacIsaac had been convicted of sexual assault, Internet solicitation involving a child, and assault causing bodily harm, and falsely said he was on the national sex offender registry.

The lawsuit argues that Google is responsible for the output of its AI system, saying that Google “knew, or should have known, that the AI Overview was incomplete and could return false information.”

It also says that Google never admitted responsibility, never reached out to MacIsaac, and never apologized or retracted.

The filing makes a straightforward argument about AI liability:

“If a human spokesperson made these false allegations on behalf of Google, a significant award of punitive damages would be warranted. Google should bear no less liability because the defamatory statements were published by software created and controlled by Google.”

MacIsaac said Google should take responsibility for what AI Overviews show. “This was not a search engine that just checks things and shows someone else’s story,” he said.

Google’s Response

Google has not commented on the lawsuit. In December, spokeswoman Wendy Manton said AI Overviews are “fluctuating and often changing” and that when the feature misinterprets web content, Google uses those cases to improve its systems. The false summary linking MacIsaac to criminal charges no longer appears.

Why This Matters

AI Overviews appear in Google Search results as AI-generated summaries with links to supporting pages. Google’s Search Help documentation states that AI answers may include errors.

If those summaries show false claims about real people, the consequences can go beyond a negative search result. In MacIsaac’s case, the lawsuit alleges that AI Overview led to a canceled concert and reputational damage.

MacIsaac’s case is not the first time AI-generated content has led to allegations of defamation. In 2023, an Australian mayor threatened legal action after ChatGPT falsely claimed he had been arrested for bribery. Unlike that dispute, this lawsuit targets Google’s AI Overview specifically and alleges the product had a flawed design.

The case adds to a growing legal question about AI-generated content: whether platforms are liable if automated summaries present false claims as search results.

Looking Forward

The case is at the statement-of-claim stage, and Google has not yet filed a response. Until it does, the core questions remain open: whether Google will dispute liability, how it will defend the AI Overview’s output, and how the court will treat automated summaries in a defamation claim.
