Canadian musician sues Google after AI Overview falsely identified him as sex offender

May 6, 2026

Ashley MacIsaac has filed a $1.5 million lawsuit against Google, alleging the company’s AI-generated search summaries falsely identified him as a convicted sex offender. The lawsuit claims Google’s AI Overview feature incorrectly stated that the three-time Juno Award winner had committed multiple sexual offences, including crimes involving children, and had been placed on Canada’s national sex offender registry for life.

The civil claim, filed in Ontario Superior Court, argues that Google is responsible for the “foreseeable republication” of defamatory content generated through its AI systems. According to the lawsuit, the AI Overview published false allegations that MacIsaac had been convicted of sexual assault, internet luring involving a child, and assault causing bodily harm.

The suit also argues that Google bears liability not only for publishing the information, but for the design of the AI system itself. “As the creator and operator of the AI overview, Google is also liable for injuries and losses arising from the AI overview’s defective design,” the lawsuit states. “Google knew, or ought to have known, that the AI overview was imperfect and could return information that was untrue.”

MacIsaac is seeking $500,000 in general damages, $500,000 in aggravated damages and another $500,000 in punitive damages.

According to the filing, the impact extended beyond online misinformation and directly affected the musician’s career. MacIsaac said he first became aware of the false claims after the Sipekne’katik First Nation cancelled a scheduled concert appearance in December following complaints from members of the public who had seen the AI-generated search results.

The First Nation later publicly apologized to MacIsaac, acknowledging that the decision was based on incorrect information generated by an AI-assisted search. “Decisions were based on incorrect information generated through an AI-assisted search, which mistakenly associated you with offenses unrelated to you,” the statement said. “We deeply regret the harm this caused to your reputation and livelihood.”

MacIsaac previously told The Canadian Press that the incident left him fearful about appearing in public. “I feared for my own safety going on stage because of what I was labelled as,” he said. “And I don’t know how long this will follow me.”

The lawsuit further alleges that Google neither contacted MacIsaac nor issued a direct apology after the misinformation surfaced. It describes the company’s response as “cavalier and indifferent,” arguing that software-generated defamation should not reduce a company’s legal responsibility.

“If a human spokesperson made these false allegations on Google’s behalf, a significant award of punitive damages would be warranted,” the lawsuit states. “Google should not have lesser liability because the defamatory statements were published by software that Google created and controls.”

In a statement provided through his lawyers, MacIsaac said the case goes beyond his personal experience and raises broader concerns about the reliability of generative AI systems. “When I first discovered the false statements Google was publishing about me, I felt I needed to speak out to the media to clear my name and bring attention to the issue. I believe this is a serious issue, that needs to be resolved in the courts,” he said. 

Google has not publicly commented on the lawsuit itself. However, when the issue first became public in December, the company said its AI Overviews are continuously updated and improved. “AI Overviews frequently improve to show the most helpful information, and we invest significantly in the quality of responses,” a Google spokesperson said at the time. “When issues arise – like if our features misinterpret web content or miss some context – we use those examples to improve our systems and may take action under our policies.”


The disputed AI Overview has since changed. It now reportedly includes a line noting that MacIsaac “made headlines for taking legal action against Google.”

The lawsuit arrives as technology companies face growing scrutiny over generative AI systems that confidently produce false or misleading information, commonly referred to as hallucinations. While many AI tools are marketed as research and productivity assistants, legal experts and regulators are increasingly examining what happens when inaccurate outputs cause measurable reputational or financial harm.





Mary Dada

Mary Dada is the associate editor for Tech Newsday, where she covers the latest innovations and happenings in the tech industry’s evolving landscape. Mary’s writing ranges from analyses of emerging digital trends to explorations of the business side of innovation.

Jim Love

Jim is an author and podcast host with over 40 years in technology.
