HeadlinesBriefing.com

OpenAI found breaching Canadian privacy laws

Engadget
Canadian privacy regulators concluded that OpenAI failed to follow federal and provincial rules while training its language models. The investigation, led by Privacy Commissioner Philippe Dufresne, cited violations of the Personal Information Protection and Electronic Documents Act—PIPEDA—and similar provincial statutes. The findings focused on how the company collected and used personal data without proper safeguards.

Regulators said OpenAI gathered vast amounts of personal information without adequate protection, and that it did not obtain consent before collecting or using that data. While ChatGPT users see a warning that their conversations may be used to train the model, the third‑party data OpenAI purchased or scraped often contains personal details that the people it describes never knew were collected. Those individuals also lack any way to delete or correct such data.

OpenAI has already retired earlier models that breached Canadian law and now employs a filtering tool to mask personal details in publicly available internet data and licensed datasets. The company pledged to add a notice to the signed‑out version of ChatGPT within three months, and to improve data export tools and accuracy‑challenge mechanisms within six months, as outlined in the commissioners' report.

The probe began in 2023 and gained urgency after OpenAI flagged a user linked to the February 2026 Tumbler Ridge shooting without alerting Canadian law enforcement. Regulators now demand the company enhance safety protocols and cooperate with authorities. The ruling underscores the need for clear consent and data‑handling standards in AI training.