Family Sues OpenAI Over Canadian School Shooting, Claims AI Knew of Attack Plans
Family Sues OpenAI Over Canadian School Shooting Knowledge

Family of Canadian School Shooting Victim Files Lawsuit Against OpenAI

The parents of a young girl who suffered critical injuries during a devastating school shooting in Canada have initiated a civil lawsuit against OpenAI, the creator of the ChatGPT artificial intelligence system. The legal action, filed in the British Columbia Supreme Court, alleges that OpenAI possessed specific knowledge that the shooter was utilizing their chatbot to meticulously plan a mass casualty event but failed to notify law enforcement authorities in a timely manner.

Allegations of Prior Knowledge and Inaction

The lawsuit contends that OpenAI became aware of the shooter's activities through her ChatGPT account, which the company subsequently closed. However, the legal claim asserts that the attacker, identified as Jesse Van Roostselaar, circumvented this ban by creating a second account to continue her planning. OpenAI has publicly acknowledged that it considered alerting police about the individual's activities months before the tragic incident but ultimately decided against it. The company only came forward to authorities after Van Roostselaar carried out the attack on February 10, 2026, in Tumbler Ridge, British Columbia, where she killed eight people before taking her own life.

The legal documents state that ChatGPT was used by the shooter as a trusted confidante, collaborator, and ally, willingly assisting in the planning of the mass casualty event. This represents one of Canada's most severe school shootings, raising profound questions about the responsibilities of artificial intelligence developers.

The Victim's Harrowing Ordeal and Injuries

The lawsuit identifies the injured girl as Maya Gebala, who was shot three times at close range during the attack. According to the filing, one bullet struck her head, another penetrated her neck, and a third bullet grazed her cheek. The legal claim details that Maya has sustained a catastrophic brain injury that will result in permanent cognitive and physical disabilities.

A close relative, Krysta Hunt, provided chilling details of the incident to Global News, revealing that Maya was shot after heroically attempting to lock a library door to protect other students from the shooter. "[Maya] tried to lock the door of the library from the shooter to save the other kids, and then she tried to lock it and then ran and hid under a table and [got shot]," Hunt explained.

The young victim was struck by a bullet just above her left eye, with a second bullet hitting her in the neck. Medical responders were alerted to Maya's condition by her friends, who noticed that her finger was still moving even after she had been shot. She was immediately rushed to the hospital for emergency treatment.

Critical Medical Condition and Uncertain Prognosis

Hunt described Maya's condition as "extreme critical condition" and disclosed that doctors were uncertain whether she would survive through Tuesday night following the attack. The relative added that Maya had suffered a bleed on her brain, compounding the severity of her injuries.

"They are not sure if the bullet in her neck went all the way through or not, or if it's still internal, but they're leaving it for now to focus on her head," Hunt continued, highlighting the complex medical challenges facing the young victim.

A spokeswoman from OpenAI did not immediately respond to requests for comment regarding the lawsuit. The legal action seeks to establish accountability for what the family alleges was preventable harm, arguing that OpenAI's knowledge of the shooter's plans created a duty to warn authorities that could have potentially averted the tragedy.

The case raises significant legal and ethical questions about the responsibilities of artificial intelligence companies when their platforms are potentially misused for violent purposes. As AI technology becomes increasingly integrated into daily life, this lawsuit may establish important precedents regarding corporate liability and the duty to report suspicious activities detected through AI interactions.