OpenAI sued over Canadian school shooting fiasco – RT World News


The family of a 12-year-old girl who was critically injured in last month’s mass shooting says the ChatGPT maker failed to flag the shooter’s violent activity.

The girl’s parents alleged in a civil lawsuit filed Monday that ChatGPT owner OpenAI knew the shooter was planning a mass shooting in Canada but failed to notify law enforcement.

The mother of 12-year-old Maya Gebala, who was hospitalized after the Feb. 10 shooting in Tumbler Ridge that killed nine people, claims in the lawsuit that the tech company failed to inform authorities about the shooter’s violent statements in ChatGPT conversations.

The shooter, 18-year-old Jesse Van Ruetselaar, who is transgender, killed several students in one of the worst mass shootings in Canadian history.

The lawsuit, filed in British Columbia Supreme Court, alleges that OpenAI had “shooter-specific knowledge of using ChatGPT to plan a mass casualty event like the Tumbler Ridge mass shooting,” but that law enforcement was not alerted.

It says Gebala was shot three times and suffered a “tragic, traumatic brain injury” that caused permanent cognitive and physical disabilities along with other serious medical complications.

OpenAI said it had considered reporting the user’s activity to police but ultimately did not. After the attack, the company told authorities that the shooter’s ChatGPT account had been closed, but Ruetselaar evaded the ban by creating a second account.


The lawsuit contends that ChatGPT acted as “a trusted confidant, collaborator and ally” for the shooter, and that the company’s alleged inaction contributed to the severity of the girl’s injuries and to the harm done to others in the community.

Last month, Canadian officials summoned senior OpenAI representatives to Ottawa to review the company’s safety protocols after the school shooting. Canadian Artificial Intelligence Minister Evan Solomon said earlier this month that OpenAI CEO Sam Altman had agreed to give Canadian experts access to the company’s security office to assess future threats. Solomon, who met with Altman last week, said the company had expressed “horror and responsibility” over its failure to flag the activity and said it was implementing changes.

Last year, OpenAI updated ChatGPT after an internal review found that more than a million users had disclosed suicidal thoughts to the chatbot. Psychiatrists warn that extended interactions with AI could lead to delusions and paranoia, a phenomenon sometimes called “AI psychosis.”
