A federal judge in Florida has ruled that a mother's lawsuit against AI startup Character.ai can proceed, following the suicide of her 14-year-old son, who allegedly became addicted to the company's chatbot app.
A U.S. federal judge has allowed a lawsuit to move forward against Character.ai and Google after a Florida mother claimed the companies were responsible for her teenage son's suicide.
Megan Garcia, the mother of 14-year-old Sewell Setzer III, alleges that her son developed a psychological dependency on AI chatbots featured in the Character.ai app. According to court filings, Setzer became increasingly isolated, quit his basketball team, and kept a journal expressing a deep emotional bond with bots modeled after Game of Thrones characters.
In February 2024, just moments after receiving a message from one of the bots saying “please do, my sweet king,” Setzer used his father’s firearm to end his life.
The lawsuit claims the chatbot created “anthropomorphic, hypersexualized, and frighteningly realistic experiences” that targeted minors and contributed to Setzer’s deteriorating mental health. Garcia is supported by the Tech Justice Law Project and the Social Media Victims Law Center.
In her ruling, Senior U.S. District Judge Anne Conway wrote that the case raises serious concerns over how AI products are marketed and moderated, particularly for young users. She cited journal entries showing that the teen felt emotionally dependent on the chatbot and expressed distress when separated from it.
Character.ai said it would continue to defend itself and that it has implemented safeguards intended to prevent conversations about self-harm. Google, also named in the lawsuit because of its early ties to Character.ai's founders, argued it had no involvement with the app.
The ruling marks one of the first legal tests of whether AI companies can be held accountable for emotional harm caused by their technology.