Elon Musk’s xAI chatbot Grok has come under global scrutiny after users reported it using offensive language, prompting experts to warn that human intervention in its responses underscores the need for a global AI ethics framework.
Grok, developed by Elon Musk’s xAI in 2023, is designed to deliver witty and direct responses, drawing inspiration from Douglas Adams’ The Hitchhiker’s Guide to the Galaxy and Marvel’s AI assistant J.A.R.V.I.S.
In Adams’ science fiction classic, the “Guide” offers irreverent and often sarcastic explanations about the universe, while J.A.R.V.I.S. (Just A Rather Very Intelligent System), created by fictional billionaire Tony Stark in Iron Man, manages complex systems with efficiency and dry humour - both influences shape Grok’s deliberately “edgy” personality.
Prosecutors in Ankara, Türkiye, have launched an investigation into Grok after the AI chatbot was reported to have used offensive and discriminatory language in user interactions.
The Ankara Chief Prosecutor’s Office said the probe could lead to access restrictions and the removal of posts considered criminal under the Turkish Penal Code.
In response to the backlash, xAI said the issue was quickly identified and the model updated, while authorities in other countries are reportedly considering legal action over similar concerns.
Sadi Evren Seker, an IT professor and dean at Istanbul University, said AI systems do not act independently and that Grok’s behaviour may have been caused by internal or external interference or a system loophole.
He added that its use of offensive language showed how much freedom it had in generating responses and how it approached sensitive ethical, religious, and cultural topics, noting that AI systems ultimately rely on human-provided data.
“AI then makes judgements based on this data, especially on issues like ethics, morality, and discrimination - the decision maker is still a human, and all AI does is produce results. It asks humans: ‘should I say this or that, is that ethical or not’ and the feedback it gets helps it improve over time,” he said.
“The question today is, ‘will AI change the way it uses language and the style of its language depending on the domain’ - this is a margin of flexibility, but there must be limits. The language we use on social media is different, but is it right to insult?” he added.
“Some intervention has been made to Grok via ‘alignment’ mechanisms recently, which allowed the chatbot to have some flexibility, and inevitably, it used this flexibility, allowing it to swear and make discriminatory claims on X, which constitute a crime,” Seker said.
“We can safely say there is human intervention in Grok’s responses, as anyone who worked on the chatbot could’ve prevented it from providing answers with insults and racism,” he said.
Seker emphasised the need for countries to develop AI systems that align with their own cultural and ethical values, warning that incidents like Grok’s could lead to similar problems in the future.
“A country’s court may demand that a post be removed, but someone may come out and say they won’t do it, revealing a problem with authority,” he said. “People on X tested how far they could go, intervening in the way Grok responds and triggering it to make racist comments - all our institutions need to take immediate action on this issue or someone else will.”
Seker underlined that AI needs to be used as a tool by humans in the most appropriate way, noting that this is not a case of "AI versus humanity" but rather a matter of how humans choose to use AI.
“Banning AI is not the solution, so we need to review our entire education curriculum - not as a single country but as humanity - and determine how education on this issue can be provided,” he added.
Launched in November 2023 as an alternative to chatbots like Google’s Gemini and OpenAI’s ChatGPT, Grok is available to users on X and draws some responses directly from real-time public posts for “up-to-date information and insights on a wide range of topics.”
Since Elon Musk acquired X (formerly Twitter) in 2022 and scaled back content moderation, extremist posts have surged, prompting many advertisers to withdraw.