AnewZ Morning Brief - 2 November 2025
Start your day informed with AnewZ Morning Brief: here are the top news stories for 2 November, covering the latest developments you need to know.
At its inaugural developer conference on Thursday, Anthropic unveiled two new AI models, Claude Opus 4 and Claude Sonnet 4, part of its next-generation Claude 4 family.
The company claims these models are among the most advanced in the industry, capable of long-horizon reasoning, complex task execution, and robust performance on popular programming and math benchmarks.
Claude Opus 4, the flagship model, is designed for in-depth problem-solving across multiple steps, while Claude Sonnet 4 serves as a more accessible alternative with significant upgrades over its predecessor, Sonnet 3.7. Both models are tuned for code writing, editing, and logical reasoning, making them suitable for a range of developer and enterprise use cases.
Users of Anthropic’s free chatbot apps will gain access to Sonnet 4, while Opus 4 will be reserved for paying users, with API access offered via Amazon Bedrock and Google Vertex AI. Pricing is set at $15/$75 per million tokens (input/output) for Opus 4 and $3/$15 for Sonnet 4 — with a million tokens equating to roughly 750,000 words.
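For readers trying to gauge what those rates mean in practice, here is a minimal back-of-the-envelope sketch in Python using only the per-million-token prices quoted above; the token counts and the helper function are illustrative assumptions, not an official Anthropic cost calculator.

```python
# Rough cost estimate from the per-million-token prices quoted in this article:
# Opus 4 at $15 input / $75 output, Sonnet 4 at $3 / $15.
# The example token counts below are made up for illustration.

PRICES_PER_MILLION = {
    "opus-4": {"input": 15.00, "output": 75.00},
    "sonnet-4": {"input": 3.00, "output": 15.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the approximate USD cost of a single request."""
    p = PRICES_PER_MILLION[model]
    return (input_tokens / 1_000_000) * p["input"] + (output_tokens / 1_000_000) * p["output"]

# Example: a 2,000-token prompt with a 1,000-token reply on each model.
for model in PRICES_PER_MILLION:
    print(model, round(estimate_cost(model, 2_000, 1_000), 4))
```

On those assumed token counts, the same request comes to roughly $0.105 on Opus 4 and $0.021 on Sonnet 4, which illustrates the five-fold price gap between the two tiers.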
The Claude 4 release is part of Anthropic’s broader push to scale revenue, which it aims to grow to $12 billion by 2027 from a projected $2.2 billion in 2025. The company, founded by former OpenAI researchers, recently secured $2.5 billion in credit and significant backing from Amazon and other investors to support continued development of its “frontier” models.
According to Anthropic’s internal benchmarks, Opus 4 outperforms rivals such as Google’s Gemini 2.5 Pro and OpenAI’s GPT-4.1 on coding tasks like SWE-bench Verified. However, it lags behind OpenAI’s o3 model on the multimodal MMMU evaluation and on GPQA Diamond, a benchmark of PhD-level science questions.
To mitigate risks, Anthropic is releasing Opus 4 under enhanced safety protocols, including stricter content moderation and cybersecurity measures. The model is being deployed under Anthropic’s ASL-3 safeguards, a standard applied to systems that could meaningfully assist in the development of weapons of mass destruction, a risk Anthropic acknowledges and is actively working to contain.
Both models are described as “hybrid” systems, giving near-instant responses for simple tasks and switching to an extended “reasoning mode” for deeper challenges. When reasoning, the models provide summaries of their thought processes, though Anthropic does not expose the full reasoning trace, in part to protect competitive secrets.
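The article does not detail the developer-facing controls for this behaviour. As a hedged sketch only, extended reasoning of this kind is typically toggled per request in Anthropic’s Python SDK along the following lines; the model ID, token budget, and prompt below are placeholders, not values confirmed by this report.

```python
# Hypothetical sketch of calling a Claude 4 model with extended reasoning enabled.
# Assumes the official `anthropic` Python SDK is installed and an API key is set;
# the model ID and thinking budget are illustrative guesses, not confirmed here.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4",  # placeholder model ID
    max_tokens=2048,
    thinking={"type": "enabled", "budget_tokens": 1024},  # extended "reasoning mode"
    messages=[{"role": "user", "content": "Walk through a plan to refactor this module."}],
)

# The response interleaves summarized thinking blocks with the final answer text.
for block in response.content:
    print(block.type)
```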
Notably, Opus 4 and Sonnet 4 can use external tools in parallel, extract and retain useful information in memory, and alternate between tool use and reasoning — a setup Anthropic says builds “tacit knowledge” over time.
The company also announced enhancements to Claude Code, its agentic coding tool, including SDK support, IDE integration, and GitHub connectors. Developers can now run Claude Code inside VS Code and JetBrains IDEs and use it to respond to GitHub review feedback or fix coding errors automatically.
While acknowledging the limitations of current AI in producing secure and logically sound code, Anthropic is betting on rapid iteration to stay ahead. “We’re shifting to more frequent model updates,” the company said in a draft blog post. “This approach keeps you at the cutting edge as we continuously refine and enhance our models.”
As the AI arms race intensifies, Anthropic’s Claude 4 launch reflects its determination to secure a leading position in the development of high-performance, safe, and commercially viable AI systems.
Reports from CNN say the Pentagon has approved the provision of long-range Tomahawk missiles to Ukraine after assessing the impact on U.S. stockpiles, while leaving the final decision to President Trump.
Tanzanian police fired tear gas and live rounds on Thursday to disperse protesters in Dar es Salaam and other cities, a day after a disputed election marked by violence and claims of political repression, witnesses said.
Ukraine’s top military commander has confirmed that troops are facing “difficult conditions” defending the strategic eastern town of Pokrovsk against a Russian force numbering in the thousands.
Residents of Hoi An, Vietnam’s UNESCO-listed ancient town, began cleaning up on Saturday as floodwaters receded following days of torrential rain that brought deadly flooding and widespread destruction to the central region.
The United Nations has warned of a catastrophic humanitarian situation in Sudan after reports emerged of mass killings, sexual violence, and forced displacements following the capture of al-Fashir by the Rapid Support Forces (RSF).
Nvidia has announced a major partnership with the South Korean government and top companies to strengthen the country’s artificial intelligence capabilities by supplying hundreds of thousands of its advanced GPUs.
Character.AI will ban under-18s from chatting with its AI characters and introduce time limits, following lawsuits alleging the platform contributed to a teenager’s death.
A small, silent object from another star is cutting through the Solar System. It is real, not the plot of a film, and one scientist thinks it might be sending a message.
A 13-year-old boy in central Florida has been arrested after typing a violent question into ChatGPT during class, prompting an emergency police response when school monitoring software flagged the message in real time.
Nokia chief executive Justin Hotard said artificial intelligence is fuelling a structural growth cycle similar to the internet expansion of the 1990s, but rejected fears that investor enthusiasm has reached unsustainable levels.