Meta releases new AI model Llama 4

Reuters

Meta Platforms has unveiled the latest iteration of its large language model, introducing two new versions—Llama 4 Scout and Llama 4 Maverick—on Saturday.

In a statement, Meta described these models as its "most advanced yet" and "the best in their class for multimodality," emphasizing their ability to process and integrate multiple types of data, including text, video, images, and audio.

The new Llama 4 models are set to be released as open source, which Meta says will foster broader innovation by giving developers and researchers free access to state-of-the-art AI technology. In addition, Meta previewed Llama 4 Behemoth, which it touted as "one of the smartest LLMs in the world" and its most powerful model to date, intended to serve as a benchmark and teacher for subsequent models.

The release comes at a time when investment in AI infrastructure is surging, following the transformative impact of OpenAI's ChatGPT on the tech landscape. Meta has announced plans to spend up to $65 billion this year to expand its AI capabilities, amid increasing investor pressure on big tech firms to demonstrate robust returns on their AI investments.

According to reports from The Information, the launch of Llama 4 was delayed because early versions of the model fell short of Meta's technical benchmarks, particularly on reasoning and math tasks, and proved less adept at humanlike voice conversations than competing models from OpenAI.

With Llama 4 Scout and Llama 4 Maverick, Meta aims to reclaim ground in the competitive AI space by delivering versatile, multimodal systems that not only perform a wide range of tasks but also encourage open collaboration in the tech community. As the race in AI innovation heats up, Meta’s new models represent a significant strategic move to both address past performance challenges and push the boundaries of what artificial intelligence can achieve.
