Meta unveils new AI-powered LLaMA models

February 27, 2023

Meta has announced a new AI-powered large language model (LLM) that can run on a single graphics processing unit (GPU) rather than a cluster of GPUs. The model, LLaMA-13B, can reportedly outperform OpenAI’s GPT-3 despite being “10x smaller.”

LLaMA is in fact a family of language models with parameter counts ranging from 7 billion to 65 billion. In comparison, OpenAI’s GPT-3 model, which serves as the foundation for ChatGPT, has 175 billion parameters. LLaMA is not a chatbot in the traditional sense; it is a research tool that, according to Meta, is intended to help researchers address problems with AI language models. It was trained on publicly available datasets such as Common Crawl, Wikipedia, and C4, which means the company could potentially open source the model and its weights.

Smaller models trained on more tokens (word fragments) are easier to retrain and fine-tune for specific potential product use cases, according to Meta. Accordingly, LLaMA 65B and LLaMA 33B were trained on 1.4 trillion tokens, while LLaMA 7B, the smallest model, was trained on one trillion tokens.

LLaMA competes with similar offerings from rival AI labs DeepMind, Google, and OpenAI. It is also said to outperform GPT-3 across eight standard “common sense reasoning” benchmarks, such as BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, and OpenBookQA, while running on a single GPU. In contrast to the data-center requirements of GPT-3 derivatives, LLaMA-13B could pave the way for ChatGPT-like performance on consumer-level hardware in the near future.
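The single-GPU claim comes down to simple arithmetic on parameter counts. As a rough back-of-the-envelope sketch (the bytes-per-parameter figures below are illustrative assumptions about storage precision, not numbers from Meta), memory needed just to hold the weights scales linearly with model size:

```python
def weight_memory_gb(params_billions: float, bytes_per_param: int) -> float:
    """Approximate gigabytes needed just to store the model weights."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# LLaMA-13B at 16-bit (2-byte) precision: roughly 26 GB of weights,
# within reach of a single high-memory GPU; about 13 GB if quantized
# to 8 bits.
print(weight_memory_gb(13, 2))   # 26.0
print(weight_memory_gb(13, 1))   # 13.0

# GPT-3's 175 billion parameters at 16 bits: roughly 350 GB,
# far beyond any single GPU's memory.
print(weight_memory_gb(175, 2))  # 350.0
```

This ignores activation memory and other runtime overhead, but it illustrates why a 13-billion-parameter model is a plausible single-GPU workload while a 175-billion-parameter one is not.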

“Smaller, more performant models such as LLaMA enable others in the research community who don’t have access to large amounts of infrastructure to study these models, further democratizing access in this important, fast-changing field,” said Meta in its official blog.

Meta refers to its LLaMA models as “foundational models,” implying that the company intends for the models to serve as the foundation for future, more refined AI models built on the technology, similar to how OpenAI built ChatGPT on a foundation of GPT-3. LLaMA, according to the company, will be useful in natural language research and potentially power applications such as “question answering, natural language understanding or reading comprehension, understanding capabilities and limitations of current language models.”

The sources for this piece include an article in Ars Technica.


Jim Love

Jim is an author and podcast host with over 40 years in technology.
