Instella is here: AMD’s 3B-parameter model takes on Llama and Gemma

Instella employs an autoregressive transformer architecture consisting of 36 decoder layers and 32 attention heads, enabling it to process lengthy sequences of up to 4,096 tokens

by Kerem Gülen
March 7, 2025
in Artificial Intelligence, News

AMD has unveiled Instella, a family of fully open-source language models with 3 billion parameters, trained from scratch on AMD Instinct™ MI300X GPUs. Instella models outperform existing fully open models of similar size and compete effectively with leading open-weight models such as Llama-3.2-3B, Gemma-2-2B, and Qwen-2.5-3B, as well as their instruction-tuned variants.

AMD unveils Instella: Open-source language models outperforming rivals

Instella employs an autoregressive transformer architecture consisting of 36 decoder layers and 32 attention heads, enabling it to process lengthy sequences of up to 4,096 tokens. The model utilizes a vocabulary of approximately 50,000 tokens, managed by the OLMo tokenizer, making it adept at generating and interpreting text across various domains.
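
As a rough illustration, checkpoints published this way can usually be loaded through the standard Hugging Face transformers API. The following is a minimal sketch: the repository id amd/Instella-3B-Instruct and the trust_remote_code flag are assumptions based on how custom architectures are commonly shipped, not details confirmed in this article.

```python
# Minimal sketch (assumptions flagged): load an Instella checkpoint and generate text.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amd/Instella-3B-Instruct"  # assumed repo id; check AMD's model cards
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Summarize what an autoregressive transformer does."
inputs = tokenizer(prompt, return_tensors="pt")

# The article cites a 4,096-token context window, so the prompt plus
# generated tokens should stay within that budget.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```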

Instella’s training procedure showcases the interplay between AMD’s hardware and software stacks. The new model builds on the groundwork laid by AMD’s earlier 1-billion-parameter models, scaling from 64 AMD Instinct MI250 GPUs and 1.3 trillion tokens to 128 Instinct MI300X GPUs and 4.15 trillion tokens for the current 3-billion-parameter model.


Image: AMD

Comparing Instella with prior models, AMD reports that it not only surpasses existing fully open models but also performs competitively against state-of-the-art open-weight models, marking a significant milestone for the natural language processing field. The initiative aligns with AMD’s commitment to making advanced technology more accessible and to fostering collaboration and innovation within the AI community.


Instella model phases and training data

This release includes several versions of the Instella models, each representing a different training stage:

Model                | Stage                  | Training Data (Tokens)    | Description
Instella-3B-Stage1   | Pre-training (Stage 1) | 4.065 trillion            | First-stage pre-training to develop proficiency in natural language.
Instella-3B          | Pre-training (Stage 2) | 57.575 billion            | Second-stage pre-training to enhance problem-solving capabilities.
Instella-3B-SFT      | SFT                    | 8.902 billion (×3 epochs) | Supervised fine-tuning (SFT) to enable instruction-following capabilities.
Instella-3B-Instruct | DPO                    | 760 million               | Alignment to human preferences and enhanced chat capabilities via direct preference optimization (DPO).

In the multi-stage training pipeline, the first pre-training stage used 4.065 trillion tokens from diverse datasets, establishing foundational language understanding. The subsequent training on an additional 57.575 billion tokens further enhanced the model’s performance across varied tasks and domains.

During supervised fine-tuning, Instella-3B-SFT was trained with 8.9 billion tokens, improving interactive response capabilities. The final stage, Instella-3B-Instruct, underwent alignment training with Direct Preference Optimization using 760 million tokens, ensuring that the model’s outputs are aligned with human values and preferences.
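
As a quick sanity check, summing the per-stage budgets from the table (counting the SFT corpus three times for its three epochs) lands on the roughly 4.15 trillion total tokens cited earlier:

```python
# Back-of-the-envelope check: total tokens across all Instella training stages,
# using the per-stage figures from the table above.
stage1 = 4.065e12       # Stage 1 pre-training
stage2 = 57.575e9       # Stage 2 pre-training
sft    = 8.902e9 * 3    # supervised fine-tuning corpus, 3 epochs
dpo    = 0.760e9        # DPO alignment

total = stage1 + stage2 + sft + dpo
print(f"{total / 1e12:.3f} trillion tokens")  # -> 4.150 trillion tokens
```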

AMD has made all artifacts associated with the Instella models fully open source, including model weights, training configurations, datasets, and code. These resources are available via Hugging Face model cards and GitHub repositories.
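
As a sketch, those artifacts could be fetched programmatically with the huggingface_hub client; the repo id amd/Instella-3B is again an assumption to verify against the official model cards.

```python
# Sketch (assumed repo id): download all files for an Instella checkpoint.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="amd/Instella-3B")  # assumption: exact id may differ
print(f"Artifacts downloaded to: {local_dir}")
```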


Featured image credit: AMD

Tags: AI, AMD
