Mistral AI Launches Ministral 3B and 8B Models for Edge Computing

Mistral AI has announced the launch of two new models, Ministral 3B and Ministral 8B, on the first anniversary of its Mistral 7B model. These new models focus on on-device computing and edge applications, enhancing capabilities in areas such as knowledge reasoning and function-calling.

Both Ministral models support context lengths of up to 128k tokens, and Ministral 8B features an interleaved sliding-window attention pattern for faster, more memory-efficient inference in resource-constrained environments. They aim to meet the demand for local and privacy-first inference in applications such as on-device translation, smart assistants, local analytics, and robotics. The models can also serve as intermediaries for larger models, improving task routing and API calling across various contexts.
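Mistral AI has not published the exact attention implementation used in Ministral, but the general idea behind sliding-window attention can be sketched as follows: each token attends only to the most recent `window` tokens rather than the full sequence, reducing attention memory from quadratic to roughly linear in sequence length. The mask below is a minimal illustration with a hypothetical window size, not Ministral's actual configuration.

```python
def sliding_window_mask(seq_len: int, window: int) -> list[list[bool]]:
    """Build a causal sliding-window attention mask.

    mask[i][j] is True when query position i may attend to key
    position j, i.e. j is within the last `window` positions
    (inclusive of i itself) and never in the future.
    """
    return [
        [0 <= i - j < window for j in range(seq_len)]
        for i in range(seq_len)
    ]

# With a window of 3, position 5 can see positions 3, 4, and 5,
# but not position 2 or earlier.
mask = sliding_window_mask(seq_len=6, window=3)
```

Because each row of the mask has at most `window` True entries, the attention computation touches O(n·w) key/value pairs instead of O(n²), which is what makes this pattern attractive on edge devices.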

Benchmarks indicate that the Ministral models outperform competitors of comparable size, including Gemma 2 2B, Llama 3.2 3B, and Llama 3.1 8B.

Both models are available for commercial use, with pricing set at $0.04 per million tokens for Ministral 3B and $0.10 per million tokens for Ministral 8B. The model weights for the 8B Instruct model will be available for research use. Last month, Mistral AI launched Pixtral 12B, a model capable of processing both images and text. With approximately 12 billion parameters, it employs vision encoding to interpret images alongside text.
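At per-token prices this low, it can be useful to sanity-check what a workload would cost. The sketch below uses only the per-million-token prices quoted above; the model keys are illustrative labels, not official API identifiers, and real billing may distinguish input and output tokens.

```python
# Price per million tokens, from the announced pricing.
PRICE_PER_M_TOKENS = {
    "ministral-3b": 0.04,
    "ministral-8b": 0.10,
}

def estimate_cost_usd(model: str, tokens: int) -> float:
    """Estimate the cost in USD of processing `tokens` tokens."""
    return PRICE_PER_M_TOKENS[model] * tokens / 1_000_000

# Processing 5 million tokens with Ministral 8B:
cost = estimate_cost_usd("ministral-8b", 5_000_000)  # 0.50 USD
```

Even a fairly heavy workload of 5 million tokens comes out to about fifty cents on the 8B model, which underlines why these models target high-volume, cost-sensitive edge and routing use cases.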

A day after Meta released Llama 3.1, Mistral AI also launched Mistral Large 2, the latest generation of its flagship model, offering substantial improvements in code generation, mathematics, and multilingual support. The model introduces advanced function-calling capabilities and is available on la Plateforme.
