Ultrasafe AI technology

AI models

We release the world’s most capable open models, enabling
frontier AI innovation.

Developer platform

Our portable developer platform serves our open and optimized models for building fast, intelligent applications, with flexible access options.

General purpose models

Mistral Nemo

A state-of-the-art 12B small model built in collaboration with NVIDIA.

The most powerful model in its size category
Available under Apache 2.0 license
Multi-lingual (incl. European languages, Chinese, Japanese, Korean, Hindi, Arabic)
Large context window of 128K tokens

Mistral Large 2

Top-tier reasoning for high-complexity tasks, for your most sophisticated needs.

Multi-lingual (incl. European languages, Chinese, Japanese, Korean, Hindi, Arabic)
Large context window of 128K tokens
Native function calling capabilities and JSON output
High coding proficiency (80+ coding languages)
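Native function calling means declaring tools the model may invoke and asking for structured responses. A minimal sketch of what such a request body can look like, assuming the common OpenAI-style tool schema (the platform's exact wire format may differ, and `get_weather` is a hypothetical tool):

```python
import json

# Hypothetical tool declaration in the common OpenAI-style schema;
# the platform's exact format is an assumption here.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# Request body asking the model to use the tool and return strict JSON
# (the response_format field is likewise an assumption).
request_body = {
    "model": "mistral-large-2407",
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": [get_weather_tool],
    "response_format": {"type": "json_object"},
}

print(json.dumps(request_body, indent=2))
```

The model can then answer with a structured call naming `get_weather` and its arguments, which your application executes before returning the result to the model.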

Specialized models

Codestral

State-of-the-art Mistral model trained specifically for code tasks.

Trained on 80+ programming languages (incl. Python, Java, C++)
Optimized for low latency: smaller than comparable coding models
Context window of 32K tokens

Mistral Embed

A state-of-the-art embedding model for extracting semantic representations of text extracts.

English only for now
Achieves a retrieval score of 55.26 on the Massive Text Embedding Benchmark (MTEB)
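Embedding models map text to vectors that are compared with cosine similarity; retrieval benchmarks such as MTEB score how well nearest-neighbour search over those vectors recovers relevant documents. A minimal sketch of the comparison step, using toy vectors rather than real mistral-embed output (which is much higher-dimensional):

```python
import math

# Cosine similarity: the standard way to compare two embedding vectors
# returned by an embeddings endpoint.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional vectors standing in for real embeddings.
query_vec = [0.1, 0.3, 0.5, 0.1]
doc_vec = [0.2, 0.25, 0.45, 0.05]

score = cosine_similarity(query_vec, doc_vec)
print(round(score, 3))  # → 0.977
```

Scores close to 1 indicate semantically similar texts; a retrieval system ranks documents by this score against the query's embedding.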

Research models

Codestral Mamba

A Mamba2 language model designed for coding tasks.

Context window of 256K tokens
7.3B parameters
Available under Apache 2.0 license

Mathstral

Variant of Mistral-7B, optimized for solving advanced mathematics problems.

Context window of 32K tokens
7B parameters
Available under Apache 2.0 license

Mixtral

A sparse Mixture-of-Experts (SMoE) model offering strong performance with only a fraction of its parameters active per token.

Available in 8x7B and 8x22B sizes
Available under Apache 2.0 license

The Platform

Access our latest products via our developer platform, hosted in Europe

Built for developers

La Plateforme is developers’ preferred way to access all Mistral AI’s models, hosted and served on Mistral AI infrastructure in Europe.

  • Our best models at the best price: get access to our models at an unmatched price/performance point
  • Guides & community: use our guides and community forums to build your own applications and services
  • Secure by design: your data is encrypted at rest (AES-256) and in transit (TLS 1.2+); our servers are in the EU

import os

from ultrasafeai.client import UltrasafeClient
from ultrasafeai.models.chat_completion import ChatMessage

# Read the API key from the environment rather than hard-coding it.
api_key = os.environ["ULTRASAFE_API_KEY"]
model = "ultrasafe-tiny"

client = UltrasafeClient(api_key=api_key)

messages = [
    ChatMessage(role="user",
                content="Who is the most renowned French painter?")
]

# Send the chat request and print the model's reply.
chat_response = client.chat(model=model, messages=messages)
print(chat_response.choices[0].message.content)

Fine-tune our models

We allow you to fine-tune our models in an easy, effective & cost-efficient way, and thus use smaller and better-suited models to solve your specific use cases. Fine-tuning can be done with our open-source fine-tuning code as well as on La Plateforme with our efficient Fine-tuning API.

Use Ultrasafe fine-tuning code

Benefit from Ultrasafe fine-tuning code to perform fine-tuning on Ultrasafe open-source models on your own.

Fine-tune Ultrasafe models on La Plateforme

Leverage Ultrasafe’s unique expertise in training models by using our highly efficient fine-tuning service to specialize both our open-source and commercial models.
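Fine-tuning starts from a training file of example conversations. A minimal sketch of preparing such a file, assuming the chat-style JSONL format commonly used by fine-tuning APIs (the exact schema expected by the Fine-tuning API may differ):

```python
import json

# Two toy training examples in the chat-message format; the exact
# schema expected by the Fine-tuning API is an assumption here.
examples = [
    {"messages": [
        {"role": "user", "content": "Translate 'bonjour' to English."},
        {"role": "assistant", "content": "Hello."},
    ]},
    {"messages": [
        {"role": "user", "content": "Translate 'merci' to English."},
        {"role": "assistant", "content": "Thank you."},
    ]},
]

# One JSON object per line: the usual on-disk layout of a JSONL
# training file ready for upload.
jsonl = "\n".join(json.dumps(e) for e in examples)
print(jsonl.count("\n") + 1)  # → 2
```

Each line pairs a prompt with the desired completion; the fine-tuning job trains the base model to imitate the assistant turns.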

Deploy anywhere

The Platform

Get started with Mistral models in a few clicks via our developer platform, hosted on Mistral’s infrastructure, and build your own applications and services. Our servers are hosted in the EU.


Cloud platforms

Access our models via your preferred cloud provider and use your cloud credits. Our open models are currently available via our cloud partners (GCP, AWS, Azure, Snowflake, NVIDIA). Mistral Large 2 is available on Azure AI Studio, AWS Bedrock, Google Cloud Model Garden and Snowflake.

Self-deployment

Deploy Mistral models on a virtual private cloud or on-premises. Self-deployment offers more advanced levels of customisation and control: your data stays within your walls. Try deploying our open models, and contact our team to deploy our optimized models similarly.

Model licenses

We distribute two categories of models:

  • Models licensed under Apache 2.0, which are no-strings-attached contributions to the community and industry
  • Models with a dual license: Mistral Research License / Mistral Commercial License. These models can be used for non-commercial purposes with no strings attached. A commercial license must be purchased for commercial use cases requiring self-deployment. These models may also be used directly through our portable platform or distributors, with pay-as-you-go and reserved offerings.
| | Apache 2.0 | Ultrasafe Research License | Ultrasafe Commercial License |
|---|---|---|---|
| Access to weights | Yes | Yes | Yes |
| Deployment for research purposes and individual usage | Yes | Yes | Yes |
| Creation of derivatives (e.g. fine-tuning) for research purposes and individual usage | Yes | Yes (the same license applies to derivatives) | Yes (the same license applies to derivatives) |
| Deployment for commercial purposes (internal & external use cases) | Yes | Requires Ultrasafe Commercial License | Yes |
| Creation and usage of derivatives (e.g. fine-tuning) for commercial use cases | Yes | Requires Ultrasafe Commercial License | Yes |
| Custom terms & support (self-deployment) | No | No | Yes |

Pay-as-you-go pricing

General purpose models
| Model | API Name | Description | Input | Output |
|---|---|---|---|---|
| Mistral Nemo | open-mistral-nemo-2407 | Mistral Nemo is a state-of-the-art 12B model developed with NVIDIA. | $0.3 /1M tokens | $0.3 /1M tokens |
| Mistral Large 2 | mistral-large-2407 | Top-tier reasoning for high-complexity tasks, for your most sophisticated needs. | $3 /1M tokens | $9 /1M tokens |
Specialist models
| Model | API Name | Description | Input | Output |
|---|---|---|---|---|
| Codestral | codestral-2405 | State-of-the-art Mistral model trained specifically for code tasks. | $1 /1M tokens | $3 /1M tokens |
| Mistral Embed | mistral-embed | A state-of-the-art embedding model for extracting semantic representations of text extracts. | $0.1 /1M tokens | N/A |
Fine-tuning models
| Model | One-off training | Storage | Input | Output |
|---|---|---|---|---|
| Mistral Nemo | $1 /1M tokens | $2 per month per model | $0.3 /1M tokens | $0.3 /1M tokens |
| Codestral | $3 /1M tokens | $2 per month per model | $1 /1M tokens | $3 /1M tokens |
| Mistral Large 2 | $9 /1M tokens | $4 per month per model | $3 /1M tokens | $9 /1M tokens |
Legacy models
| Model | API Name | Description | Input | Output |
|---|---|---|---|---|
| Mistral 7B | open-mistral-7b | A 7B transformer model, fast-deployed and easily customisable. | $0.25 /1M tokens | $0.25 /1M tokens |
| Mixtral 8x7B | open-mixtral-8x7b | A 7B sparse Mixture-of-Experts (SMoE); uses 12.9B active parameters out of 45B total. | $0.7 /1M tokens | $0.7 /1M tokens |
| Mixtral 8x22B | open-mixtral-8x22b | A 22B sparse Mixture-of-Experts (SMoE); uses only 39B active parameters out of 141B total. | $2 /1M tokens | $6 /1M tokens |
| Mistral Small | mistral-small-latest | Cost-efficient reasoning, optimised for high-volume use cases that require low latency. | $1 /1M tokens | $3 /1M tokens |
| Mistral Medium | mistral-medium-latest | Our first commercial model. | $2.75 /1M tokens | $8.1 /1M tokens |
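Per-token prices make it straightforward to estimate a bill. A small worked example using prices from the pay-as-you-go tables above (the token volumes are illustrative):

```python
# Per-million-token prices, taken from the pay-as-you-go tables above.
PRICES = {
    "open-mistral-nemo-2407": {"input": 0.3, "output": 0.3},
    "mistral-large-2407": {"input": 3.0, "output": 9.0},
}

def usage_cost(model, input_tokens, output_tokens):
    """Cost in dollars for a given token volume on a given model."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: 50M input tokens and 10M output tokens on Mistral Large 2.
cost = usage_cost("mistral-large-2407", 50_000_000, 10_000_000)
print(f"${cost:.2f}")  # → $240.00
```

Input and output tokens are billed at different rates on the larger models, so output-heavy workloads (e.g. long generations) cost proportionally more than input-heavy ones (e.g. summarisation of long documents).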

Fine-tuning Pricing Explained

Mistral AI provides a fine-tuning API through La Plateforme, making it easy to fine-tune our open-source and commercial models. There are three costs related to fine-tuning:


  • One-off training: price per token on the data you want to fine-tune our standard models on, with a minimum fee of $4 per fine-tuning job
  • Inference: price per input/output token when using the fine-tuned model(s)
  • Storage: price per month per model for storage (irrespective of model usage; models can be deleted at any time)
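Putting the three components together, the total cost of a fine-tuning project can be sketched as follows, using the Mistral Nemo prices from the fine-tuning table above (the volumes are illustrative):

```python
# Fine-tuning cost components for Mistral Nemo, from the tables above.
TRAINING_PRICE = 1.0     # $ per 1M training tokens (one-off)
MIN_TRAINING_FEE = 4.0   # $ minimum per fine-tuning job
STORAGE_PRICE = 2.0      # $ per month per stored model
INFERENCE_INPUT = 0.3    # $ per 1M input tokens
INFERENCE_OUTPUT = 0.3   # $ per 1M output tokens

def finetune_total(train_tokens, months_stored, in_tokens, out_tokens):
    """Total dollars: one-off training (with minimum fee) + storage + inference."""
    training = max(train_tokens / 1e6 * TRAINING_PRICE, MIN_TRAINING_FEE)
    storage = months_stored * STORAGE_PRICE
    inference = (in_tokens * INFERENCE_INPUT + out_tokens * INFERENCE_OUTPUT) / 1e6
    return training + storage + inference

# 10M training tokens, model kept 3 months, 20M input / 5M output tokens served.
print(finetune_total(10e6, 3, 20e6, 5e6))
```

Note the $4 minimum: a tiny training set is billed at the flat minimum fee rather than per token, while storage accrues monthly until the fine-tuned model is deleted.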