Latest Announcements

Mistral Small3.2: Enhanced Instruction Handling and Reduced Repetition Errors
Published on 2025-06-20
Trending Models

Mistral Small3.2 24B Instruct
Mistral Small3.2 24B Instruct: A powerful 24-billion-parameter model by Mistral AI, designed for chat assistance, with a context window of up to 128,000 tokens.


Magistral 24B
Magistral 24B by Mistral AI: A powerful 24-billion parameter, multi-lingual model for advanced text generation.


Llama3.3 70B Instruct
Llama3.3 70B: Meta's 70 billion param, multi-lingual model for assistant-style chat, with a context window of 128k.


Llama3.1 8B Instruct
Llama3.1 8B Instruct by Meta Llama: an 8 billion parameter, multilingual model with a 128k context window that excels in assistant-like chat.


Llama3.2 1B
Llama3.2 1B: Meta's 1B param model for multilingual chat apps, with context lengths up to 128k.


Qwen3 235B
Qwen3 235B: Alibaba's large language model with 235 billion parameters, designed for text generation tasks.


Qwen3 0.6B
Qwen3 0.6B: Alibaba's 0.6 billion param model excels in reasoning tasks with a 32k context window.


Gemma3 4B Instruct
Gemma3 4B by Google: 4 billion params, 128k/32k context-length. Supports multiple languages for creative content generation, chatbot AI, text summarization, and image data extraction.


Qwen3 30B
Qwen3 30B: Alibaba's 30 billion param large language model for advanced reasoning tasks.


Qwen3 8B
Qwen3 8B: Alibaba's 8 billion parameter LLM for reasoning and problem-solving. Multilingual model with context lengths up to 32k.


Qwen2.5 Coder 32B Instruct
Qwen2.5 Coder 32B Instruct by Alibaba Qwen: a 32-billion-parameter, bilingual model that excels in coding tasks.


Llama3.1 8B
Llama3.1 8B: Meta's 8 billion param, 128k context-length LLM for multilingual assistant chat.
