Latest Announcements
DeepSeek OCR: Vision‑Language Compression Meets Dynamic OCR
Published on 2025-11-19
Trending Models
Qwen3 0.6B
Qwen3 0.6B: Alibaba's 0.6 billion param model excels in reasoning tasks with a 32k context window.
All Minilm 33M
All-MiniLM-33M by Sentence Transformers: A compact, monolingual model for efficient information retrieval.
All Minilm 22M
All-MiniLM-22M by Sentence Transformers: A compact, monolingual model for efficient information retrieval.
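Both All-MiniLM entries are embedding models intended for semantic search. As a minimal sketch of how such a model is typically used with the sentence-transformers library (the checkpoint id all-MiniLM-L6-v2 is an assumption, not a catalog identifier; substitute the exact model name this hub serves), the example below ranks documents against a query by cosine similarity:

```python
# Minimal retrieval sketch with an All-MiniLM embedding model.
# The checkpoint id "sentence-transformers/all-MiniLM-L6-v2" is an assumption;
# use whatever identifier this catalog actually exposes.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

docs = [
    "Gemma3 supports context windows up to 128k tokens.",
    "All-MiniLM models are compact encoders for semantic search.",
    "Qwen3 models target reasoning and problem-solving tasks.",
]

# Encode documents and the query into normalized embeddings.
doc_emb = model.encode(docs, convert_to_tensor=True, normalize_embeddings=True)
query_emb = model.encode("Which models are good for search?",
                         convert_to_tensor=True, normalize_embeddings=True)

# Rank documents by cosine similarity and print the best match.
scores = util.cos_sim(query_emb, doc_emb)[0]
print(docs[int(scores.argmax())])
```

Because the embeddings are normalized, cosine similarity reduces to a dot product, which keeps ranking cheap even over large document collections.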
Gemma3 4B Instruct
Gemma3 4B by Google: 4 billion params, 128k context length. Supports multiple languages for creative content generation, chatbot AI, text summarization, and image data extraction.
Bge M3 567M
BGE-M3 567M by BAAI: A multilingual, 567M param model for efficient information retrieval with an 8k context window.
Gemma3 27B Instruct
Gemma3 27B: Google's 27 billion parameter LLM for creative content and communications. Supports context up to 128k tokens. Ideal for text generation, chatbots, summarization, and image data extraction.
Qwen3 8B
Qwen3 8B: Alibaba's 8 billion parameter LLM for reasoning & problem-solving. Multilingual model with context lengths up to 32k.
Llama3.1 8B
Llama3.1 8B: Meta's 8 billion param, 128k context-length LLM for multilingual assistant chat.
Gemma3 12B Instruct
Gemma3 12B: Google's 12 billion parameter LLM for multilingual content creation and communication, including text generation, chatbots, summarization, and image data extraction.
Qwen3 4B
Qwen3 4B: Alibaba's 4 billion parameter LLM for enhanced reasoning & logic tasks.
Qwen2.5 7B Instruct
Qwen2.5 7B Instruct: Alibaba's multilingual LLM with 7 billion params, supports context lengths up to 128k for long text generation.
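Qwen2.5 7B Instruct's 128k window suits whole-document tasks such as summarization. A minimal sketch, assuming the model is served behind an OpenAI-compatible endpoint (the base_url, api_key, and model id below are assumptions, not catalog specifics):

```python
# Minimal sketch: long-context summarization against an OpenAI-compatible
# endpoint. The base_url, api_key, and model id are assumptions; substitute
# whatever your local runner or provider exposes.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

long_text = open("report.txt").read()  # e.g., a document far beyond a 8k window

resp = client.chat.completions.create(
    model="qwen2.5:7b-instruct",  # hypothetical model id for this runner
    messages=[
        {"role": "system",
         "content": "Summarize the user's document in five bullet points."},
        {"role": "user", "content": long_text},
    ],
)
print(resp.choices[0].message.content)
```

Feeding the full document in one request, rather than chunking it, is exactly what the large context window buys you, at the cost of proportionally more prompt-processing time.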
Qwen3 32B
Qwen3 32B: Alibaba's 32 billion param LLM for complex question answering with logical reasoning.