Kawn Foundation Models
Kawn’s foundation language models are built for deep understanding of Arabic text across varied styles and contexts. At Kawn, we develop a range of models that vary in size and purpose, from lightweight models suited to resource-constrained devices to advanced models built on Mixture of Experts (MoE) technology, which deliver high performance while remaining resource-efficient.
Our model lineup includes:
Kuwain
A small model that is efficient in both performance and speed, designed to run effectively on resource-limited devices. It is tailored for focused tasks, delivering strong performance on specific workloads while maintaining a lightweight footprint and fast response times (a usage sketch follows the version details below).
Current Version: Kuwain 1.5
Fine-tuned Versions: Sadeed, Lahjawi, Mutarjim
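Assuming Kuwain is distributed as a standard causal language model checkpoint, it can be run with common open-source tooling. The sketch below shows one way to load and prompt such a model with the Hugging Face transformers library; the checkpoint identifier is a placeholder, not a confirmed published name.

```python
# Minimal sketch: loading and prompting a small causal LM such as Kuwain
# with Hugging Face transformers. The model_id below is a placeholder;
# substitute the actual published Kuwain 1.5 checkpoint name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kawn/kuwain-1.5"  # hypothetical identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt: "Write a short summary about artificial intelligence:"
prompt = "اكتب ملخصاً قصيراً عن الذكاء الاصطناعي:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```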
Kawn Medium
Kawn Medium is a general-purpose, mid-sized language model developed to handle a wide range of text-based tasks such as reading comprehension, content generation, summarization, question answering, and linguistic analysis. It offers a clear balance between size and performance, making it suitable for multi-task applications that require deep understanding without the need for massive models.
Current Version: Coming Soon
Use Cases: Education, Digital Content, Government Services
Kawn-MoE
Kawn-MoE is the largest model in the Kawn suite, built on the Mixture of Experts (MoE) architecture. For each query it activates only a subset of “experts” within the model, delivering superior performance without incurring the computational cost of running the full network (a routing sketch follows the details below). It is designed to serve as the backbone for domain-specific models, such as legal, medical, and jurisprudential models, thanks to its ability to deeply understand complex texts. Kawn-MoE is highly customizable and suited to knowledge-intensive environments that demand precision, contextual understanding, and interpretability.
Current Version: Coming Soon
Architecture: Mixture of Experts (MoE)
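To make the efficiency claim concrete, here is a minimal sketch of top-k expert routing, the core mechanism behind MoE layers: a gating network scores every expert for each token, but only the k highest-scoring experts actually run, so compute grows with k rather than with the total expert count. The layer sizes, expert count, and PyTorch implementation below are illustrative assumptions, not Kawn-MoE’s actual configuration.

```python
# Illustrative top-k Mixture-of-Experts layer (PyTorch). Sizes, expert
# count, and routing details are assumptions for demonstration only;
# they are not the real Kawn-MoE configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model: int, d_ff: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        # Router: one score per expert for every token.
        self.gate = nn.Linear(d_model, n_experts)
        # Each expert is an independent feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tokens = x.reshape(-1, x.size(-1))          # (n_tokens, d_model)
        scores = self.gate(tokens)                  # (n_tokens, n_experts)
        topk_scores, topk_idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(topk_scores, dim=-1)    # mix only the chosen experts
        out = torch.zeros_like(tokens)
        for slot in range(self.k):
            idx = topk_idx[:, slot]
            w = weights[:, slot].unsqueeze(-1)
            for e, expert in enumerate(self.experts):
                mask = idx == e
                if mask.any():                      # run an expert only on its tokens
                    out[mask] += w[mask] * expert(tokens[mask])
        return out.reshape_as(x)

# Only k of n_experts feed-forward blocks run per token:
y = TopKMoE(d_model=256, d_ff=1024)(torch.randn(2, 8, 256))
assert y.shape == (2, 8, 256)
```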
Domain-Specific Language Models
A comprehensive AI architecture designed to support Arabic language understanding across multiple levels, from textual and visual comprehension to semantic embeddings.
| Model | Size | Best For | Architecture |
|---|---|---|---|
| Kuwain | Small | Focused tasks, on-device apps | Standard Transformer |
| Kawn Medium | Medium | General-purpose applications | Standard Transformer |
| Kawn-MoE | Large | Knowledge-intensive domains | Mixture of Experts (MoE) |