Modern large language models are no longer trained only on raw internet text. Increasingly, companies use powerful "teacher" models to help train smaller or more efficient "student" models. This process, broadly known as LLM distillation or model-to-model training, has become a key technique for building high-performing models at lower computational cost. Meta used its massive Llama 4 Behemoth model to help train Llama 4 Scout and Maverick, while Google leveraged Gemini models during the development of Gemma 2 and Gemma 3. Similarly, DeepSeek distilled reasoning capabilities from DeepSeek-R1 into smaller Qwen- and Llama-based models.

The core idea is simple: instead of learning solely from human-written text, a student model can also learn from the outputs, probabilities, reasoning traces, or behaviors of another LLM. This allows smaller models to inherit capabilities such as reasoning, instruction following, and structured generation from much larger systems. Distillation can happen during pre-training, where teacher and student models are trained together, or during post-training, where a fully trained teacher transfers knowledge to a separate student model.

In this article, we will explore three major approaches for training one LLM using another: soft-label distillation, where the student learns from the teacher's probability distributions; hard-label distillation, where the student imitates the teacher's generated outputs; and co-distillation, where multiple models learn collaboratively by sharing predictions and behaviors during training.

Soft-Label Distillation

Soft-label distillation is a training technique in which a smaller student LLM learns by imitating the output probability distribution of a larger teacher LLM.
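In code, this objective is commonly implemented as a temperature-scaled KL divergence between the teacher's and student's distributions (a standard choice in the distillation literature, not a detail specified by the setups above). The following is a minimal NumPy sketch over a toy three-token vocabulary; all logit values are illustrative, not taken from any real model:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing more "dark knowledge".
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def soft_label_kd_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) over the full vocabulary, averaged over positions.
    # The T^2 factor keeps gradient magnitudes comparable as T changes.
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    kl = (p_teacher * (np.log(p_teacher + 1e-12)
                       - np.log(p_student + 1e-12))).sum(axis=-1)
    return (temperature ** 2) * kl.mean()

# Toy example: a 3-token vocabulary ("cat", "dog", "animal"), one position.
teacher = np.array([[4.0, 2.5, 1.8]])  # confident but not certain teacher
student = np.array([[1.0, 1.0, 1.0]])  # untrained student: uniform logits
loss = soft_label_kd_loss(student, teacher)
```

The loss is zero only when the student exactly reproduces the teacher's distribution, so minimizing it pulls the student toward the teacher's full ranking over tokens, not just its top prediction.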
Instead of training only on the correct next token, the student is trained to match the teacher's softmax probabilities across the entire vocabulary. For example, if the teacher predicts the next token with probabilities like "cat" = 70%, "dog" = 20%, and "animal" = 10%, the student learns not just the final answer but also the relationships and uncertainty between different tokens. This richer signal is often called the teacher's "dark knowledge" because it contains hidden information about reasoning patterns and semantic understanding.

The biggest advantage of soft-label distillation is that it allows smaller models to inherit capabilities from much larger models while remaining faster and cheaper to deploy. Since the student learns from the teacher's full probability distribution, training becomes more stable and informative than learning from hard one-hot targets alone. However, the method also comes with practical challenges. Generating soft labels requires access to the teacher model's logits or weights, which is often impossible with closed-source models. In addition, storing probability distributions for every token across vocabularies of 100k+ entries becomes extremely memory-intensive at LLM scale, making pure soft-label distillation expensive for trillion-token datasets.

Hard-Label Distillation

Hard-label distillation is a simpler approach in which the student LLM learns only from the teacher model's final predicted output instead of its full probability distribution. In this setup, a pre-trained teacher model generates the most likely next token or response, and the student model is trained with standard supervised learning to reproduce that output. The teacher essentially acts as a high-quality annotator that creates synthetic training data for the student. DeepSeek used this approach to distill reasoning capabilities from DeepSeek-R1 into smaller Qwen and Llama 3.1 models.
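A minimal sketch of this synthetic-data pipeline is below; `teacher_generate` is a hypothetical stand-in for a black-box API call that returns only generated text, never logits, and the canned prompt/answer pair is purely illustrative:

```python
# Hard-label distillation as synthetic-data generation: the teacher acts as an
# annotator, and the student is later fine-tuned on (prompt, completion) pairs
# with ordinary supervised cross-entropy.

def teacher_generate(prompt: str) -> str:
    # Placeholder for e.g. a chat-completions API call to a black-box teacher;
    # only generated text is visible, not the underlying distribution.
    canned = {"What is 2 + 2?": "2 + 2 equals 4."}
    return canned.get(prompt, "I don't know.")

def build_distillation_dataset(prompts):
    # Each record has the shape of a standard supervised fine-tuning example.
    return [{"prompt": p, "completion": teacher_generate(p)} for p in prompts]

dataset = build_distillation_dataset(["What is 2 + 2?"])
# The student is then trained on `dataset` exactly as in ordinary
# instruction tuning; no special distillation loss is required.
```

Because the pipeline only needs generated text, the same code structure works whether the teacher is a local model or a proprietary API.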
Unlike soft-label distillation, the student does not see the teacher's internal confidence scores or token relationships; it only learns the final answer. This makes hard-label distillation computationally much cheaper and easier to implement, since there is no need to store massive probability distributions for every token. It is also especially useful when working with proprietary "black-box" models such as GPT-4 APIs, where developers only have access to the generated text, not the underlying logits. While hard labels carry less information than soft labels, they remain highly effective for instruction tuning, reasoning datasets, synthetic data generation, and domain-specific fine-tuning.

Co-Distillation

Co-distillation is a training approach in which the teacher and student models are trained together instead of using a fixed pre-trained teacher. In this setup, the teacher LLM and student LLM process the same training data simultaneously, and each generates its own softmax probability distribution. The teacher is trained normally on the ground-truth hard labels, while the student learns by matching the teacher's soft labels alongside the actual correct answers. Meta used a form of this approach while training Llama 4 Scout and Maverick alongside the larger Llama 4 Behemoth model.

One challenge with co-distillation is that the teacher is not fully trained during the early stages, so its predictions may initially be noisy or inaccurate. To overcome this, the student is usually trained with a combination of soft-label distillation loss and standard hard-label cross-entropy loss. This creates a more stable learning signal while still allowing knowledge transfer between models. Unlike traditional one-way distillation, co-distillation lets both models improve together during training, often leading to better performance, stronger reasoning transfer, and smaller performance gaps between teacher and student.
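The combined student objective described above can be sketched in NumPy as a weighted sum of hard-label cross-entropy and a soft-label KL term. The `alpha` weight and the toy logits below are illustrative assumptions, not values from any published training recipe:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, target_idx):
    # Standard hard-label loss against the ground-truth token index.
    p = softmax(logits)
    return -np.log(p[..., target_idx] + 1e-12).mean()

def co_distill_student_loss(student_logits, teacher_logits, target_idx, alpha=0.5):
    # Hard-label CE keeps early training stable while the co-trained teacher
    # is still noisy; the KL term transfers the teacher's current knowledge.
    p_t = softmax(teacher_logits)
    p_s = softmax(student_logits)
    kd = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))).sum(axis=-1).mean()
    return alpha * cross_entropy(student_logits, target_idx) + (1.0 - alpha) * kd

# One toy step: both models see the same position; token 0 is the ground truth.
teacher_logits = np.array([[3.0, 1.0, 0.5]])
student_logits = np.array([[0.2, 0.1, 0.0]])
loss = co_distill_student_loss(student_logits, teacher_logits, target_idx=0)
```

In a full co-distillation loop, the teacher would take its own gradient step on plain cross-entropy at the same time; `alpha` can also be scheduled upward early in training, when the teacher's soft labels are least reliable.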
Comparing the Three Distillation Techniques

Soft-label distillation transfers the richest form of knowledge, because the student learns from the teacher's full probability distribution instead of only the final answer. This helps smaller models capture reasoning patterns, uncertainty, and relationships between tokens, often leading to stronger overall performance. However, it is computationally expensive, requires access to the teacher's logits or weights, and is difficult to scale, since storing probability distributions over massive vocabularies consumes enormous memory.

Hard-label distillation is simpler and more practical. The student only learns from the teacher's final generated outputs, making it much cheaper and easier to implement. It works especially well with proprietary black-box models like GPT-4 APIs, where internal probabilities are unavailable. While this approach loses some of the deeper "dark knowledge" present in soft labels, it remains highly effective for instruction tuning, synthetic data generation, and task-specific fine-tuning.

Co-distillation takes a collaborative approach in which teacher and student models learn together during training. The teacher improves while simultaneously guiding the student, allowing both models to benefit from shared learning signals. This can reduce the performance gap seen in traditional one-way distillation, but it also makes training more complex, since the teacher's predictions are initially unstable.

In practice, soft-label distillation is preferred for maximum knowledge transfer, hard-label distillation for scalability and practicality, and co-distillation for large-scale joint training setups.

Arham Islam
I am a Civil Engineering graduate (2022) from Jamia Millia Islamia, New Delhi, with a keen interest in Data Science, especially neural networks and their application in various areas.