Training Stability

  • Visualizing DeepSeek’s mHC Training Fix


    DeepSeek's recent paper introduces Manifold-Constrained Hyper-Connections (mHC) to address training instability in deep models with many layers. When more than 60 layers of learned mixing matrices are stacked, small per-layer amplifications compound, and the overall gain of the network explodes during training. Projecting these matrices onto the "doubly stochastic" manifold with the Sinkhorn-Knopp algorithm keeps the gain bounded regardless of depth: even a single iteration reduces it from roughly 10^16 to approximately 1 (a minimal sketch of the projection follows the link below). An interactive demo and a PyTorch implementation are available for experimentation, illustrating how the approach stabilizes training. This matters because it offers a practical fix for a critical obstacle to scaling deep models safely and efficiently.

    Read Full Article: Visualizing DeepSeek’s mHC Training Fix
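
    To make the mechanism concrete, here is a minimal PyTorch sketch, not DeepSeek's implementation: the matrix size, depth, and random initialization are illustrative assumptions. It pushes a unit-norm signal through 60 stacked positive mixing matrices and compares the compounded gain with and without a single Sinkhorn-Knopp projection step.

    ```python
    import torch

    def sinkhorn_knopp(M: torch.Tensor, n_iters: int = 1) -> torch.Tensor:
        """Alternately normalize rows and columns so a positive matrix
        approaches the doubly stochastic manifold (all row and column
        sums equal to 1)."""
        for _ in range(n_iters):
            M = M / M.sum(dim=1, keepdim=True)  # rows sum to 1
            M = M / M.sum(dim=0, keepdim=True)  # columns sum to 1
        return M

    torch.manual_seed(0)
    DEPTH, N = 60, 4  # 60 stacked layers of illustrative 4x4 mixing matrices

    def compounded_gain(project: bool) -> float:
        """Push a unit-norm signal through DEPTH random positive mixing
        matrices and report how much its norm grew."""
        x = torch.ones(N) / N ** 0.5
        for _ in range(DEPTH):
            M = torch.rand(N, N) + 1e-3  # stand-in for a learned mixing matrix
            if project:
                M = sinkhorn_knopp(M, n_iters=1)
            x = M @ x
        return x.norm().item()

    print(f"unconstrained gain: {compounded_gain(False):.2e}")  # explodes, ~1e18
    print(f"projected gain:     {compounded_gain(True):.2e}")   # stays O(1)
    ```

    More Sinkhorn iterations tighten the constraint, but even one pass is enough to keep the compounded gain at order 1 rather than astronomically large, which is the behavior the demo visualizes.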

  • Manifold-Constrained Hyper-Connections: Enhancing HC


    Manifold-Constrained Hyper-Connections (mHC) is introduced as a framework that extends the Hyper-Connections (HC) paradigm by addressing its limitations in training stability and scalability. By projecting HC's residual connection space onto a specific manifold, mHC restores the identity mapping property that is crucial for stable training (illustrated in the sketch below), and accompanying infrastructure optimizations keep it efficient. The approach improves performance and scalability while offering insights into topological architecture design that may guide future foundation models. Understanding and improving the scalability and stability of neural network architectures is crucial for advancing AI capabilities.

    Read Full Article: Manifold-Constrained Hyper-Connections: Enhancing HC
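
    As a rough illustration of the identity-mapping claim (a sketch under assumed details; the paper's exact manifold and projection may differ), the snippet below checks two properties of the doubly stochastic manifold: the identity matrix is a fixed point of the Sinkhorn projection, so a pure skip connection stays expressible, and products of doubly stochastic matrices remain doubly stochastic, so depth cannot inflate the total mixing mass.

    ```python
    import torch

    def sinkhorn_knopp(M: torch.Tensor, n_iters: int = 10) -> torch.Tensor:
        """Project a positive matrix toward the doubly stochastic manifold."""
        for _ in range(n_iters):
            M = M / M.sum(dim=1, keepdim=True)
            M = M / M.sum(dim=0, keepdim=True)
        return M

    n = 4

    # Identity mapping survives the constraint: the identity matrix already
    # has unit row and column sums, so the projection is a no-op on it and
    # a plain residual (skip) path remains expressible.
    eye = torch.eye(n)
    assert torch.allclose(sinkhorn_knopp(eye), eye)

    # Closure under composition: the product of (approximately) doubly
    # stochastic matrices is again doubly stochastic, so stacking layers
    # cannot accumulate extra row/column mass with depth.
    A = sinkhorn_knopp(torch.rand(n, n) + 1e-3)
    B = sinkhorn_knopp(torch.rand(n, n) + 1e-3)
    P = A @ B
    print(P.sum(dim=0))  # each column sums to ~1
    print(P.sum(dim=1))  # each row sums to ~1
    ```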

  • Activation Functions in Language Models


    Activation functions are crucial components of neural networks, enabling them to learn complex, non-linear patterns beyond simple linear transformations. By introducing non-linearity, they let networks approximate arbitrary functions, which is essential for tasks like image recognition and language understanding. The field has evolved from ReLU, which helped overcome vanishing gradients, to smoother functions like GELU and SwiGLU that offer better gradient flow. SwiGLU, with its gating mechanism, has become the standard in modern language models thanks to its expressiveness and its benefits for training stability and model quality (a minimal SwiGLU block is sketched below). This matters because the choice of activation function directly shapes a network's ability to learn and generalize complex patterns in data.

    Read Full Article: Activation Functions in Language Models
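
    For reference, here is a minimal PyTorch sketch of a SwiGLU feed-forward block in the style used by modern LLMs; the layer names and dimensions are illustrative assumptions, not taken from the article.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SwiGLU(nn.Module):
        """SwiGLU feed-forward block: down(SiLU(gate(x)) * up(x)).
        The SiLU-gated branch supplies the input-dependent gating that
        plain ReLU/GELU MLPs lack."""
        def __init__(self, d_model: int, d_hidden: int):
            super().__init__()
            self.w_gate = nn.Linear(d_model, d_hidden, bias=False)
            self.w_up = nn.Linear(d_model, d_hidden, bias=False)
            self.w_down = nn.Linear(d_hidden, d_model, bias=False)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Elementwise gate: SiLU(W_gate x) modulates W_up x.
            return self.w_down(F.silu(self.w_gate(x)) * self.w_up(x))

    x = torch.randn(2, 8, 512)                # (batch, seq, d_model)
    ffn = SwiGLU(d_model=512, d_hidden=1408)  # hidden width is a free choice
    print(ffn(x).shape)                       # torch.Size([2, 8, 512])
    ```

    In practice the hidden width is often set near 8/3 x d_model so the three projections cost roughly as many parameters as a standard two-matrix MLP with a 4 x d_model hidden layer.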