Sinkhorn-Knopp
-
Visualizing DeepSeek’s mHC Training Fix
DeepSeek's recent paper introduces Manifold-Constrained Hyper-Connections (mHC) to address training instability in deep models with many layers. When more than 60 layers of learned mixing matrices are stacked, small per-layer amplifications compound, and the gain of the composite mapping grows explosively. Projecting each matrix onto the "doubly stochastic" manifold with the Sinkhorn-Knopp algorithm keeps the gain bounded regardless of depth; even a single iteration reduces it from roughly 10^16 to approximately 1. An interactive demo and a PyTorch implementation are available for experimentation, illustrating how the projection stabilizes training. This matters because it offers a practical way to scale very deep models safely and efficiently.
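At its core the fix is a projection: make the learned mixing weights positive, then alternately normalize rows and columns until every row and column sums to 1. Below is a minimal PyTorch sketch of such a Sinkhorn-Knopp projection; it is not DeepSeek's released code, and the function name, the exp parameterization, and the epsilon guard are illustrative assumptions.

import torch

def sinkhorn_project(logits: torch.Tensor, n_iters: int = 1, eps: float = 1e-8) -> torch.Tensor:
    # Make entries positive, then alternately normalize rows and columns so
    # that every row and column sums (approximately) to 1.
    m = torch.exp(logits)
    for _ in range(n_iters):
        m = m / (m.sum(dim=1, keepdim=True) + eps)  # rows -> 1
        m = m / (m.sum(dim=0, keepdim=True) + eps)  # columns -> 1
    return m

# Example: project a random 4x4 matrix and check the row/column sums.
m = sinkhorn_project(torch.randn(4, 4), n_iters=20)
print(m.sum(dim=1))  # ~[1, 1, 1, 1]
print(m.sum(dim=0))  # ~[1, 1, 1, 1]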
-
Interactive Visualization of DeepSeek’s mHC Stability
An interactive demo has been created to explore DeepSeek's mHC paper, which addresses the instability in Hyper-Connections caused by multiplying learned mixing matrices across many layers. That instability produces exponential amplification, with gains reaching values as high as 10^16. The fix is to project each matrix onto the doubly stochastic manifold using the Sinkhorn-Knopp algorithm, which keeps the composite mapping bounded regardless of depth. Surprisingly, a single Sinkhorn iteration is enough to bring the gain down from 10^16 to approximately 1. This matters because it offers a practical method to improve the stability and performance of deep learning models that use Hyper-Connections.
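The compounding effect can be reproduced with a toy experiment: compose one mixing matrix per layer over 60 layers and measure the spectral norm (the "gain") of the end-to-end map, with and without a Sinkhorn iteration per layer. The sketch below makes illustrative choices (4 streams, Gaussian logits, exp parameterization), so the unconstrained magnitude will differ from the paper's 10^16 figure, but the qualitative gap is the same.

import torch

def sinkhorn_project(logits, n_iters=1, eps=1e-8):
    # Same projection as above: positive entries, then alternating
    # row/column normalization toward a doubly stochastic matrix.
    m = torch.exp(logits)
    for _ in range(n_iters):
        m = m / (m.sum(dim=1, keepdim=True) + eps)
        m = m / (m.sum(dim=0, keepdim=True) + eps)
    return m

torch.manual_seed(0)
n_streams, n_layers = 4, 60

def composite_gain(project: bool) -> float:
    # Compose one mixing matrix per layer and return the spectral norm
    # (largest singular value) of the end-to-end map.
    composite = torch.eye(n_streams, dtype=torch.float64)
    for _ in range(n_layers):
        logits = 0.5 * torch.randn(n_streams, n_streams, dtype=torch.float64)
        mix = sinkhorn_project(logits) if project else torch.exp(logits)
        composite = mix @ composite
    return torch.linalg.matrix_norm(composite, ord=2).item()

print(f"unconstrained positive mixing: {composite_gain(False):.2e}")
print(f"one Sinkhorn iteration/layer:  {composite_gain(True):.2e}")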
-
DeepSeek-V3’s ‘Hydra’ Architecture Explained
DeepSeek-V3 introduces the "Hydra" architecture, which splits the residual stream into multiple parallel streams or Hyper-Connections to prevent features from competing for space in a single vector. Initially, allowing these streams to interact caused signal energy to increase drastically, leading to unstable gradients. The solution involved using the Sinkhorn-Knopp algorithm to enforce energy conservation by ensuring the mixing matrix is doubly stochastic, akin to balancing guests and chairs at a dinner party. To address computational inefficiencies, custom kernels were developed to maintain data in GPU cache, and recomputation strategies were employed to manage memory usage effectively. This matters because it enhances the stability and efficiency of neural networks, allowing for more complex and powerful models.
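As a rough picture of how such a constrained mixing step could sit inside a model, here is a hedged PyTorch sketch of a layer that mixes several parallel residual streams through a learned matrix projected toward the doubly stochastic manifold. The class name, parameterization, and tensor layout are assumptions for illustration, not DeepSeek-V3's implementation, and it ignores the custom-kernel and recomputation optimizations mentioned above.

import torch
import torch.nn as nn

class DoublyStochasticMix(nn.Module):
    # Hypothetical layer: mixes n parallel residual streams with a learned
    # matrix projected toward the doubly stochastic manifold, so the mixing
    # neither amplifies nor attenuates total signal energy.
    def __init__(self, n_streams: int, sinkhorn_iters: int = 1):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_streams, n_streams))
        self.sinkhorn_iters = sinkhorn_iters

    def mixing_matrix(self) -> torch.Tensor:
        m = torch.exp(self.logits)
        for _ in range(self.sinkhorn_iters):
            m = m / m.sum(dim=1, keepdim=True)  # rows sum to 1
            m = m / m.sum(dim=0, keepdim=True)  # columns sum to 1
        return m

    def forward(self, streams: torch.Tensor) -> torch.Tensor:
        # streams: (batch, n_streams, d_model); mix along the stream axis.
        return torch.einsum("ij,bjd->bid", self.mixing_matrix(), streams)

# Example usage with made-up sizes.
layer = DoublyStochasticMix(n_streams=4)
x = torch.randn(2, 4, 512)
print(layer(x).shape)  # torch.Size([2, 4, 512])

Because the projection is ordinary differentiable tensor arithmetic, gradients flow through the normalization during training, and the zero-initialized logits simply start the layer as a uniform average of the streams.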
