SIMD

  • EdgeVec v0.7.0: Fast Browser-Native Vector Database


    [P] EdgeVec v0.7.0: Browser-Native Vector Database with 8.75x Faster Hamming Distance via SIMD

    EdgeVec is an open-source vector database that runs entirely in the browser via WebAssembly. The v0.7.0 release brings an 8.75x speedup in Hamming distance calculations through SIMD optimizations, a 32x memory reduction via binary quantization, and a 3.2x acceleration in Euclidean distance computations. EdgeVec lets browser-based applications perform semantic search and retrieval-augmented generation without any server dependency, which preserves privacy, reduces latency, and eliminates hosting costs, and makes large vector indices practical in-browser for offline-first AI tools. Why this matters: running the vector database natively in the browser makes sophisticated AI features more accessible and efficient for developers and users alike; a brief sketch of the binary-quantization and Hamming-distance idea follows below.

    Read Full Article: EdgeVec v0.7.0: Fast Browser-Native Vector Database
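
    The snippet below is an illustrative Rust sketch of the two techniques named in the summary, not EdgeVec's actual API: binary quantization collapses each f32 dimension to one sign bit (4 bytes down to 1 bit, hence the roughly 32x reduction), and distance between the packed vectors becomes XOR plus popcount, which compilers lower to POPCNT/SIMD instructions. All function names and shapes here are assumptions for illustration.

      // Hypothetical helpers for illustration; EdgeVec's real interfaces may differ.

      // Binary quantization: keep only the sign bit of each dimension,
      // packing 64 dimensions into one u64 word.
      fn binary_quantize(v: &[f32]) -> Vec<u64> {
          let mut out = vec![0u64; (v.len() + 63) / 64];
          for (i, &x) in v.iter().enumerate() {
              if x >= 0.0 {
                  out[i / 64] |= 1u64 << (i % 64);
              }
          }
          out
      }

      // Hamming distance over the packed bits: XOR, then popcount per word.
      // count_ones() typically compiles to POPCNT or a SIMD equivalent, which
      // is the kind of instruction-level win the 8.75x figure refers to.
      fn hamming(a: &[u64], b: &[u64]) -> u32 {
          a.iter().zip(b).map(|(x, y)| (x ^ y).count_ones()).sum()
      }

      fn main() {
          let q = binary_quantize(&[0.3, -1.2, 0.8, -0.1]);
          let d = binary_quantize(&[0.1, 0.9, -0.5, -0.2]);
          println!("hamming distance = {}", hamming(&q, &d));
      }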

  • CNN in x86 Assembly: Cat vs Dog Classifier


    I implemented a Convolutional Neural Network (CNN) from scratch entirely in x86 Assembly: Cat vs Dog Classifier

    This ambitious project implements a Convolutional Neural Network from scratch in x86-64 assembly to classify images of cats and dogs, trained on a dataset of 25,000 RGB images. The goal was to understand CNNs at the lowest level by working directly with memory layout, data movement, and SIMD arithmetic, without relying on any machine learning frameworks or libraries. Conv2D, MaxPool, and Dense layers, activations, forward and backward propagation, and the data loader were all written in pure assembly, and the result runs roughly 10 times faster than an equivalent NumPy version. Despite the difficulty of debugging at this scale, the implementation runs inside a lightweight Debian Slim Docker container, a distinctive blend of low-level programming and machine learning. This matters because it shows how much performance headroom low-level optimization can unlock in neural-network workloads; a sketch of the convolution inner loop appears below.

    Read Full Article: CNN in x86 Assembly: Cat vs Dog Classifier
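
    To make the convolution workload concrete, the sketch below shows the multiply-accumulate inner loop of a "valid" 2D convolution over a single-channel image, written in Rust as a high-level illustration. It is not the author's assembly, and the shapes and names are assumptions; the article's version hand-manages the same row-major addressing and accumulation in x86-64 registers with SIMD.

      // Illustrative sketch only, not the author's implementation.
      // "Valid" 2D convolution of a single-channel row-major image with a
      // row-major kernel; returns an (iw - kw + 1) x (ih - kh + 1) output.
      fn conv2d_valid(img: &[f32], iw: usize, ih: usize, k: &[f32], kw: usize, kh: usize) -> Vec<f32> {
          let (ow, oh) = (iw - kw + 1, ih - kh + 1);
          let mut out = vec![0.0f32; ow * oh];
          for oy in 0..oh {
              for ox in 0..ow {
                  let mut acc = 0.0f32;
                  for ky in 0..kh {
                      for kx in 0..kw {
                          // Row-major index arithmetic: in the assembly version this
                          // addressing and the multiply-accumulate are done explicitly
                          // in registers, with SIMD handling several products at once.
                          acc += img[(oy + ky) * iw + (ox + kx)] * k[ky * kw + kx];
                      }
                  }
                  out[oy * ow + ox] = acc;
              }
          }
          out
      }

      fn main() {
          let img = vec![1.0f32; 16];    // 4x4 image of ones
          let kernel = vec![0.25f32; 4]; // 2x2 averaging kernel
          let out = conv2d_valid(&img, 4, 4, &kernel, 2, 2);
          println!("{:?}", out);         // 3x3 output, all 1.0
      }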