Tools

  • 15M Param Model Achieves 24% on ARC-AGI-2


    15M param model solving 24% of ARC-AGI-2 (Hard Eval). Runs on consumer hardware.

    Bitterbot AI has introduced TOPAS-DSPL, a compact recursive model of roughly 15 million parameters that achieves 24% accuracy on the ARC-AGI-2 evaluation set, a significant improvement over the previous state of the art of 8% for models of similar size. The model employs a "Bicameral" architecture that divides each task between a Logic Stream, which plans the algorithm, and a Canvas Stream, which executes it, mitigating the compositional drift seen in standard transformers. In addition, Test-Time Training (TTT) fine-tunes the model on a task's demonstration examples before it generates a solution. The entire pipeline, covering data generation, training, and evaluation, has been open-sourced, allowing the community to verify and potentially reproduce the results on consumer hardware such as an RTX 4090 GPU. This matters because it demonstrates substantial gains in model efficiency and accuracy, making sophisticated AI more accessible and verifiable.

    Read Full Article: 15M Param Model Achieves 24% on ARC-AGI-2

  • The State Of LLMs 2025: Progress, Problems, Predictions


    [P] The State Of LLMs 2025: Progress, Problems, and Predictions

    Choosing the right machine learning framework is crucial for development efficiency and model performance. PyTorch and TensorFlow are the two most commonly recommended frameworks, with TensorFlow favored in industrial settings for its robust tooling and Keras integration, which simplifies development. Some users, however, find TensorFlow setup challenging, particularly on Windows, where native GPU support is lacking. Other notable frameworks include JAX, Scikit-Learn, and XGBoost, and various subreddits offer venues for further discussion and personalized advice from experienced practitioners. This matters because selecting an appropriate machine learning framework can significantly influence the success and efficiency of AI projects.

    Read Full Article: The State Of LLMs 2025: Progress, Problems, Predictions

  • Enhance LLM Plots with LLMPlot.com


    I built LLMPlot.com (free + OSS) to make LLM plots not ugly anymore!

    LLMPlot.com is a new platform designed to improve the visual appeal of language-model evaluation plots, which are often criticized for their poor aesthetics. The tool is free and open source: users enter model details, provider, and scores to generate polished comparison plots. The plots are optimized for sharing on social platforms such as X, LinkedIn, and Reddit, making them accessible and engaging for a wider audience. This matters because better visual presentation improves the communication and understanding of complex data.

    Read Full Article: Enhance LLM Plots with LLMPlot.com

  • Alibaba’s MAI-UI: Leading GUI Agent Innovation


    Alibaba Tongyi Lab Releases MAI-UI: A Foundation GUI Agent Family that Surpasses Gemini 2.5 Pro, Seed1.8 and UI-Tars-2 on AndroidWorld

    Alibaba Tongyi Lab's MAI-UI is a family of foundation GUI agents that excels at mobile GUI navigation and grounding, outperforming earlier models such as Gemini 2.5 Pro and Seed1.8. By integrating MCP tool use, agent-user interaction, and device-cloud collaboration, MAI-UI addresses gaps in earlier GUI agents, preserving privacy while still leveraging cloud models. Built on the Qwen3 VL framework, the agents take natural language and UI screenshots as input and perform actions in Android environments, achieving high accuracy on benchmarks such as ScreenSpot Pro and MMBench GUI L2. Navigation is further strengthened by a self-evolving data pipeline and an online reinforcement learning framework, yielding significant gains in success rate on the AndroidWorld benchmark. This matters because it represents a notable step toward intelligent, interactive mobile agents that can adapt to user needs and complex environments.

    Read Full Article: Alibaba’s MAI-UI: Leading GUI Agent Innovation

  • New SSM Architecture Exceeds Transformer Baseline


    [R] New SSM architecture (exceeds Transformer baseline) - reproducible benchmarks (feedback wanted)

    A new State Space Model (SSM) architecture for sequence modeling addresses the O(L²) attention cost that limits Transformers on long sequences. By combining delta-rule state updates with the representational power of gated convolutions, the architecture achieves O(L) complexity and establishes a strong baseline for sequence modeling tasks. With mildly optimized Triton kernels, it matches and exceeds Transformer performance and speed even at relatively short sequence lengths. This matters because it offers a more efficient and scalable way to process long sequences in natural language processing and other domains.
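    The linear-time recurrence behind delta-rule models can be sketched as a simple scan: the state is updated once per token, giving O(L) cost overall instead of attending over all previous tokens. This is an illustrative NumPy sketch, not the post's Triton kernels; the dimensions, the constant write gate, and the read-back step are assumptions.

```python
import numpy as np

# Delta-rule scan sketch: a d_v x d_k associative state S is updated
# once per token, so the whole sequence costs O(L) state updates.

def delta_rule_scan(keys, values, betas):
    """Scan a sequence, maintaining an associative key->value state."""
    d_k, d_v = keys.shape[1], values.shape[1]
    S = np.zeros((d_v, d_k))
    outputs = []
    for k, v, b in zip(keys, values, betas):
        # Delta rule: move the value stored under key k toward the new
        # value v; b acts as the write gate (b=1 overwrites fully).
        S = S + b * np.outer(v - S @ k, k)
        outputs.append(S @ k)  # read back with the same key
    return np.stack(outputs)

rng = np.random.default_rng(0)
L, d = 6, 4
keys = rng.standard_normal((L, d))
keys /= np.linalg.norm(keys, axis=1, keepdims=True)  # unit-norm keys
values = rng.standard_normal((L, d))
out = delta_rule_scan(keys, values, betas=np.ones(L))
# With beta = 1 and unit-norm keys, reading immediately after writing
# returns exactly the stored value.
print(np.allclose(out, values))  # True
```

    The gated-convolution components of the actual architecture would sit around this recurrence; the sketch only shows why the state update itself is linear in sequence length.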

    Read Full Article: New SSM Architecture Exceeds Transformer Baseline

  • LLM Price Tracker & Cost Calculator


    I built a simple LLM price tracker + cost calculator (2100+ models, auto-updated)

    A new tool aggregates pricing for more than 2,100 language models across providers and includes a simple cost calculator for estimating expenses. It refreshes every six hours, so users always have current figures, and is published as a static site on GitHub Pages, making it easy to consume programmatically or in automation. This matters because it simplifies comparing and managing costs across language models, potentially saving both time and money.
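    The cost arithmetic such a calculator performs is straightforward; this minimal sketch uses made-up model names and per-million-token prices as placeholders, not the tracker's actual data.

```python
# Illustrative LLM cost calculator. PRICES holds USD per 1M tokens as
# (input_price, output_price); the entries below are invented examples.

PRICES = {
    "model-a": (0.50, 1.50),
    "model-b": (3.00, 15.00),
}

def estimate_cost(model, input_tokens, output_tokens):
    """Estimate the USD cost of one request from token counts."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# e.g. a 10k-token prompt with a 2k-token reply:
print(f"${estimate_cost('model-a', 10_000, 2_000):.4f}")  # $0.0080
```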

    Read Full Article: LLM Price Tracker & Cost Calculator

  • AI Agent Executes 100,000 Tasks with One Prompt


    I built an AI agent that can do 100,000s of tasks with one prompt :)

    A new AI feature called "Scale Mode" lets a single prompt trigger thousands of coordinated tasks that run autonomously, such as visiting large numbers of links to collect data or processing extensive document sets. This supports large-scale operations like generating and enriching B2B leads or processing invoices. The feature is designed to be versatile: appending "Do it in Scale Mode" to a prompt is enough to activate it. This matters because Scale Mode gives businesses a way to automate large volumes of tasks, which can translate into substantial time savings and operational efficiency.

    Read Full Article: AI Agent Executes 100,000 Tasks with One Prompt

  • Benchmarking Speech-to-Text Models for Medical Dialogue


    I benchmarked 26 local + cloud Speech-to-Text models on long-form medical dialogue and ranked them + open-sourced the full eval

    Twenty-six speech-to-text (STT) models were benchmarked on long-form medical dialogue using the PriMock57 dataset, which comprises 55 files and more than 81,000 words. Models were ranked by average Word Error Rate (WER): Google Gemini 2.5 Pro led overall at 10.79%, and Parakeet TDT 0.6B v3 was the best local model at 11.9%. The evaluation also tracked processing time per file and flagged issues such as repetition-loop failures in some models, which required chunking to mitigate. The full evaluation, including code and a complete leaderboard, is available on GitHub, providing a useful reference for developers working on medical transcription. This matters because accurate, efficient STT models are crucial for improving clinical documentation and reducing the administrative burden on healthcare professionals.
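    Word Error Rate, the metric used for the ranking, is the word-level edit distance between hypothesis and reference divided by the reference length. A minimal dynamic-programming implementation (the example sentences are invented, not drawn from PriMock57):

```python
# Word Error Rate: (substitutions + deletions + insertions) / reference words,
# computed via the classic edit-distance DP over word tokens.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub,                # substitution / match
                           dp[i - 1][j] + 1,   # deletion
                           dp[i][j - 1] + 1)   # insertion
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the patient reports mild chest pain",
          "the patient report mild chess pain"))  # ≈ 0.333 (2 errors / 6 words)
```

    A 10.79% WER, as reported for the leading model, means roughly one word in nine of the reference transcript is wrong, which is why per-model differences of a few points matter for clinical use.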

    Read Full Article: Benchmarking Speech-to-Text Models for Medical Dialogue

  • botchat: Privacy-Preserving Multi-Bot AI Chat Tool


    botchat | a privacy-preserving, multi-bot AI chat tool

    botchat is a newly launched tool for users who work with multiple AI language models at once while prioritizing privacy. It lets users assign different personas to bots, drawing diverse perspectives on a single query and capitalizing on the distinct strengths of different models within the same conversation. Notably, botchat emphasizes data protection: conversations and attachments are never stored on its servers, and when the default keys are used, AI providers do not retain user data for model training. This matters because it offers a secure, versatile way to interact with several AI models while addressing privacy concerns.

    Read Full Article: botchat: Privacy-Preserving Multi-Bot AI Chat Tool

  • CNN in x86 Assembly: Cat vs Dog Classifier


    I implemented a Convolutional Neural Network (CNN) from scratch entirely in x86 Assembly, Cat vs Dog Classifier

    An ambitious project implemented a Convolutional Neural Network (CNN) from scratch in x86-64 assembly to classify cats versus dogs on a dataset of 25,000 RGB images. The goal was to understand CNNs at the lowest level: memory layout, data movement, and SIMD arithmetic, without any machine learning frameworks or libraries. Conv2D, MaxPool, and Dense layers, activations, forward and backward propagation, and the data loader were all written in pure assembly, and the result runs roughly 10× faster than an equivalent NumPy version. Despite the difficulty of debugging at this scale, the implementation runs inside a lightweight Debian Slim Docker container, a distinctive blend of low-level programming and machine learning. This matters because it shows how much performance neural networks can gain from low-level optimization.
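    For reference, the forward pass of a Conv2D layer like the one described reduces to nested loops of multiply-accumulates; this NumPy sketch shows the operation the project hand-writes in assembly with SIMD (the image, kernel, padding, and stride choices here are illustrative assumptions, not the project's code).

```python
import numpy as np

# Naive Conv2D forward pass: valid padding, stride 1, single channel.
# Each output element is one windowed multiply-accumulate, which is
# exactly the inner loop a SIMD assembly implementation vectorizes.

def conv2d(image, kernel):
    """Valid (no padding), stride-1 2-D cross-correlation."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])  # diagonal-difference filter
print(conv2d(image, kernel))  # every entry is -5.0 for this input
```

    Keeping this loop nest cache-friendly and mapping the inner products onto SIMD registers is where a hand-tuned assembly version can beat a generic NumPy pipeline.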

    Read Full Article: CNN in x86 Assembly: Cat vs Dog Classifier