local models

  • Aventura: Open Source Adventure RP App


    Free, open source adventure RP app (AGPL 3) | AventuraAventura is a free and open-source frontend application designed for adventure role-playing and creative writing, licensed under AGPL 3. It supports OpenAI-compatible sources and allows users to modify model parameters, despite limited testing due to hardware constraints. Key features include event and character tracking, multiple choice options for storytelling, long-term memory management, automatic lorebook retrieval, and anti-slop automation using LLMs. The app also offers a setup wizard for new scenarios, built-in spell checker, and lorebook classification, while its unique memory system maintains coherence by summarizing and querying past chapters without overloading the main narrative AI. This matters because it enhances the creative process by automating complex tasks, allowing users to focus on storytelling.

    Read Full Article: Aventura: Open Source Adventure RP App

  • WebSearch AI: Local Models Access the Web


    WebSearch AI - Let Local Models use the InterwebsWebSearch AI is a newly updated, fully self-hosted chat application that enables local models to access real-time web search results. Designed to accommodate users with limited hardware capabilities, it provides an easy entry point for non-technical users while offering advanced users an alternative to popular platforms like Grok, Claude, and ChatGPT. The application is open-source and free, utilizing Llama.cpp binaries for the backend and PySide6 Qt for the frontend, with a remarkably low runtime memory usage of approximately 500 MB. Although the user interface is still being refined, this development represents a significant improvement in making AI accessible to a broader audience. This matters because it democratizes access to AI technology by reducing hardware and technical barriers.

    Read Full Article: WebSearch AI: Local Models Access the Web

  • Stress-testing Local LLM Agents with Adversarial Inputs


    Stress-testing local LLM agents with adversarial inputs (Ollama, Qwen)A new open-source tool called Flakestorm has been developed to stress-test AI agents running on local models like Ollama, Qwen, and Gemma. The tool addresses the issue of AI agents performing well with clean prompts but exhibiting unpredictable behavior when faced with adversarial inputs such as typos, tone shifts, and prompt injections. Flakestorm generates adversarial mutations from a "golden prompt" and evaluates the AI's robustness, providing a score and a detailed HTML report of failures. The tool is designed for local use, requiring no cloud services or API keys, and aims to improve the reliability of local AI agents by identifying potential weaknesses. This matters because ensuring the robustness of AI systems against varied inputs is crucial for their reliable deployment in real-world applications.

    Read Full Article: Stress-testing Local LLM Agents with Adversarial Inputs

  • gsh: A New Shell for Local Model Interaction


    gsh - play with any local model directly in your shell REPL or scriptsgsh is a newly developed shell that offers an innovative way to interact with local models directly from the command line, providing features like command prediction and an agentic scripting language. It enhances user experience by allowing customization similar to neovim and supports integration with various local language models (LLMs). Key functionalities include syntax highlighting, tab completion, history tracking, and auto-suggestions, making it a versatile tool for both interactive use and automation scripts. This matters as it presents a modern approach to shell environments, potentially increasing productivity and flexibility for developers and users working with local models.

    Read Full Article: gsh: A New Shell for Local Model Interaction

  • LocalGuard: Auditing Local AI Models for Security


    I built a tool to audit local models (Ollama/vLLM) for security and hallucinations using Garak & InspectAILocalGuard is an open-source tool designed to audit local machine learning models, such as Ollama, for security and hallucination issues. It simplifies the process by orchestrating Garak for security testing and Inspect AI for compliance checks, generating a PDF report with clear "Pass/Fail" results. The tool supports Python and can evaluate models like vLLM and cloud providers, offering a cost-effective alternative by defaulting to local models for judgment. This matters because it provides a streamlined and accessible solution for ensuring the safety and reliability of locally run AI models, which is crucial for developers and businesses relying on AI technology.

    Read Full Article: LocalGuard: Auditing Local AI Models for Security