Shannon Entropy

  • Blocking AI Filler with Shannon Entropy


    I got tired of the "I apologize" loop, so I wrote a Python script to block it using Shannon Entropy. Frustrated with AI models' tendency to pad outputs with apologies and filler phrases, a developer wrote a Python script that filters such content using Shannon entropy. By measuring the "smoothness" of text, the script identifies low-entropy outputs, which often consist of formulaic polite language, and blocks them before they reach data pipelines. This forces AI models to deliver more direct, concise responses and improves the efficiency of automated systems. The open-source implementation is available for others to use and adapt. This matters because it improves the quality and relevance of AI-generated content in professional applications.

    Read Full Article: Blocking AI Filler with Shannon Entropy
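    The core idea described above can be sketched in a few lines: compute the character-level Shannon entropy of a response and reject anything below a cutoff. The `is_filler` name and the exact threshold are illustrative assumptions, not the article's actual API (the second article below cites 3.5 as its cutoff).

    ```python
    import math
    from collections import Counter

    def shannon_entropy(text: str) -> float:
        """Character-level Shannon entropy in bits per character."""
        if not text:
            return 0.0
        total = len(text)
        counts = Counter(text)
        # H = -sum(p * log2(p)) over the character distribution
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    def is_filler(text: str, threshold: float = 3.5) -> bool:
        """Block low-entropy ("smooth") outputs; threshold is a hypothetical default."""
        return shannon_entropy(text) < threshold
    ```

    Highly repetitive text scores near zero bits per character, while varied prose typically lands above 4, so a mid-range cutoff separates the two regimes.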

  • Enhancing AI Text with Shannon Entropy Filters


    Purging RLHF "assistant-voice" with Shannon Entropy (Math + DPO Export). To combat the overly polite, predictable language of AI models, this method uses Shannon entropy to filter out low-entropy responses, which the author finds aesthetically unappealing. Entropy measures the "messiness" of text: professional technical prose scores high, while AI-generated text is often low-entropy because it is so predictable. By blocking responses whose entropy falls below 3.5, the system builds a dataset of rejected and chosen responses for training models to produce more natural, less sycophantic language. The technique is open-source and available in Steer v0.4, and it offers a way to refine AI communication through the mathematical properties of text. This matters because it provides a new path to more human-like, less formulaic model output.

    Read Full Article: Enhancing AI Text with Shannon Entropy Filters
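    The chosen/rejected dataset export described above might look roughly like this: score candidate responses by entropy, pair the highest-entropy one (chosen) with a low-entropy one (rejected), and emit a preference record. The function name, record shape, and pairing strategy are assumptions for illustration; only the 3.5 cutoff and the chosen/rejected framing come from the article.

    ```python
    import math
    from collections import Counter

    def shannon_entropy(text: str) -> float:
        """Character-level Shannon entropy in bits per character."""
        if not text:
            return 0.0
        total = len(text)
        counts = Counter(text)
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    def dpo_record(prompt, responses, threshold=3.5):
        """Build a hypothetical DPO-style preference record from candidate responses.

        Pairs the highest-entropy response (chosen) with the lowest-entropy
        one (rejected); returns None if nothing falls below the cutoff.
        """
        scored = sorted(responses, key=shannon_entropy)
        rejected, chosen = scored[0], scored[-1]
        if shannon_entropy(rejected) >= threshold:
            return None  # no low-entropy response to reject
        return {"prompt": prompt, "chosen": chosen, "rejected": rejected}
    ```

    Records in this shape could be appended to a JSONL file and fed to a standard DPO training loop.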