complex systems

  • Reevaluating LLMs: Prediction vs. Reasoning


    "Next-token prediction is not real reasoning" is a common objection to large language models (LLMs). That argument is challenged by asking whether human cognition might operate in a similar manner. The focus should be not on next-token prediction itself, but on the complexity and structure of the internal processes that drive it: if the system behind token selection is sophisticated enough, it could be considered a form of reasoning. The debate suggests that these internal processes matter more than the sequential output of tokens, and it calls for reconsidering what constitutes intelligence and reasoning. This matters because it challenges our understanding of both artificial intelligence and human cognition, potentially reshaping how we define intelligence.

    Read Full Article: Reevaluating LLMs: Prediction vs. Reasoning
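    The point that the prediction loop is separate from the sophistication behind it can be made concrete. Below is a minimal, hypothetical sketch (the bigram table `TOY_MODEL` and all names are illustrative, not any real model): the generation loop itself is trivial, and whatever "reasoning" exists would live entirely inside the scoring function it calls.

    ```python
    # Toy stand-in for a trained model: maps the last token to candidate scores.
    TOY_MODEL = {
        ("the",): {"cat": 0.6, "dog": 0.4},
        ("cat",): {"sat": 0.9, "ran": 0.1},
        ("sat",): {"<eos>": 1.0},
        ("dog",): {"ran": 1.0},
        ("ran",): {"<eos>": 1.0},
    }

    def next_token(context):
        """Score candidates given the context, pick the argmax (greedy)."""
        scores = TOY_MODEL[(context[-1],)]
        return max(scores, key=scores.get)

    def generate(prompt, max_tokens=10):
        """The 'mere prediction' loop: repeatedly append the predicted token."""
        tokens = list(prompt)
        for _ in range(max_tokens):
            tok = next_token(tokens)
            if tok == "<eos>":
                break
            tokens.append(tok)
        return tokens

    print(generate(["the"]))  # → ['the', 'cat', 'sat']
    ```

    Note that nothing in `generate` changes whether `next_token` is a lookup table or a deep network; the debate is entirely about the latter's internal structure.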

  • Axiomatic Convergence in Generative Systems


    The Axiomatic Convergence Hypothesis (ACH) examines how generative systems behave under fixed external constraints, proposing that repeated generation under stable conditions reduces output variability. "Axiomatic convergence" is defined to cover both output convergence and structural convergence, and the hypothesis makes testable predictions about convergence patterns such as variance decay and path dependence. A detailed experimental protocol is provided for testing ACH across models and domains, designed for independent replication without disclosing proprietary details. The work offers a framework for consistent evaluation of convergence in generative systems. This matters because a structured approach to understanding and predicting convergence behavior can improve the development and reliability of AI models.

    Read Full Article: Axiomatic Convergence in Generative Systems
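    The variance-decay prediction lends itself to a simple measurement harness. The sketch below is hypothetical and uses a simulated generator (`simulated_generator` is an invented stand-in, not ACH's actual protocol): sample a model repeatedly at successive generation rounds under fixed constraints, and check whether output variance shrinks across rounds, as ACH predicts.

    ```python
    import random
    import statistics

    def simulated_generator(step, rng):
        """Invented stand-in for a generative model: its noise shrinks with
        each round, mimicking the variance decay ACH predicts."""
        return 10.0 + rng.gauss(0, 1.0 / (1 + step))

    def variance_at_step(step, samples=200, seed=0):
        """Sample the generator repeatedly at one round and measure spread."""
        rng = random.Random(seed)
        outputs = [simulated_generator(step, rng) for _ in range(samples)]
        return statistics.variance(outputs)

    early, late = variance_at_step(0), variance_at_step(9)
    print(early > late)  # variance decay: later rounds vary less
    ```

    With a real model, `simulated_generator` would be replaced by repeated generation calls under the fixed constraints the protocol specifies; the variance comparison across rounds stays the same.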