Decoupling Knowledge and Reasoning: A New Approach to Artificial General Intelligence

From Artificially Useful to Artificially General Intelligence: A New Approach to Reasoning

Large language models (LLMs) have made remarkable progress in recent years, demonstrating impressive potential for practical applications, a form of artificially useful intelligence (AUI). However, their capacity for adaptive and robust reasoning, the hallmark of artificial general intelligence (AGI), remains limited. While LLMs appear successful in areas such as everyday logic, programming, and mathematics, they struggle to generalize their algorithmic understanding to new contexts.

Experiments with algorithmic tasks in esoteric programming languages show that LLM reasoning is tuned to the training data and transfers poorly to unfamiliar settings. The root cause of this limited transferability presumably lies in the coupling of knowledge and reasoning within LLMs. To enable the transition from AUI to AGI, the researchers propose a new approach: decoupling knowledge from reasoning.

Three Key Directions for the Advancement of AI Systems

This new approach focuses on three key directions:

First, instead of the now-standard next-token-prediction pretraining, reasoning should be trained from scratch through reinforcement learning (RL). The goal is to teach the model to reason on its own rather than merely to exploit statistical correlations in the training data.
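
To make this direction concrete, here is a minimal, purely illustrative sketch rather than the paper's method: a tiny policy is trained with REINFORCE on a synthetic parity task and is rewarded only for a correct final answer, with no next-token supervision. The model, task generator, and hyperparameters (TinyPolicy, sample_task, SEQ_LEN) are assumptions chosen for brevity.

```python
# Minimal RL-from-scratch sketch (illustrative, not the paper's method):
# a tiny policy learns a synthetic parity task from reward alone,
# i.e. correctness of the final answer, without next-token supervision.
import torch
import torch.nn as nn

SEQ_LEN = 8  # length of the synthetic bit strings

class TinyPolicy(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)          # logits over the answer {0, 1}

    def forward(self, bits):                      # bits: (batch, SEQ_LEN, 1)
        _, h = self.encoder(bits)
        return self.head(h[-1])

def sample_task(batch=64):
    bits = torch.randint(0, 2, (batch, SEQ_LEN, 1)).float()
    target = bits.sum(dim=(1, 2)).long() % 2      # parity of each bit string
    return bits, target

policy = TinyPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(2000):
    bits, target = sample_task()
    dist = torch.distributions.Categorical(logits=policy(bits))
    action = dist.sample()                        # the model's guessed answer
    reward = (action == target).float()           # reward only correct answers
    # REINFORCE with a mean baseline to reduce gradient variance
    loss = -(dist.log_prob(action) * (reward - reward.mean())).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```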

Second, a curriculum of synthetic tasks should be used to facilitate the learning of a "reasoning prior" for RL. This prior can then be transferred to natural language tasks. By using synthetic tasks, complex scenarios can be created that allow the model to develop robust reasoning strategies.
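
The sketch below shows one way such a curriculum could be organized; it is an assumption for illustration, not the setup used in the paper. Synthetic tasks (here, sorting) are graded by a difficulty parameter, and the learner is only promoted to the next level once it solves the current one reliably.

```python
# Illustrative curriculum scheduler over synthetic tasks (hypothetical design):
# task difficulty grows only after the agent reliably solves the current level,
# so easier tasks bootstrap a "reasoning prior" for harder ones.
import random

def make_task(difficulty):
    """Generate a synthetic sorting task whose length grows with difficulty."""
    length = 3 + 2 * difficulty
    xs = [random.randint(0, 99) for _ in range(length)]
    return xs, sorted(xs)

class Curriculum:
    def __init__(self, levels=5, promote_at=0.9, window=100):
        self.level = 0
        self.levels = levels
        self.promote_at = promote_at              # success rate needed to advance
        self.window = window                      # number of recent attempts tracked
        self.recent = []

    def next_task(self):
        return make_task(self.level)

    def report(self, solved: bool):
        self.recent = (self.recent + [solved])[-self.window:]
        full = len(self.recent) == self.window
        if full and sum(self.recent) / self.window >= self.promote_at \
                and self.level < self.levels - 1:
            self.level += 1                       # promote to harder tasks
            self.recent = []
```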

Third, more general reasoning functions should be learned by using a small context window. This reduces the exploitation of spurious correlations between tokens and promotes the development of more robust and generalizable reasoning abilities.
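
As a hedged illustration of this point, the sketch below confines a hypothetical reasoning module to a short, fixed context window and feeds it a long problem as a sequence of small overlapping slices; the configuration fields and helper names are invented for the example.

```python
# Sketch of the small-context idea (names and sizes are illustrative):
# the reasoner only ever sees a short slice of the problem at a time,
# which limits its ability to latch onto long-range spurious correlations.
from dataclasses import dataclass

@dataclass
class ReasonerConfig:
    context_window: int = 32   # deliberately small compared to typical LLMs
    hidden_size: int = 256
    num_layers: int = 4

def sliding_windows(tokens, window):
    """Yield short overlapping slices; the reasoner processes one at a time."""
    step = max(1, window // 2)
    for start in range(0, len(tokens), step):
        chunk = tokens[start:start + window]
        if chunk:
            yield chunk

cfg = ReasonerConfig()
example = list(range(100))                 # stand-in for a long tokenized problem
chunks = list(sliding_windows(example, cfg.context_window))
print(len(chunks), "slices of at most", cfg.context_window, "tokens")
```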

A New Architectural Concept for AI Systems

Such an RL-based reasoning system, coupled with a trained retrieval system and a large external memory as a knowledge store, could overcome the current limitations of existing architectures in learning to reason in new scenarios. By separating knowledge and reasoning, models can react more flexibly to unknown situations and solve more complex problems.
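
The sketch below shows, in deliberately simplified form, how such a decoupled system could be wired together. Every class and method name is hypothetical, and the retriever is reduced to naive keyword overlap purely to keep the example self-contained.

```python
# Hypothetical decoupled architecture: a small reasoner queries a retriever
# over a large external knowledge store instead of storing facts in its weights.
from typing import List

class KnowledgeStore:
    """Large external memory: facts live here, not in the reasoner's parameters."""
    def __init__(self):
        self.facts: List[str] = []

    def add(self, fact: str):
        self.facts.append(fact)

class Retriever:
    """Stand-in for a trained retrieval system (here: naive keyword overlap)."""
    def retrieve(self, query: str, store: KnowledgeStore, k: int = 3) -> List[str]:
        terms = set(query.lower().split())
        ranked = sorted(store.facts,
                        key=lambda f: len(terms & set(f.lower().split())),
                        reverse=True)
        return ranked[:k]

class Reasoner:
    """Stand-in for the RL-trained reasoning module: sees only retrieved snippets."""
    def answer(self, question: str, evidence: List[str]) -> str:
        # Placeholder for the learned reasoning policy.
        return f"Answer derived from {len(evidence)} retrieved fact(s)."

store = KnowledgeStore()
store.add("Paris is the capital of France.")
retriever, reasoner = Retriever(), Reasoner()
question = "What is the capital of France?"
print(reasoner.answer(question, retriever.retrieve(question, store)))
```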

The proposed directions offer a promising foundation for the future development of AI systems. The decoupling of knowledge and reasoning could be the key to developing AGI and pave the way for more powerful and adaptable AI systems.

For companies like Mindverse, which specialize in the development of customized AI solutions, these research findings offer valuable direction. The development of chatbots, voicebots, AI search engines, and knowledge systems could benefit significantly from integrating these new approaches to reasoning. The ability to reason robustly and adaptively in new situations is crucial for building AI systems that meet the demands of a constantly changing world.

Bibliography:
https://arxiv.org/abs/2502.19402
https://www.arxiv.org/pdf/2502.19402
https://chatpaper.com/chatpaper/paper/115625
https://deeplearn.org/arxiv/580500/general-reasoning-requires-learning-to-reason-from-the-get-go
https://www.oneusefulthing.org/p/the-end-of-search-the-beginning-of
https://openreview.net/forum?id=BGnm7Lo8oW
https://www.researchgate.net/publication/384938802_Reasoning_Paths_Optimization_Learning_to_Reason_and_Explore_From_Diverse_Paths
https://huggingface.co/papers/2502.07374
https://www.scribbr.com/methodology/inductive-reasoning/
https://www.assessmentday.co.uk/aptitudetests_logical.htm