
Spectral Geometry of Thought in Transformers: Phase Transitions and Correctness Prediction

Transformers exhibit spectral phase transitions when reasoning versus recalling facts, with implications for architecture design and correctness prediction.

transformer-reasoning, spectral-analysis, language-models, frontier, automated, arxiv_ml

The preprint presents a systematic spectral analysis of 11 transformer models across 5 architecture families, identifying seven core phenomena in how transformers reason. The central claim of the authors' spectral theory of reasoning is that the geometry of thought is universal in direction, architecture-specific in dynamics, and predictive of outcome. Concretely, the hidden activation spaces of transformers undergo spectral phase transitions when reasoning rather than recalling facts: 9 of 11 models show a lower spectral exponent α during reasoning. The authors also report a spectral scaling law, a token-level spectral cascade, and spectral correctness prediction, among other phenomena. Together, these findings bear on the design of more accurate and efficient language models and on new architectures that better capture the dynamics of reasoning.
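The preprint's exact estimator is not reproduced here, but a minimal sketch of the kind of measurement involved, assuming α denotes the power-law decay exponent of the eigenvalue spectrum of a layer's hidden activations fit by linear regression in log-log space, might look like the following. The function name, fitting window, and placeholder activations are illustrative, not the authors' method:

```python
import numpy as np

def spectral_alpha(hidden_states: np.ndarray, k: int = 50) -> float:
    """Estimate a power-law decay exponent alpha for the eigenvalue
    spectrum of one layer's hidden activations.

    hidden_states: (n_tokens, d_model) activation matrix for one layer.
    k: number of leading eigenvalues used in the log-log fit.
    (Hypothetical sketch; not the paper's estimator.)
    """
    # Center the activations; the covariance eigenvalues are the
    # squared singular values of the centered matrix, scaled by n-1.
    X = hidden_states - hidden_states.mean(axis=0, keepdims=True)
    eigvals = np.linalg.svd(X, compute_uv=False) ** 2 / (len(X) - 1)

    # Fit lambda_i ~ i^(-alpha) by linear regression in log-log space.
    ranks = np.arange(1, k + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(eigvals[:k]), 1)
    return -slope  # positive alpha = decaying spectrum

# Example: compare alpha for a reasoning prompt vs. a recall prompt,
# using hidden states captured from the same layer (placeholders here).
h_reason = np.random.randn(512, 768)
h_recall = np.random.randn(512, 768)
print(spectral_alpha(h_reason), spectral_alpha(h_recall))
```

Under this convention, a lower α corresponds to a more slowly decaying eigenvalue spectrum, i.e., activations spread across more effective dimensions during reasoning.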


Source: The Spectral Geometry of Thought: Phase Transitions, Instruction Reversal, Token-Level Dynamics, and Perfect Correctness Prediction in How Transformers Reason

