This article takes an in-depth look at the core mechanisms and industry status of Interleaved Thinking in large language models. It traces the technique's evolution from Anthropic's Extended Thinking through implementations in models from OpenAI, MiniMax, Kimi, and DeepSeek. The core principle is that the model retains and accumulates its reasoning state across multiple turns of tool calling and reasoning rather than discarding it after each turn, which significantly improves performance on complex agent tasks. The article argues that the absence of a native thinking field in the OpenAI API has left the industry without a common standard. Finally, it credits MiniMax's contributions to standardizing Interleaved Thinking and building out its ecosystem, situating them within the broader development of agentic AI.
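
To make the mechanism concrete, below is a minimal sketch of an agent loop that preserves the model's reasoning state across tool-call turns, in the spirit of Interleaved Thinking. It assumes an OpenAI-compatible endpoint that exposes a vendor-specific `reasoning_content` field (DeepSeek, MiniMax, and Kimi expose fields along these lines, with per-provider differences in whether it may be echoed back); the endpoint URL, model name, and `search` tool are placeholders, not a specific provider's API.

```python
# Sketch of an interleaved-thinking agent loop. Assumptions: `reasoning_content`
# is a vendor-specific extension on OpenAI-compatible APIs; the official OpenAI
# Chat Completions API has no native thinking field, which is exactly the
# standardization gap the article discusses.
import json
from openai import OpenAI

client = OpenAI(base_url="https://api.example.com/v1", api_key="...")  # hypothetical endpoint

TOOLS = [{
    "type": "function",
    "function": {
        "name": "search",
        "description": "Hypothetical search tool.",
        "parameters": {"type": "object", "properties": {"query": {"type": "string"}}},
    },
}]

def run_tool(name: str, arguments: str) -> str:
    """Hypothetical tool executor; replace with real tool dispatch."""
    return json.dumps({"tool": name, "result": "ok"})

messages = [{"role": "user", "content": "Plan and execute a multi-step task."}]

while True:
    resp = client.chat.completions.create(
        model="some-reasoning-model", messages=messages, tools=TOOLS
    )
    msg = resp.choices[0].message

    # The key step: keep the reasoning state in the conversation instead of
    # discarding it, so later turns build on the accumulated thinking.
    # Caution: some providers require echoing this field back, others reject
    # it in the input; check the provider's documentation.
    assistant_turn = {"role": "assistant", "content": msg.content}
    reasoning = getattr(msg, "reasoning_content", None)  # vendor extension
    if reasoning is not None:
        assistant_turn["reasoning_content"] = reasoning
    if msg.tool_calls:
        assistant_turn["tool_calls"] = [tc.model_dump() for tc in msg.tool_calls]
    messages.append(assistant_turn)

    if not msg.tool_calls:
        break  # final answer reached

    # Feed every tool result back, then loop: the model reasons again with
    # both its prior thinking and the new observations in context.
    for tc in msg.tool_calls:
        messages.append({
            "role": "tool",
            "tool_call_id": tc.id,
            "content": run_tool(tc.function.name, tc.function.arguments),
        })
```

The essential difference from an ordinary tool-calling loop is the single branch that carries `reasoning_content` forward: a client that silently drops that field forces the model to re-derive its plan from scratch on every turn, which is the failure mode Interleaved Thinking is meant to avoid.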


