This article examines why Anthropic's Model Context Protocol (MCP) has run into trouble in the year since its launch. It first revisits the considerable buzz MCP generated on its release in late 2024, when it was expected to end the constant reinvention of integrations in AI application development. Its core weakness, however, is that tool definitions and call results occupy a large share of the context window on every call, driving up Large Language Model (LLM) token consumption and operating costs while diluting the model's attention and noticeably reducing reasoning accuracy. MCP's low barrier to entry has also flooded the ecosystem with redundant, low-quality tools, raising the screening cost for developers, and its permissive permission design creates serious security risks that can lead to irreversible errors. The article notes that Anthropic has quietly shifted toward a system called Skills, which can be read as an implicit acknowledgment of MCP's design flaws: Skills achieves tighter, more efficient integration by packaging high-frequency, validated capabilities. Ultimately, the article argues that both MCP and Skills are "patches" compensating for the current limits of AI intelligence, attempts to impose deterministic engineering discipline on probabilistic agents, and it predicts that MCP will settle from hyped technology into foundational infrastructure, with high-frequency capabilities migrating to Skills and long-tail data access remaining with MCP.
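
To make the context-window cost concrete, here is a minimal Python sketch, not taken from the article or from any real MCP server: it builds a list of hypothetical tool schemas of the kind an MCP client injects into each model request and estimates their fixed token overhead with a crude characters-per-token heuristic.

```python
import json

# Hypothetical MCP-style tool definitions (illustrative only; not the schemas
# of any published server). Each tool a server exposes is described by a JSON
# schema that the client must include in every model request, whether or not
# the tool ends up being called.
TOOLS = [
    {
        "name": f"crm_tool_{i}",
        "description": "Look up customer records by id, email, or phone, "
                       "with optional filters for region and account status.",
        "inputSchema": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search string"},
                "region": {"type": "string", "description": "Sales region"},
                "limit": {"type": "integer", "description": "Max results"},
            },
            "required": ["query"],
        },
    }
    for i in range(30)  # a setup with a few servers can easily expose dozens of tools
]

def rough_token_count(text: str) -> int:
    """Crude heuristic: roughly 4 characters per token for English/JSON text."""
    return len(text) // 4

schema_text = json.dumps(TOOLS)
overhead = rough_token_count(schema_text)
print(f"{len(TOOLS)} tool schemas add roughly {overhead} tokens of fixed prompt "
      "overhead to every request, before any conversation or results are counted")
```

Because this overhead is paid on every turn regardless of which tools are used, it compounds over long agent sessions, which is the cost and attention problem the article attributes to MCP and which Skills tries to avoid by loading only the capability that is actually needed.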


