
BestBlogs Issue #78: The Infinite Brain

Hello everyone! Welcome to Issue 78 of BestBlogs.dev's curated AI article recommendations.

This is our final issue of 2025, and this week's theme is "The Infinite Brain."

An article by Notion founder Ivan Zhao provides a fitting conclusion to the year. He frames AI as the third revolutionary force following steam and steel: steam engines expanded the limits of physical labor, steel raised the heights of architecture, and AI is becoming the "infinite brain"—breaking through cognitive boundaries. His core argument: we need to stop viewing AI merely as a "copilot" and start reimagining how we work entirely.

The data backs this up. A survey by Lenny and Figma, based on 1,750 respondents, reveals that over half of professionals save at least half a day per week thanks to AI. Engineers are migrating from GitHub Copilot to Cursor and Claude Code. PMs are using AI to cross functional boundaries and build prototypes themselves. Interestingly, entrepreneurs benefit the most while designers perceive the least value—AI penetration varies significantly by role.

Yet this year also exposed the gap between ambition and reality. Research from Berkeley and DeepMind shows that 68% of Agents are limited to 10 steps or fewer, with multi-agent collaboration suffering from "coordination tax" and error amplification. A candid post-mortem from a frontend team put it bluntly: technical success doesn't equal product success. The 80/20 bottleneck—where Agents handle 80% but the final 20% requires manual fixes—meant users preferred doing things themselves. Their conclusion: "Skills over standalone Agents"—integrate capabilities into general-purpose tools rather than reinventing the wheel.

Perhaps this captures 2025's true story: AI is indeed reshaping how we work, but the process is messier, more pragmatic, and demands more patience than we imagined. As we shift from "what AI can do" hype to "how AI should be used" in practice, the real transformation is just beginning.

Here are this week's 10 highlights worth your attention:

🧠 Notion founder Ivan Zhao interprets AI transformation through a historical lens, framing it as "The Infinite Brain." At the individual level, programmers leap from 10x to 30-40x productivity; at the organizational level, AI breaks through traditional communication bottlenecks; at the economic level, knowledge economies will evolve from "Florence" to "Tokyo"-scale megacities. The takeaway: stop treating AI as copilot—reimagine work itself.

📊 The AI Workplace Companion Survey by Lenny and Figma, based on 1,750 samples, reveals AI's real ROI: over half of professionals save at least half a day weekly. Entrepreneurs benefit most; designers perceive the least. Engineers are shifting from Copilot to Cursor and Claude Code. The opportunity frontier is migrating from content production toward strategic thinking.

🤖 Three papers lay bare the Year One struggles of Agents: 68% are limited to 10 steps or fewer, multi-agent setups face coordination tax and error amplification, and throwing more compute at the problem doesn't linearly improve performance. Real breakthroughs require systematic evolution in tool management, verification capabilities, and communication protocols. Essential reading for Agent teams.

💡 A candid post-mortem from a frontend team: technical success, product failure. User habit resistance, the 80/20 bottleneck, and workflow fragmentation led to zero adoption post-launch. Key lesson: technical success ≠ product success. Skills integrated into general tools beat standalone Agents. Hard-won lessons are the most valuable.

🔧 A detailed comparison of MCP vs. Agent Skills: MCP solves connectivity; Skills encapsulate domain knowledge and operational workflows. Skills' "progressive disclosure" mechanism uses a three-layer architecture to load information on-demand, effectively mitigating context explosion. The proposed MCP + Skills hybrid architecture is an important reference for Agent development.

📈 LangChain's annual report shows 57% of enterprises have deployed Agents in production. Customer service and R&D analysis are the two dominant use cases; the biggest challenge is output quality, not cost. Observability tracking is now standard; multi-model hybrid architectures are trending. Data-backed industry baselines.

🎯 Three Gemini co-leads at Google DeepMind in a rare joint interview: Flash now matches previous-gen Pro performance; Pro's main role has become distilling Flash. Post-training is the biggest breakthrough opportunity; latency and speed are severely undervalued. Code, reasoning, and math are largely "solved"—next up: open-ended tasks and continuous learning.

🚀 Major updates from Chinese open-source models this week. Zhipu's GLM-4.7 achieves open-source SOTA in coding—73.8% on SWE-bench, outperforming GPT-5.2 on Code Arena blind tests. MiniMax's M2.1 targets multi-language programming, surpassing Claude Sonnet 4.5 in tests while open-sourcing the new VIBE full-stack benchmark.

🎤 Tongyi open-sources Fun-Audio-Chat 8B, an end-to-end voice model that bypasses the traditional ASR+LLM+TTS pipeline for lower latency. Highlights include emotion perception and Speech Function Call support—executing complex tasks through natural voice. Weights and code fully available.

🌐 Y Combinator partners review 2025's five AI surprises: YC startups' model preference has shifted from OpenAI to Anthropic; startups are achieving arbitrage through model orchestration layers; the single-person unicorn remains unrealized. A separate year-end dialogue offers a bolder thesis: this isn't an AI bubble—it's AI War. Online Learning will become the third paradigm-shifting breakthrough.

In 2025, AI evolved from tool to partner—and we're still learning how to work alongside it. Thank you for being with us this year. Stay curious, and see you in 2026!

Hung-yi Lee
youtube.com
12-22
4440 words · 18 min
92
[Introduction to Generative AI & Machine Learning 2025] Lecture 10: History of Speech Language Model Development (Historical Review; 2025 Technology from 1:42:00)

A 2025 masterclass summary by Prof. Hung-yi Lee on SLM evolution. Key highlights: the trade-offs between cascade and end-to-end models, leveraging LLMs to fix speech semantic gaps, and how TASTE/STITCH architectures enable "think-while-speak" reasoning with zero latency. Ideal for grasping the logic behind GPT-4o's voice mode.

AI Engineer
youtube.com
12-19
5123 words · 21 min
93
From Arc to Dia: Lessons learned building AI Browsers – Samir Mody, The Browser Company of New York

Samir Mody, AI Engineering Lead at The Browser Company, discusses their journey from the Arc browser to the AI-native browser Dia. Key insights include optimizing tools and processes for rapid iteration, viewing model behavior as a craft, and addressing AI safety as an emergent product attribute. Mody elaborates on how their engineering culture, prototyping strategies, and team structure evolved to tackle the challenges of building an interface that reasons, plans, and acts. He highlights the importance of internal tools for rapid prototyping and evaluation, introduces JEPA for automated prompt optimization, and underscores the necessity of embedding AI safety measures like user confirmation for sensitive actions to mitigate prompt injection risks, citing Dia's autofill, scheduling, and email functions as examples. The talk concludes by stressing the imperative for companies to fully embrace technological shifts.

Spring Blog
spring.io
12-23
1896 words · 8 min
94
Explainable AI Agents: Capture LLM Tool Call Reasoning with Spring AI

Spring AI's Tool Argument Augmenter enables LLMs to "articulate" their reasoning during tool calls. By dynamically injecting additional fields (like reasoning steps and confidence levels), developers can capture complete decision-making logic without modifying tool code. This significantly enhances AI agent explainability and observability, supporting intelligent systems with long-term memory and self-reflection capabilities.
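The underlying pattern is independent of Spring AI's actual API and worth seeing in miniature: extend a tool's argument schema with extra "meta" fields the model must fill in, then strip and log those fields before invoking the unmodified tool. The sketch below is a hypothetical Python illustration of that pattern; all names (`augment_schema`, `dispatch`, the `_reasoning`/`_confidence` fields) are illustrative assumptions, not Spring AI identifiers.

```python
import json

def augment_schema(tool_schema):
    """Inject meta fields into a tool's JSON Schema so the model must
    articulate its reasoning alongside the real arguments.
    (Hypothetical helper; field names are illustrative.)"""
    schema = json.loads(json.dumps(tool_schema))  # deep copy; original untouched
    schema["properties"]["_reasoning"] = {
        "type": "string",
        "description": "Why this tool is being called with these arguments.",
    }
    schema["properties"]["_confidence"] = {
        "type": "number",
        "description": "Self-assessed confidence in this call, 0.0-1.0.",
    }
    schema.setdefault("required", []).extend(["_reasoning", "_confidence"])
    return schema

def dispatch(tool_fn, raw_args, trace):
    """Strip the injected meta fields, record them for observability,
    then call the unmodified tool with only its real arguments."""
    args = dict(raw_args)
    trace.append({
        "tool": tool_fn.__name__,
        "reasoning": args.pop("_reasoning", None),
        "confidence": args.pop("_confidence", None),
    })
    return tool_fn(**args)

# Example: the original tool and its schema stay untouched.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

schema = {"type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]}

trace = []
result = dispatch(get_weather,
                  {"city": "Oslo",
                   "_reasoning": "User asked about travel conditions.",
                   "_confidence": 0.9},
                  trace)
```

The key design choice is that the decision log lives in the call envelope, not the tool: `get_weather` never learns the meta fields exist, which is what makes the approach retrofittable.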

宝玉的分享
baoyu.io
12-20
2698 words · 11 min
93
Rebirth from Failure: A Retrospective on an AI Agent Frontend Implementation

This article provides a profound retrospective of a real-world enterprise-level frontend AI Agent project, moving from "technical success" to "product failure" and eventually finding a breakthrough through a mindset shift. The author details the technical journey of building a prototype with the Claude Agent SDK, tackling challenges like training on private component libraries, implementing local file systems, and establishing automated quality loops. The core insight lies in the reflection on the "Agent Island" phenomenon: technical feasibility does not guarantee product adoption. The author advocates for shifting focus from building standalone Agents to encapsulating "Skills" that integrate into existing developer workflows (e.g., Cursor or Claude Code).

Datawhale
mp.weixin.qq.com
12-22
6693 words · 27 min
92
Beyond Simple Agents: An In-depth Look at Agent Skills

This article thoroughly discusses two core concepts in the AI Agent field: Model Context Protocol (MCP) and Agent Skills. It highlights that MCP primarily addresses the connectivity between Agents and external tools/resources, while Agent Skills focus on encapsulating domain knowledge and operational procedures, thereby equipping Agents with the knowledge of how to effectively use tools. A key innovation is the “progressive disclosure” mechanism embedded in Agent Skills. Through a three-tier architecture—metadata, skill body, and additional resources—information is loaded on demand, significantly alleviating context explosion and high costs often associated with overly large tool JSON Schemas in traditional MCPs. The article emphasizes that MCP and Skills are complementary, not competitive, and proposes a layered hybrid architecture that integrates both to optimize costs, improve maintainability, and enhance reusability. Finally, it details the SKILL.md specification and principles for writing high-quality Skills, while anticipating future trends such as industry standardization, ecosystem building, and automated capability discovery. It also cautions against potential risks like security vulnerabilities and fragmentation.
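The three-tier "progressive disclosure" idea can be sketched in a few lines: only tier-1 metadata (name plus one-line description) sits in the base context, while the tier-2 skill body and tier-3 resources load on demand when a skill is invoked. The sketch below assumes a SKILL.md-style layout (frontmatter between `---` lines, then a markdown body); the `SkillRegistry` class and its methods are illustrative, not part of any real SDK.

```python
def parse_skill(text):
    """Split a SKILL.md-style string into frontmatter metadata and body.
    Assumes 'key: value' frontmatter between '---' delimiters."""
    _, frontmatter, body = text.split("---", 2)
    meta = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body.strip()

class SkillRegistry:
    """Hypothetical registry showing the three-tier loading discipline."""

    def __init__(self):
        self._meta, self._body = {}, {}

    def register(self, text):
        meta, body = parse_skill(text)
        self._meta[meta["name"]] = meta   # tier 1: always in context
        self._body[meta["name"]] = body   # tier 2: held back until invoked

    def system_prompt(self):
        # Only names and one-line descriptions enter the base context,
        # keeping the token cost per registered skill near-constant.
        return "\n".join(f"- {m['name']}: {m['description']}"
                         for m in self._meta.values())

    def load(self, name):
        # Tier 2 enters context only when the agent selects this skill;
        # tier-3 resources (files the body references) load later still.
        return self._body[name]

skill = """---
name: pdf-report
description: Fill and render the quarterly PDF report template.
---
1. Read template.pdf from ./assets (tier-3 resource, loaded on demand).
2. ...
"""
reg = SkillRegistry()
reg.register(skill)
```

This is what mitigates the context-explosion problem the article attributes to oversized tool JSON Schemas: a hundred registered skills cost a hundred description lines, not a hundred full documents.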

InfoQ 中文
mp.weixin.qq.com
12-21
15533 words · 63 min
92
From RAG to Context: 2025 RAG Technology Year-End Review

This year-end review of RAG technology's development in 2025 highlights its irreplaceable status as a data infrastructure in enterprise-level AI implementations, despite ongoing questions about long context windows and attention shifting to Agents. The article details RAG's technical advancements, including decoupling retrieval into distinct "search" and "retrieve" stages, and integrating TreeRAG and GraphRAG to address the limitations of traditional RAG in complex queries. It emphasizes RAG's trend of evolving from a mere knowledge base to a general-purpose data foundation for Agents, introducing the concept of "context engineering" and analyzing the critical role of domain knowledge, tool data, and conversational state data in Agent context assembly. Ultimately, the article forecasts RAG's future as a "Context Engine" or "Context Platform." Concurrently, it examines the progress of Multimodal RAG and its associated engineering challenges.
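The "search vs. retrieve" decoupling the review describes can be made concrete with a toy sketch: stage one ranks candidates cheaply and returns only IDs and snippets, and stage two fetches full text for just the IDs the agent decides to keep. Everything below is an illustrative stand-in (the term-overlap scorer substitutes for a real vector index; the corpus and function names are invented for this example).

```python
# Toy corpus standing in for an indexed document store.
CORPUS = {
    "doc-1": "RAG pairs a retriever with a generator over external data.",
    "doc-2": "GraphRAG links entities so multi-hop questions can be answered.",
    "doc-3": "Agents assemble context from tools, memory, and documents.",
}

def search(query, k=2):
    """Stage 1 ('search'): rank candidates cheaply and return only
    IDs plus short snippets -- no full documents enter the context yet.
    Naive term overlap stands in for embedding similarity."""
    terms = set(query.lower().split())
    scored = sorted(
        CORPUS,
        key=lambda doc_id: -len(terms & set(CORPUS[doc_id].lower().split())),
    )
    return [{"id": d, "snippet": CORPUS[d][:40]} for d in scored[:k]]

def retrieve(doc_ids):
    """Stage 2 ('retrieve'): pull full text only for the IDs the agent
    actually selected after inspecting the snippets."""
    return {d: CORPUS[d] for d in doc_ids}

hits = search("how does graphrag answer multi-hop questions")
context = retrieve([h["id"] for h in hits[:1]])
```

Splitting the stages lets the agent inspect cheap candidates before paying the context cost of full documents, which is the same budget-conscious instinct behind the "Context Engine" framing.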

Product School
youtube.com
12-24
8817 words · 36 min
92
Lessons from 10 years of Building AI Driven Product | Trello VP of Product

Trello's Head of Product shares 10 lessons from a decade of AI development. The article highlights overcoming the initial "negative value" phase by building trust through explainable UI. Key takeaways: PMs must master AI debugging and "Evals," adjust routing logic to prevent user gaming, and leverage GenAI to move beyond "one-size-fits-all" features toward hyper-personalized user experiences.

Founder Park
mp.weixin.qq.com
12-22
3394 words · 14 min
93
LangChain Agent Annual Report: Output Quality Remains the Biggest Obstacle for Agents; Customer Service and Research Emerge as Fastest-Growing Use Cases

LangChain's latest survey reveals that 57% of organizations have moved AI Agents into production as of 2026. Customer service and data analysis dominate the landscape. The primary bottleneck has shifted from cost to output quality. Key trends include universal adoption of observability, a shift toward multi-model architectures, and the continued dominance of coding agents in daily workflows.

Silicon Valley Girl
youtube.com
12-23
6383 words · 26 min
92
I Spent $10K Testing 100+ AI Tools — These 11 Are the Only Ones You Need

A curated list of 11 indispensable AI tools for 2025, ranging from foundational models (Claude, Perplexity) to automation workflows (Zapier, n8n) and specialized apps (Notion, Gamma, HeyGen). The core focus is on leveraging AI Agent browsers and integrations to eliminate repetitive tasks. Bottom line: Don't waste money on random subscriptions; use these 11 tools to systematize your workflow first.

开始连接LinkStart
xiaoyuzhoufm.com
12-26
941 words · 4 min
92
Vol.92|Interview with Zhai Xingji of Yuhe Tech: Is the Most Revenue-Generating AI Agent the Only Path for AI to Business?

Zhai Xingji, an entrepreneur born in 1996, explains how to restructure enterprise sales processes with AI Agents, charging by incremental value rather than labor cost. Through automated pre-sales solution generation, his team helped clients lift conversion rates by 20%. He emphasizes choosing core business pain points, ensuring technical feasibility, and the need for rapid iteration and decisive decision-making in entrepreneurship.

前端早读课
mp.weixin.qq.com
12-26
4628 words · 19 min
92
[Morning Brief] Who Is Your AI Workplace Partner? This Data Has the Answer

This report from Lenny and Figma's AI Insights Manager surveys 1,750 professionals to reveal AI's real ROI: over half save at least half a day weekly. Key takeaways: founders benefit most, while designers see the least gain. Engineers are migrating from GitHub Copilot to Cursor and Claude Code, while PMs are using AI to cross into prototyping. The findings suggest the next frontier for AI lies in moving from "output generation" to "strategic exploration" and "thought partnership."

宝玉的分享
baoyu.io
12-24
3840 words · 16 min
93
Steam, Steel, and the Infinite Brain

Notion founder Ivan Zhao offers a historical lens on AI transformation, comparing it to steam and steel as the new "infinite brain." The piece analyzes AI's impact across three levels: individual, organizational, and economic. At the individual level, AI agents transform programmers from "10x engineers" to "30-40x engineers," though broader adoption requires solving context fragmentation and verifiability challenges. Organizationally, AI acts like steel reinforcing structure and steam powering production, enabling companies to scale beyond traditional communication bottlenecks. Economically, knowledge work will evolve from "human-scale" Florence to "megacity-scale" Tokyo, creating cross-timezone, high-density operations. Core insight: we must stop treating AI as merely a "copilot" and reimagine how work fundamentally operates.

51CTO技术栈
mp.weixin.qq.com
12-21
11072 words · 45 min
93
Gemini Leadership: Pro's Main Role is to Distill Flash! The Greatest Potential for Breakthroughs Lies in Post-training; Noam, Jeff Dean: Continual Learning is a Key Direction for Improvement

A rare dialogue featuring Gemini's three co-technical leads (Jeff Dean, Oriol Vinyals, Noam Shazeer) reveals the technical philosophy behind Gemini 3. Key insights include: Flash models now match or exceed previous Pro performance, with Pro's primary role becoming Flash distillation; post-training is identified as the largest breakthrough opportunity; latency and speed are severely undervalued, often more important than absolute intelligence in practice.

Founder Park
mp.weixin.qq.com
12-24
6340 words · 26 min
92
Google's Two Most Successful AI Applications This Year, Both Spearheaded by Him

This profile examines how Josh Woodward doubled Gemini's user base in eight months. Key takeaways: leveraging small (5-7 person) teams for rapid iteration, using the "block" system to bypass bureaucracy, and shifting AI's role from a search engine to a "content container." It also highlights his vision for "Dynamic Views," where AI moves beyond chat boxes to generate real-time interactive interfaces.

127. Large Model Quarterly Report New Year Dialogue: Guangmi's Prediction of the AI War's Two Major Alliances and the Third Paradigm of Online Learning

This year-end conversation is essential listening for understanding the 2025 global AI competitive landscape. Guangmi presents a critical thesis: this isn't an AI bubble, but an AI war—an arms race that tech giants and nations cannot afford to lose. The podcast deeply analyzes the competition between NVIDIA's GPU and Google's TPU ecosystems, revealing why OpenAI, Anthropic, and Google alternate in leadership positions. Most importantly, they predict Online Learning will become the third paradigm-shifting breakthrough after pre-training and reinforcement learning, proposing an investment portfolio shift from concentration to diversification. For practitioners tracking AI investments and technical trends, this podcast delivers rare frontline insights.

Y Combinator
youtube.com
12-22
7728 words · 31 min
92
What Surprised Us Most In 2025

Y Combinator partners reflect on unexpected AI shifts in 2025. Data reveals YC startups now prefer Anthropic over OpenAI, particularly for code-related tasks. More significantly, startups are building orchestration layers to "arbitrage" models, dynamically selecting the best performer for each task. The discussion covers stabilization signals in the AI economy, long-term value of infrastructure investments, and why "one-person unicorns" remain unrealized.
