bestblogs.dev

Articles

Google Gemini URL Context
Simon Willison's Weblog
08-18
AI Score: 87
⭐⭐⭐⭐

The article announces a significant new feature in the Google Gemini API: the `url_context` tool, which lets Gemini models fetch content from specified URLs and fold it directly into their responses. The author, Simon Willison, quickly updated his `llm-gemini` plugin to version 0.25, giving developers a simple command-line interface for experimenting with the capability. The fetched web content is billed as input tokens, a crucial detail for both cost and context window management. Technical observations reveal that pages are retrieved as raw HTML without JavaScript execution, and that the fetch requests come from Google-owned IP addresses. The feature significantly enhances Gemini's ability to give up-to-date, contextually rich answers grounded in live web content.
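A minimal sketch of enabling the tool through the google-genai Python SDK (the underlying API, not Willison's `llm-gemini` plugin); the model name and exact type names are assumptions that may vary across SDK releases:

```python
# Hedged sketch: enable the url_context tool via the google-genai SDK.
# Assumes GEMINI_API_KEY is set in the environment; field names may differ by SDK version.
from google import genai
from google.genai import types

client = genai.Client()  # picks up GEMINI_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed model; any url_context-capable model should work
    contents="Summarize https://simonwillison.net/ in three bullet points.",
    config=types.GenerateContentConfig(
        tools=[types.Tool(url_context=types.UrlContext())],  # let the model fetch the URL itself
    ),
)

print(response.text)
# The fetched page is billed as input tokens, so inspect usage to track cost.
print(response.usage_metadata)
```

In Willison's workflow the same capability is exposed as a model option on the `llm` command line via `llm-gemini` 0.25, which avoids writing any SDK code at all.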

Artificial Intelligence · English · Gemini API · LLM · URL Context · AI Development · Tool Use
Maintainers of Last Resort
Simon Willison's Weblog
08-16
AI Score: 87
⭐⭐⭐⭐

The article introduces Geomys, an organization founded by Filippo Valsorda that provides professional maintenance and support for critical open-source packages, primarily within the Go ecosystem. Geomys is funded by client retainers, a model presented as an inspiring and successful way to keep key open-source projects financially sustainable. The piece specifically emphasizes Geomys' recent role as 'maintainers of last resort' for security-focused Go projects that have lost active maintenance, citing their work on the `bluemonday` HTML sanitization library and their efforts on CSRF protection after `gorilla/csrf` became unmaintained. The author, Simon Willison, expresses optimism about this sustainable maintenance model, suggesting it addresses a critical gap in the open-source community.

Artificial Intelligence · English · Open Source · Software Maintenance · Go Language · Security · Financial Sustainability
PyPI: Preventing Domain Resurrection Attacks
Simon Willison's Weblog
08-19
AI Score: 82
⭐⭐⭐⭐

This article highlights domain resurrection attacks, in which an attacker gains control of user accounts by re-registering expired domain names that were previously used for email verification. This poses a significant supply chain risk for platforms like PyPI. To counter it, PyPI has introduced a protective measure: it now un-verifies email addresses whose associated domains enter an expiration phase. The implementation integrates with Fastly's Domainr API and regularly polls domain statuses. While not a perfect solution, this proactive step closes a major attack vector, one previously exploited in the 2022 takeover of the `ctx` package.
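To make the mechanism concrete, here is a hedged Python sketch of the general approach (not PyPI's actual code): poll Domainr's status endpoint and un-verify any address whose domain looks like it is lapsing. The authentication parameter, the set of at-risk status tokens, and the batch-job helper are assumptions for illustration:

```python
# Illustrative sketch of domain-expiration polling; PyPI's real implementation differs.
import requests

DOMAINR_STATUS_URL = "https://api.domainr.com/v2/status"  # Domainr status endpoint

# Status tokens treated here as "lapsing" (assumed set; Domainr defines the full vocabulary).
AT_RISK_STATUSES = {"expiring", "deleting", "inactive", "undelegated"}


def domain_at_risk(domain: str, api_key: str) -> bool:
    """Return True if the domain's registration looks like it is lapsing."""
    resp = requests.get(
        DOMAINR_STATUS_URL,
        params={"domain": domain, "mashape-key": api_key},  # auth parameter name is an assumption
        timeout=10,
    )
    resp.raise_for_status()
    entries = resp.json().get("status", [])
    tokens = set()
    for entry in entries:
        # Domainr returns space-separated status tokens per domain entry.
        tokens.update(entry.get("status", "").split())
    return bool(tokens & AT_RISK_STATUSES)


def unverify_stale_emails(verified_emails, api_key):
    """Hypothetical batch job: drop verified status for addresses on at-risk domains."""
    for email in verified_emails:
        domain = email.rsplit("@", 1)[-1]
        if domain_at_risk(domain, api_key):
            print(f"un-verifying {email}: domain {domain} appears to be lapsing")
            # ... mark the address as unverified in the account database ...
```

The key design point is that un-verifying an address is cheap and reversible: the legitimate owner can simply re-verify, while an attacker who resurrects the domain can no longer use it for a password reset.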

Artificial Intelligence · English · Security · Supply Chain Security · PyPI · Domain Resurrection Attack · Email Verification
too many model context protocol servers and LLM allocations on the dance floor
Simon Willison's Weblog
Yesterday
AI Score: 81
⭐⭐⭐⭐

This article, referencing insights from Geoffrey Huntley, draws attention to the significant and often under-discussed token cost of integrating tools via the Model Context Protocol (MCP) in LLM applications. It estimates that for sophisticated coding tools like Amp or Cursor, a substantial portion of the context window (e.g., 24,000 tokens for system prompts) is consumed before any task-specific input arrives, and that adding the popular GitHub MCP definitions alone costs roughly another 55,000 tokens for just 93 tools. The author warns that this rapid depletion of the context window, combined with the fact that LLMs perform worse when cluttered with irrelevant information, severely limits the tokens left for solving the actual task. As a token-efficient alternative, the article proposes leaning on existing command-line interface (CLI) tools: giving coding agents access to tools like GitHub's `gh` provides extensive functionality at minimal token cost, since frontier LLMs often already know how to use them. The author also shares positive experiences building custom CLI tools for LLMs, emphasizing that `--help` output can efficiently teach a model how to use a tool, especially when usage examples are included in the help text.
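To make the `--help` pattern concrete, here is a small sketch of a custom CLI built with Python's argparse; the tool name, subcommands, and the examples in the epilog are invented for illustration, but the structure shows how a single `--help` invocation can hand an agent everything it needs:

```python
# Hypothetical "notes" CLI: its --help output (including worked examples in the epilog)
# doubles as cheap documentation an agent can read on demand.
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        prog="notes",
        description="Manage a local notes database.",
        epilog=(
            "examples:\n"
            "  notes add 'Ship the release' --tag work\n"
            "  notes list --tag work --limit 5\n"
            "  notes search 'release'\n"
        ),
        formatter_class=argparse.RawDescriptionHelpFormatter,  # keep example formatting intact
    )
    sub = parser.add_subparsers(dest="command", required=True)

    add = sub.add_parser("add", help="Add a new note")
    add.add_argument("text")
    add.add_argument("--tag", default=None)

    lst = sub.add_parser("list", help="List recent notes")
    lst.add_argument("--tag", default=None)
    lst.add_argument("--limit", type=int, default=10)

    search = sub.add_parser("search", help="Full-text search over notes")
    search.add_argument("query")

    return parser


if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args)  # a real tool would dispatch on args.command here
```

An agent that runs `notes --help` once pays for a few hundred tokens of output, and only when it needs the tool, versus tens of thousands of tokens of always-loaded MCP tool definitions.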

Artificial Intelligence · English · LLM Optimization · Token Efficiency · AI Agents · Prompt Engineering · CLI Tools