Paul Graham
@paulg · 17h ago
A year of smoothly exponential growth, and then suddenly it stops working. That has to be demoralizing. But the startup toughs it out, and five months later growth is back.
Apple just released CLaRa-7B-Instruct
huggingface.co/apple/CLaRa-7B…
In partnership with @GivingTuesday, we're launching Claude for Nonprofits.
It has discounted plans, new integrations, and free training to help nonprofits spend less time on admin and more time on their missions: anthropic.com/news/claude-fo…
hbd chatgpt
Try talking with ChatGPT, our new AI system which is optimized for dialogue. Your feedback will help us improve it. openai.com/blog/chatgpt/
The New York Times invented a new kind of article today: the Self-Debunking Story, where by the end of the article the headline is clearly rendered false.
INSIDE NYT’S HOAX FACTORY
Five months ago, five New York Times reporters were dispatched to create a story about my supposed conflicts of interest working as the White House AI & Crypto Czar.
Through a series of “fact checks” they revealed their accusations, which we debunked in detail. (Not surprisingly the published article included only bits and pieces of our responses.)
Their accusations ranged from a fabricated dinner with a leading tech CEO, to nonexistent promises of access to the President, to baseless claims of influencing defense contracts.
Every time we would prove an accusation false, NYT pivoted to the next allegation. This is why the story has dragged on for five months.
Today they evidently just threw up their hands and published this nothing burger. Anyone who reads the story carefully can see that they strung together a bunch of anecdotes that don’t support the headline. And of course, that was the whole point.
At no point in their constant goalpost-shifting was NYT willing to update the premise of their story to accept that I have no conflicts of interest to uncover.
As it became clear that NYT wasn’t interested in writing a fair story, I hired the law firm Clare Locke, which specializes in defamation law. I’m attaching Clare Locke’s letter to NYT so readers have full context on our interactions with NYT’s reporters over the past several months.
Once you read the letter, it becomes very clear how NYT willfully mischaracterized or ignored the facts to support their bogus narrative.
Why do (senior) engineers struggle to build AI Agents? For decades, engineering meant removing ambiguity and defining strict interfaces. But Agents are probabilistic, not deterministic. You cannot "code away" variance (see the sketch after this list):
1. Text is the New State
2. Hand over Control
3. Errors are just inputs
4. From Unit Tests to Evals
5. Agents Evolve, APIs Don't
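A minimal sketch of what points 1–3 look like in practice, assuming a hypothetical agent loop: the only state is the message transcript ("text is the new state"), the model decides the next step ("hand over control"), and a failed tool call is reported back as text rather than raised ("errors are just inputs"). `call_llm` and `run_tool` are illustrative stubs, not a real API.

```python
import json

def call_llm(messages: list[dict]) -> dict:
    """Hypothetical LLM call; returns either a tool request or a final answer."""
    raise NotImplementedError  # wire up your model client here

def run_tool(name: str, args: dict) -> str:
    """Hypothetical tool dispatcher; returns the tool's textual output."""
    raise NotImplementedError

def agent_loop(task: str, max_steps: int = 10) -> str:
    # "Text is the new state": the transcript is the only state we keep.
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        # "Hand over control": the model, not our code, picks the next step.
        reply = call_llm(messages)
        if reply.get("tool") is None:
            return reply["content"]  # model chose to finish
        try:
            result = run_tool(reply["tool"], reply.get("args", {}))
        except Exception as exc:
            # "Errors are just inputs": feed the failure back, don't crash.
            result = f"Tool error: {exc!r}"
        messages.append({"role": "assistant", "content": json.dumps(reply)})
        messages.append({"role": "user", "content": result})
    return "Step budget exhausted."
```

There is no branch that "codes away" the variance: every outcome, including failure, becomes text the model reasons over on the next turn.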
I talked to a startup yesterday that had been forced to compress certain information to fit it into an LLM context window. But compression is understanding, and in the compressed form the information could be used for other, new things.
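One common shape of the compression the tweet describes, sketched under assumptions: split the text into chunks, summarize each with an LLM, and repeat until the result fits a context budget. `summarize` is a hypothetical model call (not a real library API), and character counts stand in for tokens.

```python
def summarize(text: str) -> str:
    """Hypothetical LLM call that returns a much shorter summary of `text`."""
    raise NotImplementedError  # wire up your model client here

def compress(text: str, budget_chars: int, chunk_chars: int = 8000) -> str:
    # Repeated map-reduce: summarize chunks, then summarize the summaries,
    # until the whole thing fits the context budget.
    while len(text) > budget_chars:
        chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
        text = "\n".join(summarize(chunk) for chunk in chunks)
    return text
```

pg's point is that the output of `compress` is not just smaller; producing a good summary forces an understanding of the material, and that understood form is reusable for other purposes.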
Great review of Opus 4.5
>"TLDR: It's the Sonnet 3.5 of 2025. Try it. Do it now"
Claude Opus 4.5: full review
This is the best model release in a long, long time when it comes to programming. It blows my mind how good it is. I have not seen this big of an improvement since the original release of gpt-4-0314.
The main improvement is that they've finally taught it how to 'think' correctly.
It no longer makes gruesome logic errors in its thinking.
Problems like "Okay, I'll run tests now. <Tests fail> Great! The tests pass." are no longer a thing.
This generalizes to basically ALL logic when it comes to thinking about code - it extremely rarely, if ever, makes mistakes.
The next big milestone: It no longer writes slop code! This is huge. With Codex, you can get it to write code that works. But it writes awful code - useless functions, bad abstractions, etc. This sucks, because it works short term, but long term the model will run itself into a corner where it can no longer work with the code it wrote itself.
Not the case with Opus. Not only does it write elegant code, but it also knows how to refactor slop code into non-slop code. It deeply understands the codebase and can figure out elegant solutions that aren't just 'mechanical' refactorings.
It's very autonomous and independent. It will, by itself, when encountering issues, create minimal reproducible examples, try to bisect where the error comes from, then fix it without getting stuck in rabbit holes. Even if the error is in some unrelated part of the code -- code that it didn't even write itself!!
It also DOES EXACTLY WHAT YOU SAY, WITHOUT CUTTING CORNERS! This is huge!!! Using Codex is basically a game of whack-a-mole where it understands what you want it to do, but it's too difficult so it reward-hacks its way into a shit solution that you don't want.
Opus actually tackles the problem and solves it properly even if it's difficult.
The long context understanding is pretty much perfect. Paired with the compaction mechanism available in Claude Code by default, you can basically have an infinitely long conversation where it understands everything inside it, with no degradation.
In terms of design, research, and coming up with novel ideas, it's better, but not quite expert-human-level. It can propose solutions that I would consider good design, but it can't quite 'think with portals' yet. Still, a good improvement over what we had before, which was basically non-existent.
All of the above I've gathered from testing it over the past few days where the task is to write an interpreter for a language that we were designing on the fly. It's a very niche design, similar to Self and Smalltalk, except we're building the language inside the language itself. This leads to extremely difficult scenarios where you're trying to define how functions work -- inside the language -- when you don't have functions yet! And it still does a magnificent job. Sometimes, I don't even fully understand what I'm asking it to do, but Opus does, and it does a good job.
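For flavor, here is a tiny hypothetical Python sketch of the kind of bootstrapping the review describes (the names `Proto` and `send` are illustrative inventions, not the author's actual design): in a Self/Smalltalk-style system, behavior lives in slots on prototype objects, so a "function" can be defined inside the object model before the language has functions of its own.

```python
class Proto:
    """A Self-style prototype: data and behavior both live in slots."""
    def __init__(self, parent=None):
        self.slots = {}
        self.parent = parent

    def send(self, name, *args):
        # Message dispatch walks the prototype chain; the receiver (self)
        # stays fixed while lookup climbs through parents, Self-style.
        obj = self
        while obj is not None:
            if name in obj.slots:
                value = obj.slots[name]
                return value(self, *args) if callable(value) else value
            obj = obj.parent
        raise AttributeError(f"{name!r} not understood")

# Bootstrapping: a "function" is just an object with a 'call' slot, built
# out of nothing but slots and message sends (a host lambda stands in for
# a real method body).
root = Proto()
fn = Proto(parent=root)
fn.slots["call"] = lambda self, x: x * 2

point = Proto(parent=root)
point.slots["x"] = 3
print(fn.send("call", point.send("x")))  # -> 6
```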
TL;DR: It's the Sonnet 3.5 of 2025. Try it. Do it now
Data centers are getting really big, really quickly
a16z.news/p/charts-of-th…
World Labs CEO Fei-Fei Li on how AI can generate infinite 3D worlds.
“With this technology… we can actually create infinite universes.”
“It suddenly will enable us to live in a multiverse way.”
@drfeifei @theworldlabs