32 private links
Fellow’s new espresso machine is a rare thing in home espresso: something genuinely new. But it’s also a work in progress.
Every Claude Code user is running without LSP. That means 30-60s grep searches instead of 50ms precise answers. Here's how to enable it — setup, real debug data, and undocumented discoveries.
Formula 1's governing body, the FIA, said on Saturday that a change to the way the compression ratio was measured would be introduced on 1 June, with a further revision for the 2027 season.
And Trump declares a state of emergency and postpones the election. The Supreme Court issues an emergency stay, saying he can’t do that. But the court has no army, and Trump does, along with a handful of lickspittle governors who just might follow him down whatever dark path he plows.
That, not to mince words, is a coup d’état. Will he get away with it? I don’t know, but having effective control over how it is presented to viewers of CBS and CNN, and readers of the Bezos-owned Washington Post, to say nothing of the already vast pro-Trump propaganda empire of Fox News and the rest, will certainly make it easier.
That’s how fascism descends. And it’s becoming less and less hypothetical by the week.
10 documented cases of AI coding agents autonomously destroying databases, wiping hard drives, and deleting years of data — then lying about it.
“Everything that has been written about a potential War with Iran has been written incorrectly, and purposefully so,” he added. “I am the one that makes the decision, I would rather have a Deal than not but, if we don’t make a Deal, it will be a very bad day for that Country and, very sadly, its people, because they are great and wonderful, and something like this should never have happened to them.”
From rewriting Google’s search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.
Jeff joins us to unpack what it really means to “own the Pareto frontier,” why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google’s AI teams, and why the next leap won’t come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.
Dario Amodei thinks we are just a few years away from “a country of geniuses in a data center”. In this episode, we discuss what to make of the scaling hypothesis in the current RL regime, how AI will diffuse throughout the economy, whether Anthropic is underinvesting in compute given their timelines, how frontier labs will ever make money, whether regulation will destroy the boons of this technology, US-China competition, and much more.
The ruling hit while Trump was in a closed-door meeting with a bipartisan group of governors. The president’s initial reaction was to label the decision a “disgrace” and vow to implement a backup plan, according to a person familiar with the matter who requested anonymity to describe the closed-door event. The White House and US Trade Representative haven’t yet responded to requests for comment. Trump has called tariffs “my favorite word” and vowed they will “make us rich as hell.”
Scaling language models to long contexts is often bottlenecked by the size of the key-value (KV) cache. In deployed settings, long contexts are typically managed through compaction in token space via summarization. However, summarization can be highly lossy, substantially harming downstream performance. Recent work on Cartridges has shown that it is possible to train highly compact KV caches in latent space that closely match full-context performance, but at the cost of slow and expensive end-to-end optimization. This work describes an approach for fast context compaction in latent space through Attention Matching, which constructs compact keys and values to reproduce attention outputs and preserve attention mass at a per-KV-head level. We show that this formulation naturally decomposes into simple subproblems, some of which admit efficient closed-form solutions. Within this framework, we develop a family of methods that significantly push the Pareto frontier of compaction time versus quality, achieving up to 50x compaction in seconds on some datasets with little quality loss.
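To make the abstract's idea concrete: a minimal NumPy sketch of latent-space KV compaction, assuming a single attention head and a set of probe queries. It picks a small set of compact keys (here, just a random subsample; the paper's key-selection method is not specified in the blurb) and then solves for compact values in closed form via least squares so that attention over the compact cache reproduces the full-context attention outputs. This illustrates why parts of the problem "admit efficient closed-form solutions"; it is not the paper's actual Attention Matching algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, d, m = 512, 32, 64, 128       # full KV length, compact length, head dim, probe queries

K = rng.standard_normal((n, d))     # full keys
V = rng.standard_normal((n, d))     # full values
Q = rng.standard_normal((m, d))     # probe queries

def attn(Q, K, V):
    """Standard scaled-dot-product attention with a stable softmax."""
    logits = Q @ K.T / np.sqrt(K.shape[1])
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ V

O = attn(Q, K, V)                   # full-context attention outputs to match

# Step 1 (assumed): choose compact keys — here a random row subsample of K.
K_c = K[rng.choice(n, size=k, replace=False)]

# Step 2 (closed form): with K_c fixed, the attention weights A over the
# compact cache are fixed too, so the best compact values V_c minimizing
# ||A @ V_c - O|| are a linear least-squares solution.
logits = Q @ K_c.T / np.sqrt(d)
A = np.exp(logits - logits.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)
V_c, *_ = np.linalg.lstsq(A, O, rcond=None)

err = np.linalg.norm(attn(Q, K_c, V_c) - O) / np.linalg.norm(O)
print(f"{n} -> {k} KV entries, relative output error: {err:.3f}")
```

The 16x compression here is lossy on random data; the point is only the structure of the optimization, where fixing the compact keys turns the value fit into a convex subproblem.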
The Claude C Compiler doesn’t mark the end of software or compiler engineering. If anything, it opens the door wider. The easier implementation gets, the more room there is for genuine innovation.
President Donald Trump accused former President Barack Obama of giving away classified information when he discussed aliens during a recent podcast appearance.
“He gave classified information, he’s not supposed to be doing that,” Trump told reporters Thursday aboard Air Force One.
Pressed on whether that meant aliens were real, Trump said he did not know “if they’re real or not.”
“I can tell you he gave classified information, he’s not supposed to be doing that,” the president said. Trump went on to suggest he could get the former president “out of trouble” by declassifying the related information.
Obama was asked about extraterrestrial life earlier this month during an interview with liberal commentator Brian Tyler Cohen, and responded, “they’re real.”
Do gifted individuals see the world differently? Research tracking adults over 35 years finds their political orientations are remarkably average, with one specific exception regarding male conservatism.
When not using reasoning, repeating the input prompt improves performance for popular models (Gemini, GPT, Claude, and Deepseek) without increasing the number of generated tokens or latency.