
When the people building frontier AI start building institutional infrastructure to manage societal fallout, that tells you more about the timeline than any benchmark.
In October 2024, Dario Amodei, CEO of Anthropic, published a 14,000-word essay called "Machines of Loving Grace." He laid out a specific vision: AI could compress 50 to 100 years of scientific and social progress into 5 to 10 years. He called it the compressed 21st century.
Seventeen months later, in March 2026, his company launched the Anthropic Institute: a dedicated research body staffed with economists, legal scholars, and social scientists, tasked with studying what happens to societies when that compression actually arrives.
I've been watching this space closely while building an AI product, and that combination of signals caught my attention more than any model release or benchmark result this year. When the people building frontier AI start hiring economists to study job displacement and lawyers to study how AI interacts with legal systems, it tells you something about what they see coming.
The essay is worth reading in full. Most of the coverage reduced it to "Anthropic CEO is optimistic about AI," which, in my opinion, misses the point.
Amodei made domain-specific predictions across five areas, and he deliberately arranged them from most to least confident. Biology and health came first: near-elimination of infectious disease, a dramatic reduction in cancer deaths, prevention of most genetic diseases, and human lifespans potentially reaching 150 years. Neuroscience and mental health came second: treatments for depression, schizophrenia, and addiction.
Then economic development: 20% annual GDP growth in developing regions. Governance: AI strengthening judicial fairness and democratic institutions. And finally, work and meaning, which he admitted was his least certain prediction.
That ordering matters. He put his strongest bets up front and his weakest last. He also hedged more than most people remember. His exact words: "Everything I'm saying could very easily be wrong." He framed his predictions through what he called "marginal returns to intelligence," a framework for identifying where raw intelligence alone can't solve problems because physical-world constraints, data limitations, or regulatory barriers get in the way.
This was careful, structured thinking from someone with inside knowledge of AI capabilities.
The Institute consolidates three research teams that already existed inside Anthropic: the Frontier Red Team (which stress-tests AI systems), Societal Impacts (which studies how people actually use Claude in the real world), and Economic Research (which tracks AI's effects on jobs and the economy). Jack Clark, Anthropic's co-founder, moved from head of public policy to lead it.
Look at the first hires. Matt Botvinick came from Yale Law School to lead AI and rule of law research. Anton Korinek, an economist from the University of Virginia, is directing work on economic transformation. Zoe Hitzig, who previously worked at OpenAI, is connecting the economics research directly to model development decisions.
Economists. Lawyers. Social scientists who study systemic disruption.
The Institute also claimed something worth noting in its announcement: that it has "access to information that only the builders of frontier AI systems possess." External AI researchers don't have this access. Government policymakers don't have it. Anthropic is saying, explicitly, that they know things about where this technology is heading that the rest of us can only estimate from the outside.
Whether that claim is fully credible or partially self-serving, the direction is clear. They're building research infrastructure for what comes after the models get powerful.
So what do they already see? The Institute's own economic research offers a clue.
Earlier this year, Anthropic researchers Maxim Massenkoff and Peter McCrory published a paper introducing an "observed exposure" metric that compares what AI can theoretically do in a job versus what it actually does today. The gap is enormous. In computer and math roles, AI could theoretically handle 94% of tasks. In practice, it currently handles about 33%.
That 61-percentage-point gap between capability and adoption exists because of legal constraints, technical limitations, and the simple fact that humans still need to review AI work. But every one of those barriers is eroding. Model capabilities improve with each generation. Integration tooling gets better. Legal frameworks evolve. The gap narrows.
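To make that arithmetic concrete, here's a minimal sketch of the gap calculation. The computer and math figures come from the research discussed above; every other occupation name and number below is an invented placeholder, not data from the paper.

```python
# Capability-adoption gap: theoretical task coverage minus observed coverage.
# The computer/math figures (94% theoretical, 33% observed) are from the
# research discussed above; the other rows are invented placeholders.
occupations = {
    "computer_and_math": (0.94, 0.33),
    "legal":             (0.70, 0.15),  # illustrative only
    "office_admin":      (0.80, 0.25),  # illustrative only
}

for name, (theoretical, observed) in occupations.items():
    gap = theoretical - observed
    print(f"{name}: capability {theoretical:.0%}, "
          f"adoption {observed:.0%}, gap {gap:.0%}")
# computer_and_math: capability 94%, adoption 33%, gap 61%
```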
The same research paper modelled what the researchers called a "Great Recession for white-collar workers," a scenario where unemployment in AI-exposed occupations doubles from roughly 5% to 10%, mirroring the 2007-2009 financial crisis. They noted it hasn't happened yet, but flagged a 14% drop in job-finding rates among young workers (ages 22-25) in exposed fields since ChatGPT launched. The early signals are there, even if the full impact hasn't arrived.
We wrote about a related dimension of this in The 3% Problem, where we dug into Section's AI Proficiency data showing fewer than 3% of knowledge workers actually qualify as proficient with AI tools, despite 54% self-rating as proficient. The capability-adoption gap has a human side too. Even where the technology works, most people haven't figured out how to use it properly.
Amodei's essay puts biology first because he's most confident AI will accelerate breakthroughs there. Work and meaning come last because he's least sure what happens to human employment and purpose when AI gets powerful enough.
The Institute's first major hires are economists and legal scholars. Not biologists.
They're preparing for the domain Amodei was least confident about predicting. The one most likely to cause immediate, visible societal disruption before the long-term benefits of AI-accelerated biology or economic development have time to materialise.
There's a pattern here that I find more telling than any product announcement. The CEO writes the optimistic vision of what AI could accomplish. The company then builds institutional infrastructure to manage the part of that vision most likely to go badly in the short term. That sequencing is unusual for a technology company. Most leave the societal response to governments and think tanks. Anthropic is trying to do both: build the technology and build the institution that studies its consequences.
I'm not sure whether this reflects genuine responsibility, strategic positioning to stay ahead of regulators, or both. Probably both. What I do think is that the combination of the essay and the Institute tells us something more concrete about timeline expectations than either signal alone.
The builders expect the compressed 21st century to arrive within a planning horizon short enough to justify standing up an institute now. And they expect the disruption to hit employment and legal systems before it hits biology.
If you're leading a team or running an organisation that employs knowledge workers, three signals are worth tracking.
The Institute's own publications. They've committed to sharing what they learn. Anthropic's economic research has already produced some of the most concrete data on AI's real-world impact. The Institute will produce more. Read it.
Model capability jumps. Each generation of frontier models narrows the capability-adoption gap. When a new model can handle tasks that required human review in the previous generation, that's a concrete step toward the gap closing. Track this in your own domain; a sketch of one way to do that is below.
Legal and regulatory shifts. The barriers slowing AI adoption right now include legal constraints around liability, data privacy, and professional licensing. When those shift, adoption accelerates. The direction of regulation matters as much as the technology itself.
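On that second signal: one lightweight way to track the gap in your own domain is a running checklist of recurring tasks, scored per model generation by whether the output ships without human rework. A minimal sketch, with task names and verdicts invented purely for illustration:

```python
# Hypothetical checklist: for each recurring task, record whether a given
# model generation's output ships without human rework. All task names
# and verdicts here are invented for illustration.
tasks = {
    "draft_customer_email":  {"current_gen": True,  "next_gen": True},
    "summarise_claims_file": {"current_gen": False, "next_gen": True},
    "review_policy_wording": {"current_gen": False, "next_gen": False},
}

for generation in ("current_gen", "next_gen"):
    handled = sum(t[generation] for t in tasks.values())
    print(f"{generation}: {handled}/{len(tasks)} tasks ship without review")
```

When a new generation flips tasks from False to True faster than you can redesign the surrounding workflow, that's the gap closing in your domain.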
The practical question for most organisations isn't whether AI will transform knowledge work. The Anthropic Institute's own research suggests the theoretical capability is already at 94% for some roles. The question is how quickly the adoption barriers come down and whether you'll have adapted before they do.
The builders are already preparing for that moment. Whether the rest of us are is a different question entirely.
I lead data & AI for New Zealand's largest insurer. Before that, I spent 10+ years building enterprise software. I write about AI for people who need to finish things, not just play with tools.