Advice for a (young) investigator in the first and last days of the Anthropocene

Jascha Sohl-Dickstein's talk at MIT CBMM, 2025

Note published in September 2025

I recently came across this talk from Jascha Sohl-Dickstein through a recommendation from my mentor Tomek Korbak, and thought it was worthwhile to document the key arguments so that I can revisit them later. Also, I wanted to share a few relevant figures and plots along the way, since I think some of them are especially useful in getting a sense of where we stand in the AI development timeline and what to expect in the next few years.

The talk starts by discussing proposals to name our current period the Anthropocene, marking the point when human activity became a geological force. Geological epochs are how we measure deep time, divided by distinct changes in life forms, climate, or geological processes observable in rock strata. The Anthropocene concept proposes that human impact has now reached the scale of these geological forces, through things like nuclear weapons testing and climate change. If we date the epoch from 1950, when exponential economic growth began, it would be an extraordinarily brief one should AI systems displace human agency within the coming decades. This temporal compression is somewhat unsettling: geological epochs typically last millions of years, yet the Anthropocene might last less than a century, since we're already trying to build our successor.

But what does it mean to build a successor? Are we actually approaching AGI? The figure above shows that training compute has grown exponentially, with current models trained using compute approaching the total computation a human brain performs over an entire lifetime. While this scale comparison doesn't guarantee human-level intelligence, it suggests these systems are operating at scales where such capabilities become plausible.
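
As a rough sanity check on that comparison, here is a back-of-envelope calculation. The brain-compute estimate, lifetime length, and training-run size below are my own order-of-magnitude assumptions, not figures from the talk.

```python
# Back-of-envelope comparison of frontier training compute vs. lifetime brain compute.
# All numbers are rough, order-of-magnitude assumptions for illustration only.

brain_flops_per_sec = 1e15        # assumed brain compute (estimates span roughly 1e13-1e16 FLOP/s)
seconds_per_year = 3.15e7
lifetime_years = 30               # assumed span of human learning

lifetime_brain_compute = brain_flops_per_sec * seconds_per_year * lifetime_years
frontier_training_compute = 1e25  # assumed scale of a recent frontier training run (FLOP)

print(f"Lifetime brain compute:    ~{lifetime_brain_compute:.1e} FLOP")
print(f"Frontier training compute: ~{frontier_training_compute:.1e} FLOP")
print(f"Ratio (training / brain):  ~{frontier_training_compute / lifetime_brain_compute:.0f}x")
```

Under these assumptions the two quantities land within an order of magnitude of each other, which is the point of the comparison: training runs are no longer obviously small relative to a human lifetime of computation.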

On top of this, Jascha argues that precise definitions of AGI may be missing the point. There's extensive nuance about when exactly the Wright brothers achieved "flight." Was it the 100-meter glide, the 59-second powered flight, or the 38-minute controlled flight? Similarly, while we can debate whether current AI constitutes AGI, there will come a point where the answer is obvious: machines will simply and reliably do intellectual work better than humans across domains.

Also, we probably don't need an "AGI" for agents to start replacing significant amounts of human work. The METR study above has been important for building my mental model of AI progress (feel free to watch the first two minutes: video). The length of tasks models can complete autonomously increases exponentially, doubling roughly every seven months. If this trend continues, we'd see models handling full workdays of intellectual labor within 2-3 years. As a concrete example, Anthropic recently demonstrated Claude autonomously cloning the claude.ai interface in under 6 hours of continuous work, using 3000+ tool calls. So this future of autonomous AI agents is definitely coming, and coming soon.
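
To make that extrapolation concrete, here is a quick calculation under assumed numbers: the one-hour starting horizon is my own placeholder, while the seven-month doubling time is the METR trend cited in the talk.

```python
import math

# Rough extrapolation of the METR task-horizon trend.
# Starting horizon is an assumed placeholder; doubling time is from the cited trend.

current_horizon_hours = 1.0   # assumed current autonomous task horizon
doubling_time_months = 7      # doubling time reported in the METR study
target_horizon_hours = 8.0    # a full workday of intellectual labor

doublings_needed = math.log2(target_horizon_hours / current_horizon_hours)
months_needed = doublings_needed * doubling_time_months

print(f"Doublings needed: {doublings_needed:.1f}")
print(f"Time to an 8-hour horizon: ~{months_needed:.0f} months (~{months_needed / 12:.1f} years)")
```

Three doublings take roughly 21 months, which is consistent with the "full workdays within 2-3 years" reading of the trend.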

But when exactly is this transformation coming? While precise predictions are difficult (especially given that AGI definitions may become irrelevant), Jascha shared that the San Francisco AI community consensus is now 2-5 years. Another consideration is that reasoning might be particularly suited for scaling. You can generate many attempts at solving a problem, filter for correct solutions, then train the next generation on this new high-quality synthetic data. This creates a self-improvement loop that could accelerate progress beyond current trends.
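
Here is a minimal sketch of what such a generate-filter-retrain loop could look like; the model interface, helper methods, and hyperparameters are hypothetical placeholders, not anything described in the talk.

```python
# Minimal sketch of a generate -> filter -> retrain loop for reasoning tasks.
# `model`, `problems`, and their methods are hypothetical placeholders.

def self_improvement_round(model, problems, attempts_per_problem=16):
    """One round: sample many solutions, keep verified ones, fine-tune on them."""
    verified_traces = []
    for problem in problems:
        for _ in range(attempts_per_problem):
            trace = model.generate(problem)       # sample a reasoning attempt
            if problem.check_solution(trace):     # keep only verifiably correct attempts
                verified_traces.append((problem, trace))
    # Train the next generation on the filtered, high-quality synthetic data.
    return model.fine_tune(verified_traces)


def run_loop(model, problems, rounds=5):
    for _ in range(rounds):
        model = self_improvement_round(model, problems)
    return model
```

The key design point is the verifier: because correct solutions can be checked far more cheaply than they can be produced, each round yields training data that is better than the model's average output.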

So what should we do with this limited time? Most people, myself included, may only have 2-5 years to make meaningful intellectual contributions before AI can do the same work. My key takeaway, therefore, was that I should collaborate more. Slow, independent scientific inquiry is satisfying, but it's not optimal when racing against an exponential. Working alone on a two-year project risks spending that time on something that could soon be accomplished trivially, just by waiting for more capable models.