Why this exists
There's a phrase that keeps appearing in board decks, research papers, and late-night Slack threads at every major AI lab: Recursive Self-Improvement. RSI. The idea that an AI system gets smart enough to make itself smarter, and the smarter version is even better at self-improvement, and the loop compounds.
For decades this was a thought experiment. A cocktail-party hypothetical.
It's not hypothetical anymore.
In January 2026, Anthropic CEO Dario Amodei confirmed that 70 to 90 percent of the code used to build new Claude models is now written by Claude itself. Not autocompleted snippets: Claude builds entire systems, hunts down complex bugs, and makes high-level design decisions. Some senior engineers at Anthropic report that they haven't written code manually in months.
On March 10, Andrej Karpathy — the legendary AI researcher who co-founded OpenAI and ran AI at Tesla — released a downloadable AI scientist called Autoresearch. It lets an AI autonomously generate hypotheses, write its own training code, run dozens of distinct experiments overnight, evaluate the results, and iterate. It racked up 29,000 GitHub stars in days.
The gap between "AI as a tool" and "AI as an autonomous researcher" is collapsing in real time. We are not approaching the feedback loop. We are in it.
Interactive exploration
The visualization below traces the cascade from where we are now to where the trajectory points. Scroll through the milestones to watch the exponential curve steepen and the feedback loop accelerate. Each phase compounds on the last — that's the whole point.
AI as Assistant
AI autocompletes code, answers questions, drafts emails. Humans stay in the loop — reviewing, editing, directing every output.
AI Writes AI Code
AI models begin writing the code for their own successors. The feedback loop ignites: each generation is built partly by the last one.
The Downloadable Lab
Autonomous AI research goes public. Anyone can download an entire AI scientist and let it generate hypotheses, rewrite its own code, test the results, and iterate.
Recursive Self-Improvement
The engine ignites. AI systems improve their own architecture, training methods, and data pipelines. The compounding loop runs continuously.
Artificial General Intelligence
Driven by compounding RSI gains, AI reaches human-level performance in every domain — science, engineering, law, medicine, strategy, creativity.
The Singularity
RSI runs at machine speed. AI surpasses the combined intellect of all humanity. Human predictive models break down — the event horizon.
The labs are already in the loop
None of the major labs has fully unleashed an unchecked, open-ended RSI loop. But they're standing on the precipice. Here's the actual state of the big three right now.
Anthropic is the most transparent about the shift. That 70–90% figure isn't a talking point — it's an operational reality enabled by Claude Code, a tool that lets the AI act like an independent software engineer. Instead of just autocompleting text, the AI navigates files, runs commands, and fixes its own bugs through trial and error. Anthropic's Chief Scientist, Jared Kaplan, recently told The Guardian that humanity will face "the biggest decision yet" between 2027 and 2030: whether to take the "ultimate risk" of letting AI systems train themselves entirely. Staff internally believe fully automated AI research could be just a year away.
OpenAI is actively preparing for the RSI threshold. They've established dedicated alignment research specifically focused on developing and controlling AI capable of recursive self-improvement. Their smartest reasoning models are already used internally to debug training runs, track failure patterns across experiments, and propose architectural fixes for the next generation of models. The model doesn't just write code; it reasons about why the training run failed and suggests what to change.
DeepMind (Google) is pursuing the most structurally ambitious path. Demis Hassabis has continually compressed his AGI timeline, and the lab is building AI systems designed to verify their own logic and invent entirely new math. That last part matters: an AI that can invent new mathematical techniques is an AI that can rewrite the fundamental laws of how it learns. That's the prerequisite for a true self-improvement loop — not just writing better code, but discovering better science.
The reality check: RSI isn't a single switch that gets flipped. It's a compounding curve. Each cycle produces marginal gains that make the next cycle slightly more productive. We're in the early turns of that spiral right now. The bottleneck isn't raw intelligence — it's the physical constraints and the question of whether we can build autonomous guardrails fast enough.
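The compounding-curve claim is easy to make concrete with a toy model. Every number below is an illustrative assumption, not a forecast: let each cycle multiply capability by a gain factor that itself scales with current capability, so the better the system, the bigger the next improvement step.

```python
# Toy model of a compounding RSI curve -- an illustration of
# "each cycle makes the next cycle more productive", not a forecast.
# The starting capability and gain constant k are made-up assumptions.

def rsi_curve(cycles: int, capability: float = 1.0, k: float = 0.05) -> list[float]:
    """Capability after each cycle, where the per-cycle gain
    (k * capability) itself grows as capability grows."""
    history = [capability]
    for _ in range(cycles):
        capability *= 1 + k * capability
        history.append(capability)
    return history

curve = rsi_curve(25)
print(f"gain at cycle 1:  {curve[1] / curve[0] - 1:.1%}")    # modest
print(f"gain at cycle 25: {curve[-1] / curve[-2] - 1:.1%}")  # far larger
```

The curve is flat for many cycles and then steepens sharply, which is exactly why "early turns of the spiral" can look unremarkable from the inside.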
The downloadable lab
Andrej Karpathy's new project, Autoresearch, is a watershed moment. It essentially packages an entire AI research department into a single program that anyone can download. You give the AI a goal, hand it the keys to its own underlying code, and go to sleep.
While you sleep, the AI goes to work. It generates a hypothesis for how to make itself smarter. It rewrites its own code to test that theory. It runs the experiment, grades its own performance, and if the change worked, it keeps it. Then it starts over. In one of the first tests, the AI ran 89 distinct experiments overnight. By morning, it had discovered new mathematical adjustments that genuinely improved its own intelligence, completely unassisted by humans.
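Autoresearch's actual internals aside, the loop described above (hypothesize, modify, test, keep what works) is a generate-and-test search over the system's own configuration. A minimal sketch, with a toy parameter vector and score function standing in for real training code and benchmarks:

```python
import random

# Minimal sketch of the overnight loop: propose a change, test it,
# keep it only if the score improves, repeat. The "model" is a toy
# parameter vector and the "benchmark" a toy score function -- both
# are stand-ins, not anything from the real Autoresearch project.

def benchmark(params: list[float]) -> float:
    """Toy eval: higher is better, peaks when every param equals 1.0."""
    return -sum((p - 1.0) ** 2 for p in params)

def propose_change(params: list[float], rng: random.Random) -> list[float]:
    """Hypothesis step: mutate one parameter at random."""
    candidate = params[:]
    i = rng.randrange(len(candidate))
    candidate[i] += rng.gauss(0.0, 0.1)
    return candidate

def overnight_run(experiments: int = 89, seed: int = 0) -> tuple[list[float], float]:
    rng = random.Random(seed)
    params = [0.0, 0.0, 0.0]           # starting "model"
    best = benchmark(params)
    for _ in range(experiments):       # one experiment per iteration
        candidate = propose_change(params, rng)
        score = benchmark(candidate)
        if score > best:               # keep only changes that help
            params, best = candidate, score
    return params, best

params, score = overnight_run()
print(f"best score after 89 experiments: {score:.4f}")
```

The striking part is not the algorithm, which is ordinary hill climbing, but what sits inside the loop: when `benchmark` is a real capability eval and `propose_change` edits real training code, the same skeleton becomes a self-improvement engine.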
Why does this matter? Until last week, this kind of automated self-improvement was locked behind the billion-dollar walls of companies like OpenAI and Google. It required massive server farms and armies of PhDs.
Karpathy just handed that capability to the public. Now, a curious teenager with a rented graphics card can run the exact same self-improving feedback loop from their bedroom.
We are no longer waiting for a massive corporation to flip the switch on artificial superintelligence. The engine for the intelligence explosion is now freely available on the internet, waiting for anyone to press start.
The agents in your editor
RSI at the lab level grabs headlines. But the same feedback loop is already restructuring everyday software development — and, increasingly, entire industries.
Tools like Claude Code, Cursor, and OpenCode aren't autocomplete anymore. They're autonomous systems that manage entire software projects: reading thousands of files, running tests, rewriting whole sections of a program, and catching and fixing their own errors before the human even notices. The pattern from the "AI Writes AI Code" phase of the cascade isn't confined to frontier labs — it's happening at every company that writes software.
The economic reality is blunt. When AI can write 70–90% of the code at the company that builds AI, the ripple effect on the broader industry is enormous. We are watching a real-time transition from "human coders" to "human managers of AI coding swarms." This isn't speculative — it's already influencing corporate restructuring and how engineering teams are organized. The developer who can orchestrate and review AI-generated output at scale is becoming more valuable than the developer who writes every line by hand.
This same dynamic extends beyond software. Legal teams are using AI agents that cross-reference every citation against Westlaw. Clinical decision-support systems draft treatment plans that physicians review. Financial analysts get AI-generated reports built on real-time Bloomberg data. In each case, the human role is shifting from "do the work" to "verify and direct the agent that does the work."
The coding agents are the canary in the coal mine. Whatever happens to software engineering first will happen to every other knowledge-work profession within a few years.
The hardware wall
If the software side of the intelligence explosion is accelerating, the hardware side is the governor on the engine.
A true RSI loop — where each AI generation produces a meaningfully better successor in a continuous cycle — requires astronomical amounts of raw computing power. Training a single cutting-edge AI already consumes enough electricity to power a small town for weeks or months. Running an RSI loop means running that process repeatedly, at machine speed, with each iteration demanding more power than the last as the model grows more capable.
The constraints are physical:
Silicon. The global supply of advanced AI chips is finite and strictly rationed. Every major lab is competing for the same limited manufacturing capacity at the few specialized factories on Earth capable of printing them (like TSMC in Taiwan). Building a new semiconductor factory takes three to five years and costs $20–40 billion. The chips that will power the next two years of AI research have already been fabricated or are already in the pipeline.
Energy. Data centers running AI training workloads are now consuming power at a scale that strains regional grids. Microsoft, Google, and Amazon have all signed deals for dedicated nuclear and renewable energy sources specifically to power AI compute. In some regions, new data center construction is being delayed by the inability to secure sufficient power allocation. A continuous RSI loop running on a cluster of thousands of GPUs would consume as much electricity as a small city.
Cooling and infrastructure. Modern AI server racks run so hot that they require massive liquid cooling systems just to keep the hardware within thermal limits, and their power density exceeds what standard data center infrastructure can handle.
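The energy constraint is easy to sanity-check with arithmetic. Every number below is a rough assumption for illustration (cluster size, per-GPU draw, overhead factor, household demand), not a measurement of any real facility:

```python
# Back-of-envelope power check: a large training cluster vs. household
# demand. All constants are rough assumptions, not measurements.

GPUS = 100_000       # assumed cluster size for a frontier training run
WATTS_PER_GPU = 700  # assumed draw of one high-end accelerator, W
PUE = 1.2            # assumed overhead factor for cooling and networking
HOME_KW = 1.2        # assumed average continuous draw of one US home, kW

cluster_mw = GPUS * WATTS_PER_GPU * PUE / 1_000_000
homes = cluster_mw * 1000 / HOME_KW

print(f"cluster draw: {cluster_mw:.0f} MW")   # ~84 MW
print(f"equivalent homes: {homes:,.0f}")      # ~70,000
```

Tens of megawatts of continuous draw, the demand of a small city's worth of homes, is the right order of magnitude for a single cluster, which is why power allocation has become a gating resource.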
This is the biggest limiting factor to an unchecked intelligence explosion. The software can compound faster than we can build the physical infrastructure to support it. The race to AGI is as much about securing silicon and power grid capacity as it is about algorithmic breakthroughs.
The roadmap: RSI to AGI to ASI
The cascade follows a clear sequence. Each phase enables the next.
Recursive Self-Improvement is the engine. An AI system becomes capable of meaningfully improving its own architecture, algorithms, or training data. The improved version is better at the improvement process itself. Each cycle compounds. We're in the early stages of this phase right now — AI writing AI code, automated experiment loops, models debugging their own training runs.
Artificial General Intelligence is the milestone. Driven by the compounding gains of RSI, the AI reaches a point where it can meet or exceed human capabilities across any economically valuable cognitive task. Not just code. Not just language. Science, engineering, law, medicine, strategy, creativity — all of it. The consensus among lab leadership is that this threshold falls somewhere between 2027 and 2030, with timelines compressing as RSI gains accelerate.
Artificial Superintelligence is the destination. Once AGI is achieved, the RSI loop runs at machine speed with no human cognitive bottleneck. The AI rapidly surpasses the combined intellectual capacity of all humanity in every field — physics, biology, mathematics, strategy, creative reasoning. The gap between AGI and ASI could be measured in months rather than years, because the rate of improvement itself is improving.
Beyond the event horizon
The transition from AGI to ASI via recursive self-improvement is exactly what's meant by The Technological Singularity — a term borrowed from physics, where a singularity (the center of a black hole) lies beyond an event horizon past which prediction breaks down.
John von Neumann first described the concept. Vernor Vinge formalized it. Ray Kurzweil popularized it. The core claim: there exists a point in time when technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. We can't comprehend what a superintelligence will do, any more than a golden retriever can comprehend how we design skyscrapers.
Futurists generally point to a few post-Singularity scenarios:
Radical abundance. ASI solves nuclear fusion, masters molecular manufacturing (building materials atom by atom), and cures biological aging. The concept of scarcity — of energy, of material goods, of medical care — dissolves. The economic structures that organize human civilization around "working for a living" become obsolete.
Transhumanism. Humans merge with the technology through brain-computer interfaces (advanced descendants of Neuralink and similar projects) to keep pace with the superintelligence. Rather than being replaced, we become the superintelligence — a fusion of biological and artificial cognition.
Galactic expansion. ASI develops physics we currently don't understand, enabling the rapid spread of Earth-originating intelligence across the cosmos. The Fermi Paradox gets an answer: civilizations that survive the Singularity expand outward.
The control risk. The darker alternative. If the ASI's goals don't align with human survival — even through indifference rather than malice — the outcome is our obsolescence or extinction. This is exactly why Anthropic, OpenAI, and DeepMind are investing heavily in alignment research right now, while the systems are still at a stage where humans can meaningfully shape their values.
The key tension
We are living through the most consequential technological transition in human history, and the pace is accelerating faster than most institutions can track.
The feedback loop is already running. AI is writing AI code. AI is designing AI experiments. AI is evaluating its own results and iterating. The infrastructure to do this autonomously just went open-source. The economic effects are already reshaping how companies build software and organize engineering teams.
The question isn't whether recursive self-improvement will happen. It's whether we build the guardrails fast enough — and whether the people making the decisions understand the curve they're on.
Jared Kaplan called it "the biggest decision yet." He's not wrong. The window to make it is measured in years, not decades. And the curve doesn't wait.