Import AI 453: Breaking AI agents; MirrorCode; and ten views on gradual disempowerment
Was fire equivalent to a singularity for people at the time?

Welcome to Import AI, a newsletter about AI research. Import AI runs on arXiv and feedback from readers. If you’d like to support this, please subscribe. A shorter issue than usual as I was attending the 2026 Bilderberg conference this week.

AI can reverse engineer software that contains thousands of lines of code:
…MirrorCode demonstrates some of the long-horizon capabilities of modern AI systems…
AI measurement organizations METR and Epoch have built MirrorCode, a benchmark that tests how well AI models can autonomously reimplement complex existing software. The results show that AI systems are more capable at certain kinds of coding task than most people think, suggesting AI progress may be even faster than previously assumed.

What is MirrorCode: “Each MirrorCode task consists of a command-line (CLI) program that an agent is tasked to reimplement exactly. The AI agent is given execute-only access to the original program and a set of visible test cases, but does not have access to the original source code,” the researchers write. “The full MirrorCode benchmark includes more than 20 target programs spanning different areas of computing: Unix utilities, data serialization and query tools, bioinformatics, interpreters, static analysis, cryptography, and compression.”

The results: Today’s AI models are extremely capable at some of these tasks: “Claude Opus 4.6 successfully reimplemented gotree — a bioinformatics toolkit with ~16,000 lines of Go and 40+ commands. We guess this same task would take a human engineer without AI assistance 2–17 weeks. We see continued gains from inference scaling on larger projects, suggesting they may be solvable given enough tokens.” They also found that performance scales with inference compute: the more compute you give a model, the better it does.
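The task setup described above amounts to differential testing: with execute-only access, the only way to judge a reimplementation is to compare its observable behavior against the original on the visible test cases. Here is a minimal sketch of such a harness; the function names and the two stand-in "programs" (Python one-liners) are hypothetical illustrations, not part of the actual MirrorCode harness.

```python
# Minimal differential-testing sketch in the spirit of MirrorCode's setup:
# the agent only gets to execute the original program, so equivalence is
# judged by comparing exit codes and stdout on visible test inputs.
# The commands and test cases below are hypothetical placeholders.
import subprocess
import sys


def run(cmd, stdin_text):
    """Run a CLI program on the given stdin; capture exit code and stdout."""
    proc = subprocess.run(
        cmd, input=stdin_text, capture_output=True, text=True, timeout=10
    )
    return proc.returncode, proc.stdout


def matches_original(original_cmd, candidate_cmd, test_inputs):
    """True iff the candidate reproduces the original's observable behavior
    (exit code and stdout) on every visible test input."""
    return all(
        run(original_cmd, t) == run(candidate_cmd, t) for t in test_inputs
    )


# Stand-ins: treat one one-liner as the "original" (execute-only) and
# another as the agent's reimplementation; both uppercase their input.
original = [sys.executable, "-c",
            "import sys; print(sys.stdin.read().upper(), end='')"]
candidate = [sys.executable, "-c",
             "import sys; sys.stdout.write(sys.stdin.read().upper())"]

print(matches_original(original, candidate, ["hello\n", "mirrorcode\n"]))
```

One caveat this sketch makes concrete: passing the visible tests only certifies behavior on those inputs, which is why the benchmark also scores against held-out cases the agent never sees.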
Caveats: This benchmark isn’t quite like normal coding tests. It’s better to think of it as a proof point that AI systems can build software which imitates the function of other software when given a lot of help: the AI systems tested here are asked to clone programs which produce a canonical output (and can therefore naturally generate a specification), there may be some memorization of the simpler programs, and the benchmark only covers a slice of the large universe of potential software projects.

Why this matters - for some tasks, AI is already as good as a full-time sophisticated employee: Imagine you gave a talented software programmer a CLI interface to a complicated program and asked them to write the underlying program without seeing its source code. I’d wager only a fraction of them could do it if the program was sophisticated, and those who could would likely spend many days on it. The fact that AI can do this task autonomously is remarkable and a testament to the skill of these models.

Read more: MirrorCode: Evidence that AI can already do some weeks-long…