A former Microsoft engineer is training AI to beat 1982's Robotron: 2084, an arcade game where a lone human must overcome endless waves of robots following a cybernetic revolt. The choice of game is deliciously ironic for a project exploring artificial intelligence. Yet there's serious intent beneath the jest.
Dave Plummer, best known as the creator of Task Manager and 3D Pinball for Windows, recently trained an AI to master Dave Theurer's 1981 classic Tempest. That success set the stage for a bigger challenge. Tempest is an elegant game, but it also has far more of what Plummer calls guardrails: a single movement axis, much more predictable enemy behaviours, and far fewer decisions to make moment-to-moment.
Robotron demands something different. Each level starts with humans scattered around the arena and dozens upon dozens of robotrons advancing and shooting at both the humans and the player; a single touch means death for either. It's a game that forces you to keep moving and making split-second choices. The control scheme itself presents a unique challenge. Though not the first to implement it, Robotron's use of dual joysticks popularised the design among 2D shooting games, and it has since been copied by countless other arcade-style games.
Plummer has articulated why this matters beyond mere gameplay mastery. As he puts it, Robotron is "a screaming 1982 arcade cabinet trying to murder you with a hundred simultaneous bad decisions at 60 frames a second. It is a brutally compressed lesson in real-time systems, human limits, and the difference between intelligence and reflex."
The technical challenge is equally significant. The AI may not suffer the panic that human players (meatbags, in Plummer's phrasing) do, but as he notes: "Robotron mastery is partly tactical, partly statistical, and partly an exercise in triage under uncertainty. The AI doesn't merely need to dodge. It needs to understand what is worth dodging toward." The game forces constant prioritisation under severe time pressure. Rescue the remaining humans or clear enemies first? Stay mobile or consolidate territory? Every frame demands a fresh calculation with incomplete information.
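Plummer hasn't published the internals of his agent, but "triage under uncertainty" can be made concrete with a toy sketch: score every entity on screen each frame, so that nearby humans attract the player and nearby enemies repel it. All names here (`Entity`, `triage_score`, `best_objective`) and the weights are illustrative assumptions, not his implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class Entity:
    x: float
    y: float
    kind: str  # "human" to rescue, or an enemy type such as "grunt"

def triage_score(player, target, rescue_weight=3.0, threat_weight=1.0):
    """Toy per-frame utility: a close human is a high-value objective,
    a close enemy contributes a negative (repulsive) score."""
    dist = math.hypot(target.x - player.x, target.y - player.y) + 1e-6
    if target.kind == "human":
        return rescue_weight / dist    # closer human => higher priority
    return -threat_weight / dist       # closer enemy => stronger repulsion

def best_objective(player, entities):
    """Pick the highest-scoring entity to move toward this frame."""
    return max(entities, key=lambda e: triage_score(player, e))
```

Re-running this every frame is what makes the triage dynamic: as enemies close in, their repulsion can outweigh a human's pull, and the "best" objective flips mid-wave.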
Plummer has called Robotron "an old game, yes. A magnificent one. A loud one. A deeply unfair one. But it is also a laboratory. It is a place where 30 or 40-year-old design decisions about CPU cycles, linked lists, blitter modes, jump tables, and joystick ergonomics are suddenly back on the table because they still describe a live system with measurable behavior. And the moment you point an AI at it, the game starts revealing itself all over again. Not as a museum piece, but as an active adversary."
Another fascinating element of the project is Plummer's live training dashboard, which shows the AI playing Robotron alongside graphs tracking its performance as training progresses. It is weirdly compulsive viewing, and the project remains ongoing.
The work sits at an intersection worth exploring. Machine learning researchers have long recognised that games offer testbeds for real-world problems. A system that learns to navigate Robotron's chaos must develop robust decision-making under uncertainty. It cannot rely on pattern matching alone. The game punishes rigidity and rewards adaptability. Those are skills with application far beyond arcades.
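The adaptability the article describes is the core loop of reinforcement learning. As a minimal sketch (not Plummer's actual method), tabular Q-learning with epsilon-greedy exploration shows both halves: value estimates are nudged toward observed rewards, and occasional random actions keep the agent from locking into rigid patterns.

```python
import random

def q_update(Q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: move the value estimate for
    (state, action) toward reward plus discounted best future value."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

def choose_action(Q, state, actions, epsilon=0.1, rng=random.random):
    """Epsilon-greedy: usually exploit the best-known action,
    occasionally explore so the policy stays adaptable."""
    if rng() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))
```

A real Robotron agent would need a far richer state representation than a lookup table, but the principle is the same: the policy is learned from consequences, not hand-coded patterns.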
Plummer's background lends credibility to the endeavour. He created the Task Manager for Windows, the Space Cadet Pinball port to Windows NT, Zip file support for Windows, HyperCache for the Amiga, and many other software products. A man who spent decades optimising systems for real-time performance under constraints now trains machines to do the same in miniature.
What emerges is neither a side project nor a trivial pursuit. Robotron remains what it always was: a relentless teacher. Every wave teaches the same lesson. Systems fail when they cannot make rapid decisions under pressure. Humans fail. AIs will too, until they learn better. And watching that learning happen—frame by frame, decision by decision—offers insights that pen-and-paper analysis never could.