When Riley Walz launched a website in September 2025 that let anyone track the near-real-time movements of San Francisco's parking enforcement officers, it lasted precisely four hours before city officials pulled the plug. The Find My Parking Cops site was shut down after San Francisco Municipal Transportation Agency officials disabled the live data feed it relied on, citing concerns that the tool could prevent "employees from doing their jobs safely and without disruption." That episode, somewhere between public interest journalism and elaborate prank, is now on the CV of an OpenAI employee.
Walz, a software engineer famous for his online stunts, is joining OpenAI to research and develop new ways for humans to interact with AI, according to Wired, which first reported the hire. An OpenAI spokesperson confirmed the appointment, and Walz joined the company's Labs team in February 2026.
Walz's skill at creating novel web experiences will be put to use inside OAI Labs, a relatively new team led by research leader Joanne Jang, which has been tasked with "inventing and prototyping new interfaces for how people collaborate with AI." The team is deliberately secretive about its work, but its mandate is not modest: to move the dominant paradigm for AI interaction beyond the text chat box that has defined the category since ChatGPT's 2022 launch.
Jang, who previously led OpenAI's influential Model Behavior team, launched OAI Labs as a separate initiative focused on researching new interfaces for collaboration between humans and AI systems. When asked whether the team might eventually work alongside former Apple design chief Jony Ive, who is working with OpenAI on AI hardware, Jang said she is open to various ideas but would start with research areas she knows best.
A Career Built on Exposing What Data Can Do
Walz is known for projects including IMG_0001, a website that surfaces forgotten early-iPhone YouTube uploads; Jmail, a tool for browsing the publicly released Epstein files; and Bop Spotter, a street-corner installation that logged songs heard in San Francisco's Mission District. His work blends programming with cultural observation, and The New Yorker has described it as existing within "a lineage of prankster art that used the Internet both as a medium and as a venue."
In November 2025, Walz and web developer Luke Igel launched Jmail, a browser-based archive of public emails released under the Epstein Files Transparency Act, presenting the documents through a Gmail-style interface as if viewed from Jeffrey Epstein's personal inbox. The project took five hours to build and attracted an estimated 18.4 million visits by late November 2025. The speed and reach of that project illustrate exactly what OpenAI is buying: an ability to make complex or obscure information feel immediate and human.
Not all of Walz's experiments have been unambiguously in the public interest. After the chief executive of UnitedHealthcare was shot dead in New York City and police indicated the suspect had fled on a CitiBike, Walz attempted to assist the search by analysing trip data he had previously scraped for a separate project. He told The New York Times that people online called him a "bootlicker" for helping authorities and threatened his safety. The episode highlighted a tension that runs through all his work: the same data access and technical creativity that produces compelling public tools can just as easily serve surveillance ends.
Why OpenAI Needs This Kind of Thinking
The commercial logic behind the hire is straightforward enough. ChatGPT currently has 800 million weekly active users, up from 400 million in February 2025. OpenAI has spent years racing Google and Anthropic to build compelling AI products, and while ChatGPT has been a consumer hit, the company is now eyeing new interfaces to improve those experiences further. At some point, retaining hundreds of millions of users requires giving them new reasons to return, and chat boxes alone may not be enough.
The move comes as millions of developers have started using coding agents such as Claude Code as their primary means of accessing AI models, and with hires like Walz, OpenAI hopes to get ahead of the next major AI product shift. The question the company appears to be asking is not just how to make existing interfaces better, but whether an entirely different interaction model is waiting to be invented.
From a fiscal responsibility standpoint, that is a sensible question for a company burning significant capital on model infrastructure. Australian regulators and their counterparts globally have watched the AI platform race with mounting attention, concerned that the combination of dominant user bases and novel interface design could entrench the market power of a small number of American firms in ways that are difficult to reverse.
The Privacy Dimension
Critics on the progressive side of this debate raise a fair point: Walz's creative approach to public data has shown how thin the line between transparency and surveillance can be. The parking officer tracker exposed a genuine accountability gap in how San Francisco managed its enforcement data, but it also demonstrated that a single engineer with scraped public records can produce a real-time surveillance product in a weekend. Channelling that capability inside one of the world's most data-rich companies amplifies both the potential and the risk.
Proponents argue, with some justice, that Walz's instinct has always been to expose and satirise power rather than serve it. His projects have targeted institutions and systems, not individuals, and his cultural sensibility is closer to net-art than to commercial data brokerage. Australia's Office of the Australian Information Commissioner has noted in recent guidance that the design of AI interfaces carries significant privacy implications, an observation that makes the work of a team like OAI Labs directly relevant to Australians who use these products every day.
What the Walz hire ultimately reflects is that the race to define how humans and AI interact is no longer purely a machine-learning problem. It is a design problem, a cultural problem, and an ethics problem rolled together. Walz has spent his career as a software engineer and internet artist asking what systems make possible and who they serve. Those questions, however disruptive their expression, are worth taking seriously inside a company whose products are now woven into the daily lives of hundreds of millions of people. Whether OpenAI's institutional incentives will allow that scrutiny to flourish internally is a different question, and one that reasonable observers can disagree on.