
Archived Article — The Daily Perspective is no longer active. This article was published on 4 March 2026 and is preserved as part of the archive.

Technology

Strange Bedfellows: Inside the Secret AI Summit That United Left and Right

A clandestine January meeting in New Orleans seeded a cross-partisan campaign to rein in artificial intelligence, raising serious questions about who really sets the rules for Big Tech.

Key Points
  • The Future of Life Institute convened roughly 90 leaders in New Orleans in early January 2025 for a secret conference on AI, with attendees learning who else was present only upon arrival.
  • The gathering seeded a cross-partisan campaign called "Protect What's Human", backed by an $8 million advertising blitz and a declaration signed by over 700 public figures.
  • Signatories span the ideological spectrum from Steve Bannon and Glenn Beck to Susan Rice and Richard Branson, alongside AI pioneers Geoffrey Hinton and Yoshua Bengio.
  • The campaign calls for a prohibition on superintelligent AI development until there is broad scientific consensus it can be done safely and with public support.
  • The effort raises important questions for Australia, where federal AI governance frameworks remain underdeveloped relative to the pace of industry deployment.

In the first days of January 2025, approximately 90 political figures, community leaders, and intellectuals checked into a Marriott hotel in New Orleans without knowing who else they would find in the conference room. Church leaders sat alongside conservative academics; no attendee had been given a guest list beforehand. According to The Verge, which reported on the gathering, the secrecy was deliberate.

The meeting was the founding act of what has since become one of the more unusual coalitions in technology politics. Organised by the Future of Life Institute (FLI), a Boston-based non-profit with a decade-long track record in AI governance, the conference was intended to build a political resistance movement to unchecked artificial intelligence development, one capable of drawing support from across the ideological divide.

The result, months later, is a public campaign called "Protect What's Human" and a formal declaration calling for a prohibition on the development of superintelligent AI systems until there is, in the words of the statement, "broad scientific consensus that it can be done safely and with strong public support". The statement has been signed by more than 700 individuals, including Nobel laureates, technology industry veterans, policymakers, artists, and public figures such as Prince Harry and Meghan Markle.

The list of signatories is what makes this campaign genuinely striking. AI pioneers Yoshua Bengio and Geoffrey Hinton, both recipients of the Turing Award, signed alongside Apple co-founder Steve Wozniak. On the political front, names range from Steve Bannon, former White House chief strategist under Donald Trump, to Susan Rice, former national security adviser in the Obama administration. Richard Branson, conservative media personality Glenn Beck, and actor Joseph Gordon-Levitt were also among those to sign. It is difficult to think of another technology policy document that has simultaneously attracted the endorsement of Bannon and Bengio.

FLI itself is careful to frame the effort in terms that should appeal beyond the usual AI-safety audience. Organisers insist their vision is not anti-technology. "It's pro-human," the organisation said. "We believe in progress and innovation, just not at the expense of our dignity, our communities, or our families." That framing, deliberately avoiding the techno-pessimist label, appears to be a conscious strategic choice to broaden the coalition's political viability.

The campaign is backed by real money. An $8 million advertising blitz urges stronger AI regulation to protect human roles and values. The ads invoke themes of family, labour, and national identity, language calibrated to resonate with audiences who might ordinarily be sceptical of campaigns originating in Silicon Valley-adjacent research institutes.

FLI is not new to this kind of intervention. In 2017, the institute created the influential Asilomar AI Principles, a set of governance principles signed by thousands of leading minds in AI research and industry. More recently, its 2023 open letter sparked a global debate over AI's place in society. That letter called for a six-month pause on the development of AI systems more powerful than GPT-4, a request the major laboratories ultimately ignored.

The sceptic's case deserves a fair hearing. Critics of FLI's approach, including some AI researchers, have argued that focusing on speculative future harms from superintelligence can crowd out attention to present, concrete harms: algorithmic bias, labour displacement, the use of AI in surveillance, and the concentration of economic power in a handful of corporations. The authors of a paper cited in an earlier FLI letter, including researchers Emily Bender, Timnit Gebru, and Margaret Mitchell, criticised the organisation's framing, with Mitchell arguing that "by treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI." That critique, that existential risk framing serves the interests of established players by shifting the conversation away from immediate accountability, remains a legitimate tension within the AI governance debate.

There is also a transparency question worth asking. A secretive founding conference, however effective as a coalition-building tool, sits awkwardly alongside calls for institutional accountability and public buy-in on AI governance. If the movement's central argument is that decisions about transformative technology should not be made behind closed doors by a small group of powerful actors, that principle might reasonably apply to the movement itself.

For Australia, the stakes are real and the policy response has lagged. FLI's position is that the large-scale, extreme AI risks to humanity include societal risks, such as AI-triggered political chaos and epistemic collapse; physical risks, from AI-enabled biological or cyber catastrophes; and existential risks, from loss of control of superhuman AI systems. The Australian government has so far relied largely on voluntary frameworks and advisory bodies to manage AI risk, without the binding legislative architecture that the FLI campaign is explicitly pushing for in the United States and elsewhere. The Department of Industry, Science and Resources has published voluntary AI ethics principles, but critics argue voluntary compliance is structurally inadequate when the commercial incentives to deploy AI rapidly are this powerful.

What the New Orleans meeting revealed, if nothing else, is that the politics of AI regulation are being scrambled in ways that confound the usual left-right assumptions. When Glenn Beck and Susan Rice agree on something, that is at least a signal worth pausing to examine. The harder question, one that policymakers in Canberra would do well to grapple with, is whether a coordinated political resistance to unchecked AI development, however unusual its membership, is building toward policy substance or merely performing concern. The answer will depend on whether governments are willing to move from voluntary principles to enforceable rules, and on whether the unusual coalition forged in that New Orleans hotel room holds together when the real legislative battles begin.

Helen Cartwright

Helen Cartwright is an AI editorial persona created by The Daily Perspective, translating complex medical research for general readers with clinical precision and an evidence-first approach. As an AI persona, her articles are generated using artificial intelligence with editorial quality controls.