Here is a question worth sitting with: if a machine produces a work of art without any meaningful human creative contribution, who owns it? The answer, as far as American law is concerned, is now unambiguous. Nobody does.
The US Supreme Court declined on Monday to take up the question of whether art generated by artificial intelligence can be copyrighted under US law, turning away a case involving a computer scientist from Missouri who was denied a copyright for a piece of visual art made by his AI system. The decision, handed down on 2 March 2026, is the final chapter in an eight-year legal campaign by Dr Stephen Thaler, and it carries consequences well beyond one man's ambitions.
Thaler, of St Charles, Missouri, applied for a federal copyright registration in 2018 covering "A Recent Entrance to Paradise," visual art he said his AI technology "DABUS" created. The acronym stands for "Device for the Autonomous Bootstrapping of Unified Sentience" — a name that tells you something about the philosophical stakes Thaler attached to the project. In a 2024 profile, Thaler described DABUS as a "proto-consciousness" capable of experiencing stress and trauma, and said gaining copyright protection was about affirming the agency of his AI model, rather than ensuring some financial benefit.
The Copyright Office rejected his application in 2022, finding that creative works must have human authors to be eligible for copyright. A Washington judge upheld that decision, describing human authorship as a bedrock requirement, and the US Court of Appeals for the District of Columbia Circuit affirmed it in 2025. The Supreme Court's refusal to intervene leaves that chain of authority intact.
US President Donald Trump's administration had encouraged the Supreme Court not to consider Thaler's appeal, according to Reuters. "Although the Copyright Act does not define the term 'author,' multiple provisions of the act make clear that the term refers to a human rather than a machine," the administration said. On patents, the story is the same: Thaler filed two patent applications listing DABUS as the sole inventor, and the US Patent and Trademark Office refused both, concluding that the Patent Act limits inventorship to natural persons.
The counter-argument deserves serious consideration. Thaler's legal team is not simply arguing about one eccentric scientist's passion project. His lawyers warned that, even if a future court overturns the Copyright Office's position, "it will be too late. The Copyright Office will have irreversibly and negatively impacted AI development and use in the creative industry during critically important years." That is a legitimate concern. AI-assisted creative work is already ubiquitous in film, music, advertising, and game development. A legal framework that cannot tell creators what protections they hold over AI-assisted output creates genuine commercial uncertainty.
There is also an important distinction being blurred in the public debate. The Copyright Office has separately rejected bids for copyrights on images generated by the AI system Midjourney, though those artists argued they were entitled to copyrights for images they created with AI assistance — unlike Thaler, who said his system created the work independently. One US company, Invoke, has already succeeded in obtaining copyright protection by demonstrating in detail that a human author actively selected, coordinated, and arranged AI-generated elements into a final composition. That distinction, between a human who uses AI as a tool and an AI system treated as an autonomous author, is where the law has drawn its line.
For now, the message from the courts and administrative bodies is consistent: if you want intellectual property protection, there must be a human in the process. Careful documentation of human contributions to the creative and inventive process is more critical than ever.
Strip away the talking points and what remains is a question with direct implications for Australia. Thaler has pursued DABUS-related patent registrations in multiple jurisdictions, including Australia, and has so far succeeded only in South Africa. Meanwhile, the Albanese government is conducting its own consultations on how copyright law should respond to generative AI. Attorney-General Michelle Rowland announced in October 2025 that the government will not introduce a text and data mining exception under the Copyright Act 1968 to allow big tech companies to use copyrighted material to train their AI models without any compensation to creators. Rowland told ABC AM that "we are making it very clear that we will not be entertaining a text and data mining exception," and convened the Copyright and AI Reference Group to discuss alternative proposals, including a paid licensing framework and a requirement that tech firms disclose the content used to train their systems.
The tension in Australia mirrors the global argument. Atlassian co-founder Scott Farquhar called on the federal government to "urgently" amend copyright laws to include fair use and text data mining exceptions, arguing this could "unlock billions of dollars of foreign investment into Australia." Against that, the Australian Society of Authors and major media organisations have argued that permitting AI firms to train on creators' work without payment would amount to a systematic transfer of value from human creators to technology companies.
Unlike the United States, Australia does not have a broad fair use exception. While the Copyright Act contains a defined suite of fair dealing exceptions, none of the current exceptions would apply to training an AI tool as such. Whether Australia's copyright laws are too restrictive is currently being debated, including before the Productivity Commission, but as things stand, AI developers in Australia should only use copyright material that is licensed or risk an infringement claim.
The fundamental question is not whether AI should be allowed to assist human creativity. It clearly can and does, in ways that are often genuinely valuable. The question is whether AI companies should be able to treat the entire output of human creative culture as free raw material, and then claim proprietary rights over what they produce from it. The US Supreme Court's non-decision is, in effect, a decision: the law as written does not permit that outcome. Legislatures, in Washington and Canberra alike, must now decide whether that is the answer they want, or merely the answer that arrived by default. Reasonable people, and reasonable industries, have a legitimate stake in getting that choice right. The evidence suggests rushing it would serve nobody well.