Tech companies are betting that improvisational actors hold the key to teaching artificial intelligence something it desperately needs: authentic human emotion. Handshake AI is inviting actors, improvisers, and performers to join a paid, collaborative improv project to work with one of the leading AI companies. But this isn't theatre work. Performers are matched with other actors over video and given light prompts or scenarios to explore together in unscripted, open-ended sessions where they improvise scenes, explore characters, and respond naturally in the moment.
The catch, for the performers, is that these sessions serve no audience. Working from those prompts and brief personality notes, actors have wide latitude over tone, emotion, and character choices. What they generate becomes training data. The performances—their voices, facial expressions, emotional choices, and the natural human inconsistency that makes improvisation compelling—become raw material for teaching large language models how humans actually express feeling.
This marks a fundamental shift in how the tech industry acquires the emotional texture of human behaviour. Rather than license existing creative works or hire traditional voice actors for predetermined scripts, AI firms are commissioning something closer to performance research: the unscripted, spontaneous expression of authentic emotion. The recruitment pitch frames it as expert work: testing the limits of the world's top LLMs' understanding through tasks that feed directly into the next frontier model.
The practice sits uneasily within the broader labour dynamics of AI training. Current large language models lack what technologists increasingly recognise as essential: the ability to detect and generate genuine emotional nuance. Text-trained models like Claude, Gemini, GPT-4, and Llama parse words, not feeling. They cannot process tone of voice, rhythm of speech, or emphasis on particular words. They cannot read facial expressions. They are effectively blind to the nonverbal information at the heart of human communication.
Teaching machines this skill requires labour that sits at the intersection of art and annotation work. Unlike conventional voice acting or motion capture, where performers are hired to deliver specific lines or movements under direction, improvisation training asks artists to generate authentic emotional responses in real time. This places unusual demands on their creative labour while creating genuine ambiguity about what is being purchased and how the resulting data might be used.
The historical parallel is instructive. Meta and Realeyes hired actors to make avatars appear more human; each performer's voice, face, movements, and expressions would be fed into an AI database to help systems better understand and express human emotion. What began as a specific research goal—training virtual avatars—created new questions about scope and future use. Emotion detection algorithms could be used to improve any kind of AI involving human faces or expressions, even when data agreements do not explicitly mention this.
This creates a structural problem. Actors understand their craft as serving specific creative purposes within defined contexts. But when their emotional performance becomes training data, the context expands exponentially. An improvised scene created to teach an AI chatbot how to respond with apparent empathy could just as easily become training material for emotion recognition systems, marketing analytics, or workplace surveillance tools. There is no way for a participant to opt in or out of specific use cases.
The economic dimension compounds this. Projects are remote, part-time, and flexible, making them easy to fit alongside auditions, classes, or rehearsals. This positioning—as supplementary gig work—reflects how creative labour has been systematised in the gig economy. Actors receive payment for their time and skill but typically do not share in the value created when their emotional expressions are encoded into systems used by major corporations. The training data becomes an asset. The performers remain contractors.
Fair observers can reasonably disagree about whether this represents a problem worth solving. Technology companies argue that paying workers for creative contribution is appropriate, and that the use of generalised emotional training data benefits consumers by improving AI systems. Actors and creative unions counter that the asymmetry is unjust: the specific, irreplaceable emotional intelligence that only trained performers can provide is captured, codified, and monetised in ways that exclude the originating artists from downstream revenue.
The Australian creative sector, smaller and more interconnected than its American counterpart, may feel this tension acutely. When training datasets are built, questions about who owns emotional signatures, who controls their use, and who benefits from their economic application become more than abstract. They determine whether Australia's creative workers remain agents in their own labour or become raw material suppliers to larger digital economies.
The recruitment of improv actors represents a moment where the tech industry has finally acknowledged what artists have always known: authentic human emotion is the thing machines most urgently need. The question is whether the terms on which that labour is purchased reflect its actual value. For now, the answer appears to be no.