As Hollywood executives insist it is “just not realistic” to pay actors more — 87 percent of whom earn less than $26,000 — they are spending lavishly on AI programs.
While entertainment firms like Disney have declined to go into specifics about the nature of their investments in artificial intelligence, job postings and financial disclosures reviewed by The Intercept reveal new details about the extent of these companies’ embrace of the technology.
In one case, Netflix is offering as much as $900,000 for a single AI product manager.
Hollywood actors and writers unions are jointly striking this summer for the first time since 1960, calling for better wages and regulations on studios’ use of artificial intelligence.
Just after the actors’ strike was authorized, the Alliance of Motion Picture and Television Producers — the trade association representing the TV and film companies negotiating with the actors and writers unions — announced “a groundbreaking AI proposal that protects actors’ digital likenesses for SAG-AFTRA members.”
The offer prompted comparisons to an episode of the dystopian sci-fi TV series “Black Mirror,” which depicted actress Salma Hayek locked in a Kafkaesque struggle with a studio that used her scanned digital likeness against her will.
“So $900k/yr per soldier in their godless AI army when that amount of earnings could qualify thirty-five actors and their families for SAG-AFTRA health insurance is just ghoulish,” actor Rob Delaney, who had a lead role in the “Black Mirror” episode, told The Intercept. “Having been poor and rich in this business, I can assure you there’s enough money to go around; it’s just about priorities.”
Among the striking actors’ demands are protections against their scanned likeness being manipulated by AI without adequate compensation for the actors.
“They propose that our background performers should be able to be scanned, get paid for one day’s pay and their company should own that scan, their image, their likeness, and to be able to use it for the rest of eternity in any project they want with no consent and no compensation,” Duncan Crabtree-Ireland, chief negotiator for the actors’ union, SAG-AFTRA, said.
Entertainment writers, too, must contend with their work being replaced by AI programs like ChatGPT that are capable of generating text in response to queries. Writers represented by the Writers Guild of America have been on strike since May 7 demanding, among other things, labor safeguards against AI. John August, a screenwriter for films like “Big Fish” and “Charlie’s Angels,” explained that the WGA wants to make sure that “ChatGPT and its cousins can’t be credited with writing a screenplay.”
The daily rate for background actors can be around $200, per the SAG-AFTRA contract. A job posting by the company Realeyes offers slightly more than that: $300 for two hours of work “express[ing] different emotions” and “improvis[ing] brief scenes” to “train an AI database to better express human emotions.”
Realeyes develops technology to measure attention and reactions by users to video content. While the posting doesn’t mention work with streaming companies, a video on Realeyes’s website prominently features the logos for Netflix and Hulu.
The posting appears tailored to attract striking workers, stressing that the gig is for “research” purposes and therefore “does not qualify as struck work”: “Please note that this project does not intend to replace actors, but rather requires their expertise,” Realeyes says, emphasizing multiple times that training AI to create “expressive avatars” skirts strike restrictions.
Experts question whether the boundary between research and commercial work is really so clear. “It’s almost a guarantee that the use of this ‘research,’ when it gets commercialized, will be to build digital actors that replace humans,” said Ben Zhao, professor of computer science at the University of Chicago. “The ‘research’ side of this is largely a red herring.” He added, “Industry research goes into commercial products.”
“This is the same bait-switch that LAION and OpenAI pulled years ago,” Zhao said, referring to the Large-scale Artificial Intelligence Open Network, a German nonprofit that created the AI chatbot OpenAssistant; OpenAI is the nonprofit that created AI programs like ChatGPT and DALL-E. “Download everything on the internet and no worries about copyrights, because it’s a nonprofit and research. The output of that becomes a public dataset, then commercial companies (who supported the nonprofit) then take it and say, ‘Gee thanks! How convenient for our commercial products!’”
Netflix AI Manager
Netflix’s posting for a $900,000-a-year AI product manager job makes clear that the company’s AI ambitions go beyond just the algorithms that determine what shows are recommended to users.
The listing points to AI’s uses for content creation: “Artificial Intelligence is powering innovation in all areas of the business,” including by helping the company “create great content.” Netflix’s AI product manager posting alludes to a sprawling effort by the business to embrace AI, referring to its “Machine Learning Platform” involving AI specialists “across Netflix.” (Netflix did not immediately respond to a request for comment.)
A research section on Netflix’s website describes its machine learning platform, noting that while it was historically used for things like recommendations, it is now being applied to content creation. “Historically, personalization has been the most well-known area, where machine learning powers our recommendation algorithms. We’re also using machine learning to help shape our catalog of movies and TV shows by learning characteristics that make content successful. We use it to optimize the production of original movies and TV shows in Netflix’s rapidly growing studio.”
Netflix is already putting the AI technology to work. On July 6, the streaming service premiered a new Spanish reality dating series, “Deep Fake Love,” in which scans of contestants’ faces and bodies are used to create AI-generated “deepfake” simulations of themselves.
In another job posting, Netflix seeks a technical director for generative AI in its research and development tech lab for its gaming studio. (Video games often employ voice actors and writers.)
Generative AI is the type of AI that can produce text, images, and video from input data — a key component of original content creation, though it can also be used for other purposes like advertising. Generative AI is distinct from older, more familiar AI models that provide things like algorithmic recommendations or genre tags.
“All those models are typically called discriminatory models or classifiers: They tell you what something is,” Zhao explained. “They do not generate content like ChatGPT or image generator models.”
“Generative models are the ones with the ethics problems,” he said, explaining how classifiers are based on carefully using limited training data — such as a viewing history — to generate recommendations.
Netflix offers up to $650,000 for its generative AI technical director role.
Video game writers have expressed concerns about losing work to generative AI, with one major game developer, Ubisoft, saying that it is already using generative AI to write dialogue for nonplayer characters.
Netflix, for its part, advertises that one of its games, a narrative-driven adventure game called “Scriptic: Crime Stories,” is built around crime stories and “uses generative AI to help tell them.”
Disney’s AI Operations
Disney has also listed job openings for AI-related positions. In one, the entertainment giant is looking for a senior AI engineer to “drive innovation across our cinematic pipelines and theatrical experiences.” The posting mentions several big-name Disney studios where AI is already playing a role, including Marvel, Walt Disney Animation, and Pixar.
In a recent earnings call, Disney CEO Bob Iger alluded to the challenges the company would face in integrating AI into its current business model.
“In fact, we’re already starting to use AI to create some efficiencies and ultimately to better serve consumers,” Iger said, as recently reported by journalist Lee Fang. “But it’s also clear that AI is going to be highly disruptive, and it could be extremely difficult to manage, particularly from an IP management perspective.”
Iger added, “I can tell you that our legal team is working overtime already to try to come to grips with what could be some of the challenges here.” Though Iger declined to go into specifics, Disney’s Securities and Exchange Commission filings provide some clues.
“Rules governing new technological developments, such as developments in generative AI, remain unsettled, and these developments may affect aspects of our existing business model, including revenue streams for the use of our IP and how we create our entertainment products,” the filing says.
While striking actors are seeking to protect their own IP from AI — among the union demands that Iger deemed “just not realistic” — so is Disney.
“It seems clear that the entertainment industry is willing to make massive investments in generative AI,” Zhao said, “not just potentially hundreds of millions of dollars, but also valuable access to their intellectual property, so that AI models can be trained to replace human creatives like actors, writers, journalists for a tiny fraction of human wages.”
For some actors, this is not a struggle against the sci-fi dystopia of AI itself, but just a bid for fair working conditions in their industry and control over their own likenesses, bodies, movements, and speech patterns.
“AI isn’t bad, it’s just that the workers (me) need to own and control the means of production!” said Delaney. “My melodious voice? My broad shoulders and dancer’s undulating buttocks? I decide how those are used! Not a board of VC angel investor scumbags meeting in a Sun Valley conference room between niacin IV cocktails or whatever they do.”