ChatGPT Turns One — Here’s How AI Will Impact Media & Entertainment in 2024

One year ago this week, OpenAI unleashed ChatGPT into a largely unsuspecting world where generative AI was nowhere in the popular lexicon. Now, 14 billion ChatGPT visits later, AI plays such an elevated role in our global consciousness that it fittingly became one of the central issues in the recently settled writers’ and actors’ strikes.

Players across media and entertainment, whatever their role, don’t know what to make of it all, which is understandable when even the company behind ChatGPT can’t seem to figure out what it all means. We’ve all witnessed the drama these past two weeks in Silicon Valley’s version of “Game of Thrones” as OpenAI’s board first fired CEO Sam Altman, then nearly rehired him (but didn’t), then enabled Microsoft to hire him (but couldn’t), and finally returned him to his CEO role. The drama, reportedly sparked by a significant company AI breakthrough that jolted certain board members, surpassed virtually anything we could find on Netflix.

No one, of course, can predict generative AI’s evolution in its second year. But we certainly can anticipate several major developments in 2024 that directly impact the media and entertainment business and how creative works are both developed and monetized. Here are some of them.

Battles in the courts

First, the courts will flesh out initial basic AI guardrails set by U.S. copyright law and the WGA and SAG in their negotiations, balancing the need to accept tech-forward realities with the rights of creators and art itself. Right now, the U.S. Copyright Office grants no protection to AI-only generated works, while recent strike settlements address several key issues but leave at least one gaping hole for “Synthetic Performers” to enter stage left.

Early returns are already in on the media-related litigation front that point the way to where this will all land. Take comedian Sarah Silverman’s copyright infringement lawsuits against OpenAI and Meta, where she and others challenge Big Tech’s AI “training” on the backs of their works and an Internet’s worth of other copyrighted creative material. The federal judge in the Meta case, and at least one other federal judge in a similar case, have thrown out those infringement claims, concluding that even literal line-by-line “scraping” of copyrighted works (in other words, direct copying) is not enough to find infringement. These courts instead ruled that Big Tech’s mass copying is “fair use” because its AI-generated “outputs” couldn’t be traced back to the specific creative work “inputs” at issue.

Silverman’s case and numerous other similar cases pending in the U.S. court system will certainly wind their way up to the appellate courts in 2024, and those courts are likely to uphold the core reasoning of those decisions. Ultimately, one of those cases is sure to find its way inside the hallowed halls of the U.S. Supreme Court. That won’t happen in 2024. But soon after, a majority of Justices will likely use it to craft a legal test that seems objective but instead reflects rampant subjectivity, inviting only more endless AI-related media and entertainment litigation.

The Supreme Court’s ultimate AI test will likely balance two of its recent key copyright infringement rulings. First, the Court will point to 2021’s Google v. Oracle where it ruled that Google’s literal copying of 11,500 lines of Oracle code — a fact that Google conceded — was a fair use. Google created something entirely new based on that copied code, a majority of the Justices reasoned — with a kind of logic that directly applies in this AI context.

The Court will seek to soften that blow when applying its test to creative works versus the software code at issue in Google v. Oracle (a distinction the Court itself noted in that case). It will use its early 2023 ruling in the Andy Warhol/Prince case, which defined a new economic “harm to creator” component that narrowed fair use, to create a kind of entirely unworkable “you know it when you see it” test when deciding whether AI has taken too much of a creator’s work to be acceptable.

New legislation

Congress will jump on the bandwagon in the coming year and pass significant AI legislation that directly impacts the media and entertainment industry. President Joe Biden’s recent Executive Order points the way. Congress will demand that the Big Tech companies behind generative AI provide some basic level of transparency about the material on which their large language models are trained. Regulators will also try to get ahead of the game, a stark contrast to their near-total absence as social media rose in popularity and importance (and caused significant harm).

Expect the creative community to do its best to keep AI companies honest by implementing so-called forensic AI tech like watermarking to identify whether relevant creative works were “scraped” or not. That, in turn, will promote “opt in” solutions for AI training like the approaches taken by Lore Machine, an entertainment-tech company that enables creators to share in AI-generated output monetization.

Sarah Silverman has taken on OpenAI and Meta in copyright lawsuits. (Getty)

Creatives vs Big Tech

Big Tech AI players like Alphabet (Google’s parent company) will, as usual, try to have it both ways. Desperate to keep up with OpenAI (and Microsoft, the company largely backing it), Alphabet will relentlessly march on with its AI development while trotting out its new SynthID watermarking solution to quell the creative masses. SynthID echoes YouTube’s Content ID system, which identifies and takes down unlicensed copyrighted works that find their way onto the platform. Alphabet throws these bones to the creative community while its stock price rockets upward and the entertainment industry struggles to monetize amidst its continuing transfer of wealth to the Big Tech players that disrupt it.

Faced with these realities, and as it should, the creative community will do its best to harness the power of AI to its advantage. Companies like Flawless will begin to roll out AI-fueled dubbing of film and television to expand global audiences (a boon for creators), and artists will increasingly experiment with AI to create exciting new creative works. Artists like Mark Mothersbaugh of Devo, who recently spoke at TheWrap’s annual Grill event, are excited by those possibilities. But others certainly are not. Many understandably feel insecure about their place in a creative universe increasingly overwhelmed by synthetic content, which, of course, was a large part of what the recent strikes were all about.

The major studios that finance much of that Hollywood talent and art — faced with mounting Wall Street pressure to transform their business models in a new tech-driven media world order — will begin to focus on generative AI to increase output and cut costs. Early experiments will include hyperautomation in visualization and initial uses of “Synthetic Performers.” Silicon Valley-based streamers like Netflix, with Big Tech DNA coursing through their veins, will lead the way.

So we largely know how the courts, regulators, major studios and streamers, and artists themselves will act in 2024 in a new world order of generative AI. Less known is how we consumers will react to, and support, art that kind of feels like the real thing but somehow misses the mark of authentic human connection.

For those of you interested in learning more, visit Peter’s firm Creative Media at creativemedia.biz and follow him on Twitter/X @pcsathy.
