The M&E industry’s exuberant embrace of AI continued to accelerate at the recent IBC trade show in Amsterdam as efforts to mitigate the technology’s downsides struggled to keep pace.
With AI emblazoned on exhibitor signage in all directions, it’s clear marketers believe highlighting the technology’s use in their solutions is a winning strategy. The scene prompted one vendor CEO to joke it might be smart to convert his own company’s high-profile AI messaging to mean “Actual Intelligence.”
At the production end of the M&E service pipeline, the industry’s massive shift to cloud-orchestrated workflows, spanning cloud-centric and hybrid implementations on both private and public resources, has unleashed an explosion of AI-assisted solutions designed to automate a big share of the workload. Vendors were also touting AI benefits in encoding, playout, distribution, advertising and UX personalization.
The generic AI labeling disguised the gap between uses that have been maturing over many years and those built on far less mature large language models (LLMs) supporting generative AI. With guardrails coming together slowly on a learning curve impeded by hype and unproven expectations, the odds of preventing unintended consequences remain long.
As Deloitte managing director Judah Libin notes in an email response to our queries, AI adoption, especially the use of LLM gen-AI tools that exploded into prominence two years ago, is proceeding at “a pace that far exceeds the speed at which our societies and regulatory frameworks can adapt, creating significant gaps in our understanding and governance.” Making matters worse, Libin adds, the technology poses “profound ethical and societal dilemmas, from the creation of deepfakes to the inherent biases in AI-generated outputs.”
New Developments Strengthen Case for AI
Before assessing the status of collective efforts to rein in AI, including a new initiative unveiled at IBC, it’s important to recognize some other developments that can be seen as having a mitigating impact. One of the more significant cases in point, which we explore in depth elsewhere, involves the emergence of neural processing units (NPUs) as AI-optimized chipsets. By supplementing CPUs and GPUs in network appliances and CPE, NPUs could offload core AI processing workloads while addressing privacy rules that can restrict use case development.
Notably, NPUs implanted in high-end set-top boxes (STBs) and broadband gateways offer a new perspective on how cable operators, telcos, DBS providers and even NextGen TV broadcasters with their ATSC 3.0 signal converters might battle smart TV OEMs and cloud-based hyperscalers for whole-home dominance. For example, Vantiva, a company that has operated under the radar since combining the home networking units of Technicolor and CommScope, introduced an NPU-driven, far-field voice-controlled (FFV) STB called ONYX, which officials said opens the door to next-gen AI-supported use cases.
These start with capabilities like identifying and locating specific events in video, sharpening resolution, reducing film grain and deepening content personalization, but they will expand in tandem with the development of home-oriented LLMs, according to Vantiva CTO Charles Cheevers. With immense AI processing power on board, he says, such devices could interact verbally and personally with everyone in the household, drawing on facial, voice and device recognition and compilations of past user experiences, while avoiding the privacy violations that come with shipping information to the cloud.
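Vantiva hasn’t published implementation details, but the on-device principle Cheevers describes can be sketched in a few lines. The fragment below is a minimal illustration rather than anyone’s shipping code: it uses TensorFlow Lite with an NPU delegate to compute a face embedding locally and match it against household profiles stored on the box. The model file and delegate library names are placeholders.

```python
# Minimal sketch of on-device identification; nothing leaves the home network.
# "face_embedding.tflite" and "libnpu_delegate.so" are hypothetical stand-ins
# for whatever model and NPU delegate a given STB vendor ships.
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="face_embedding.tflite",
    experimental_delegates=[load_delegate("libnpu_delegate.so")],  # NPU offload
)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def embed(frame: np.ndarray) -> np.ndarray:
    """Run the embedding model entirely on the set-top's NPU."""
    interpreter.set_tensor(inp["index"], frame[np.newaxis].astype(np.float32))
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])[0]

def identify(frame: np.ndarray, profiles: dict, threshold: float = 0.8):
    """Match a face embedding against locally stored household profiles."""
    e = embed(frame)
    scores = {name: float(np.dot(e, p) / (np.linalg.norm(e) * np.linalg.norm(p)))
              for name, p in profiles.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```

The point of the design is that camera frames, embeddings and profile data all stay on the device, sidestepping the cloud-privacy issues Cheevers flags.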
On the production processing side, no vendor has been more attuned to incorporating AI assistance into its tools over the past several years than Telestream. These efforts have led to a portfolio of products aimed at managing diverse formats for today’s complex distribution environment, ensuring compatibility across multiple platforms while meeting varying standards for quality and accessibility, notes Colleen Smith, Telestream’s senior vice president of product marketing and channel enablement.
Now, she says, AI is helping the company to add more automation to the processes in response to the industry’s accelerated production and distribution timelines. Specifically, Telestream’s latest AI augmentations add automation to workflow creation through its Vantage Workflow Designer, making it easier to scale its quality control solutions with high volumes of content across OTT and traditional TV distribution. The company has also used AI to infuse its Stanza captioning platform with instant speech-to-text generation in multilingual environments.
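Telestream hasn’t detailed Stanza’s internals, but the basic shape of multilingual speech-to-text captioning can be shown with the open-source Whisper model, used here purely as a generic stand-in:

```python
# Illustrative sketch using openai-whisper (pip install openai-whisper);
# the model auto-detects the spoken language, then emits timed segments
# that a captioning pipeline could format as caption cues.
import whisper

model = whisper.load_model("base")           # small multilingual model
result = model.transcribe("interview.mp3")   # hypothetical input file

print("Detected language:", result["language"])
for seg in result["segments"]:
    print(f"[{seg['start']:7.2f} -> {seg['end']:7.2f}] {seg['text'].strip()}")
```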
But there’s something else about Telestream’s embrace of AI that stands out. “We’re taking a cautious approach to AI,” Smith says. By sharing with customers the level-of-confidence percentages scored by all company solutions using AI, the company has imposed a form of self-governance that prevents it from going too far, she explains.
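Smith didn’t describe the mechanism in detail, but the pattern is straightforward to picture: every AI-assisted result carries a confidence score, and anything below a configurable floor is routed to a human instead of being auto-applied. A minimal sketch, with the threshold and result structure invented for illustration:

```python
# Hypothetical confidence gate: auto-apply only high-confidence AI output,
# escalating the rest for human review. Threshold and fields are illustrative.
from dataclasses import dataclass

@dataclass
class AIResult:
    payload: str        # e.g., a generated caption or a QC verdict
    confidence: float   # model-reported confidence, 0.0-1.0

def apply_or_escalate(result: AIResult, floor: float = 0.90) -> dict:
    action = "auto_apply" if result.confidence >= floor else "human_review"
    return {"action": action, "payload": result.payload,
            "confidence": result.confidence}

print(apply_or_escalate(AIResult("Caption: 'Top story tonight...'", 0.97)))
print(apply_or_escalate(AIResult("Caption: '[inaudible]'", 0.62)))
```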
Of course, there are many other issues beyond reliability, privacy and processing power consumption, not to mention the widely debated potential for job losses, that the industry has to deal with as AI saturates the ecosystem. For example, there’s a great-equalizer aspect to AI that threatens to undermine service providers’ ability to differentiate among suppliers, notes Juan Martin, CTO and co-founder of streaming platform provider Quickplay. With everybody conveying similar messages, it raises the question of “how all these organizations will survive,” Martin says.
“We’re a digital transportation company helping customers to build platforms and orchestrate solutions across a multiple partner ecosystem,” he adds. As a gateway to the streaming marketplace, the company’s success depends on its ability to “sift through the options for our customers.”
At IBC Quickplay announced it has positioned its cloud-native content management, processing orchestration, gen-AI tools, dynamic advertising and other capabilities for access in the Amazon Web Services (AWS) Marketplace to address the industry’s need for “the smartest, fastest, most effective ways to engage and monetize viewers.” By offering its open-architecture approach to orchestrating what it deems best-of-breed solutions from a bevy of partners, the company is helping customers to operate in the OTT market with suppliers that go beyond reliance on AI algorithms to build well-conceived solutions, Martin says.
The Daunting Realities
Just how bad is the performance gap between hype and reality in the M&E space? “One thing that drives me nuts is the amount of hype we’ve seen in the past two years,” says Yves Bergquist, director of the AI & Neuroscience in Media Project at the Entertainment Technology Center (ETC). “Keeping humans in the loop is extremely important.”
Bergquist is co-chair, with AMD fellow Fred Walls, of the task force on AI standards and media mounted by ETC and the Society of Motion Picture and Television Engineers (SMPTE). Earlier this year the task force produced what SMPTE president Renard Jenkins calls “the most comprehensive document looking at both the technical side as well as the impact and the ethical and responsibility areas of this particular technology.”
Participating in a recent webinar with Jenkins and Walls, Bergquist says research shows that the true capabilities of AI systems in the vast majority of cases “are a fraction of what they’re trying to advertise.” When it comes to getting to the truth of what can be done, “not enough people talk about how hard this is,” he adds.
Jenkins notes AI-assisted facial recognition is one example of where AI isn’t living up to a widely accepted 85% performance standard. “People go into using AI thinking it’s going to save us a lot of money,” Jenkins says. “Most of the time, the reason it fails to deliver is there isn’t enough time put into figuring out what it can really do.”
One of the most daunting tasks involves identifying the biases that are inevitably built into LLMs. Part of the challenge involves eliminating the most egregious biases, such as those introduced by the reference material used in facial recognition, but it’s also important to be transparent about unavoidable biases like those arising from cultural disparities.
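Transparency of that kind usually starts with measurement. As a synthetic illustration (the groups and outcomes below are invented), reporting a face matcher’s error rates per demographic group, rather than as a single aggregate, is one standard way to surface reference-material bias:

```python
# Synthetic example: per-group error rates for a face matcher.
# All group names and trial outcomes here are made up.
from collections import defaultdict

# (group, predicted_match, actual_match) outcomes from an evaluation set
trials = [
    ("group_a", True,  True), ("group_a", False, False),
    ("group_a", True,  False), ("group_b", True,  True),
    ("group_b", False, True), ("group_b", True,  False),
]

stats = defaultdict(lambda: {"fp": 0, "neg": 0, "fn": 0, "pos": 0})
for group, predicted, actual in trials:
    s = stats[group]
    if actual:
        s["pos"] += 1
        s["fn"] += predicted is False   # missed a true match
    else:
        s["neg"] += 1
        s["fp"] += predicted is True    # matched the wrong person

for group, s in sorted(stats.items()):
    fpr = s["fp"] / s["neg"] if s["neg"] else float("nan")
    fnr = s["fn"] / s["pos"] if s["pos"] else float("nan")
    print(f"{group}: false-match rate {fpr:.2f}, false-non-match rate {fnr:.2f}")
```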
The blending of ethical and performance issues in bias assessment is just one example of how ethics and performance are really two sides of the same coin, Bergquist notes. “I have yet to see a requirement related to ethical AI that isn’t also a requirement of rigorous AI practice,” he says.
This is reflected in the dozens of standards and policy framework initiatives, identified in the ETC/SMPTE task force report, now underway at ISO/IEC, IEEE, ITU, W3C and other organizations. It’s an impressive list, but there’s obviously a long way to go, especially when it comes to setting the ethical frameworks on which performance standards must be built.
The report spells out the challenge: “While stakeholders in the development of this plan expressed broad agreement that societal and ethical considerations must factor into AI standards, it is not clear how that should be done and whether there is yet sufficient scientific and technical basis to develop those standards provisions. Moreover, legal, societal, and ethical considerations should be considered by specialists trained in law and ethics.”
How fast the industry gets to real guardrails will depend largely on how big the perceived risk becomes, which “will help to drive decision making about the need for specific AI standards and standards-related tools.” How much comfort to take from the realization that the scarier AI gets, the more likely we are to act, is debatable.
But even that modicum of relief from anxiety is missing when it comes to the deepfake threat. Asked about progress toward tools capable of identifying professional-caliber deepfakes, Walls replies that while efforts to develop such tools abound, the pace of deepfake AI development is such that it “will be really hard to tell if something is a deepfake or not.”
It would be great if the industry could count on some help from new laws to battle the scourge. While there’s cause for hope in Europe, where the EU Parliament has passed the AI Act with final approval expected by year’s end, Congress has made little headway beyond holding some hearings and taking a preliminary look at a handful of bills.
Nobody is more focused on the need for legislative action than National Association of Broadcasters president and CEO Curtis LeGeyt, who has had a front-row seat at two Senate hearings on AI issues. During a January 10 appearance before the Senate Judiciary Committee’s Subcommittee on Privacy, Technology and the Law, LeGeyt highlighted examples of AI-related abuses in three major areas of broadcast industry concern: copyright infringement, misuse of AI-generated likenesses of radio and TV personalities to spread false information, and other uses of deepfakes that make it hard to distinguish truth from fiction.
“I have seen the harm inflicted on local broadcasters and our audiences by Big Tech giants whose market power allows them to monetize our local content while siphoning away local ad dollars,” LeGeyt says. “The sudden proliferation of generative AI tools risks exacerbating this harm. To address this, NAB is committed to protecting against the unauthorized use of broadcast content and preventing the misuse of our local personalities by generative AI technologies.”
Along with pursuing Congressional action, NAB is deeply engaged in fostering self-governance.
“Our technology team is working closely with these new innovations to equip local stations with the best tools to integrate into their operations,” LeGeyt says.
Of course, with whatever help NAB and the various standards organizations can provide in developing tools and standards, it’s up to broadcasters to execute, notes Deloitte’s Judah Libin. “The burden is on broadcasters to organize, monitor and regulate themselves, then focus on industry standardization,” he says. A “clear governance structure and ethical guidelines” are essential, as is “rigorous and continuous testing.”
The Drive Toward Collaboration Against AI’s Downsides
A big part of self-governance centers on broadcast newsrooms. Fostering such efforts has been a top priority at the Radio Television Digital News Association (RTDNA), the first national journalistic association to issue guidelines on news outlets’ use of AI, according to RTDNA president and CEO Dan Shelley.
Issued a year ago, the guidelines focus on how AI is used in newsgathering, editing and distribution, with attention to ensuring accuracy through contextual and source validation, avoiding violations of privacy and maintaining clarity when AI is used to modify content. The association says newsroom policies should also keep faith with audiences, informing them of AI usage and assuring them that journalists review everything for adherence to journalistic principles.
Things are moving in the right direction. “There are infrastructures in place with experts thinking hard and acting very carefully when it comes to testing and implementing AI technology in local newsrooms,” Shelley says.
But staffing up with AI specialists is just one of the labor-intensive aspects of keeping AI on track. Everyone involved in news broadcasting has a role to play, underscoring the fact that, as Shelley stresses, “no matter how good AI becomes, it will never replace human intellect and the sensibilities to produce the best results obtainable.”
Alarm over the deepfake challenge triggered an initiative earlier this year under IBC’s Accelerator program aimed at enlisting a worldwide coterie of broadcasters in the search for ways to prevent deepfakes from polluting the news stream. As of the September conference, the group reported significant headway, but officials make clear they’re hoping for broader participation.
“I don’t think there could be any more existential threat to media than we have in the form of misinformation, disinformation, manipulated images, fake images,” says Mark Smith, IBC Council chairman and head of the IBC Accelerator Program. “There’s a whole tsunami of content that’s coming at these trusted brands in our world of news broadcasters and news agencies.”
As Tim Forrest, content editor at U.K.-based Independent Television News, notes, “The fakes are getting better. The technology to make them is improving, too, and it’s getting easier and easier to use.” He says “a qualified guess” is that about 34 million deepfake images, videos and text messages spread across the globe daily.
The new effort to shore up the ability of legitimate newscasters to counter the scourge was the brainchild of consultant Allan McLennan, president and CEO of the PADEM Group, and
Anthony Guarino, until recently executive vice president of global production and studio technology at Paramount. As described by McLennan, the goal is to foster awareness of the deepfake threat and cooperation in doing something about it across the intensely competitive global news industry.
Some of the world’s biggest newscasters have joined the fight, including the Associated Press, CBS/Paramount, BBC, ITN, Globo, and many more, McLennan says. “We’re creating a path to sharing information and addressing disinformation together,” he explains. Along with promoting greater awareness of the threat, the participants are sharing results from their experiments with technologies that can be used to flag disinformation so that it doesn’t get into the news pipelines.
Committing time to do that is hard when everybody is competing to be first in breaking news, which is why finding technology that works quickly at scale is essential. Getting there will take years of working together to share and act on information about the mechanisms that can effectively establish the provenance of stories and detect deepfakes, McLennan says.
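McLennan’s group hasn’t settled on specific mechanisms, but the provenance idea can be illustrated in miniature: the originating newsroom signs a hash of the media file, and downstream outlets verify the signature before airing. Real deployments would use public-key credentials and standards such as C2PA; the shared-secret HMAC below just keeps the sketch self-contained.

```python
# Toy provenance check: sign a content hash at the source, verify downstream.
# A shared secret stands in for the key pairs a real system would use.
import hashlib
import hmac

NEWSROOM_KEY = b"demo-shared-secret"   # placeholder credential

def sign(media_bytes: bytes) -> str:
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(NEWSROOM_KEY, digest, hashlib.sha256).hexdigest()

def verify(media_bytes: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(media_bytes), signature)

clip = b"...raw video bytes..."
tag = sign(clip)
print(verify(clip, tag))          # True: provenance intact
print(verify(clip + b"x", tag))   # False: altered en route, flag it
```

The speed requirement McLennan cites is part of the appeal: a cryptographic check like this takes milliseconds, versus the open-ended cost of forensic deepfake analysis.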
In doing so, the goal isn’t to create credentials to identify what works but to make sure the word gets out when something does. “There’s plenty of technology at hand, and everyone with these technologies in different niches wants to own the categories,” McLennan says. “What this group is all about is getting everyone to be part of the effort and bring solutions that can be put to use.”
But, he adds, “It’s not just about technology. It’s about the broadcast industry recognizing this is an issue that needs to be looked at with an immense commitment to collaboration.”