The changes sweeping M&E industry approaches to content production have brought producers to a mission-defining crossroads: either they act to eliminate the impediments to efficiency and innovation imposed by distance, or they suffer the consequences of restraining the full power of dispersed collaboration across their workflows.
At this point, wide-scale adoption of the cost-saving efficiencies unleashed by cloud-based workflow convergence and studio virtualization has gone a long way toward eliminating reliance on traditional purpose-built, location-defined production environments. Applications of AI, video game rendering technology, a new generation of cameras, and much else have made this transformation possible.
How these technologies are used varies greatly between sports, news and other live content productions and operations devoted to producing movies and episodic programming for TV and streaming outlets. But all types of production stand to gain from network connectivity that allows video content to be shared in real time no matter how far apart participants in any given workflow might be.
In a previous article we focused on how real-time streaming on backend connections between venues and studios leads to major savings and improved user experience in sports production. Here we’re taking a broader look at what real-time video connectivity means to all types of production in the context of the remarkable transformations now underway as these advanced technologies take hold.
The Shift to Hybrid Approaches to Virtual Production
Specifically, the emergence of hybrid approaches to utilizing virtual production (VP) technologies has freed producers from locational lock-in. “Virtual Production is an ever-evolving solution,” says Jaime Raffone, senior manager of cinematic production at Sony Electronics (SE). “The move to hybrid and immersive technologies has been extremely helpful for us.”
Until last year, SE’s play in VP was focused on versions of what’s known as On-Set VP (OSVP), featuring massively scalable LED panels that can be seamlessly stacked to create video-generated environments encompassing floors, walls and ceilings. These are used to build giant volume facilities like the Sony Innovation Studios’ Stage 7 at Sony Pictures Entertainment’s production lot in Culver City, CA.
Volumetric studios cost a lot, but they can save producers a lot of money by eliminating the costs of on-location shoots, notes camera and lens expert Gary Adcock, a leading voice in next-gen VP production. “Our rough planning is every three meters of wall has its own computer rendering system,” Adcock explains. “When you start getting into 100-meter walls and every three meters you have to do that, all of a sudden you have a tremendously expensive outlay for computer systems.”
Those devices can run $25,000 to $40,000 or more each, he notes. And then there’s the cost of licensing and running in-camera visual effects (ICVFX) software such as the commonly used Epic Games-supplied Unreal Engine, which combines and compresses the moment-to-moment, scene-specific raw CGI and camera-captured video footage, unifies frame rates, and parses it all out for synchronized rendering on each wall segment.
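To put those numbers in perspective, here is a rough back-of-envelope sketch that scales the per-node cost across a wall; the $25,000 to $40,000 per node and the three-meter spacing are the figures Adcock cites, while the wall length and the code itself are purely illustrative.

```typescript
// Back-of-envelope outlay for LED-wall render nodes, using the figures cited above:
// roughly one rendering system per three meters of wall at $25,000 to $40,000 per node.
// The 100-meter wall matches Adcock's example; all other choices are illustrative.

function renderNodeOutlay(
  wallLengthMeters: number,
  metersPerNode = 3,
  costPerNodeUSD: [number, number] = [25_000, 40_000]
): [number, number] {
  const nodes = Math.ceil(wallLengthMeters / metersPerNode);
  return [nodes * costPerNodeUSD[0], nodes * costPerNodeUSD[1]];
}

const [low, high] = renderNodeOutlay(100); // 34 nodes
console.log(`Render-node outlay: $${low.toLocaleString()} to $${high.toLocaleString()}`);
// Roughly $850,000 to $1,360,000 before LED panels, licensing, networking and the rest.
```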
But such costs pale next to what a location-based shoot can cost, especially when there’s no viewer-discernible difference between the real and virtual versions. For example, Adcock notes, no one can tell there are no real cars involved in the climactic car chase at the end of The Batman, released in 2022. “We’ve gotten to the point where we can mimic the reality of the world in a way that’s necessary for the types of things that are done for filming and television,” he says.
But, for all the gains, the next-gen approach to VP can still be a hard sell when it comes to convincing project managers to relinquish heavy reliance on postproduction in favor of spending upfront on expertise and technology that significantly reduces the need for location-based production. Adcock says it’s often hard for people to embrace the idea of basing production on a roadmap that front-loads preparations for dealing with all the nuances of visual effects, coloration, lighting and the rest that have traditionally been reserved for post.
“You have to think about what the end point is and work backwards to make sure what you’re setting up in VP will take you there,” he explains. “For people who have spent a long time in film, that can be a hard transition to make.”
But calculations as to inconvenience and steep learning curves give way to new thinking when a hybrid approach brings VP elements together with real props and backgrounds to enable much greater flexibility in the use of studio spaces while still minimizing the need for on-location shooting. Thanks to a spate of ground-breaking innovations that have made hybrid use-case flexibility a reality, VP in all its permutations is now seen as an essential tool that can contribute time- and cost-saving efficiency to just about any project.
LED wall building blocks linked to camera-to-computer rendering “cabinets” can be positioned to provide backdrops or stacked to support virtually any volume space. Workflow management systems harmonize these components with other dynamic elements across the physical and virtual spaces, including robotic cameras, 3D extended reality (XR) objects, lighting that illuminates the physical space in sync with on-screen luminosity, and much else.
New Hybrid VP Product Strategies
“As effects guys and directors are pulled in, they start to realize they can really improve the entire workflow,” says Bob Caniglia, director of sales operations for the Americas at Blackmagic Design. He notes that while some directors resist the changes in production planning and execution required by reliance on LED volumes and in-camera visual effects (ICVFX), an approach to REMI (remote integration model) production that embellishes camera feeds with visual effects directly, many more want to benefit from things like providing “actors on set with live environments” and doing “more in shorter periods of time.”
The rise in VP usage is having a big impact on Blackmagic’s output of cameras, switchers, editing tools and other products as it meets demand for things like the locational flexibility enabled by cloud-based workflows and the LED wall display quality stemming from use of 12K and, with a forthcoming release, 16K cameras. The trendline in VP is “definitely moving upwards,” Caniglia says.
SE’s Raffone, too, says the shift in perspectives on VP has been a big force behind new product development at her company. Notably, in late 2023 the company responded to customer demand for LED solutions that aren’t fixed components of big studios by introducing the Verona line of portable LED displays, which employ the ground-breaking Crystal LED technology Sony created to achieve life-like realism through independent illumination of each pixel.
Over the past year Verona displays have been deployed in multiple broadcast, cinema and VP educational projects, Raffone notes. New VP engagements in the broadcast industry include hybrid applications of Verona displays in live sports programming at WWE facilities in Stamford, CT and at the studios of another major sports broadcaster she declined to name.
SE has also strengthened its position in the hybrid VP space with a Virtual Production Systems toolset that supports ICVFX while utilizing the Epic Games Unreal Engine multi-display rendering platform to meld the real and virtual domains and to cut production time through virtualized renderings of camera settings ahead of actual production. Integration partnerships with Brompton Technology and Megapixel ensure out-of-the-box compatibility between Verona LED displays and leading production controllers.
The upshot is a hybrid-optimized VP portfolio that’s gaining traction worldwide across the company’s traditional market base as well as in new areas of business, education and government. “We’ve come a long way,” she says.
The Growing Role of Extended Reality Technology in VP
Another element of VP is the use of XR technology to project 3D objects and CGI-generated virtual surfaces into live broadcast spaces. This has allowed broadcasters to add compelling graphics dynamism and personalized viewing experiences to internet-streamed 2D renderings of news, sports, and other live broadcasts that are reaching viewers worldwide.
The broadcast industry’s embrace of volumetric technology in traditional 2D program production is a welcome development for a company like Arcturus Studios, which since its founding eight years ago has amassed a full suite of solutions, including a creative platform acquired from Microsoft, designed to provide end-to-end support for building and streaming immersive 3D user experiences. “Humans by nature see things in 3D, so being able to use volumetric technology to more concisely convey the data producers are pulling off the field is top of mind,” says Mark Gerberman, vice president of sales at Arcturus.
With the amassing of ever more data that can be used to enrich viewing experiences, broadcasters need to make it easier for consumers to absorb and understand all that information.
The results of such efforts are on view just about everywhere. Across the globe consumers have grown accustomed to seeing 3D data displays generating stats in sync with the narratives of on-air personalities broadcasting from studios that are often embellished with virtual furnishings and backdrops.
Now, in the hybrid VP space, innovations are making it possible to blend the use of LED walls with real elements and people mixed with 3D objects. AI is playing a big role in getting such mixes of VP technologies to acceptable levels of verisimilitude.
A case in point is the graphics rendering capability Vizrt has introduced into the VP space to address one of the most vexing issues with ICVFX, namely, enabling realistic immersion of on-screen personalities into the virtual environment. As explained by Ray Ratliff, evangelist for the Vizrt XR-related products that have helped propel the company to the forefront of next-gen VP, the latest version of the Viz Engine real-time graphics platform incorporates a new AI-driven function called Reality Connect into its workflows.
The solution utilizes AI algorithms in conjunction with continuously updated 3D models of people on set, enabled by Viz Engine’s integration with Unreal Engine. The technique delivers “significant improvements in reflections and shadow in the virtual environment,” Ratliff says.
AI is providing many other ways to save staff time and reduce the level of expertise required to execute tasks. For example, Vizrt is using the technology to track objects in a scene, obviating the need to implement multi-layer tracking systems. Ratliff also points to the recently introduced Adaptive Graphics tool, which eliminates the need to manually configure graphics for each type of display used in a production by allowing designers to build a graphic once and set parameters for automatic adaptation to different displays.
Ross Video is another major player benefitting from the hybrid VP wave. “Control and integration across all elements is probably our forte and strong suit,” says Mike Paquin, senior product manager for virtual solutions at Ross Video. He says Ross is seeing widespread use of LED walls in VP, but primarily in hybrid situations where broadcast productions can take place on stages without a VP component or be augmented with LED and/or XR elements to whatever degree is warranted in a given situation.
Paquin notes a big contributor to this versatility is Ross’s support for rendering graphic elements in whatever dimensions fit a given scenario. The company’s XPression Tessera platform employs a distributed workflow system that allows users to configure graphic elements for pixel-accurate rendering across multiple display environments with a dynamism that enables real-time changes in content in tandem with live events.
Such flexibility has created a multi-use studio environment that’s saving broadcasters a ton of money. “Building three or four studios for a TV station doesn’t work anymore,” he says, noting that many Ross customers, including smaller stations that “see what their network parents are doing,” are in the process of refreshing their studio environments. “Virtual is a part of almost every one of them,” he says.
Toward that end, at trade shows this year Ross is demonstrating support for a flexible workspace where a hard set and broadcast desk with display wall can become a weather reporting studio with the kind of green wall functionality weathercasters are accustomed to and then be turned instantly into a news or sports reporting venue with LED wall support for rendering the setting and attendant graphics. “Instead of needing one studio for the news set, another for sports, separate ones you have to set up for special events, you can flip a whole set from one use to another during a commercial break,” Paquin says.
Hybrid VP on Wheels
One of the more dramatic manifestations of the new locational flexibility enabled by hybrid applications of VP was recently on display in the middle of nowhere at Disney’s Golden Oak Ranch filming lot, a sprawling property north of Los Angeles where Brian Nowac, founder and CEO of MagicBox, was on hand for production of a series of Busch baked bean commercials. While the setting was appropriate for shooting the outdoor scenes, the rest of the filming was taking place in a tractor trailer outfitted by MagicBox as a complete self-contained LED studio.
Nowac describes the rapidly expanding role the uniquely equipped MagicBox trailers have been playing up and down the West Coast and soon will be playing elsewhere in hybrid M&E productions. “We’re building a fleet of super studio products we’ll be deploying all over the country,” he says.
The MagicBox trailers, measuring 52 feet long, 32 feet wide, and 13½ feet high, can be used with a 10-foot-high LED volume occupying 600 square feet of floor space or with individual LED walls. A motorized turntable floor makes it possible to film multidirectional car scenes, eliminating the need to shoot from a moving vehicle.
Nowac notes a recent Dodge Hornet commercial with a child version of an adult passenger at the wheel was shot entirely inside the MagicBox, which would have been impossible with a child driving on city streets. Another in-car shoot involving a woman and her dog in a Subaru commercial goes beyond what’s doable in the real world.
Along with the convenience of bringing LED studios to virtually any location, Nowac notes that MagicBox is making it possible for producers to save a lot of money by having another production space at hand when set-changing and other main studio downtimes would leave high-paid actors and staff standing idly by. “Actors can walk right outside a sound stage into the MagicBox studio and do alternate takes or more of their key lines using the exact same lighting and other aspects as they’re using in the big studio,” Nowac says.
PTZ Cameras Enhance Remote Video Versatility
Advances in cameras, too, are playing a big role in the production revolution, especially when it comes to the versatility and remote-control capabilities enabled by professional-grade PTZ (pan, tilt, zoom) cameras. They can be used for just about any purpose anywhere, from playing a complementary role with traditional cameras in live sports and other big event coverage to serving as the primary video sources in news reporting and in-studio camera work.
High-quality PTZ cameras can be controlled from a central location, eliminating the need for on-site camera operators and reducing travel and staffing costs. This allows production teams to cover multiple locations or events simultaneously, increasing efficiency and productivity.
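For a concrete, if simplified, sense of what centralized control looks like, the sketch below sends a single pan command to a camera using the VISCA-over-IP convention supported by many professional PTZ models; the address, port and speed values are placeholders rather than details from any product discussed here, and real deployments normally route such commands through a camera-control panel or software rather than raw packets.

```typescript
import dgram from "node:dgram";

// Minimal sketch: nudge a remote PTZ camera left using a VISCA-over-IP command.
// Assumes a camera that accepts Sony-style VISCA over UDP (commonly port 52381);
// the address and speed values below are placeholders, not figures from the article.

const CAMERA_ADDR = "192.0.2.10"; // example/documentation address
const VISCA_PORT = 52381;

// VISCA Pan/Tilt Drive: 81 01 06 01 <panSpeed> <tiltSpeed> <panDir> <tiltDir> FF
// panDir 0x01 = left, tiltDir 0x03 = no tilt movement
const command = Buffer.from([0x81, 0x01, 0x06, 0x01, 0x08, 0x08, 0x01, 0x03, 0xff]);

// VISCA-over-IP wraps the command in an 8-byte header:
// 2-byte payload type (0x0100 = VISCA command), 2-byte length, 4-byte sequence number.
const header = Buffer.alloc(8);
header.writeUInt16BE(0x0100, 0);
header.writeUInt16BE(command.length, 2);
header.writeUInt32BE(1, 4);

const socket = dgram.createSocket("udp4");
socket.send(Buffer.concat([header, command]), VISCA_PORT, CAMERA_ADDR, (err) => {
  if (err) console.error("send failed:", err);
  socket.close();
});
```

Because the same command path can be tunneled over wide-area links, a single operator in a central control room can steer cameras at multiple venues in this fashion.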
Moreover, the integration of PTZ cameras into virtual set environments enables the creation of immersive and interactive experiences for viewers. With precise camera movements and seamless integration with virtual set technology, these cameras can be used to create dynamic and engaging content that would be difficult or impossible to achieve with traditional camera setups.
The Inevitability of Real-Time Connectivity in Production
It remains to be seen how quickly the transformations in approaches to M&E production enabled by all the advances discussed here lead to even greater freedom from distance restrictions through broader adoption of real-time streaming. But it seems safe to say that the benefits to be attained by supporting simultaneous collaboration on video workflows across dispersed locations have brought the industry to a tipping point in that direction.
The possibilities are endless when remote collaboration on video production is supported by virtually latency-free transfers of video, graphics and other assets among any number of workflow participants wherever they might be. As reported in our overview of real-time streaming platform providers, there are many browser-supported options available to put everyone involved in a project in the same real-time temporal space, where shared experiences can be synchronized with access to transferred assets from any source at end-to-end latencies no greater than, and often lower than, 500 ms.
Of course, there have long been real-time video transport options that require use of purpose-built appliances at the end points. But these hard-to-scale platforms are ill-suited to maintaining the anyone-anywhere-anytime connection flexibility that comes with implementing real-time interactive streaming (RTIS) platforms running on cloud computing systems.
Currently, most of the latter are WebRTC based, which avoids the need for special hardware or even software plug-ins on end devices. But, as we’ve also reported, other options are emerging that are more compatible with existing streaming infrastructure. The likelihood is that the need for real-time connectivity in the new production environment will be met at first by the WebRTC platforms, with the other solutions kicking in to support more ubiquitous reliance on RTIS over time.
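To illustrate just how little the browser-based approach asks of end devices, here is a minimal client-side sketch of publishing a camera feed over WebRTC using only standard browser APIs; the signaling endpoint is hypothetical, since each platform defines its own offer/answer exchange, and the WHIP-style HTTP POST shown here is one common pattern rather than any specific vendor's API.

```typescript
// Minimal browser-side sketch: publish a local camera feed over WebRTC.
// No purpose-built appliance, plug-in or install required, just standard APIs.
// The signaling URL is a placeholder; real platforms define their own exchange.

async function publishCamera(signalingUrl: string): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection();

  // Capture the local camera and microphone and attach the tracks.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Create an SDP offer and hand it to the platform's signaling endpoint.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  const response = await fetch(signalingUrl, {
    method: "POST",
    headers: { "Content-Type": "application/sdp" },
    body: offer.sdp ?? "",
  });

  // Apply the platform's answer to complete the peer connection.
  await pc.setRemoteDescription({ type: "answer", sdp: await response.text() });
  return pc;
}

// Hypothetical usage:
// publishCamera("https://rtis-platform.example/whip").catch(console.error);
```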