REAL TIME INTERACTIVE STREAMING






As Live Content Multiviewing Finally Gets Hot, Four New Approaches Vie to Be Game Changers


Highly Divergent Solutions Overcome Current Limitations


By Fred Dawson


Tiledmedia Multiview


Accelerating demand for multiviewing technology supporting single-screen displays of several channel or camera angle options has brought new solutions to market that surpass the capabilities typical of current applications in sports, news and other live TV and streaming productions.

While advancements in multiviewing introduced over the past couple of years by YouTube TV, Fubo, Roku, and major sports leagues like MLB and NBA accord users varying degrees of freedom to choose sports and other live channels they want to watch simultaneously on a single screen, they all limit the mix to no more than four streams. In contrast, at least four suppliers – MediaKind, Tiledmedia, Red5 and Eluvio – are taking highly divergent approaches to blowing out those restrictions.

Decades after multiviewing solutions were first broached in the cable TV business, the need to draw and retain audiences in an overcrowded marketplace has finally pushed multiviewing to the front burner, backed by overwhelming evidence that people want such capabilities. In a 2023 survey of 3,000 U.S. sports fans aged 14 and over, Deloitte found strong demand for more immersive streamed sports viewing experiences, with over a third of respondents across all ages ranking control over camera angles as one of the two most sought-after features, along with more advanced replay controls like slow-motion activation. An earlier Verizon Media-commissioned survey of 5,000 sports fans in the U.S. and four European countries produced almost identical results.

Moreover, such enhancements have become especially significant to engaging Generation Z users, whose comparatively lower interest in mainstream sports has become a major concern to rights holders. The question is, how far do producers want to go toward adoption of platforms that go beyond what we’ve been seeing so far?

Getting Beyond the Status Quo

The limitations typically imposed by solutions that rely on the dominant HLS and MPEG-DASH streaming modes are two-fold. First, the viewing options must be streamed at full resolution over the access bandwidth available to users, which means that once you get to four 4K UHD live sports streams, even with High Efficiency Video Coding (HEVC, or H.265), you’re consuming 100-120 megabits per second per user. Second, device support for decoding multiple channels is limited, and each channel may require different client software matched to its decoding requirements.
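
To put rough numbers on the first constraint, the sketch below (TypeScript, with assumed per-stream bitrates of 25-30 Mbps for 4K HEVC live sports) shows how quickly the aggregate adds up when every option is delivered as an independent full-resolution stream.

```typescript
// Rough aggregate-bandwidth estimate for client-side multiview built from
// independent full-resolution streams. Bitrates are illustrative assumptions,
// not measured figures: a 4K HEVC live sports stream is commonly encoded
// somewhere in the 25-30 Mbps range.
const PER_STREAM_MBPS_4K_HEVC = { low: 25, high: 30 };

function aggregateMbps(streamCount: number): { low: number; high: number } {
  return {
    low: streamCount * PER_STREAM_MBPS_4K_HEVC.low,
    high: streamCount * PER_STREAM_MBPS_4K_HEVC.high,
  };
}

// Four simultaneous 4K feeds land in the 100-120 Mbps range cited above.
console.log(aggregateMbps(4)); // { low: 100, high: 120 }
```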

The choice of approaches to multiviewing taken by any given service provider heavily depends on whether the primary goal is to support users’ ability to watch more than one live program at the same time on a single screen or, more comprehensively, to also ensure that viewers can access multiple camera feeds from individual sporting events. When the latter is a priority, unless a provider is satisfied with limiting the camera feed options to four, the mainstream approaches currently in play won’t do.

As Tiledmedia co-founder and chief business officer Rob Koenen observes, the need for multiviewing of a single sports event isn’t limited to just offering different viewing angles on a team sporting event like football or baseball. “The most obvious use case is where you have a sport that has things going on in multiple areas,” he says, citing examples like individual driver perspectives in a race, multi-hole competition in golf and cycling competitions. “There are multi-game days like you have with a tennis tournament with multiple matches going on at the same time,” he adds.

MediaKind, Tiledmedia, Eluvio and Red5 all get around the current limitations with solutions operating at varying degrees of departure from conventional approaches. MediaKind’s and Tiledmedia’s solutions are the least disruptive to the usual way of doing things, with MediaKind leveraging cloud processing techniques supplied by Skreens Technology to expand the multiviewing experience and Tiledmedia taking a client-side approach that uses a single player to clear the bandwidth hurdle and handle multi-stream decoding.

Eluvio’s support for multiviewing is intrinsic to the capabilities achieved with its Web3 blockchain-based Content Fabric platform, which supplants the multi-siloed software stacks comprising encoding, transcoding, packaging, DRM implementation, and feature enhancements used with conventional Hypertext Transfer Protocol (HTTP) streaming platforms. Red5, avoiding reliance on HTTP-based streaming, achieves support for unlimited multiviewing options and high levels of use-case flexibility in both distribution and backend workflows related to live production and surveillance through orchestration of WebRTC-based streaming on its Experience Delivery Network (XDN) platform.

MediaKind Delivers What Comcast and Other SPs Are Looking For

The MediaKind Multiview solution, adopted by Comcast for use with its Xfinity platform, boasts the biggest publicized win so far among these four. By making Multiview a component of the MK.IO cloud SaaS platform, MediaKind makes it easy for any MK.IO customer to implement the solution, which consolidates up to 16 individual streams in the cloud for delivery over a single TV channel or video stream, says MediaKind CTO Cory Zachman (see video of interview with Zachman at the 2025 NAB Show).

Audiences can toggle between multiple camera angles from a single event or programming from multiple channels, he continues, adding that, so far, customers are having a “ton of success” offering mixes of four or fewer feeds from different sports events. “It works on any device you have from old set-top boxes to new connected TVs” without requiring additional bandwidth or any reduction in resolution or frame rates, he says.

With the aid of the Skreens cloud processing platform, MediaKind Multiview can pull together into a single stream configurations of two side-by-side, equally sized viewing options; three options, with one consuming most of the space and the other two occupying smaller areas; or blocks of four equally sized displays, which can be sent singly to fill the entire screen or in multiple blocks to accommodate up to 16 options. Viewers can shift moment to moment to the audio feed from any video in the assembly and expand any video to full screen. They personalize how the options are arranged on their screen by picking from whatever collection of channels or camera angles the service provider makes available for the multiviewing experiences it wants to support.
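
The layout families Zachman describes can be thought of as a small set of composition templates applied server side. The sketch below is purely illustrative, using hypothetical type names rather than MediaKind’s actual configuration format.

```typescript
// Hypothetical description of the server-side composition templates described
// above: two equal panes, one large pane plus two small ones, or 2x2 blocks of
// equal panes that can be tiled up to 16 sources. Not MediaKind's API.
type PaneSource = { channelId: string; audioSelected?: boolean };

type MultiviewLayout =
  | { kind: "two-up"; panes: [PaneSource, PaneSource] }
  | { kind: "one-plus-two"; featured: PaneSource; side: [PaneSource, PaneSource] }
  | { kind: "quad-grid"; blocks: PaneSource[][] }; // each block holds 4 equal panes; up to 4 blocks = 16 sources

// Example: one featured game with two smaller side panes, audio following the featured feed.
const sundayLayout: MultiviewLayout = {
  kind: "one-plus-two",
  featured: { channelId: "broncos-game", audioSelected: true },
  side: [{ channelId: "cowboys-game" }, { channelId: "49ers-game" }],
};
```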

“Although on the client device, it might look like there’s multiple independent videos playing, it’s really just a composition of all those videos done on the server,” Zachman says. “So it requires the same amount of decoding capability and same amount of bandwidth for the end consumer [that’s required with a single channel].”

All responses to user actions are instant, starting with the choices they make for how Multiview options are displayed on their screens at any point in a viewing session. To make these immediate set-ups possible, the Multiview platform ensures all possible configurations are available to users, Zachman says.

“Because our software runs in the cloud, we’re able to dynamically spin up hundreds of these streams at the same time, and we pick the most probable combinations of video and make those available as server-side composed Multiviews,” he explains. “So if you think there might be four football games on at the same time, we would create a lot of different configurations of those four football games.

“You might have a really big view of the Denver Broncos and then two smaller views of maybe the Dallas Cowboys and the San Francisco 49ers. But then, if there was a fan that was more of a Dallas Cowboys fan, we would have a second rendition with Dallas Cowboys as the main featured game and then the two smaller views.”

“That wouldn’t be possible with our software running on prem, unless you had procured a lot of extra hardware,” Zachman adds. “But with it running in the cloud, we’re able to spin up those resources just for the six or eight hours on Sunday and then shut them down when we’ve completed the event.”
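
One way to picture that pre-composition step is as a simple enumeration of featured-game permutations ahead of the event window. The sketch below is a hypothetical illustration of the idea, not MediaKind’s scheduling logic.

```typescript
// Hypothetical sketch of pre-building "most probable" server-side renditions:
// for a slate of concurrent games, generate one one-plus-two composition per
// possible featured game so any fan's preferred layout already exists.
function probableRenditions(games: string[]): { featured: string; side: string[] }[] {
  return games.map((featured) => ({
    featured,
    side: games.filter((g) => g !== featured).slice(0, 2), // two smaller panes
  }));
}

console.log(probableRenditions(["Broncos", "Cowboys", "49ers", "Chiefs"]));
// Each entry would be spun up as its own composed stream for the game window,
// then torn down when the event ends.
```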

Comcast’s use of the MediaKind Multiview platform began largely focused on college football lineups on Saturdays and NFL games on Sundays and was expanded in March to include the NCAA men’s and women’s basketball championships. But the company also used the app to give viewers the opportunity to watch two channels at once on election day in November and is now considering adding news and other programming besides sports to the Multiview list, Zachman says. So far, all customers, including Comcast, have limited the simultaneous viewing options to a maximum of four and have not included multiple camera feeds from a single event in their strategies, he adds.

Formula 1 TV Shows What Can Be Done with Tiledmedia’s Solution

A very different use-case pattern has emerged so far with implementations of the Tiledmedia Multiview platform. “We recently launched with a service provider that has 24 feeds, and they’re all available on your device,” says Rob Koenen.

“You can see them all playing out in parallel,” he adds. “You can choose one or two [for large-screen display] and you can switch between them instantly. You can do a picture-in-picture; you can drag it around on your screen. That’s what the user experience looks like” (see interview video from NAB Show).

Though not officially announced, the widely acknowledged but unnamed customer referenced by Koenen is Formula 1’s F1 TV service, which in March introduced a premium tier with multiviewing initially available on Apple iOS devices, the Chrome web browser, and tvOS-compatible devices. With productions worldwide featuring races over two or three days and typically involving 20 drivers, with ten teams fielding two cars each, F1 TV Premium allows viewers to switch back and forth across a field of thumbnail views to instantly call up full-screen 4K renderings of camera feeds from vehicles and pitstops.

Koenen says the application’s secret sauce is embodied in its multi-stream player. “It knows everything,” he says. “It knows the network. It has one ABR management engine. It has one engine that manages all the buffers. It has one engine that makes coherent decisions across all feeds.” This contrasts with a situation where, to support simultaneous delivery of four different services, “you have four ABR management engines that start to compete for bandwidth and that start to get out of sync.”
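
The advantage Koenen describes can be illustrated with a toy allocator that splits one measured bandwidth budget across all active feeds, prioritizing the focused view. The heuristic below is illustrative only and is not Tiledmedia’s algorithm.

```typescript
// Conceptual sketch (not Tiledmedia's algorithm): one ABR engine that splits a
// single measured bandwidth budget across all active feeds, giving the focused
// feed priority and the thumbnails whatever headroom remains.
interface Feed { id: string; focused: boolean; ladderKbps: number[] } // available bitrates, ascending

function allocate(feeds: Feed[], budgetKbps: number): Map<string, number> {
  const picks = new Map<string, number>();
  // Start every feed at its lowest rung so nothing stalls.
  let spent = 0;
  for (const f of feeds) {
    picks.set(f.id, f.ladderKbps[0]);
    spent += f.ladderKbps[0];
  }
  // Spend the remaining budget on the focused feed first, then the rest.
  const ordered = [...feeds].sort((a, b) => Number(b.focused) - Number(a.focused));
  for (const f of ordered) {
    for (const rung of f.ladderKbps) {
      const current = picks.get(f.id)!;
      if (rung > current && spent + (rung - current) <= budgetKbps) {
        spent += rung - current;
        picks.set(f.id, rung);
      }
    }
  }
  return picks;
}

const feeds: Feed[] = [
  { id: "main", focused: true, ladderKbps: [1500, 4000, 8000, 16000] },
  { id: "thumb-1", focused: false, ladderKbps: [300, 600] },
  { id: "thumb-2", focused: false, ladderKbps: [300, 600] },
];
console.log(allocate(feeds, 18000)); // main climbs the ladder; thumbnails stay modest
```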

The fact that clicking on any thumbnail, no matter how many are shown, instantly converts the view to a full-screen display stems from the company’s use of its namesake tiling technology, which leverages the tiling mechanisms built into the HEVC standard. “There’s no server magic that needs to happen,” Koenen says.

Instead, the player quickly builds the full-screen display from the available feeds created in the compression process underlying each of the thumbnails. This starts with a set of pixels most essential to rendering primary elements in the viewscape and continues by adding more in microseconds to reach the full 4K resolution.
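
Conceptually, the client is deciding which tiles it needs from which feed at which resolution. The simplified planner below illustrates only that selection step and leaves out the actual HEVC bitstream handling, which is where Tiledmedia’s real work happens.

```typescript
// Simplified planner for tile-based composition (conceptual only; the real work
// of combining HEVC tile bitstreams into one decodable picture is not shown).
// The idea: the focused feed contributes its full-resolution tiles, while every
// other feed contributes only the low-resolution tiles needed for a thumbnail,
// so a single decoder instance handles the whole mosaic.
interface TileRequest { feedId: string; resolution: "full" | "thumbnail"; tileIds: number[] }

function planTiles(feedIds: string[], focusedId: string, tilesPerFullFrame = 16): TileRequest[] {
  return feedIds.map((feedId) =>
    feedId === focusedId
      ? { feedId, resolution: "full", tileIds: Array.from({ length: tilesPerFullFrame }, (_, i) => i) }
      : { feedId, resolution: "thumbnail", tileIds: [0] } // one low-res tile is enough for a thumbnail
  );
}

console.log(planTiles(["car-01", "car-44", "pit-lane", "broadcast"], "car-44"));
```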

Koenen says the Tiledmedia player is very lightweight, enabling its use on just about any device. “We’ve done this on down-market Android phones and all the iPhones,” he says. “We do it on Android TVs, Apple TVs and web players.” Now that the platform is in commercial operations, “we’re getting a lot of interest, because it turns heads,” he adds.

Multiviewing Is Easy to Implement with Eluvio’s Silo-Busting Approach to Streaming

Eluvio hasn’t talked much about its multiviewing support, other than to mention parenthetically in its latest press packages that “multi-view” is one of the “Advanced Viewing Features” enabled by its Content Fabric technology. That’s understandable given that Eluvio is focused on conveying a myriad of benefits that it touts as reasons for converting to streaming on its Content Fabric platform, which, while compatible with HTTP-based streaming, represents a departure from conventional approaches to architecting streaming services.

A growing coterie of customers ranging from big-name M&E brands like MGM Studios, Fox, Microsoft, Paramount Home Entertainment, Sony Pictures and Warner Bros. Home Entertainment to independent filmmakers and off-shore interests like Telstra Broadband Services and the European Professional Club Rugby (EPCR) league’s EPCR-TV attest to the appeal of Eluvio’s break with prevailing streaming architectures.

As explained by Eluvio CEO and co-founder Michelle Munson, the Content Fabric software stack exploits the expanded computational power of microprocessors in conjunction with use of machine learning (ML) and blockchain technology to eliminate the need to format multiple versions of content files for multiple distribution scenarios from different transcoding and packaging centers across multiple CDNs.

Eluvio-developed applications of ML make it possible to identify, assemble and route all the content components on the fly. “We have built a native model into the fabric that allows for tagging all of the video content in multiple dimensions,” Munson says. Tagging based on object, action, people, place and key topic is derived from the ability of ML algorithms to understand all those things as they relate to each video.
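
Those multi-dimensional tags can be pictured as a simple record per video segment. The field names below are assumptions made for illustration, not Eluvio’s data model.

```typescript
// Illustrative shape for multi-dimensional, ML-derived tags on a video segment
// (object, action, people, place, key topic). Field names are assumptions for
// the sake of the example, not Eluvio's actual schema.
interface SegmentTags {
  startSec: number;
  endSec: number;
  objects: string[];   // e.g. "rugby ball", "goal post"
  actions: string[];   // e.g. "try scored"
  people: string[];    // recognized players or presenters
  places: string[];    // stadium, city
  topics: string[];    // key narrative topics
}

const exampleTag: SegmentTags = {
  startSec: 312, endSec: 318,
  objects: ["rugby ball"], actions: ["try scored"],
  people: ["unidentified player"], places: ["stadium"], topics: ["second-half comeback"],
};
```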

The Fabric software stack consists of data, code and contract layers and a portfolio of APIs that enable use of additional tools for execution of functions ancillary to primary functionality in these layers. The data layer consists of objects or data representations of the media that comes into the Fabric, which can be assembled from multiple locations.

“When a request for media is made by a client, instead of having prebuilt variants in various file forms pushed out preloaded into caches and serving out based on that top-down approach, this componentized system builds and creates and serves the output that particular user has requested,” Munson says. “If what’s being requested isn’t already made and available on the nodes serving the particular client, the parts are found, fetched, transcoded, packaged and served out.”
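
A rough way to picture that just-in-time path is as a node-side handler that serves from cache when it can and otherwise builds the requested output on the fly. The function names in the sketch below are placeholders, not Eluvio APIs.

```typescript
// Hypothetical sketch of the just-in-time serving flow described above: if the
// requested rendition is not already on the node, locate its parts, transcode
// and package them on the fly, and serve the result. All function names here
// are placeholders, not Eluvio APIs.
type Rendition = { contentId: string; codec: string; bitrateKbps: number; segment: number };

async function serveSegment(
  req: Rendition,
  cache: Map<string, Uint8Array>,
  fetchParts: (r: Rendition) => Promise<Uint8Array[]>,
  transcodeAndPackage: (parts: Uint8Array[], r: Rendition) => Promise<Uint8Array>
): Promise<Uint8Array> {
  const key = `${req.contentId}/${req.codec}/${req.bitrateKbps}/${req.segment}`;
  const cached = cache.get(key);
  if (cached) return cached;               // already built on this node
  const parts = await fetchParts(req);     // locate source components, possibly on other nodes
  const output = await transcodeAndPackage(parts, req); // build exactly what was asked for
  cache.set(key, output);                  // keep it for the next client wanting the same thing
  return output;
}
```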

She says the code layer loads up the packaging, content protection and transcoding operations that go into creating the output flows, while the contract layer applies an asset control interface directly on the objects to enable screening for authorization under content licensing policies. “The authorization itself can be coming from an OAuth [authentication protocol-based process], such as OIDC (OpenID Connect), Google, Facebook, etc. or it could come from a digital ticket, and, thirdly, could come from a direct account on the Fabric,” Munson says.

This all works in a flash of coordinated functions without reliance on file IOs (inputs/outputs). “That’s important because it allows for great speed, and it also works in a fully pipelined manner, so no time is wasted between processes or between memory content,” she notes. Because this happens in a global network where all the nodes are equal, “one of the other great benefits about this simple, dynamic approach is that it’s highly scalable,” she adds.

Along with everything else that might go into a streaming service, the Fabric provides silo-free execution of multiviewing strategies. The viewing options are advertised on the manifest for selection by the client based on user interactions with the multiviewing UI.

A new view selection follows seamlessly with the start of the next group of pictures (GOP) as designated by an I-frame following conclusion of the previous view’s GOP, which ensures no perceptible delay in the switch from one view to the next, Munson says. Any number of viewing options can be included in the thumbnail mosaic comprising the multiviewing UI, she adds.
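
The switching behavior Munson describes boils down to deferring the cut to the next clean decode point. The minimal sketch below illustrates that general GOP-boundary mechanism and is not Eluvio’s player code.

```typescript
// Minimal sketch of seamless view switching at a GOP boundary: keep playing the
// current view's GOP to its end, then start the newly selected view at the
// I-frame that opens its next GOP. Illustrates the general mechanism only.
interface Gop { viewId: string; startsAtSec: number; durationSec: number }

function switchTimeSec(currentGop: Gop): number {
  // The earliest clean switch point is the end of the GOP now being decoded.
  return currentGop.startsAtSec + currentGop.durationSec;
}

const playing: Gop = { viewId: "camera-2", startsAtSec: 120.0, durationSec: 2.0 };
console.log(`Begin decoding the new view's I-frame at t=${switchTimeSec(playing)}s`); // 122s
```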

Red5 Goes to New Heights Putting Multiviewing to Work in Multiple Use Cases

As in the case of Eluvio, Red5’s support for multiviewing is ancillary to the primary reason for running streaming over its platform, which has to do with achieving the sub-half-second latencies enabled by WebRTC in a dynamically orchestrated cloud environment.

But Red5 goes beyond the typical multiviewing app’s focus on the distribution UX. The way XDN streaming works, as described by Red5, applies equally to the backend and to distribution, with end-to-end latencies averaging 250ms at any distance and virtually unlimited user scalability in all directions, while the company’s TrueTime MultiView SDKs support different approaches to multiview in keeping with the differences between distribution and backend use cases.

As revealed in demonstrations at the 2025 NAB Show and other recent conferences, Red5 sees the combination of real-time multidirectional streaming and multiviewing as a game changer supporting live production free of distance limitations. The same holds for the role multiviewing plays in Red5’s support for live surveillance operations involving multiple camera feeds, as reported in our recent update on advanced surveillance technology.

In fact, Red5 anticipates the experiences customers have solving urgent remote production issues using the XDN platform will lead them to recognize what the XDN capabilities could mean to them on the distribution side. Some of Red5’s current customers using the backend application are big players in sports distribution, notes Red5 CEO Chris Allen. “My theory is that’s going to help accelerate a lot of the end user experiences as well, because the second they start using this stuff in production then it makes it a lot easier to transfer to the consumer and create interactive apps and everything else as we go,” Allen says.

Adding to the application versatility, Allen says all the tools in the TrueTime MultiView suite rely on a common standards-based foundation that’s compatible with all the leading operating systems and platforms, including web, Android, iOS, macOS, Windows and Linux. This allows customers to put them to use in any combination they deem appropriate to their solutions and services.

In all scenarios, live video feeds from all sources, be they finished productions or raw video feeds, are ingested at XDN origin nodes and relayed directly or through intermediate relay nodes via WebRTC to intelligent edge nodes serving a given segment of users.

In the distribution case, all video feeds, including a compilation of all feeds as thumbnails in a separate stream, are sent to the edge nodes. All users receive the thumbnail stream comprising their MultiView options, while the XDN intelligence allows each user’s choice of viewing angle or featured alternative programming to be unicast in full-screen resolution with no delay. In most cases the receiver can handle decoding without plugins, owing to the support for WebRTC incorporated into all the major browsers.
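
In browser terms, that pattern looks roughly like the sketch below: one always-on subscription to the thumbnail mosaic and a second subscription that is re-pointed whenever the viewer clicks a pane. The signaling helper is a placeholder, not the Red5 TrueTime SDK.

```typescript
// Browser-side sketch of the distribution pattern described above (placeholder
// signaling; not the Red5 TrueTime MultiView SDK). One RTCPeerConnection stays
// subscribed to the thumbnail mosaic stream, while a second is (re)pointed at
// whichever full-resolution feed the viewer selects.
declare function subscribe(streamName: string, pc: RTCPeerConnection): Promise<MediaStream>; // hypothetical signaling helper

async function startMultiview(
  mosaicVideo: HTMLVideoElement,
  mainVideo: HTMLVideoElement
): Promise<(feedName: string) => Promise<void>> {
  const mosaicPc = new RTCPeerConnection();
  mosaicVideo.srcObject = await subscribe("mosaic-thumbnails", mosaicPc);

  let mainPc: RTCPeerConnection | null = null;
  // Returned function: call with the clicked feed's name to swap the featured view.
  return async (feedName: string) => {
    mainPc?.close();                                      // drop the previous full-resolution subscription
    mainPc = new RTCPeerConnection();
    mainVideo.srcObject = await subscribe(feedName, mainPc); // unicast full-resolution feed
  };
}
```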

There’s no limit to how many viewing options can be streamed to edge nodes for multiviewing, Allen says. It’s just a matter of how many thumbnail videos a provider wants to squeeze into the TrueTime MultiView selection space. Whatever the count might be, Allen adds, there’s no discernible lag between user selection and arrival of the full-screen choice, which means users can seamlessly stay with the live action while jumping from one viewing angle or one live sports service to another.

Multiviewing in Live Production and Surveillance Monitoring

In contrast to its approach on the distribution side, Red5’s backend applications for live production and surveillance rely on its Mixer Node technology, which enables any number of simultaneously displayed inputs to be managed from any computer screen, eliminating the need for special appliances.

The Mixer Node is a server-based software module used with other nodes in the Red5 XDN architecture. Whether the video streams ingested onto XDN infrastructure are from cameras used in live productions or in surveillance, they are aggregated at all points of ingestion for processing by the Mixer Node technology into a single-stream compilation of the feeds for delivery to operations centers in the case of surveillance or dispersed workstations in the case of live productions.

Users can click on any video in the compilation grid to attain full-screen visibility to perform analysis of the content in surveillance operations, often assisted by AI, or to perform editing tasks in live productions, including selection of frame sequences that go into the final output. In the latter scenarios, this gives producers collaborating over any distance in real time great latitude in determining what end users see moment to moment, ranging from the content in a single A/V feed to split-screen displays to composites of multiple video streams or single image captures.

In cases where distributors make use of Red5’s transcoding support for delivering multiple bitrate profiles in emulation of the adaptive bitrate (ABR) approach to accommodating variations in bandwidth conditions, the edited output is fed into Red5’s cloud-based Caudron transcoder for multi-profile distribution. All of this is done while maintaining end-to-end latency at or below 250ms, Allen says.

There’s no limit to the volume of streams technicians can work with, he notes. The XDN Mixer architecture can support production operations involving hundreds of streams through a chaining process that overcomes the capacity limitations of individual workstation CPUs. This is accomplished with a layering of Mixers where the Mixer in each layer loads a page that subscribes to multiple live streams, combines them into a custom HTML5 layout, and publishes the resulting blended stream to a Mixer in the next layer. The last Mixer in the chain publishes the composite stream to the XDN node cluster for handoff to whatever transport mechanisms are used to directly connect to audiences or to reach affiliated distributors.
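
The arithmetic behind the chaining is straightforward: if each Mixer page can comfortably composite a fixed number of live streams, every additional layer multiplies the total source count the final composite can draw on. The capacity figure in the sketch below is an assumption for illustration.

```typescript
// Illustrative sketch of Mixer chaining capacity: if one Mixer page comfortably
// composites up to `perMixer` live streams, each additional layer multiplies the
// total number of source streams that can feed the final composite.
function layersNeeded(totalStreams: number, perMixer: number): number {
  let layers = 1;
  let capacity = perMixer;
  while (capacity < totalStreams) {
    capacity *= perMixer; // each layer's mixers feed the next layer's inputs
    layers += 1;
  }
  return layers;
}

// Example with an assumed per-mixer capacity of 16 streams:
console.log(layersNeeded(16, 16));  // 1 layer
console.log(layersNeeded(200, 16)); // 2 layers (16 x 16 = 256 possible sources)
```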

-----------------------------------------------

It’s hard to imagine these new options in multiviewing won’t supplant the current ways of doing things sooner or later. Each has a lot to offer; which ones take hold will come down to how the cost-benefit analyses play out.

But given what they’ve all accomplished in terms of market receptivity so far, we won’t be surprised if they’re all in the hunt for a long time to come. Stay tuned.