German broadcaster NDR’s new virtual studio, powered by Avid’s virtual technology, allows a duo-presenter format to be introduced.
In the increasingly challenging quest to win viewer ratings, creativity and cost-efficiency are arguably two key attributes that will provide broadcasters with a competitive advantage. Virtual broadcast sets, while a relatively new concept, may well allow broadcasters to unleash their creativity and find more cost-effective ways of creating content. Shawn Liew reports.
When German public broadcaster Norddeutscher Rundfunk, or Northern German Broadcasting (NDR), decided that it needed to work with larger images that better appeal to viewers and draw them immediately into a topic, it took the key decision to convert its TV Studio 1 into a virtual production environment.
Because it is difficult to show large background images with different shots and perspectives, a large green box was an obvious solution, according to Matthias Rach, head of production of NDR radio and TV. “We then considered what technology we would need, and very quickly arrived at a virtual approach,” he adds.
The decision was also made to deploy camera robotics and a sensor system from Shotoku, alongside virtual set technology from Avid. The essential difference between a virtual studio and a conventional green screen, NDR explains, is camera tracking, which makes it possible to synchronise camera motion exactly with a graphics computer. This allows the virtually generated studio set to reproduce all of the camera's motions: whether the camera zooms, pans or tilts, the graphics processor receives position data that is as exact as possible, so that the corresponding graphics segment can be calculated precisely.
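The core of such tracking is converting the sensed pan and tilt angles into the direction the virtual camera must look, so the rendered background lines up with the real shot. The following is a minimal sketch of that conversion; the axis convention (pan about the vertical axis, tilt about the horizontal, 0°/0° looking down +z) is an assumption for illustration, not Shotoku's or Avid's actual protocol.

```python
import math

def view_direction(pan_deg: float, tilt_deg: float) -> tuple:
    """Unit view vector for a tracked camera at the given pan/tilt angles.

    Convention (hypothetical, for illustration): pan rotates about the
    vertical (y) axis, tilt about the horizontal axis; pan=0, tilt=0
    looks straight down the +z axis.
    """
    pan = math.radians(pan_deg)
    tilt = math.radians(tilt_deg)
    x = math.sin(pan) * math.cos(tilt)   # left/right component from pan
    y = math.sin(tilt)                   # up/down component from tilt
    z = math.cos(pan) * math.cos(tilt)   # forward component
    return (x, y, z)
```

Feeding this per-frame direction (plus zoom, which sets the field of view) to the render engine is what lets the virtual background track the real camera.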
In NDR’s new virtual studio, there are four Sony HDC-2400 studio cameras, each with three 2/3-inch CCDs (charge-coupled devices) and equipped with Fujinon DigiPower 6.5–180mm, 1:1.5 box lenses. A special feature of the virtual studio is that the cameras are mounted on mobile Vinten pedestals that are nonetheless locked in place: the camera supports themselves are not moved, and the camera positions change only in terms of height, pan, tilt and zoom.
In this instance, the cameras and the graphics system must work together precisely, and the camera position data must be transmitted accurately. Norbert Sieben, video engineer at NDR, explains: “If the values no longer correspond, this can be seen in the images. The background then begins to float behind the foreground.”
During system installation, a base calibration was carried out in which the cameras were aligned with fixed measured points, in approximately 10 different positions, to determine their exact locations. The Shotoku robotics sensor captures the position data, which is then transmitted to the Avid system. Four Orad (an Avid company) HDVG+ render engines combine the real and virtual image content, with each engine responsible for one of the four cameras.
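Conceptually, each calibration shot of a known studio point yields one estimate of where the camera sits; combining many shots beats down measurement noise. The sketch below averages the position implied by each measurement (known point, sighting direction, measured range). This is a toy illustration of the idea only; the actual Shotoku/Avid calibration uses its own solver and data.

```python
def estimate_camera_position(measurements):
    """Average the camera position implied by each calibration shot.

    Each measurement is (reference_point, unit_direction, distance):
    a known studio point, the unit direction in which the camera saw
    it, and the measured range. The camera must then sit at
    reference_point - distance * unit_direction; averaging over many
    shots reduces noise. Hypothetical sketch, not the real algorithm.
    """
    n = len(measurements)
    sx = sy = sz = 0.0
    for (px, py, pz), (dx, dy, dz), dist in measurements:
        sx += px - dist * dx
        sy += py - dist * dy
        sz += pz - dist * dz
    return (sx / n, sy / n, sz / n)
```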
A tracking system processes the position data for each camera, transmitted by the Shotoku system over a network. From this, a render engine calculates the image segment and combines the real foreground with the virtual background. For this to succeed, a key, or mask, is also required, which NDR creates using the Infuse keyer integrated into the HDVG+ system. “The keyer must always be exactly right,” says Sieben. “For example, shading can be seen here on the image, if the key is not set perfectly.”
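The principle behind such a key is simple: the more a pixel's green channel dominates its red and blue channels, the more it is treated as background. Here is a minimal per-pixel sketch of a linear chroma key under that assumption; it is not the Infuse keyer's actual algorithm, and the threshold value is arbitrary.

```python
def chroma_key_alpha(r: int, g: int, b: int, threshold: int = 40) -> int:
    """Alpha for one pixel: 255 = keep foreground, 0 = show background.

    A pixel counts as green-screen background when its green channel
    exceeds the larger of red and blue by more than `threshold`;
    in between, alpha ramps linearly. Toy sketch only.
    """
    dominance = g - max(r, b)
    if dominance <= 0:
        return 255          # no green excess: pure foreground
    if dominance >= threshold:
        return 0            # strongly green: pure background
    return int(255 * (1 - dominance / threshold))  # soft edge
```

The soft linear ramp between the two extremes is what keeps hair and shadow edges from looking cut out, which is exactly where an imperfect key becomes visible on air.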
The diffused green lighting of the studio background also presents a challenge, as he explains: “Due to the comparatively small distances from the green screen, a relatively large amount of green light from the background is reflected on the news desk and presenters.”
Painting the extensive walls of the studio set green is “far from sufficient” to achieve a clean chroma key, NDR notes, adding that uniform illumination of the entire area is necessary — this is because the mask is generated for a particular colour tone at a defined intensity. Maria Lindinger, lighting engineer, NDR, adds: “We illuminate the studio background with approximately 20 area lamps, each with three compartments.”
Background and studio graphics are fed in via the Avid Maestro Media Engine. The complete broadcast workflow is transmitted from the Annova OpenMedia editorial system, via a specially configured software interface, to the Maestro graphics system.
Editors develop the entire broadcast workflow in the OpenMedia system where, by means of an Avid plug-in, they can insert the broadcast graphics. These are then transmitted to the Maestro system, together with their exact position data.
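A graphics item handed from the newsroom system to the graphics controller thus carries both its content and its exact placement. The sketch below serialises such an item; every field name here is hypothetical, since the real OpenMedia/Maestro interface has its own schema.

```python
import json

def graphics_payload(template: str, text: str, x: float, y: float,
                     channel: int) -> str:
    """Serialise one broadcast-graphics item as it might travel from
    the newsroom system to the graphics controller. Field names are
    illustrative, not the actual OpenMedia/Maestro schema.
    """
    return json.dumps({
        "template": template,            # which graphic design to use
        "text": text,                    # on-screen text content
        "position": {"x": x, "y": y},    # normalised screen placement
        "channel": channel,              # target playout channel (1-4)
    })
```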
The Maestro software runs on a control computer, which in turn controls two Avid HDVG2 graphics platforms. Each of these has two playout channels, for a total of four. Jannis Redmer, graphics operator, NDR, explains: “I have two insert channels and two full-screen channels, which I can record independently. The background graphics and text inserts for the studio presentations are combined in the HDVG+ systems.”
While acknowledging that virtual production requires a higher level of concentration and better communication, NDR also lauds the operational possibilities it affords, including the integration of large images and the introduction of a duo-presenter format. This, NDR elaborates, allows different perspectives on a topic to be divided between the two presenters.
Rach, head of production at NDR, summarises: “With the large-format images, we are also taking into account changed viewing habits that have developed due to tablets and smartphones. One objective was to improve the visibility of the broadcast on these mobile devices.”
And while the jury is still out on final viewer ratings, Rach optimistically concludes: “The trend is that, in comparison to last year, we have an increase in audience share of approximately 3%.”
As viewer expectations increase, and system costs and complexities decrease, media content providers, regardless of size or type, will begin to embrace virtual sets and augmented reality (AR), predicts Andrew Tan, director of sales, APAC, Ross Video.
He tells APB: “Virtual sets can actually lower operational costs, depending on the application. For instance, ‘blended’ environments combine the best of traditional physical design and virtual design to eliminate the need for video walls and on-set monitors.
“Not only can virtual sets be deployed at substantially less cost than traditional physical sets, but multiple sets can be used in the same space. Virtual solutions can also be operated and deployed in small spaces with substantially lower facility costs, and they use less storage space because fewer physical set pieces are needed.”
If you are visiting IBC2017 this month, Tan invites you to visit booth 11.C10 to check out solutions such as the UX software application. Providing integration with tracking systems, keying products and real-time 3D rendering engines such as Ross Video’s XPression and Frontier, UX comes installed on a touchscreen PC for virtual set camera calibration, scene manipulation, media replacement, event triggering, animation control, robotic camera move control, and more.
XPression is a motion graphics platform Ross Video offers for sports studios, broadcast and venue control rooms, as well as OB vans and other mobile environments. “This advanced platform produces complex multi-layered 3D graphics for both SDI and IP infrastructures, and offers a wide range of capabilities such as virtual set, AR, clip server and transcoder with various workflow tools and software applications,” Tan describes.
The same creation and rendering platform is also the basis for Ross Video’s Tessera system, which is used to deliver graphics to any number of displays of various sizes in sports venues; and Ross Video’s Trackless Studio system, which provides a cost-effective virtual set using a stationary camera that works effectively in small spaces.
While XPression is “very well suited” for virtual sets, some studio facilities desire even more realistic backgrounds, Tan points out. Thus, Ross Video designed Frontier, based on video game engine technology optimised by Ross Video, to work in tandem with XPression in virtual studio environments that render hyper-realistic imagery. “Even the most complex graphical elements, from rain drops and fire to live shadows, lens flares and dynamic highlights, can be created quickly, easily and with unprecedented realism,” Tan says.
What truly makes virtual sets and AR an attractive proposition, he adds, is the ability to offer more visually arresting looks for viewers, on top of more variety. XPression, for instance, integrates with dynamic data sources to deliver up-to-the-second, on-screen updates.
Using the same space, multiple virtual sets can be designed by combining physical and virtual elements, and be quickly changed for different shows. “Systems such as Tessera, XPression and Trackless Studio are utilised in stadiums and other venues to embellish the whole fan experience at the event,” Tan highlights.
And while he concedes that using virtual solutions requires changes in workflows, skill sets and domain knowledge, there is also a monetisation opportunity to be explored. “Virtual solutions can provide new income sources through sponsorship of specific virtual elements,” Tan explains. “Advertising and sponsorships can be applied in the same manner for a variety of programming such as weather, traffic, sports, talk and variety on an annual, daily, or show-by-show basis.”