Adder Technology’s IP-based KVM solutions are capable of delivering 4K/UHD, HDR and HFR video, as they are designed to run over standard IP infrastructure supporting copper and fibre links. (Photo credit: ASHARKYU/Shutterstock.com)

Streamlining workflows in a multi-format media ecosystem

Faced with a variety of formats such as 4K/UHD, HDR and HFR, how should broadcasters and content producers evaluate which video formats are suitable for content that has to be delivered and stored over the long term? Josephine Tan finds out more.

Standards converters were first introduced to the broadcast industry to enable switching between the National Television System Committee (NTSC) and Phase Alternating Line (PAL) standards. Back then, these converters were analogue and physically large in size.

With the advent of digital video, standards converters are now becoming more compact and cheaper, says David Smith, technology manager at Rohde & Schwarz (R&S).

He tells APB: “Within production workflows, many different camera codecs and resolutions have been developed and used for many years. As a result, the role of format converters, and their position in the broadcast chain, has changed significantly in the last decade.”

4K/Ultra HD (UHD), high dynamic range (HDR) and high frame rate (HFR) are among the technologies developed to let content producers enhance image quality and improve the overall viewing experience. With their emergence, Smith points out, today’s format converters are employed in the digital uncompressed domain for colour-space conversion for HDR, temporal conversion for HFR, and spatial conversion for resolution changes and de-interlacing.

He explains that the first stage in any conversion starts with de-interlacing, which is the process of converting interlaced video, such as analogue TV signals or 1080i format HDTV signals, into a non-interlaced form. “In practice, de-interlacing is often combined with frame-rate conversion to 1080p50 as this will give the best result with minimal effect on compressed bandwidth, when used in conjunction with high-efficiency video coding (HEVC)/H.265.”
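
To make the de-interlacing step concrete, here is a minimal sketch, assuming luma-only frames and a simple “bob” approach; it is not R&S’s algorithm, and real converters use motion-adaptive or motion-compensated filtering. Each 1080i25 frame carries two fields, and emitting one line-doubled frame per field yields 1080p50.

```python
import numpy as np

def bob_deinterlace(interlaced_frame: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split one interlaced frame (1080 x 1920, luma only for brevity) into two
    progressive frames by line-doubling each field ("bob" de-interlacing)."""
    top_field = interlaced_frame[0::2, :]     # even lines (top field)
    bottom_field = interlaced_frame[1::2, :]  # odd lines (bottom field)

    # Repeat each field's lines to restore the full 1080-line height.
    frame_a = np.repeat(top_field, 2, axis=0)
    frame_b = np.repeat(bottom_field, 2, axis=0)
    return frame_a, frame_b

# One 1080i25 frame in, two 1080p frames out: 25 interlaced frames/s -> 1080p50.
i_frame = np.zeros((1080, 1920), dtype=np.uint16)
p_frame_1, p_frame_2 = bob_deinterlace(i_frame)
print(p_frame_1.shape, p_frame_2.shape)  # (1080, 1920) (1080, 1920)
```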

As for spatial resolution conversion, he says that the process is “relatively straightforward” for progressive content, whether upconverting or downconverting, and can be incorporated into any workflow. He continues: “Similarly, for frame-rate conversion, several experiments have shown that doubling the frame rate gives a significantly better perceived picture compared to doubling the resolution.”
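
The spatial step can be sketched in the same spirit: the snippet below performs a plain 2x bilinear HD-to-UHD upconversion on a single-channel progressive image. It is illustrative only; broadcast converters use higher-order resampling filters.

```python
import numpy as np

def upconvert_2x_bilinear(img: np.ndarray) -> np.ndarray:
    """Upscale a progressive single-channel image by 2x in each dimension
    using simple bilinear interpolation (illustrative only)."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, 2 * h)
    xs = np.linspace(0, w - 1, 2 * w)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]

    img = img.astype(np.float64)
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

uhd = upconvert_2x_bilinear(np.random.rand(1080, 1920))
print(uhd.shape)  # (2160, 3840) -- HD upconverted to UHD-1
```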

To enhance workflow efficiency in a multi-format broadcast ecosystem, R&S developed the AVHE100 headend solution for encoding and multiplexing, which manages de-interlacing and up/down resolution conversion to or from 4K/UHD, HD, SD and over-the-top (OTT) formats. The system comprises the AVG100 audio/video gateway, which converts the feed formats used in broadcasting to IP, and the AVS100 audio/video processing platform, which uses software options to enable functions such as decoding, encoding and multiplexing.

AVHE100 also features R&S CrossFlowIP technology, which enables “optimum utilisation of all system components” while providing enhanced redundancy solutions. Additionally, the headend is equipped with an intuitive headend management system (HMS) that allows users to control and monitor the entire workflow via a single graphical user interface (GUI).

But when it comes to switching between different HDR formats, Smith suggests that the conversion is “more challenging” because it requires extensive floating-point computation power, and “more complicated” because a number of different HDR formats and metadata streams are involved.

He elaborates: “There are two broadcast HDR standards — hybrid log gamma (HLG) and perceptual quantiser (PQ). While HLG does not involve metadata, PQ requires one metadata set per programme. Even so, users can easily convert between HLG and PQ while achieving high-quality images.”
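
To illustrate why HLG/PQ conversion is floating-point heavy yet tractable, the sketch below implements the published PQ (SMPTE ST 2084) EOTF and the HLG (ITU-R BT.2100) OETF for a single normalised signal value. The 1,000 cd/m² normalisation is an assumption for illustration; a real converter also handles colour volume, system gamma and any metadata, so treat this as a simplified sketch rather than a complete HLG/PQ converter.

```python
import math

# PQ (SMPTE ST 2084) constants
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

# HLG (ITU-R BT.2100) OETF constants
A, B, C = 0.17883277, 0.28466892, 0.55991073

def pq_to_linear(e: float) -> float:
    """PQ EOTF: non-linear signal (0..1) -> display luminance in cd/m^2."""
    ep = e ** (1 / M2)
    return 10000.0 * (max(ep - C1, 0.0) / (C2 - C3 * ep)) ** (1 / M1)

def linear_to_hlg(l: float) -> float:
    """HLG OETF: normalised scene light (0..1) -> non-linear signal."""
    return math.sqrt(3 * l) if l <= 1 / 12 else A * math.log(12 * l - B) + C

# Map a PQ code value through linear light into an HLG signal, assuming
# (for illustration only) normalisation against a 1,000 cd/m^2 peak.
luminance = pq_to_linear(0.58)
hlg_signal = linear_to_hlg(min(luminance / 1000.0, 1.0))
print(f"{luminance:.1f} cd/m^2 -> HLG signal {hlg_signal:.3f}")
```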

Although standard dynamic range (SDR) remains the most widely viewed format, SDR content can be upconverted to either PQ or HLG for archival purposes.

“Different approaches can be used for upconversion, which can give reasonable results, but the widely varying brightness of content requires the experienced eye of a colourist to achieve the best possible result,” Smith says. “Therefore, SDR content is usually obtained either by downconverting from HDR, or produced within the post-production workflows alongside the HDR content.”

There are also several distribution formats, such as HDR10, HDR10+ and Dolby Vision, alongside mastering formats such as Sony’s S-Log3, available for content producers to adopt. Smith continues: “In post, these formats will once again require colourists to ensure all the images are colour-graded against a specific reference monitor with a known maximum brightness.

“Conversion between these formats is a compute-intensive process because the mastering workflow uses much larger bit depths, 16 bits per colour or more, during post production. However, generating multiple standards is usually part of that mastering workflow, rather than a separate conversion step.”
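
A rough, back-of-the-envelope calculation with assumed but typical figures shows the scale of data involved at mastering bit depths compared with a 10-bit distribution format:

```python
# Approximate uncompressed data rates for a UHD-1 (3840 x 2160) picture at 50 fps,
# comparing 10-bit 4:2:2 distribution with 16-bit RGB mastering. Illustrative only;
# real pipelines vary with chroma format, padding and frame rate.
width, height, fps = 3840, 2160, 50

bits_422_10 = width * height * 20       # 10-bit 4:2:2 ~ 20 bits per pixel
bits_rgb_16 = width * height * 3 * 16   # 16-bit RGB   = 48 bits per pixel

print(f"10-bit 4:2:2: {bits_422_10 * fps / 1e9:.1f} Gbit/s")
print(f"16-bit RGB:   {bits_rgb_16 * fps / 1e9:.1f} Gbit/s")
```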

The development of these technologies, however, is outpacing their adoption. And in this multi-format broadcast ecosystem, it is important to distinguish between the storage format, the file format and the content format, he declares.

The file format, according to Smith, is the most important aspect for longevity of access and reuse. One such format is MXF (Material Exchange Format), a digital file format that contains audio, video, subtitles, timecodes and other metadata, including HDR metadata.
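
As a quick way to see what an MXF wrapper carries, the sketch below shells out to ffprobe and lists the streams in a file. It assumes FFmpeg/ffprobe is installed, and “programme.mxf” is a placeholder path rather than a real asset.

```python
import json
import subprocess

# List the essence streams wrapped in an MXF file using ffprobe (FFmpeg).
# "programme.mxf" is a placeholder; replace it with a real file path.
result = subprocess.run(
    ["ffprobe", "-v", "quiet", "-print_format", "json",
     "-show_streams", "programme.mxf"],
    capture_output=True, text=True, check=True,
)
for stream in json.loads(result.stdout)["streams"]:
    print(stream["index"], stream["codec_type"], stream.get("codec_name"))
```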

He continues: “A further standard that uses MXF files is IMF (Interoperable Master Format), a methodology in which multiple versions of a programme can be packaged into a single deliverable. For example, the main programme might be aired in English, but the same deliverable can include a dubbed soundtrack to produce a French version, as well as opening and closing video credits for the German version.

“IMF can even define the conversion from the master content in the deliverable package to a specific version required for broadcast use. Under this circumstance, an IMF deliverable might include UHD-1 content in ProRes format, since this will provide the highest starting quality.”
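
IMF compositions are actually described by XML composition playlists in the SMPTE ST 2067 family, but the versioning idea Smith describes can be sketched with a simple data structure: shared master essence plus lightweight per-version compositions that reference only what differs. The class and file names below are hypothetical, for illustration only, and do not follow the real composition playlist schema.

```python
from dataclasses import dataclass, field

@dataclass
class Composition:
    """One deliverable version assembled from shared and version-specific assets."""
    language: str
    video: str
    audio: str
    credits: str

@dataclass
class ImfPackage:
    """Illustrative stand-in for an IMF package: shared master essence plus
    per-version compositions (the real format uses XML composition playlists)."""
    master_video: str
    compositions: list[Composition] = field(default_factory=list)

pkg = ImfPackage(master_video="feature_uhd_prores.mxf")
pkg.compositions += [
    Composition("en", "feature_uhd_prores.mxf", "audio_en.mxf", "credits_en.mxf"),
    Composition("fr", "feature_uhd_prores.mxf", "audio_fr_dub.mxf", "credits_en.mxf"),
    Composition("de", "feature_uhd_prores.mxf", "audio_de_dub.mxf", "credits_de.mxf"),
]
for c in pkg.compositions:
    print(c.language, "->", c.audio, "+", c.credits)
```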

Moving forward, Smith believes that IMF is able to offer “exciting opportunities in streamlining and enhancing the complete production chain” while maintaining the director’s original “artistic intent”. The ability to generate and easily manage multiple versions simplifies monetisation of content through customisation for different target audiences, delivery systems or displays, and is a compelling and future-proof proposition for any content producer, he concludes.

The demand for 4K/UHD, HDR and HFR productions is primarily driven by sports events, as well as by online video platforms such as Netflix and Amazon.

Both OTT service providers have been offering 4K/UHD HDR programmes as a premium tier to their subscribers. Meanwhile, in anticipation of the 2018 Winter Olympics in Pyeongchang, South Korea has been progressively rolling out 4K/UHD channels and programmes over terrestrial networks.

How can engineers carry out a 4K/UHD production on their existing 3G infrastructure? That is the practical challenge posed by Sebastian Schaffrath, chief technology innovation officer for Lynx Technik.

He explains: “Practical approaches, such as using four 3G inputs of an existing video router, certainly might work in a number of cases, while 12G connections can be an alternative solution.

“Discussions about 12G-SDI arise from manufacturers’ promises some years ago that broadcast IP technology would be ready for 4K/UHD. However, production demands have overtaken the development and adoption of IP standards. Therefore, broadcasters who need to deliver 4K/UHD in production are choosing 12G-SDI as an intermediate bridging technology.”
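
A quick calculation with the nominal raster and SDI link rates shows why a 2160p50 10-bit 4:2:2 signal needs either four 3G-SDI links or a single 12G-SDI link:

```python
# Why 2160p50 10-bit 4:2:2 needs quad-link 3G-SDI or single-link 12G-SDI.
# Uses the full SDI raster (active picture plus blanking); rates in Gbit/s.
SDI_3G, SDI_12G = 2.970, 11.880

total_samples, total_lines, fps = 5280, 2250, 50  # 2160p50 raster incl. blanking
bits_per_pixel = 20                               # 10-bit 4:2:2

rate = total_samples * total_lines * fps * bits_per_pixel / 1e9
print(f"2160p50 interface rate: {rate:.2f} Gbit/s")        # ~11.88 Gbit/s
print(f"3G-SDI links required:  {round(rate / SDI_3G)}")   # 4 (quad-link)
print(f"12G-SDI links required: {round(rate / SDI_12G)}")  # 1
```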

On the other hand, Schaffrath also sees IP as the future-proof solution to accommodate broadcasters’ requirements as they look to navigate through the rapidly changing media landscape.

Stressing that one of the key benefits of IP technology in broadcast is scalability, he explains that the transition to IP not only enables the delivery of 4K/UHD, but also gives broadcasters the freedom to keep pace with new formats, which are typically driven by consumer devices.

Schaffrath concludes: “As production requirements change frequently, app-based and software-defined signal-format conversion will be the approach to take, as it equips operators with the flexibility to change formats. Moreover, scalability gives operators the freedom to change an app or a software parameter to fulfil the need for a dedicated conversion.”

The increase in resolution within broadcast workflows — from capture through to post production and delivery — will also see more data being transported across and within campuses. While broadcast standards evolve, and are adopted gradually by various manufacturers, technologies such as KVM (keyboard, video and mouse) and IP-based KVM that support the broadcast equipment must adapt accordingly, suggests John Halksworth, senior product manager at Adder Technology.

He says: “Simply put, IP-based KVM transcends broadcast technologies and their formats because it operates on an infrastructure that supports and networks those formats. This is especially true given the widespread adoption of IP as a standard transport infrastructure across the broadcast ecosystem.

“As a result, transporting larger data, video and audio files to users across facilities and teams becomes easier, regardless of format. The role that IP-based KVM plays here is that it supports users in streamlining workflows, optimising efficiencies and accelerating return on investment (ROI).”

For a KVM solutions provider like Adder, Halksworth reveals that the company is looking to move high-resolution HDR and HFR video to screens as part of the post-production process. He says: “Adder’s IP-based KVM solutions are capable of supporting these feature sets, as the solutions are designed to run over standard IP infrastructure supporting copper and fibre links.

“We continue to follow broadcast trends to ensure our products enhance workflows, especially in distributing content to the gallery or post production. 4K/UHD and HDR are some of the functionalities which we have integrated into our solutions, enhancing applications like post production for viewing digitally produced content.”

 
