Olivier Crête
February 18, 2022
Made available earlier this month, GStreamer 1.20 is the fruitful result of 17 months of hard work from the entire community. Over 250 developers contributed code to make this release happen, and once again, Collabora had more contributors than any other organization.
Our work focused on the two areas in which we believe GStreamer shines the brightest: embedded systems, and network streaming, in particular WebRTC. Below is a summary of the impact our team of engineers had on this latest release.
As usual, you can also learn more about the enhancements done by the rest of the community by looking at the project's 1.20 release notes.
GStreamer is already the pre-eminent media framework for embedded systems, and this is an area where Collabora has been very active over the last release cycle. Here are some of the improvements that we've made.
After many years of effort by Guillaume, Nicolas, Stéphane, and Aaron, we finally landed support for sub-frame decoding. This makes it possible to start decoding a video frame before it has been entirely received from the network, provided the decoder supports it. We've implemented this for JPEG 2000 with OpenJPEG, for H.264 with FFmpeg, as well as in gst-omx when using the Allegro extensions present on the Xilinx Zynq UltraScale+ MPSoC EV processors.
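To make this concrete, here is a minimal sketch of the kind of receiving pipeline that benefits, written with the Python bindings; the UDP port and RTP caps are arbitrary choices for the example, and whether sub-frame decoding actually kicks in depends on the decoder in use and its configuration.

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib

    Gst.init(None)

    # Receive H.264 over RTP and decode it with FFmpeg's avdec_h264. With
    # sub-frame decoding, a capable decoder can start working on slices as
    # they are depayloaded instead of waiting for the complete frame.
    pipeline = Gst.parse_launch(
        'udpsrc port=5000 caps="application/x-rtp,media=video,'
        'encoding-name=H264,clock-rate=90000,payload=96" ! '
        "rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink"
    )

    pipeline.set_state(Gst.State.PLAYING)
    GLib.MainLoop().run()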
In partnership with Huawei, we also improved the GStreamer build system to make it possible to create a library containing only the specific parts of GStreamer used by a particular application or a set of applications. Take a look at this blog post to learn more.
Nicolas added stateless MPEG-2 and VP9 decoding support on Linux and contributed to enhancing the VP9 parser. The stateless Linux H.264 decoder also gained support for interlaced video streams, though only for slice-based decoders and not for frame-based ones, since no driver in the mainline Linux kernel supports that. Nicolas also added support for a rendering delay, which allows multiple frames to be queued in a stateless decoder and improves throughput at the cost of higher latency; he implemented this for the MPEG-2, VP8, and VP9 decoders. He also added HEVC decoding support to the new "va" plug-in, which uses the new GStreamer common decoder implementation to support VA-API-based decoders.
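As an illustration, decoding an HEVC file through the new "va" plug-in might look like the sketch below; vah265dec is only registered when a suitable VA-API driver is present, the file name is a placeholder, and on a board with a stateless V4L2 driver you would pick the corresponding V4L2 stateless decoder element instead.

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)

    # Decode HEVC with the VA-API based decoder from the new "va" plug-in.
    pipeline = Gst.parse_launch(
        "filesrc location=sample.mkv ! matroskademux ! h265parse ! "
        "vah265dec ! videoconvert ! autovideosink"
    )

    pipeline.set_state(Gst.State.PLAYING)
    bus = pipeline.get_bus()
    bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                           Gst.MessageType.EOS | Gst.MessageType.ERROR)
    pipeline.set_state(Gst.State.NULL)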
Nicolas also implemented videocodectestsink, a small element that computes the checksum of incoming frames so they can be compared against a known good reference. This is useful for creating tests that ensure there are no regressions in decoder implementations. He also added the necessary code in GStreamer to react to resolution changes in a Video4Linux source, which is primarily relevant when the source is, for example, an HDMI input.
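A decoder test pipeline built around it might look like this sketch, assuming a VP9 WebM file as the reference input; videocodectestsink simply takes the place of the display sink and checksums the decoded frames so a test harness can compare the result against a known good value.

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)

    # Decode a reference file and checksum the output frames instead of
    # displaying them; the file name is a placeholder.
    pipeline = Gst.parse_launch(
        "filesrc location=reference.webm ! matroskademux ! vp9dec ! "
        "videocodectestsink"
    )

    pipeline.set_state(Gst.State.PLAYING)
    pipeline.get_bus().timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                                          Gst.MessageType.EOS | Gst.MessageType.ERROR)
    pipeline.set_state(Gst.State.NULL)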
For much of the framework’s history, GStreamer’s principal focus has been on streaming media over a network. This is an area in which we've also made several contributions over this cycle.
We've contributed many improvements related to GStreamer's WebRTC stack, which is one of the most complete and flexible independent implementations of the WebRTC protocols. I've worked on GStreamer's WebRTC stack and added many features. I added support for explicit notification of the end of candidates so that failing connections can be recognized faster. I reworked the WebRTC library API to ensure it is thread-safe by hiding all information behind properties. I also added support for setting the "priority" of media streams; setting the various priorities now adds the correct DSCP markings, making it possible for network administrators to prioritize the traffic accordingly. Finally, I significantly improved the WebRTC statistics to expose most of the statistics that existed somewhere in the GStreamer RTP stack through the convenient WebRTC API, particularly those coming from the RTP jitter buffer.
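As a minimal sketch of how an application gets at these statistics, assuming a webrtcbin element taken from an existing, negotiated session, the "get-stats" action signal returns everything in one GstStructure via a GstPromise:

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    def print_webrtc_stats(webrtcbin):
        # Called once webrtcbin has filled in the promise; the reply is a
        # GstStructure with one sub-structure per statistics entry
        # (inbound-rtp, outbound-rtp, transport, and so on).
        def on_stats(promise, _user_data):
            promise.wait()
            reply = promise.get_reply()
            if reply is not None:
                print(reply.to_string())

        promise = Gst.Promise.new_with_change_func(on_stats, None)
        # Passing None instead of a pad asks for statistics on all streams.
        webrtcbin.emit("get-stats", None, promise)

Calling a helper like this periodically (for example from a GLib timeout) gives a live view of the jitter buffer and RTP statistics mentioned above.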
Jakub has implemented an RTP header extension making it possible to send colorspace information per frame; this enables GStreamer to share Dynamic HDR content over RTP. The extension we implemented is compatible with the proposal from Google's libwebrtc team (a small example of enabling it from an application follows below).

The basic specification for sending Opus over RTP only supports mono and stereo. The Google libwebrtc team has created an extension called "multiopus," making it possible to send multiple stereo Opus streams together to serve more than two channels. Jakub implemented this in GStreamer's Opus RTP payloader and depayloader.

We've also implemented RFC 6464, an RTP header extension allowing a client to send the server the relative level (volume) of the audio in each packet; this allows the server to prioritize clients who are speaking over others without having to decode all the audio.

Finally, we've added support for the iSAC codec, a legacy audio codec that was open-sourced by Google in libwebrtc a couple of years ago. We've added a plug-in that wraps the reference implementation of the codec, and we've also written an RTP payloader and depayloader to enable GStreamer to send and receive audio encoded with this codec over RTP.
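Coming back to the color-space extension: for illustration, it can be enabled from an application simply by advertising its URI in an extmap field on the payloader's output caps, as in this sketch (the extension ID, encoder choice, and UDP destination are arbitrary).

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib

    Gst.init(None)

    pipeline = Gst.parse_launch(
        "videotestsrc ! vp9enc ! rtpvp9pay ! capsfilter name=f ! "
        "udpsink host=127.0.0.1 port=5000"
    )

    # Advertising the extension URI under extmap-<id> makes the payloader
    # load the matching header extension and add it to outgoing packets.
    caps = Gst.Caps.from_string(
        'application/x-rtp, extmap-1=(string)'
        '"http://www.webrtc.org/experiments/rtp-hdrext/color-space"'
    )
    pipeline.get_by_name("f").set_property("caps", caps)

    pipeline.set_state(Gst.State.PLAYING)
    GLib.MainLoop().run()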
As part of the Hwangsaeul project sponsored by SK Telecom, we've improved the SRT support. Raghavendra added support for authentication, while Jakub added a way for the application to be notified of broken connections and added more options to the URI in a way that is compatible with the SRT demo application.
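For illustration, receiving an authenticated SRT stream might look like the sketch below; the address, port, and passphrase are placeholders, the URI query parameters follow the conventions of the SRT demo applications, and the pipeline assumes the sender is pushing an MPEG-TS mux.

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib

    Gst.init(None)

    # Connect to an SRT sender, authenticating with a pre-shared passphrase.
    pipeline = Gst.parse_launch(
        'srtsrc uri="srt://192.168.1.10:7001?mode=caller&passphrase=SecretPassphrase" ! '
        "tsdemux ! h264parse ! avdec_h264 ! videoconvert ! autovideosink"
    )

    pipeline.set_state(Gst.State.PLAYING)
    GLib.MainLoop().run()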
GStreamer being an incredibly flexible cross-platform framework, we've also made several improvements that fall outside of the two main categories.
Nicolas implemented support for decoding alpha channels in WebM videos. This is a bit special, as the alpha channel is carried as a second video stream. He also added support for decoding these streams using hardware-accelerated decoders, such as V4L2-based decoders.
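From the application's point of view, playing such a file is no different from any other WebM; in the minimal sketch below (with a placeholder path), decodebin takes care of splitting off the alpha stream, decoding it, and recombining it with the color planes.

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib

    Gst.init(None)

    # Play a WebM file that carries its alpha channel as a second video
    # stream; the decoded buffers carry the transparency information.
    pipeline = Gst.parse_launch(
        "uridecodebin uri=file:///path/to/transparent.webm ! "
        "videoconvert ! autovideosink"
    )

    pipeline.set_state(Gst.State.PLAYING)
    GLib.MainLoop().run()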
Aaron added the first element specifically for machine learning to the core GStreamer plug-in collection. It uses the ONNX library to do object detection; we hope to add more elements using the ONNX library in the future.
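Purely as a hypothetical sketch of what using it could look like (the property name and model path below are assumptions for illustration; check gst-inspect-1.0 on the element for the actual interface on your installation):

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib

    Gst.init(None)

    # Hypothetical: run ONNX-based object detection on a test source. The
    # element attaches its detection results to the buffers; the model path
    # and the "model-file" property name are placeholders.
    pipeline = Gst.parse_launch(
        "videotestsrc ! videoconvert ! "
        "onnxobjectdetector model-file=/path/to/model.onnx ! "
        "videoconvert ! autovideosink"
    )

    pipeline.set_state(Gst.State.PLAYING)
    GLib.MainLoop().run()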
Aaron also helped Rabindra Harlalka from NICE contribute upstream elements that can encrypt a stream using AES encryption. These simply apply AES in CBC mode to the incoming stream, with the key provided by the application.
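As an illustration, here is a round trip through the new elements, encrypting an encoded H.264 stream and immediately decrypting it again; the hex key and IV are example values only, not secrets to reuse.

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)

    key = "1f9423681beb9a79215820f6bda73d0f"  # 128-bit key as a hex string
    iv = "e9aa8e834d8d70b7e0d254ff670dd718"   # 128-bit IV as a hex string

    # Encrypt the encoded stream with aesenc, then decrypt it with aesdec
    # and decode the result for display.
    pipeline = Gst.parse_launch(
        "videotestsrc num-buffers=300 ! x264enc ! "
        f"aesenc key={key} iv={iv} ! "
        f"aesdec key={key} iv={iv} ! "
        "decodebin ! videoconvert ! autovideosink"
    )

    pipeline.set_state(Gst.State.PLAYING)
    pipeline.get_bus().timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                                          Gst.MessageType.EOS | Gst.MessageType.ERROR)
    pipeline.set_state(Gst.State.NULL)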
Xavier again made numerous improvements to the Meson build system; in particular, he replaced GStreamer's custom pkg-config file generator with one he contributed to Meson itself. This ensures that the generated pkg-config files match the libraries that are in the build system.
I added a "stats" property to the identity element; this makes it easier to instrument pipeline to get statistics for monitoring. I added support for the newer "constrained high" and "progressive high" H.264 profiles to the various GStreamer elements where those are relevant. Those profiles are just a subset of the existing High profile.
Jakub improved the d3d11 desktop duplication source (d3d11desktopdupsrc), which captures the Windows desktop into DirectX 11 textures. He implemented support for following dynamic resolution changes of the desktop, as well as for capturing Windows User Account Control (UAC) prompts.
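On Windows, a capture pipeline built on it can stay entirely in GPU memory from source to sink; a minimal sketch:

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib

    Gst.init(None)

    # Capture the desktop into D3D11 textures and render them with the D3D11
    # video sink, without downloading frames to system memory.
    pipeline = Gst.parse_launch("d3d11desktopdupsrc ! d3d11videosink")

    pipeline.set_state(Gst.State.PLAYING)
    GLib.MainLoop().run()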
I improved the GstAudioAggregator base class used by elements such as audiomixer and audiointerleave; it now emits a QoS message to tell the application whenever it drops incoming buffers because they arrived late.
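Listening for those messages is a matter of watching the pipeline bus; below is a sketch with a simple audiomixer pipeline, keeping in mind that the QoS messages only appear when buffers are actually dropped for arriving late.

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)

    pipeline = Gst.parse_launch(
        "audiotestsrc is-live=true ! audiomixer name=mix ! fakesink sync=true "
        "audiotestsrc is-live=true freq=880 ! mix."
    )
    pipeline.set_state(Gst.State.PLAYING)

    bus = pipeline.get_bus()
    while True:
        msg = bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                                     Gst.MessageType.QOS | Gst.MessageType.ERROR)
        if msg.type == Gst.MessageType.QOS:
            # Posted by the aggregator whenever it had to drop a late buffer.
            live, running_time, stream_time, timestamp, duration = msg.parse_qos()
            print("late audio dropped at running time", running_time)
        else:
            break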
Stéphane fixed the MXF and Matroska demuxers to seek precisely to a frame; this makes it possible to use them as a source for video editing.
Xavier spent quite some time helping with the merge of the GStreamer repositories into a single one. This was an effort of the whole community, making our CI system simpler and generally making life easier for GStreamer developers.
As usual, we have also contributed a large number of bug fixes across the board, but we won’t list them all out here.
Our team of engineers already has a number of contributions planned for the next release. These include a rework of the MPEG PS demuxer for more accurate seeking, improvements to the Wayland support like a GTK3 sink that can take advantage of Wayland's support for hardware video overlays, and support for DRM modifiers to enable higher performance zero-copy between hardware decoders and display.
If you are ready to explore GStreamer 1.20, or have any questions about how to take advantage of its exciting new features to get the maximum performance from your hardware, please do not hesitate to contact us. Collabora's multimedia team is always available to assist you in leveraging or implementing the latest feature releases of GStreamer.
Comments (5)
Jay:
Feb 19, 2022 at 12:00 AM
Much appreciated updates. May I also suggest av1 support?
Olivier Crête:
Feb 19, 2022 at 10:14 PM
GStreamer has had pretty complete AV1 support for a while!
Mark D:
Feb 02, 2023 at 03:15 PM
Hi, What is the syntax for rtp-hdrext/color-space?
I am trying ... ! rtpvp9pay ! 'application/x-rtp,extmap-1=(string)' ! udpsink
I have tried "primaries=9,transfer=18,matrix=9,range=??" (values from according to ITU-T H.273 Tables) totally unsure about the range and chroma sitting.
Caps seem to parse but always "streaming stopped, reason not-negotiated (-4)"
Olivier Crête:
Feb 02, 2023 at 08:32 PM
You need to specify the extension so it knows the value.
"... ! rtpvp9pay ! 'application/x-rtp, extmap-1="http://www.webrtc.org/experiments/rtp-hdrext/color-space"' ! udpsink"
Mark D:
Feb 02, 2023 at 10:37 PM
D'oh! Thank you.