
The state of GFX virtualization using virglrenderer


Gert Wollny
January 15, 2025


In this blog post Corentin Noël, Dmitrii Osipenko, Igor Torrente, and Gert Wollny look at the latest updates around the different approaches to GFX virtualization with virglrenderer.

GFX virtualization aims at providing support for hardware accelerated 3D graphics in virtual machines. Unlike GPU-passthrough, with GFX virtualization the host and all VM guests can access the host GPU simultaneously.

Vulkan and OpenGL are supported by virglrenderer using various approaches: Venus offers virtualized support for Vulkan, while OpenGL can use VirGL or Zink, the latter of which translates OpenGL to Vulkan to be handled by Venus. These approaches have the advantage that the guest OS doesn't need to provide drivers specific to the host hardware. Instead, Mesa 3D provides the virgl and zink gallium drivers for OpenGL, and the Vulkan driver named virtio, also known as Venus.

The requirements for the guest OS are that it runs a kernel with VirtIO enabled and a Mesa 3D build that includes the respective drivers.
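As a quick sanity check inside the guest, the commands below can confirm these prerequisites are met. This is only a sketch: it assumes a Linux guest with the mesa-utils and vulkan-tools packages installed, and the fallback messages are ours, not standard tool output.

```shell
# Is the virtio-gpu kernel driver active in the guest?
lsmod 2>/dev/null | grep virtio_gpu || echo "virtio_gpu not loaded"

# Which Mesa driver backs the OpenGL context? Expect "virgl" or "zink".
glxinfo -B 2>/dev/null | grep "OpenGL renderer" || echo "glxinfo not available"

# Is the Venus Vulkan driver ("virtio") present?
vulkaninfo --summary 2>/dev/null | grep -i "driverName" || echo "vulkaninfo not available"
```

Each check degrades gracefully, so the script can also be run on hosts or guests where parts of the stack are missing.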

As an alternative, there are relatively new Virtual DRM contexts (vDRM) that expose a direct interface to the host hardware. Consequently, in this case the guest must use hardware specific GFX drivers.

Below we will discuss the different approaches and their availability in detail.

Venus

Venus can be seen as a transport layer for the Vulkan API: it forwards Vulkan calls to the host driver and delivers the results back to the application. Forwarding can often be done with very little overhead to map between resources as seen on the host and in the guest; for example, shaders are simply passed as SPIR-V binaries and can be consumed directly by the host driver. As a result, Venus is a rather thin layer, and an application can often reach performance close to running directly on the host. The main factor limiting performance is usually host-guest synchronization.

Figure: GFX virtualization using Venus

Currently, Venus is only available when the virtual machine manager (VMM) supports blob resources. As of now this is the case with CrosVM; support in Qemu is tagged for stable. In addition, Venus requires support for the VK_EXT_image_drm_format_modifier extension, which is not available on pre-GFX9 AMD GPUs. We still have some issues with the NVIDIA proprietary driver (#524), which are expected to be addressed by NVIDIA in future versions of their driver. Venus is the only virglrenderer context type that runs in an isolated process on the host for hardened security and reliability, i.e. Vulkan crashing on the host won't take down the whole VMM.
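For illustration, a Venus-enabled Qemu invocation could look like the sketch below. Treat the option names as assumptions to verify against your Qemu version (the venus=, blob=, and hostmem= device properties follow the patch series adding Venus support), and guest.qcow2 is a placeholder image name.

```shell
# Launch a KVM guest with a Venus-capable virtio-gpu device.
# Blob resources require a shareable memory backend (memfd).
qemu-system-x86_64 \
  -machine q35,accel=kvm,memory-backend=mem0 \
  -object memory-backend-memfd,id=mem0,size=4G \
  -cpu host -smp 4 -m 4G \
  -device virtio-vga-gl,hostmem=4G,blob=true,venus=true \
  -display sdl,gl=on \
  -drive file=guest.qcow2,format=qcow2,if=virtio
```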

Venus has supported all extensions required by DXVK and Zink for a considerable time, and was tagged stable in May 2023.

As of the beginning of 2025, Venus supports Vulkan API versions up to 1.3.

VirGL

VirGL implements support for OpenGL. Unlike Venus, it uses an approach where the guest emits commands and resource-handling calls that need considerable extra work to run on the host. For example, shaders are compiled and linked in the guest, emitted to the host in an intermediate format (TGSI), and then translated to GLSL, so the host driver has to compile and link the shaders again. In essence, VirGL requires most work to be done twice, once in the guest and once on the host, significantly impacting performance. In addition, guest-host communication is serialized across all guest OpenGL applications, so the host receives a single command stream that is decoded in a single thread. Because of this, running more than one OpenGL application in the guest results in significantly lower performance for all OpenGL applications involved.

Figure: GFX virtualization using VirGL

There are, however, some benefits. Given its architecture, certain features that are not supported by the host driver can be emulated, resulting in a guest that is able to run most OpenGL applications even though the host may only support GLES.

In its current state, VirGL in Qemu supports OpenGL 4.3 and GLES 3.2 when the host supports them; if the host only supports GLES 3.2, a few rarely used OpenGL features will not work correctly. VirGL can support up to OpenGL 4.6, but OpenGL 4.4 and above require that the VMM supports blob resources, so the same limitations as with Venus apply. In summary, VirGL is essentially feature complete; the current focus is on fixing bugs, with a special emphasis on fuzzing to uncover remaining security issues.

Zink/Venus as alternative to VirGL

As an alternative to VirGL, one can use Zink+Venus to provide OpenGL support in the VM guest. This removes some of VirGL's limitations: for instance, running multiple OpenGL applications at the same time without a severe loss of performance becomes possible. Work is currently ongoing to make Zink perform at least as well as VirGL when running just one OpenGL application. This is already the case for many workloads, but some bugs still need fixing, and in some cases, especially in the Zink-Venus interaction, performance degrades severely. Fixing these issues continues apace, especially for Zink+Venus with Sommelier; some issues remain when running Weston on Zink+Venus.
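To experiment with this in a guest where Venus already works, Mesa's standard loader override can force OpenGL applications onto Zink. A minimal sketch (MESA_LOADER_DRIVER_OVERRIDE is the regular Mesa environment variable; the fallback messages are ours):

```shell
# Route OpenGL through the zink gallium driver, which renders
# via the guest's Vulkan driver (Venus) underneath.
export MESA_LOADER_DRIVER_OVERRIDE=zink

# The renderer string should now mention zink.
glxinfo -B 2>/dev/null | grep "OpenGL renderer" || echo "glxinfo not available"

# Run a benchmark through the Zink+Venus stack.
command -v glmark2 >/dev/null && glmark2 || echo "glmark2 not installed"
```

Unsetting the variable returns applications to the default driver (virgl in a VirGL guest).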

Virtual DRM context

Virtual DRM context (vDRM) is a mediated method of Linux graphics virtualization which operates on the level of the Linux kernel driver UAPI, making a host GPU appear in a guest like a native one.

In comparison to VirGL and Venus, the main advantages of mediating lower-level graphics APIs are better performance, lower resource usage, and lower code complexity. These all come at the price of reduced portability: a new vDRM driver needs to be created for every GPU driver, and supporting new GPUs may require updating the vDRM drivers. In practice, this disadvantage is minor because the number of GPU vendors is limited.

Figure: GFX virtualization using vDRM

Since vDRM is a much thinner layer than VirGL and Venus, for the majority of applications it is able to achieve native GPU performance, where VirGL and Venus may struggle to overcome the expensive host/guest synchronizations mandated by the OpenGL and Vulkan APIs. The main performance obstacle observed for vDRM is guest memory management; in particular, mapping GPU memory in a VM takes much more time than on the host. While vDRM uses the same optimization tricks as VirGL and Venus to reduce memory-management overhead, certain applications may still see a loss in performance.

As of today, at the beginning of 2025, vDRM is partially supported by crosvm. With crosvm you'll be using Sommelier, because KMS display isn't yet supported for vDRM. Work on adding vDRM support to Qemu is in progress as well.
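For orientation, a crosvm invocation enabling a DRM native context might look roughly like the sketch below. The flag syntax changes between crosvm versions, and the kernel image, rootfs, socket path, and context-type list here are placeholders, so consult the crosvm documentation for the exact parameters.

```shell
# Rough sketch: boot a guest with a virtio-gpu device that offers
# DRM native contexts, with Sommelier forwarding over virtio-wayland.
crosvm run \
  --gpu "context-types=cross-domain:drm" \
  --wayland-sock "$XDG_RUNTIME_DIR/wayland-0" \
  --rwdisk rootfs.img \
  -p "root=/dev/vda" \
  vmlinux
```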

Known problems with regards to supporting blob memory

Because of the way KVM and TTM interact within the Linux kernel, blob memory is not supported by all drivers. The patches that resolve this issue are part of the upcoming Linux kernel version 6.13.

Summary

With VirGL, Venus, and vDRM, virglrenderer offers three approaches to obtain access to accelerated GFX in a virtual machine. VirGL offers only OpenGL support and has its performance limitations, especially if one wants to run multiple concurrent OpenGL applications. Nevertheless, it has the advantage that it only requires host support for OpenGL, enabling it to run on older hardware that doesn't support Vulkan. Venus on the other hand provides all the advantages of Vulkan and OpenGL support (via Zink), but it requires host support for Vulkan, and certain aspects like synchronization still need some work to become fully stable.

Neither VirGL nor Venus requires a guest driver that is aware of the specific host hardware; they simply query the host features through the public OpenGL or Vulkan interface of the host driver and offer a corresponding level of support in the guest.

Virtual DRM context (vDRM) on the other hand requires a driver in the guest OS that is tailored to the host hardware. While work on vDRM is still in progress it is expected to offer the best performance.

Conclusion

This article gives only a bird's-eye view of the possibilities of GFX virtualization with virglrenderer. Once the missing pieces for Venus and vDRM are merged, we will follow up with an article on how to run these three variants using Qemu and CrosVM, and also provide and discuss some performance numbers.

Comments (14)

  1. Alex:
    Jan 15, 2025 at 11:43 PM

    Hi, thank you for this thorough explanation! I am curious though, how does gfxstream fit into all of this? It looks like a Venus alternative but maybe it's more to do with vDRM? Or maybe it's more Android/Chromium-specific and tailored for that ecosystem.

    Cheers


    1. Dmitrii Osipenko:
      Jan 16, 2025 at 02:38 PM

      Hi, Alex. Gfxstream is an alternative to the entire virglrenderer. The alternative to virglrenderer's Venus would be gfxstream's Magma. Gfxstream is indeed tailored towards the Android ecosystem, but you can get it running on a usual Linux distro with some effort. During the past year, basic gfxstream support landed in QEMU and Mesa, and Linux distros began packaging gfxstream. QEMU-based Android emulators are the main area of interest for supporting gfxstream upstream today, while for everything else virglrenderer will likely be a better choice.


  2. EliasOfWaffle:
    Jan 21, 2025 at 10:19 PM

    Perfect explanation! Sorry, but is there a possibility, instead of using Sommelier, to use VKMS with a Zink kmsro, or just VKMS with radeonsi?


    1. Dmitrii Osipenko:
      Mar 03, 2025 at 03:32 PM

      Hi, I assume you're asking about using VKMS in the guest instead of VirtIO-GPU. That is not possible; VirtIO-GPU is the core part and is not replaceable.

      If the question is about using VKMS on the host side for running the VMM on a headless machine, then the answer is that it likely won't work easily, and you may not need VKMS for that. On the host side it would be either a dmabuf or a GL texture that is displayed using methods supported by the VMM. In the case of Sommelier, it would be a Wayland buffer that can be displayed remotely over the network, etc.

      If the question is about something else, then please try to rephrase and expand it.


      1. EliasOfWaffle:
        Mar 07, 2025 at 02:59 PM

        Sorry, it's because I'm not a native speaker, but thanks for the reply. My question is whether it is possible to use the virtio-gpu DRM native context just for the DRM context in the guest, and to use a render-only node with Zink or radeonsi via kmsro (the way Panfrost uses a render-node context in the gallium driver together with a KMS/DRM display driver), but instead of a hardware DRM driver, try to use VKMS to provide support for guest-side compositors like Mutter, instead of using Sommelier or a specific shared X11 proxy implementation. Would that be possible?


        1. Dmitrii Osipenko:
          Mar 10, 2025 at 01:20 PM

          The question is still unclear. Please describe the problem you want to solve and what you want to achieve. Do you want to use kmsro on the host or on the guest? And why? On the guest, kmsro doesn't support virtio-gpu, at least today, so it won't work, if that's the question.


  3. libfreeboys:
    Feb 26, 2025 at 02:43 AM

    > Because of this, running more than one OpenGL application in the guest results in a significantly lower performance for all OpenGL applications involved

    I started glmark2 in an Ubuntu guest virtual machine. A few seconds later, I opened a new terminal in the guest and ran another glmark2, keeping them running simultaneously. What puzzles me is that I didn't observe a significant drop in glmark2 scores.

    My qemu command:

    ```
    qemu-system-x86_64 -machine q35,accel=kvm -cpu host -smp 4 -m 4G -drive file=ubuntu24.10.qcow2,format=qcow2,if=virtio -netdev user,id=net0 -device virtio-net-pci,netdev=net0 -device virtio-vga-gl -display sdl,gl=on
    ```

    Guest result:
    ```
    ubuntu@ubuntu:~$glmark2
    =======================================================
    glmark2 2023.01
    =======================================================
    OpenGL Information
    GL_VENDOR: Mesa
    GL_RENDERER: virgl (Mesa Intel(R) UHD Graphics 630 (CFL GT2))
    GL_VERSION: 4.3 (Compatibility Profile) Mesa 25.0.0-devel (git-a994ef4158)
    Surface Config: buf=32 r=8 g=8 b=8 a=8 depth=32 stencil=0 samples=0
    Surface Size: 800x600 windowed
    =======================================================
    [build] use-vbo=false: FPS: 273 FrameTime: 3.668 ms
    [build] use-vbo=true: FPS: 250 FrameTime: 4.000 ms
    [texture] texture-filter=nearest: FPS: 236 FrameTime: 4.253 ms
    [texture] texture-filter=linear: FPS: 238 FrameTime: 4.205 ms
    [texture] texture-filter=mipmap: FPS: 340 FrameTime: 2.948 ms
    [shading] shading=gouraud: FPS: 234 FrameTime: 4.280 ms
    [shading] shading=blinn-phong-inf: FPS: 245 FrameTime: 4.091 ms
    [shading] shading=phong: FPS: 226 FrameTime: 4.433 ms
    [shading] shading=cel: FPS: 233 FrameTime: 4.302 ms
    [bump] bump-render=high-poly: FPS: 213 FrameTime: 4.709 ms
    [bump] bump-render=normals: FPS: 270 FrameTime: 3.711 ms
    =======================================================
    glmark2 Score: 249
    =======================================================
    ubuntu@ubuntu:~$
    ```

    Another terminal:

    ```
    ubuntu@ubuntu:~$glmark2
    =======================================================
    glmark2 2023.01
    =======================================================
    OpenGL Information
    GL_VENDOR: Mesa
    GL_RENDERER: virgl (Mesa Intel(R) UHD Graphics 630 (CFL GT2))
    GL_VERSION: 4.3 (Compatibility Profile) Mesa 25.0.0-devel (git-a994ef4158)
    Surface Config: buf=32 r=8 g=8 b=8 a=8 depth=32 stencil=0 samples=0
    Surface Size: 800x600 windowed
    =======================================================
    [build] use-vbo=false: FPS: 292 FrameTime: 3.427 ms
    [build] use-vbo=true: FPS: 210 FrameTime: 4.772 ms
    [texture] texture-filter=nearest: FPS: 228 FrameTime: 4.390 ms
    [texture] texture-filter=linear: FPS: 205 FrameTime: 4.893 ms
    [texture] texture-filter=mipmap: FPS: 224 FrameTime: 4.469 ms
    [shading] shading=gouraud: FPS: 240 FrameTime: 4.169 ms
    =======================================================
    glmark2 Score: 232
    =======================================================
    ```


    1. Dmitrii Osipenko:
      Mar 03, 2025 at 03:40 PM

      Hi, your glmark2 score is rather low; it should likely be several times higher. First, run glmark2 natively on the host and see what score you get. In a VM, you should get a glmark2 score comparable to the host's. Most likely your glmark2 is CPU-bounded on the guest for whatever reason, and the problem is unrelated to VirGL.


      1. libfreeboys:
        Mar 04, 2025 at 09:04 AM

        I don't quite understand what you mean by "CPU-bounded." On the host, glmark2 scores over 3000, and the virtual machine shows that VirGL is being used, but the score is only around 200. The Ubuntu virtual machine isn't running anything else.

        When I run glmark2 --off-screen in the virtual machine, the score reaches over 2000. Additionally, when I open two terminals and run glmark2 --off-screen simultaneously, both scores remain above 2000 without any noticeable drop.


        1. Dmitrii Osipenko:
          Mar 06, 2025 at 02:45 PM

          VirGL is fine if offscreen rendering is fast. It could be a problem with QEMU's display output. Try updating QEMU to the latest version, and create an issue if you still have the problem.


  4. Adel:
    Mar 04, 2025 at 01:12 PM

    Maybe a stupid question, but was there any talks about making a Metal backend for virglrenderer? Or generally speaking is there any movement on making Windows and macOS guests work with VirGL?


    1. Dmitrii Osipenko:
      Mar 06, 2025 at 02:50 PM

      Hi, I'm not aware of anyone working on macOS/Metal support. For Windows, please see https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/24223


  5. Kelvin Hu:
    Mar 10, 2025 at 04:51 AM

    Is vDRM the same as Virtio-GPU Native Context or are they separate projects/ideas?


    1. Dmitrii Osipenko:
      Mar 10, 2025 at 02:15 PM

      Hi, vDRM is the same as Virtio-GPU Native Context. There are multiple variants of the native context naming; vDRM is one of them, and sometimes you may see the VNC variant (virtual native context). It depends on the person you're talking to; everyone has their own naming preference.


