Erik Faye-Lund
October 31, 2018
For the last month or so, I've been playing with a new project during my work at Collabora, and as I've already briefly talked about at XDC 2018, it's about time to talk about it to a wider audience.
Zink is an OpenGL implementation on top of Vulkan. Or to be a bit more specific, Zink is a Mesa Gallium driver that leverages the existing OpenGL implementation in Mesa to provide hardware accelerated OpenGL when only a Vulkan driver is available.
glxgears on Zink
Here's an overview of how this fits into the Mesa architecture, for those unfamiliar with it:
Architectural overview
There are several motivations behind this project, but let's list a few:

- Simplifying the graphics stack
- Lessening the workload of supporting legacy hardware and applications
- Getting some side benefits from Mesa's Gallium architecture
- Helping applications transition to Vulkan

I'll go through each of these points in more detail below.
But there's another, less concrete reason; someone had to do this. I was waiting for someone else to do it before me, but nobody seemed to actually go ahead. At least as long as you don't count solutions that only implement some variation of OpenGL ES (which in my opinion doesn't solve the problem; we need full OpenGL for this to be really valuable).
One problem is that OpenGL is a big API with a lot of legacy stuff that has accumulated since its initial release in 1992. OpenGL is well-established as a requirement for applications and desktop compositors.
But since the very successful release of Vulkan, we now have two main-stream APIs for essentially the same hardware functionality.
It doesn't look like either OpenGL or Vulkan is going away, and the software world is now hard at work implementing Vulkan support everywhere, which is great. But this leads to complexity. So my hope is that we can simplify things here, by only requiring things like desktop compositors to support one API down the road. We're not there yet, though; not all hardware has a Vulkan driver, and some older hardware can't even support it. But at some point in the not too far future, we'll probably get there.
This means there might be a future where OpenGL's role is purely one of legacy application compatibility. Perhaps Zink can help bring that future a bit closer?
The number of drivers to maintain is only growing, and we want the amount of code to maintain for legacy hardware to be as small as possible. And since Vulkan is a requirement already, maybe we can get good enough performance through emulation?
Besides, in the Open Source world, there are even new drivers being written for old hardware, and if the hardware is capable of supporting Vulkan, it could make sense to only support Vulkan "natively", and do OpenGL through Zink.
It all comes down to the economics here. There aren't infinite programmers out there that can maintain every GPU driver forever. But if we can make it easier and cheaper, maybe we can get better driver-support in the long run?
Because Zink is implemented as a Gallium driver in Mesa, there are some interesting side benefits that come "for free". For instance, projects like Gallium Nine or Clover could in theory work on top of Intel's ANV Vulkan driver through Zink. Please note that this hasn't really been tested, though.
It should also be possible to run Zink on top of a closed-source Vulkan driver, and still get proper window system integration. Not that I promote the idea of using a closed-source Vulkan driver.
This might sound a bit strange, but it might be possible to extend Zink in ways where it can act as a cooperation-layer between OpenGL and Vulkan code in the same application.
The thing is, big CAD applications and the like won't realistically rewrite all of their rendering code to Vulkan with a wave of the hand. So if they can, for instance, prototype some Vulkan code inside an OpenGL application, it might be easier for them to figure out whether Vulkan is worth it or not.
Zink currently requires a Vulkan 1.0 implementation, with the following extensions (there's a few more, due to extensions requiring other extensions, but I've decided to omit those for simplicity):
- VK_KHR_maintenance1: This is required for the viewport flipping (see the sketch below). It's also possible to do without this extension, and we have some experimental patches for that. I would certainly love to require as few extensions as possible.
- VK_KHR_external_memory_fd: This is required as a way of getting the rendered result on screen. This isn't technically a hard requirement, as we also have a copy-based approach, but that's almost unusably slow, and I'm not sure if we'll bother keeping it around.

Zink has, to my knowledge, only been tested on Linux. I don't think there are any major reasons why it wouldn't run on any other operating system supporting Vulkan, apart from the fact that some window-system integration code might have to be written.
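To make the VK_KHR_maintenance1 point above a bit more concrete, here's a minimal sketch, assuming nothing beyond the standard Vulkan headers, of how a negative viewport height can be used for the Y-flip. This is not Zink's actual code, and the helper name is made up for illustration:

```c
#include <vulkan/vulkan.h>

/* Illustration only: with VK_KHR_maintenance1 enabled, a negative
 * VkViewport::height flips the Y axis, so an OpenGL-style bottom-left
 * origin viewport can be mapped onto Vulkan's top-left origin. */
static VkViewport
gl_style_viewport(float x, float y, float width, float height)
{
   VkViewport vp = {
      .x        = x,
      .y        = y + height, /* start at the bottom edge... */
      .width    = width,
      .height   = -height,    /* ...and flip upwards */
      .minDepth = 0.0f,
      .maxDepth = 1.0f,
   };
   return vp; /* apply with vkCmdSetViewport(cmdbuf, 0, 1, &vp) */
}
```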
Right now, it's not super-impressive: we implement OpenGL 2.1, and OpenGL ES 1.1 and 2.0 plus some extensions. Please note that the list of extensions might depend on the Vulkan implementation backing this, as we forward capabilities from that.
The list of extensions is too long to include here in a sane way, but here's a link to the output of glxinfo as of today on top of i965.
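As a rough illustration of that capability forwarding (not how Zink actually structures it), a driver can query the Vulkan device features and expose matching GL functionality; the capability struct and the mapping below are invented for the example:

```c
#include <stdbool.h>
#include <vulkan/vulkan.h>

/* Hypothetical capability struct, just to show the idea of deriving the
 * GL feature set from what the Vulkan implementation reports. */
struct gl_caps {
   bool anisotropic_filtering; /* e.g. GL_EXT_texture_filter_anisotropic */
   bool dual_src_blend;        /* e.g. GL_ARB_blend_func_extended */
   bool wide_lines;            /* line widths greater than 1.0 */
};

static struct gl_caps
query_caps(VkPhysicalDevice dev)
{
   VkPhysicalDeviceFeatures feats;
   vkGetPhysicalDeviceFeatures(dev, &feats);

   struct gl_caps caps = {
      .anisotropic_filtering = feats.samplerAnisotropy,
      .dual_src_blend        = feats.dualSrcBlend,
      .wide_lines            = feats.wideLines,
   };
   return caps;
}
```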
Here are some screenshots of applications and games we've tested that render more or less correctly:
OpenArena on Zink
Weston on Zink
Quake 3 on Zink
Extreme Tux Racer on Zink
Yeah, so when I say OpenGL 2.1, I'm ignoring some features that we simply do not support yet:

- glPointSize() is currently not supported. Writing to gl_PointSize from the vertex shader does work (see the sketch below); we need to write some code to plumb the fixed-function point size through the vertex shader to make it work.
- There's no GL_ALPHA_TEST support yet. There's some support code in NIR for this, we just need to start using it. This will depend on control-flow, though.
- glShadeModel(GL_FLAT) isn't supported yet. This isn't particularly hard or anything, but we currently emit the SPIR-V before knowing the drawing-state. We should probably change this. Another alternative is to patch in a flat-decoration on the fly.
- We don't support glPolygonMode(GL_FRONT, ...) and glPolygonMode(GL_BACK, ...). This one is tricky to do correctly, at least if we want to support newer shader stages like geometry and tessellation at the same time. It's also hard to do performantly, even without these shader stages, as we need to draw these primitives in the same order as they were specified but with different primitive types. Luckily, Vulkan can do pretty fast geometry submission, so there might be some hope for a compromise solution, at least. It might also be possible to combine stream-out and a geometry shader or something here if we really end up caring about this use-case.

And most importantly, we are not a conformant OpenGL implementation. I'm not saying we will never be, but as it currently stands, we do not do conformance testing, and as such we don't submit conformance results to Khronos either.
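Coming back to the first item in the list above, here's a minimal sketch of the path that does work today; this is plain GL 2.x usage, nothing Zink-specific is assumed:

```c
#include <GL/gl.h>

/* The fixed-function glPointSize() path isn't wired up yet, but writing
 * gl_PointSize from the vertex shader works. Per standard GL 2.x rules,
 * the shader-written value is only used when the application does
 * glEnable(GL_VERTEX_PROGRAM_POINT_SIZE). */
static const char *point_vs =
   "#version 120\n"
   "attribute vec4 position;\n"
   "void main() {\n"
   "   gl_Position = position;\n"
   "   gl_PointSize = 4.0;\n"
   "}\n";
```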
It's also worth noting that at this point, we tend to care more about applications than theoretical use-cases and synthetic tests. That of course doesn't mean we do not care about correctness at all, it just means that we have plenty of work ahead of us, and the work that gets us most real-world benefit tends to take precedence. If you think otherwise, please send some patches! ;)
One thing should be very clear: a "native" OpenGL driver will always have better performance potential, simply because anything clever we do, they can do as well. So I don't expect to beat any serious OpenGL drivers on performance any time soon.
But the performance loss is already kinda less than I feared, especially since we haven't done anything particularly fancy with performance yet.
I don't yet have any systematic benchmark-numbers, and we currently have some kinda stupid bottlenecks that should be very possible to solve. So I'm reluctant to spend much time on benchmarking until those are fixed. Let's just say that I can play Quake 3 at tolerable frame rates right now ;)
But OK, I will say this: I currently get around 475 FPS in glxgears on top of Zink on my system, whereas the i965 driver gives me around 1750 FPS, so for that particular workload we're at about a quarter of the performance. Don't read too much into those results, though; glxgears isn't a good benchmark, it's just the only somewhat reproducible thing I've run so far, so those are the only numbers I have. I'll certainly be doing some proper benchmarking in the future.
In the end, I suspect that the pipeline-caching is going to be the big hot-spot. There's a lot of state to hash, and finally compare once a hit has been found. We have some decent ideas on how to speed it up, but there's probably going to be some point where we simply can't get it any better.
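To give a feel for where that cost comes from, here's a deliberately simplified sketch of a state-keyed pipeline cache. The structs and helpers are invented for the example, and the real state is of course much larger:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for the real (much larger) bag of render state. */
struct gfx_state {
   uint64_t shader_hashes[2];    /* vertex + fragment */
   uint32_t blend_state;
   uint32_t depth_stencil_state;
   uint32_t rasterizer_state;
   uint32_t primitive_type;
};

struct cache_entry {
   struct gfx_state key;
   void *pipeline;               /* would be a VkPipeline in a real driver */
   struct cache_entry *next;
};

#define NUM_BUCKETS 256
static struct cache_entry *buckets[NUM_BUCKETS];

/* FNV-1a over the whole state struct; this is the "a lot of state to
 * hash" part of the cost. */
static uint32_t
hash_state(const struct gfx_state *s)
{
   const uint8_t *p = (const uint8_t *)s;
   uint32_t h = 2166136261u;
   for (size_t i = 0; i < sizeof(*s); i++)
      h = (h ^ p[i]) * 16777619u;
   return h;
}

/* Look the state up, and only create a new pipeline (expensive in a real
 * driver) on a miss; the memcmp() is the "compare once a hit has been
 * found" part. */
static void *
lookup_or_create(const struct gfx_state *s,
                 void *(*create)(const struct gfx_state *))
{
   uint32_t bucket = hash_state(s) % NUM_BUCKETS;

   for (struct cache_entry *e = buckets[bucket]; e; e = e->next)
      if (!memcmp(&e->key, s, sizeof(*s)))
         return e->pipeline;

   struct cache_entry *e = malloc(sizeof(*e));
   e->key = *s;
   e->pipeline = create(s);
   e->next = buckets[bucket];
   buckets[bucket] = e;
   return e->pipeline;
}
```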
But even then, perhaps we could introduce some OpenGL extension that allows an application to "freeze" the render-state into some objects, similar to Vertex Array Objects, and that way completely bypass this problem for applications willing to do a bit of support-code? The future will tell...
All in all, I'm not too worried about this yet. We're still early in the project, and I don't see any major, impenetrable walls.
Zink is only available as source code at the moment. No distro packages exist yet.
In order to build Zink, you need the following:
The code currently lives in the zink-branch in my Mesa fork.
The first thing you have to do is to clone the repository and build the zink branch. Even though Mesa has an Autotools build system, Zink only supports the Meson build system. Remember to enable the zink Gallium driver (-Dgallium-drivers=zink) when configuring the build.
Install the driver somewhere appropriate, and use the $MESA_LOADER_DRIVER_OVERRIDE environment variable to force the zink driver. From here you should be able to run many OpenGL applications using Zink.
Here's a rough recipe:
$ git clone https://gitlab.freedesktop.org/kusma/mesa.git mesa-zink
Cloning into 'mesa-zink'...
...
Checking out files: 100% (5982/5982), done.
$ cd mesa-zink
$ git checkout zink
Branch 'zink' set up to track remote branch 'zink' from 'origin'.
Switched to a new branch 'zink'
$ meson --prefix=/tmp/zink -Dgallium-drivers=zink build-zink
The Meson build system
...
Found ninja-X.Y.Z at /usr/bin/ninja
$ ninja -C build-zink install
ninja: Entering directory `build-zink'
...
installing /home/kusma/temp/mesa-zink/build-zink/src/gallium/targets/dri/libgallium_dri.so to /tmp/zink/lib64/dri/zink_dri.so
$ LIBGL_DRIVERS_PATH=/tmp/zink/lib64/dri/ MESA_LOADER_DRIVER_OVERRIDE=zink glxgears -info
GL_RENDERER = zink (Intel(R) UHD Graphics 620 (Kabylake GT2))
GL_VERSION = 2.1 Mesa 18.3.0-devel (git-395b12c2d7)
GL_VENDOR = Collabora Ltd
GL_EXTENSIONS = GL_ARB_multisample GL_EXT_abgr ...
Currently, the development happens on #dri-devel on Freenode. Ping me (my handle is kusma) with a link to your branch, and I'll take a look.
Well, I think "forwards" is the only way to move. I'm currently working 1-2 days per week on this at Collabora, so things will keep moving forward on my end. In addition, Dave Airlie seems to have a lot of momentum at the moment as well; he has a work-in-progress branch that hints at GL 3.3 being around the corner!
I also don't think there's any fundamental reason why we shouldn't be able to get to full OpenGL 4.6 eventually.
Besides the features, I also want to try to get this upstream in Mesa in some not-too-distant future. I think we're already beyond the point where Zink is useful.
I also would like to point out that David Airlie of Red Hat has contributed a lot of great patches, greatly advancing Zink from what it was before his help! At this point, he has implemented at least as many features as I have. So this is very much his accomplishment as well.
Visit Erik's blog.
Comments (12)
QwertyChouskie:
Nov 01, 2018 at 05:26 PM
Could you test running SuperTuxKart with this? I would but I'm on Ivy Bridge hardware (and also too lazy to compile :)
I could see this as a way to get semi-decent OpenGL on macOS via MoltenVK.
Erik Faye-Lund:
Nov 02, 2018 at 10:39 AM
I already answered this question elsewhere, but just for completeness: Yeah, SuperTuxKart works fine for me. There's a warning on start-up that the OpenGL version is too low, but the game seems to run just fine.
Someone:
Nov 02, 2018 at 04:29 PM
You said you don't know about any other attempt, what about https://github.com/kbiElude/VKGL?
Erik Faye-Lund:
Nov 03, 2018 at 12:18 AM
Yeah, I'm aware of VKGL, and maybe I should have worded this a bit differently. The problem with VKGL is that it targets OpenGL core profile, which doesn't really solve the problems I'm after.
teknohog:
Nov 04, 2018 at 11:13 AM
That seems to be for OpenGL 3.2 core, which won't be able to run old games using versions 1/2.
OpenGL ditched all that 90s cruft with versions 3.0/3.1 around 2008, for precisely reasons 1 and 2 of the original post. But those goals haven't really been fulfilled, because drivers continue to come with compatibility profiles for old code.
A project such as this should aim for OpenGL 1/2 specifically, so that modern OpenGL drivers could be cleaned up. I'm sure the author is aware of the modern/legacy OpenGL separation, but it wouldn't hurt to make it clearer.
Erik Faye-Lund:
Nov 05, 2018 at 03:15 PM
Zink isn't just about emulating legacy GL, it's about emulating *all* of OpenGL. We're already up to OpenGL 3.0 on some hardware since the blog-post was published.
One of my own personal reasons to work on this (which I didn't cover in as much detail as maybe I should have in the blog post) is supporting GPU emulation in virtual machines with as few high-privilege code paths through the VM as possible. This way we *only* need to virtualize Vulkan (which is an API that was designed with virtualization in mind, unlike OpenGL), and keep the high-privilege code to a minimum. Most of the complexity happens in user-space in the virtual machine. This makes it a lot easier to reason about security.
Don't get me wrong, emulating legacy GL has high priority, as more applications exist for that. I'm just saying we're doing both; luckily both Mesa and Vulkan make this reasonably simple for us ;-)
msm:
Nov 08, 2018 at 01:53 PM
Is it possible to pick a different name? LWN.net reports: Zinc, a kernel cryptography layer. LWN.net reports: Zink, an OpenGL layer for Vulkan. Have people learnt nothing from fuse and FUSE? :-(
Erik Faye-Lund:
Nov 08, 2018 at 03:40 PM
While I'm not married to the name, I would like a better reason than "it's confusing in tech-news" to change it.
Practically speaking, this and Zinc have very little overlap. Zinc is a kernel crypto-layer internal name, and Zink is a Mesa-internal name. To be honest, I'd leave it unnamed if I could, but each driver in Mesa needs a name (and Zink has some meaning; you GALvanize metal by applying a layer of Zinc to it... and the "k" is for VulKan).
And to be honest, I had never heard about Zinc before people brought it up. It's very far removed from what I'm doing, and it didn't show up when I performed some Google searches to look for similarly named things. Maybe that's a result of Google personalized searches, I dunno.
I don't really see this as becoming a problem. Neither of these are likely to have enough main-stream appeal that they end up causing confusion often.
QwertyChouskie:
Nov 08, 2018 at 04:29 PM
I think Zink has plenty of mainstream appeal; it would allow OpenGL on any platform that supports Vulkan. Think e.g. quick ports to next-gen consoles. Consoles usually don't support GL natively (except for the Switch), but there's a pretty good chance of the next-gen consoles supporting Vulkan.
Erik Faye-Lund:
Nov 08, 2018 at 04:46 PM
Well, sure. What I mean is that the name Zink won't really be user-facing to any large degree. Kinda like swrast or llvmpipe in Mesa, most people will care about it under the name "Mesa"; it'll be packaged as a part of Mesa, for instance.
There are of course cases where the driver is important to talk about, but I don't think those cases will tend to be easily confused with a kernel crypto-layer.
Juan:
Jul 28, 2020 at 02:29 AM
Your article is very well written, but I don't want OpenGL to become "legacy".
Keopsys From The Future:
Nov 21, 2021 at 04:54 PM
It's funny to read an article about Zink from 2018, written by its creator. As of today, it runs OpenGL/ES up to their latest versions with great performance, and that is going to get even better with Mesa 22.
The main advantages are well explained already: not having to reinvent the wheel for every piece of hardware, and focusing driver development on Vulkan (which is the future) instead of both. Maybe at some point the seasoned OpenGL driver developers will all spend their time on Zink instead of fixing bolts in many hardware-specific drivers.
Zink is the best way to implement OpenGL now that Vulkan is here to stay: a shiny hardware-agnostic unified driver that delivers the latest and greatest in specifications and performance.