Italo Nicola
December 15, 2022
Machine learning is finding its way into ever more applications, and it's important to have FOSS options to accelerate such workloads. Unfortunately, the current options in this space are often not appealing, causing users to opt for vendor-specific alternatives with downstream kernels and userspace. An example of this is VeriSilicon's VIPNano-QI NPU IP, which is used for ML workloads but isn't supported upstream.
This post will give a brief overview of the state of FOSS ML options and announce some work that we are doing to support OpenCL on the Etnaviv driver.
Until recently, there were no good FOSS options for executing machine learning workloads on accelerators. The most realistic option was to use the TensorFlow Lite GPU delegate with the OpenGL backend on a Mesa driver with good support for compute shaders as specified in OpenGL ES 3.1.
The main problem with this approach was the dependency on OpenGL ES 3.1, which isn't always present in ML accelerators. Even when it is, the performance ceiling for GL ES 3.1 will be lower than what is possible with OpenCL, given its constraints.
There's also the option of using ONNX Runtime with oneDNN on top of a Mesa driver with good OpenCL support, or using the TFLite GPU delegate's OpenCL backend with a CL-capable Mesa driver. But the OpenCL support we had in Mesa up until September 2022 was not comprehensive, to say the least; it mostly only worked on r600.
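For illustration, here is roughly what the TFLite route looks like from the application side. This is a minimal sketch of our own using TFLite's C API and the GPU delegate's TFLITE_GPU_EXPERIMENTAL_FLAGS_CL_ONLY flag to request the OpenCL backend; the model path is a placeholder and error handling is trimmed.

#include <stdio.h>
#include "tensorflow/lite/c/c_api.h"
#include "tensorflow/lite/delegates/gpu/delegate.h"

int main(void) {
    /* "model.tflite" is a placeholder path. */
    TfLiteModel *model = TfLiteModelCreateFromFile("model.tflite");
    if (!model) return 1;

    /* Request the delegate's OpenCL backend instead of letting it
     * fall back to OpenGL ES. */
    TfLiteGpuDelegateOptionsV2 gpu_opts = TfLiteGpuDelegateOptionsV2Default();
    gpu_opts.experimental_flags = TFLITE_GPU_EXPERIMENTAL_FLAGS_CL_ONLY;
    TfLiteDelegate *delegate = TfLiteGpuDelegateV2Create(&gpu_opts);

    TfLiteInterpreterOptions *opts = TfLiteInterpreterOptionsCreate();
    TfLiteInterpreterOptionsAddDelegate(opts, delegate);

    TfLiteInterpreter *interp = TfLiteInterpreterCreate(model, opts);
    if (interp &&
        TfLiteInterpreterAllocateTensors(interp) == kTfLiteOk &&
        TfLiteInterpreterInvoke(interp) == kTfLiteOk)
        printf("inference ran through the GPU delegate\n");

    if (interp) TfLiteInterpreterDelete(interp);
    TfLiteInterpreterOptionsDelete(opts);
    TfLiteGpuDelegateV2Delete(delegate);
    TfLiteModelDelete(model);
    return 0;
}

Left at its defaults, the delegate picks a backend itself, preferring OpenCL and falling back to OpenGL ES, which is why driver-side OpenCL support raises the performance ceiling.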
Mesa has had OpenCL support through Clover since 2012, but it only supported AMD hardware and it wasn't being very actively developed. One of the main limitations it had was the reliance on a specific flavor of LLVM IR. In August 2019, Karol Herbst added support for NIR, allowing other drivers that don't consume LLVM IR to potentially use Clover.
Sometime later, Karol started working on another avenue to support OpenCL on Mesa, named Rusticl, which was eventually merged on September 12th, 2022. Like Clover, Rusticl implements OpenCL as a gallium frontend, but the similarities stop there. Rusticl is written in Rust, is based on SPIR-V and NIR, aims to behave very closely to st/mesa, supports images, is OpenCL 3.0 conformant on Intel 12th-Gen, and also passes the 3.0 tests on radeonsi.
The addition of Rusticl was one step towards getting more gallium drivers to support OpenCL, which comes in handy for our goal of adding OpenCL support to Etnaviv.
One of the IPs present in several SoCs on the market is VeriSilicon's VIPNano-QI NPU IP, which is used to accelerate neural networks and can be found, for instance, in the Khadas VIM3. Thanks to the Etnaviv community, in particular to Christian Gmeiner and Lucas Stach, we have some information on this IP, and it turns out to be very closely related to other Vivante GPU IPs such as the GC7000.
With this in mind, we chose as our goal for this project to run a TFLite model with Etnaviv on the VIM3 NPU using OpenCL. We are specifically targeting the features commonly used by machine learning workloads, not aiming for full OpenCL conformance yet.
As the starting point for this project, the etnaviv driver didn't support GL ES 3.0, and more specifically, it didn't support the compute extensions that are needed to get OpenCL working, be it through Clover or Rusticl. Furthermore, there was no support for the specific NPU present on the VIM3, the VIPNano-QI.
On the kernel side, we were using a downstream Khadas VIM3 Linux kernel with a few patches to help develop the userspace driver. Tomeu added the DT node and etnaviv hardware database entry for the VIPNano, allowing us to switch back and forth between using the galcore driver and etnaviv, which is useful for debugging.
Tomeu also implemented the gallium compute APIs and the missing NIR intrinsics, to the point where we could actually emit kernel jobs and, with some luck, read results back from the VIM3 NPU.
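A host program along the following lines is enough to exercise that path end to end. This is a minimal sketch of our own (not code from the driver's tests), with error handling omitted for brevity.

#define CL_TARGET_OPENCL_VERSION 300
#include <stdio.h>
#include <CL/cl.h>

/* A trivial kernel: each work item writes twice its global ID. */
static const char *src =
    "__kernel void fill(__global uint *out) {\n"
    "    size_t i = get_global_id(0);\n"
    "    out[i] = i * 2;\n"
    "}\n";

int main(void) {
    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueueWithProperties(ctx, dev, NULL, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel kern = clCreateKernel(prog, "fill", NULL);

    cl_uint out[16] = {0};
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(out), NULL, NULL);
    clSetKernelArg(kern, 0, sizeof(buf), &buf);

    /* Emit the kernel job and read the results back. */
    size_t gws = 16;
    clEnqueueNDRangeKernel(q, kern, 1, NULL, &gws, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(out), out, 0, NULL, NULL);

    for (int i = 0; i < 16; i++)
        printf("%u ", out[i]);   /* expect: 0 2 4 ... 30 */
    printf("\n");
    return 0;
}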
Using information and tools from the etnaviv RE repositories, we could then continue from here by comparing cmdstream and shader assembly against the galcore driver.
At this point, we were still using Clover because Rusticl hadn't been merged. Once it was merged, we switched to using it, which was mostly straightforward, aside from a few changes here and there to support devices with 32-bit address spaces.
We also had to make compiler changes, because OpenCL is stricter than GL ES 2.0 about what the compiler needs to support. These differences include requiring support for more bit sizes and vector sizes, stricter alignment rules, and less strict control flow; the kernel sketch below exercises a few of these. Besides this, we also ended up adding support for some missing operations, as well as fixing quite a few bugs that appeared along the way.
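As an illustration, here is a small OpenCL C kernel of our own making (not from any test suite) that leans on several of those features, of the kind quantized ML workloads tend to generate:

/* Mixes 8-bit types, vector types, and widening conversions, all of
 * which the compiler must handle for OpenCL but not for GL ES 2.0. */
__kernel void dot8(__global const char4 *a,
                   __global const char4 *b,
                   __global int *out)
{
    size_t i = get_global_id(0);
    /* convert_int4() widens the 8-bit lanes before the multiply, the
     * kind of small-bitsize arithmetic quantized models rely on. */
    int4 prod = convert_int4(a[i]) * convert_int4(b[i]);
    out[i] = prod.x + prod.y + prod.z + prod.w;
}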
Lastly, Tomeu started adding continuous integration with piglit CL, so that we can prevent regressing Etnaviv OpenCL in the future.
As of November 30th, 2022, etnaviv with our patches gives these results on piglit's CL tests:
2022-11-30 10:41:45.189086: Pass: 1459, ExpectedFail: 775, Skip: 390, Flake: 9, Duration: 8:13, Remaining: 0
We're also passing a fraction of the OpenCL CTS tests, but we haven't tried a full run yet.
Running clinfo results in:
Number of platforms                               1
  Platform Name                                   rusticl
  Platform Vendor                                 Mesa/X.org
  Platform Version                                OpenCL 3.0
  Platform Profile                                FULL_PROFILE
  Platform Extensions                             cl_khr_icd
  Platform Extensions with Version                cl_khr_icd 0x400000 (1.0.0)
  Platform Numeric Version                        0xc00000 (3.0.0)
  Platform Extensions function suffix             MESA
  Platform Host timer resolution                  0ns

  Platform Name                                   rusticl
Number of devices                                 1
  Device Name                                     Vivante GC8000 rev 7120
  Device Vendor                                   Vivante
  Device Vendor ID                                0
  Device Version                                  OpenCL 3.0
  Device Numeric Version                          0xc00000 (3.0.0)
  Driver Version                                  23.0.0-devel (git-aadbe80383)
  Device OpenCL C Version                         OpenCL C 1.2
  Device OpenCL C all versions                    OpenCL C 0xc00000 (3.0.0)
                                                  OpenCL C 0x402000 (1.2.0)
                                                  OpenCL C 0x401000 (1.1.0)
                                                  OpenCL C 0x400000 (1.0.0)
  Device OpenCL C features                        (n/a)
  Latest conformance test passed                  v0000-01-01-00
  Device Type                                     GPU
  Device Profile                                  EMBEDDED_PROFILE
  Device Available                                Yes
  Compiler Available                              Yes
  Linker Available                                Yes
  Max compute units                               9999
  Max clock frequency                             800MHz
  Device Partition                                (core)
    Max number of sub-devices                     0
    Supported partition types                     None
    Supported affinity domains                    (n/a)
  Max work item dimensions                        3
  Max work item sizes                             256x256x256
  Max work group size                             256
  Preferred work group size multiple (device)     4
  Preferred work group size multiple (kernel)     4
  Max sub-groups per work group                   0
  Preferred / native vector sizes
    char                                          1 / 1
    short                                         1 / 1
    int                                           1 / 1
    long                                          1 / 1
    half                                          0 / 0 (n/a)
    float                                         1 / 1
    double                                        0 / 0 (n/a)
  Half-precision Floating-point support           (n/a)
  Single-precision Floating-point support         (core)
    Denormals                                     No
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 No
    Round to infinity                             No
    IEEE754-2008 fused multiply-add               No
    Support is emulated in software               No
    Correctly-rounded divide and sqrt operations  No
  Double-precision Floating-point support         (n/a)
  Address bits                                    32, Little-Endian
  Global memory size                              536870912 (512MiB)
  Error Correction support                        No
  Max memory allocation                           536870912 (512MiB)
  Unified memory for Host and Device              Yes
  Shared Virtual Memory (SVM) capabilities        (core)
    Coarse-grained buffer sharing                 No
    Fine-grained buffer sharing                   No
    Fine-grained system sharing                   No
    Atomics                                       No
  Minimum alignment for any data type             128 bytes
  Alignment of base address                       4096 bits (512 bytes)
  Preferred alignment for atomics
    SVM                                           0 bytes
    Global                                        0 bytes
    Local                                         0 bytes
  Atomic memory capabilities                      relaxed, work-group scope
  Atomic fence capabilities                       relaxed, acquire/release, work-group scope
  Max size for global variable                    0
  Preferred total size of global vars             0
  Global Memory cache type                        None
  Image support                                   No
  Pipe support                                    No
  Max number of pipe args                         0
  Max active pipe reservations                    0
  Max pipe packet size                            0
  Local memory type                               Global
  Local memory size                               32768 (32KiB)
  Max number of constant args                     1024
  Max constant buffer size                        134217728 (128MiB)
  Generic address space support                   No
  Max size of kernel argument                     4096 (4KiB)
  Queue properties (on host)
    Out-of-order execution                        No
    Profiling                                     Yes
  Device enqueue capabilities                     (n/a)
  Queue properties (on device)
    Out-of-order execution                        No
    Profiling                                     No
    Preferred size                                0
    Max size                                      0
  Max queues on device                            0
  Max events on device                            0
  Prefer user sync for interop                    Yes
  Profiling timer resolution                      0ns
  Execution capabilities
    Run OpenCL kernels                            Yes
    Run native kernels                            No
    Non-uniform work-groups                       No
    Work-group collective functions               No
    Sub-group independent forward progress        No
  IL version                                      (n/a)
  ILs with version                                (n/a)
  printf() buffer size                            1048576 (1024KiB)
  Built-in kernels                                (n/a)
  Built-in kernels with version                   (n/a)
  Device Extensions                               cl_khr_byte_addressable_store cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics
  Device Extensions with Version                  cl_khr_byte_addressable_store 0x400000 (1.0.0)
                                                  cl_khr_global_int32_base_atomics 0x400000 (1.0.0)
                                                  cl_khr_global_int32_extended_atomics 0x400000 (1.0.0)
                                                  cl_khr_local_int32_base_atomics 0x400000 (1.0.0)
                                                  cl_khr_local_int32_extended_atomics 0x400000 (1.0.0)

NULL platform behavior
  clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...)  No platform
  clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...)   No platform
  clCreateContext(NULL, ...) [default]            No platform
  clCreateContext(NULL, ...) [other]              Success [MESA]
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT)      Success (1)
    Platform Name                                 rusticl
    Device Name                                   Vivante GC8000 rev 7120
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU)          No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU)          Success (1)
    Platform Name                                 rusticl
    Device Name                                   Vivante GC8000 rev 7120
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM)       No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL)          Success (1)
    Platform Name                                 rusticl
    Device Name                                   Vivante GC8000 rev 7120
In terms of where we are right now: a driver that previously had no compute capabilities is running CL kernels and some OpenCL CTS tests on the VIM3 NPU. This means the work done here could also serve as the basis for implementing GL ES 3.1 compute shaders on Etnaviv.
In the future, we plan to add support for EVIS instructions and to make use of the hardware's SRAM to get better performance on ML workloads. We are also planning to support images and fp16, which are used in many TFLite ML models.
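To give an idea of what the fp16 side of that involves, here is an illustrative OpenCL C sketch of our own (not code from the driver or from any model). Kernels like this require the device to expose the cl_khr_fp16 extension, which is part of what the planned work would enable.

/* Illustrative fp16 kernel of the kind TFLite models commonly reduce to.
 * Requires the cl_khr_fp16 extension on the device. */
#pragma OPENCL EXTENSION cl_khr_fp16 : enable

__kernel void scale_half(__global const half *in,
                         __global half *out,
                         const float factor)
{
    size_t i = get_global_id(0);
    /* Arithmetic on half values is only legal with cl_khr_fp16 enabled. */
    out[i] = in[i] * (half)factor;
}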
For now, most of this work is not yet merged upstream, but with a couple more iterations on the patches and on CI support, we should be able to upstream it soon. In the meantime, there is a Mesa merge request and a kernel patch series if you're curious to follow the development.