Paweł Wieczorek
August 10, 2023
Collabora's main testing laboratory has grown to automate testing on over 150 devices of about 30 different types. The lab receives job submissions from several CI systems, e.g. KernelCI, MesaCI, and Apertis QA.
Automating validation work is definitely convenient for developers, but how much further can the underlying software (LAVA) scale in terms of supported devices and submission systems? It is crucial to predict possible bottlenecks well before they appear. Once they are identified, what can be done to push these boundaries even further, and how can we track the improvements?
Upstream benchmarks that track the CPU and memory activity of several LAVA subsystems are available, although they are not included in any automated system. They were designed for an older version of LAVA, so they will need an update before they can be included in the current CI pipelines. You can learn more about them in the "Testing a large testing software" talk (starting at 20:10).
Collabora's lab initially used a very basic setup for testing LAVA performance: a docker-compose instance with an imported production database dump, Apache Benchmark to simulate traffic to the server, and pgAdmin for easier PostgreSQL analysis.
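For illustration, a rough Python stand-in for that traffic generation could look like the sketch below; the base URL, endpoint list, and request counts are assumptions, and the actual setup drove the server with Apache Benchmark rather than a script like this:

```python
# Rough Python stand-in for the Apache Benchmark runs used to generate
# traffic against the local docker-compose instance. The base URL, endpoint
# list, and request count are assumptions for illustration.
import statistics
import time

import requests

BASE_URL = "http://localhost:8000/api/v0.2"       # assumed local LAVA instance
ENDPOINTS = ["/jobs/", "/devices/", "/workers/"]  # assumed REST resources
REQUESTS_PER_ENDPOINT = 100


def benchmark(path):
    """Issue repeated GET requests and return per-request wall-clock times."""
    timings = []
    for _ in range(REQUESTS_PER_ENDPOINT):
        start = time.perf_counter()
        requests.get(BASE_URL + path, timeout=30)
        timings.append(time.perf_counter() - start)
    return timings


for path in ENDPOINTS:
    samples = benchmark(path)
    p95 = statistics.quantiles(samples, n=20)[18]
    print(f"{path}: mean={statistics.mean(samples):.3f}s p95={p95:.3f}s")
```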
Running benchmarks in this setup helped pinpoint performance issues and patch them: from optimizing branching logic to introducing lazy pagination in the LAVA REST API.
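As an aside on the pagination change, the general idea behind "lazy" pagination can be sketched in Django REST Framework terms as follows; this is a minimal illustration with assumed class names, not LAVA's actual implementation:

```python
# Sketch of lazy pagination: skip the expensive SELECT COUNT(*) over a huge
# job table by pretending the total is effectively unbounded; clients simply
# request pages until one comes back short. Names here are illustrative.
from django.core.paginator import Paginator
from django.utils.functional import cached_property
from rest_framework.pagination import PageNumberPagination


class LazyPaginator(Paginator):
    @cached_property
    def count(self):
        # Avoid counting the full table; a large constant keeps page
        # validation happy without touching the database.
        return 10 ** 9


class LazyPageNumberPagination(PageNumberPagination):
    django_paginator_class = LazyPaginator
    page_size = 100
```

Skipping the count query matters because, on a production-sized job table, the COUNT(*) alone can dominate the response time of a paginated listing.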
With further performance-focused development in mind, it was necessary to reduce the possibility of regressions, preferably by running per-patch benchmarks. This environment, however, was not very comfortable for prolonged use with multiple patches to verify, so some changes to the CI pipeline were needed. The main components remained the same: loading test data on the server, generating traffic, and comparing results between runs.
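The last of those components, comparing results between runs, could be as simple as the following sketch; the JSON result format, file paths, and the 10% threshold are assumptions for illustration rather than the pipeline's actual format:

```python
# Compare two benchmark result files (baseline vs. patched run) and flag
# endpoints that got noticeably slower. Exit non-zero so a CI job can fail
# on regressions. Assumed input format: {"<endpoint>": <mean seconds>, ...}.
import json
import sys

THRESHOLD = 0.10  # flag anything more than 10% slower than the baseline


def load(path):
    with open(path) as f:
        return json.load(f)


baseline, patched = load(sys.argv[1]), load(sys.argv[2])

regressions = []
for endpoint, base_time in baseline.items():
    new_time = patched.get(endpoint)
    if new_time is not None and new_time > base_time * (1 + THRESHOLD):
        regressions.append((endpoint, base_time, new_time))

for endpoint, base_time, new_time in regressions:
    print(f"REGRESSION {endpoint}: {base_time:.3f}s -> {new_time:.3f}s")

sys.exit(1 if regressions else 0)
```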
As the benchmarks are not part of any project's CI yet, enabling them in Collabora's downstream CI pipeline could prove useful. This pipeline creates an environment very similar to the one described above.
Although this pipeline could easily be extended with a benchmark step, previous efforts showed that it would not address the main concern: guarding upcoming feature patches against performance regressions. It would still be helpful for ensuring that the features developed by Collabora continuously improve LAVA's performance, but let's aim a bit higher.
The main difference between the downstream and upstream CI pipelines is that the upstream one doesn't spin up an actual LAVA instance. The whole testing procedure is performed using separately built CI images, which are later used as the environment for running in-tree tests and source code analysis, and for building packages and documentation.
If a specifically-crafted environment is needed (e.g. a pre-generated database to create a specific load on the server), an additional ci-image is required. This way the whole testing procedure can be much quicker, since it excludes the database generation time (and even the import time of the pre-generated database!).
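A minimal sketch of that idea is shown below, assuming a docker-compose based scratch instance with default database and role names; the service name, dump path, and invocation details are placeholders rather than the actual pipeline:

```python
# Minimal sketch: dump a scratch LAVA database once, so the dump can be baked
# into an additional ci-image and CI jobs avoid regenerating the database at
# test time. Service, database, and role names are assumed defaults.
import subprocess

DB_SERVICE = "db"                        # assumed docker-compose service name
DUMP_PATH = "ci-image/lava-bench.dump"   # artifact copied into the ci-image


def dump_database() -> None:
    """Dump the pre-generated database in PostgreSQL custom format."""
    with open(DUMP_PATH, "wb") as out:
        subprocess.run(
            ["docker", "compose", "exec", "-T", DB_SERVICE,
             "pg_dump", "-U", "lavaserver", "-Fc", "lavaserver"],
            stdout=out,
            check=True,
        )


if __name__ == "__main__":
    dump_database()
```

The dump only has to be produced once; to also avoid the import cost at test time, it can be restored while building the ci-image, so the image ships with a ready-to-use database.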
In the next parts, the following topics will be discussed: