
Pushing testing laboratory performance limits by benchmarking LAVA - Part 1

Paweł Wieczorek
August 10, 2023

Collabora's main testing laboratory has grown to automate testing on over 150 devices of about 30 different types. The lab receives job submissions from several CI systems, e.g. KernelCI, MesaCI, and Apertis QA.

Automating validation work is definitely convenient for developers, but how far can the underlying software (LAVA) scale in terms of supported devices and submission systems? It is crucial to predict possible bottlenecks well before they appear. Once they are identified, what can be done to push these boundaries even further, and how can we track these improvements?

Previous efforts

Although they are not included in any automated system, upstream benchmarks are available that track the CPU and memory activity of several LAVA subsystems. They were designed for an older version of the LAVA software, though, so an update will be necessary before including them in the current CI pipelines. You can learn more about them in the "Testing a large testing software" talk (starting at 20:10).

In Collabora's lab, a very basic setup for testing LAVA performance was initially in use: a docker-compose instance with an imported production database dump, ApacheBench for generating mock traffic to the server, and pgAdmin for easier PostgreSQL analysis.
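As a rough illustration, here is a minimal sketch of how such mock traffic could be generated and measured, assuming a local docker-compose instance reachable on port 8000; the URL and request parameters are hypothetical, not the lab's actual configuration:

```python
import re
import subprocess

# Hypothetical endpoint of the local docker-compose LAVA instance.
LAVA_URL = "http://localhost:8000/api/v0.2/jobs/"

def run_ab(url: str, requests: int = 1000, concurrency: int = 10) -> float:
    """Run ApacheBench against `url` and return the measured requests/second."""
    result = subprocess.run(
        ["ab", "-n", str(requests), "-c", str(concurrency), url],
        capture_output=True, text=True, check=True,
    )
    match = re.search(r"Requests per second:\s+([\d.]+)", result.stdout)
    if match is None:
        raise RuntimeError("could not parse ApacheBench output")
    return float(match.group(1))

if __name__ == "__main__":
    print(f"{run_ab(LAVA_URL):.2f} requests/second")
```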

[Figure: PostgreSQL analysis environment]

Running benchmarks in this setup helped pinpoint performance issues and patch them: from optimizing branching logic to introducing lazy pagination in the LAVA REST API.
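To give a feel for what lazy pagination means on the consumer side, here is a minimal client sketch that walks a paginated REST API one page at a time, so only the pages actually consumed are ever fetched. The endpoint path and the 'results'/'next' field names follow common Django REST Framework conventions and are assumptions, not a confirmed description of the LAVA API:

```python
from typing import Any, Iterator

import requests

def iter_jobs(base_url: str, limit: int = 100) -> Iterator[dict[str, Any]]:
    """Lazily yield jobs from a paginated REST API, one page at a time.

    Assumes DRF-style limit/offset pagination with 'results' and 'next'
    fields in each response; adjust for the actual LAVA API if needed.
    """
    url = f"{base_url}/api/v0.2/jobs/?limit={limit}"
    while url:
        page = requests.get(url, timeout=30)
        page.raise_for_status()
        data = page.json()
        yield from data["results"]
        url = data.get("next")  # None on the last page ends the loop

# Only the first page is requested here, since we stop after one job:
for job in iter_jobs("http://localhost:8000"):
    print(job.get("id"))
    break
```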

With further performance-focused development in mind, it was necessary to reduce the possibility of regressions, preferably by running per-patch benchmarks. This environment wasn't very convenient for prolonged use with multiple patches to verify, however, so some changes to the CI pipeline seemed necessary. The main components remained the same: loading test data on the server, generating traffic, and comparing results between tests.
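The last of those components can be as simple as diffing the numbers from two benchmark passes. A minimal sketch, assuming flat JSON result files whose layout is hypothetical:

```python
import json
import sys

def compare(before_path: str, after_path: str) -> None:
    """Print the relative change for each metric shared by two result files.

    Expects flat JSON files mapping metric names to numbers, e.g.
    {"requests_per_second": 123.4, "mean_time_ms": 81.0} (layout assumed).
    """
    with open(before_path) as f:
        before = json.load(f)
    with open(after_path) as f:
        after = json.load(f)
    for metric in sorted(before.keys() & after.keys()):
        change = (after[metric] - before[metric]) / before[metric] * 100
        print(f"{metric}: {before[metric]} -> {after[metric]} ({change:+.1f}%)")

if __name__ == "__main__":
    compare(sys.argv[1], sys.argv[2])
```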

Downstream CI pipeline

As benchmarks are not part of any project's CI yet, enabling them in Collabora's downstream CI pipeline could prove useful. This CI pipeline creates an environment very similar to the one described above.

Although this pipeline could easily be extended with a benchmark step, previous efforts showed it would not address the main concern: guarding upcoming feature patches against performance regressions. It would still be useful for ensuring that features developed by Collabora continuously improve LAVA performance, but let's aim a bit higher.

Upstream CI pipeline

The main difference between the downstream and upstream CI pipelines is that the upstream one doesn't spin up an actual LAVA instance. The whole testing procedure is performed using separately built CI images. These are later used as the environment for running in-tree tests, analyzing source code, and building packages and documentation.

[Figure: Jobs using CI images]

If someone needs to provide a specifically crafted environment (e.g. a pre-generated database to create a specific load on the server), an additional ci-image is needed. This way the whole testing procedure can be much quicker, as it excludes database generation time (and even the import time of a pre-generated database!).
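As a sketch of what "pre-generated database" could mean in practice: synthetic job records can be emitted in Django fixture format and loaded once at ci-image build time, so every pipeline run starts from an already-populated database. The model path and fields below are assumptions for illustration, not the exact LAVA schema:

```python
import json

# Hypothetical model path and fields; the real LAVA schema will differ.
MODEL = "lava_scheduler_app.testjob"

def make_fixture(count: int) -> list[dict]:
    """Generate `count` synthetic job entries in Django fixture format."""
    return [
        {
            "model": MODEL,
            "pk": pk,
            "fields": {
                "description": f"benchmark job {pk}",
                "state": "Finished",
            },
        }
        for pk in range(1, count + 1)
    ]

if __name__ == "__main__":
    # Loaded once during the ci-image build, e.g. with Django's standard
    # loaddata management command, rather than on every pipeline run.
    with open("jobs.json", "w") as f:
        json.dump(make_fixture(10_000), f)
```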

Going forward

In the next parts, the following topics will be discussed:

  • How to plug into upstream CI 
  • How to provide a server load backed by a 1:1 database mock (one that behaves just like a production database)
