Improving the reliability of file system monitoring tools

Gabriel Krisman Bertazi
March 14, 2022

A fact of life, one that almost every computer user has to face at some point, is that file systems fail. Whether the cause is unknown, usually explained to managers as alpha particles flying around the data center, or something more mundane (and far more likely), such as a software bug, users don't enjoy losing their data. This is why file system developers put a huge effort not only into testing their code, but also into developing tools to recover volumes when they fail. In fact, all persistent file systems deployed in production are accompanied by check and repair tools, usually exposed through the fsck front-end. Some even go a step further with online repair tools.

fsck, the file system check and repair tool, is usually run by an administrator when they suspect the volume is corrupted, sometimes following a failed mount command. It is also run at boot time, every few boots, in almost every distro, through the systemd-fsck service or equivalent logic.

Indeed, fsck is quite effective at recovering from errors on several file systems, but it usually requires taking the file system offline and either walking the entire disk to check for errors or poking the superblock for an error status. It is not the right tool to monitor the health of a file system in real time, raising alarms and sirens the moment a problem is detected.

This kind of real-time monitoring is quite important to ensure data consistency and availability in data centers. It is essential that administrators or recovery daemons be notified as soon as an error occurs, so that they can start emergency recovery procedures, like kicking off a backup, rebuilding a RAID array, replacing a disk, or maybe just running fsck. And once one needs to watch over a large fleet, as in a cloud provider with hundreds of machines, a reliable monitoring tool becomes indispensable.

The problem is that Linux didn't really expose a good interface to notify applications when a file system error happened. There wasn't much beyond the error code returned to the application that executed the failed operation, which says little about the cause of the error and is of little use to a health monitoring application. Therefore, existing monitoring tools had to either watch the kernel log, which is a risky business since older messages may be overwritten by newer ones, or poll file-system-specific sysfs files that record the last error. Both are polling mechanisms, subject to missing messages and thus to losing notifications.

This is why we worked on a new mechanism for closely monitoring volumes and notifying recovery tools and sysadmins in real time that an error occurred. The feature, merged in kernel 5.16, won't prevent failures from happening, but it will help reduce their impact by guaranteeing that any listener application receives the message. A monitoring application can then reliably report it to system administrators and forward the detailed error information to whomever is unlucky enough to be tasked with fixing it.

The new mechanism leverages the fanotify interface by adding a new FAN_FS_ERROR event type, which is issued by the file system code itself whenever an error is detected. By building on fanotify, the event is tracked on a dedicated event queue for each listener, and it won't get overwritten by subsequent errors. We also made sure that there is always enough memory to report it, even under low memory conditions.
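To give a feel for the interface, here is a minimal sketch of a listener, loosely based on the fanotify(7) man page and the kernel's sample tracer: it marks a whole filesystem for FAN_FS_ERROR events and waits for them. Error handling is trimmed for brevity, the mount point comes from the command line, and FAN_MARK_FILESYSTEM requires CAP_SYS_ADMIN, so it must run as root.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/fanotify.h>

int main(int argc, char **argv)
{
	char buf[4096] __attribute__((aligned(__alignof__(struct fanotify_event_metadata))));
	struct fanotify_event_metadata *event;
	ssize_t len;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <mount point>\n", argv[0]);
		return EXIT_FAILURE;
	}

	/* FAN_FS_ERROR requires a group that reports file identifiers. */
	fd = fanotify_init(FAN_CLASS_NOTIF | FAN_REPORT_FID, O_RDONLY);
	if (fd < 0) {
		perror("fanotify_init");
		return EXIT_FAILURE;
	}

	/* Watch the entire filesystem backing the given mount point. */
	if (fanotify_mark(fd, FAN_MARK_ADD | FAN_MARK_FILESYSTEM,
			  FAN_FS_ERROR, AT_FDCWD, argv[1]) < 0) {
		perror("fanotify_mark");
		return EXIT_FAILURE;
	}

	/* Block on the fanotify fd; no polling of logs or sysfs involved. */
	while ((len = read(fd, buf, sizeof(buf))) > 0) {
		for (event = (struct fanotify_event_metadata *)buf;
		     FAN_EVENT_OK(event, len);
		     event = FAN_EVENT_NEXT(event, len)) {
			if (event->mask & FAN_FS_ERROR)
				printf("file system error event received\n");
		}
	}

	close(fd);
	return EXIT_SUCCESS;
}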

The kernel documentation explains how to receive and interpret a FAN_FS_ERROR event. There is also an example tracer implementation in the kernel tree.
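As a rough illustration of what that extra information looks like, the sketch below (reusing the headers from the previous example and loosely following the kernel's sample tracer) walks the info records attached to a FAN_FS_ERROR event: a generic error record carrying the error code and a count of errors observed while the event was pending, plus a file identifier record pointing at the inode involved, or at the filesystem root for errors not tied to a specific inode. The handle_fs_error name is just a placeholder; it could replace the printf call in the read loop above.

static void handle_fs_error(struct fanotify_event_metadata *event)
{
	struct fanotify_event_info_header *info;
	struct fanotify_event_info_error *err;
	struct fanotify_event_info_fid *fid;
	unsigned int off;

	/* Additional info records follow the fixed-size metadata, back to back. */
	for (off = event->metadata_len; off < event->event_len; off += info->len) {
		info = (struct fanotify_event_info_header *)((char *)event + off);

		switch (info->info_type) {
		case FAN_EVENT_INFO_TYPE_ERROR:
			err = (struct fanotify_event_info_error *)info;
			/* The first error observed; later errors only bump error_count. */
			printf("error: %d, error_count: %u\n",
			       err->error, err->error_count);
			break;
		case FAN_EVENT_INFO_TYPE_FID:
			/* Identifies the filesystem and, through the file handle,
			 * the inode where the error was noticed. */
			fid = (struct fanotify_event_info_fid *)info;
			printf("fsid: %x:%x\n", fid->fsid.val[0], fid->fsid.val[1]);
			break;
		}
	}
}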

The feature, which is already in the upstream Linux kernel, will soon pop up in distribution kernels around the globe. Soon enough, we will have better file system error monitoring tools in data centers, and also on our Linux desktops.
