Stop the Guesswork: Using Auditd for Linux Incident Response

Security tutorial - IT technology blog

The 2 AM Nightmare: Who Gutted the Config?

2:15 AM. My pager screamed. The production Nginx server was throwing 500 errors, and the site was down for over 10,000 active users. I logged in, checked the service status, and saw that the nginx.conf had been modified just minutes prior. The mtime didn’t lie, but the root user’s shell history was a total blank. Someone—or some script—had modified a critical file and wiped the .bash_history to cover their tracks.

Standard logs like /var/log/auth.log or syslog were dead ends. They confirmed a login occurred but offered zero insight into what happened afterward. This is the exact moment many sysadmins start losing sleep. If you rely on basic application logging, you are flying blind during a breach. To fix this, I needed a way to record activity directly at the kernel level before it could be hidden.

The Blind Spot: Why Standard Logs Fail

Standard Linux logging is mostly “event-based” at the application layer. SSH logs the connection; Nginx logs the request. However, these systems don’t understand the intent of file manipulation. If a malicious actor uses vim, sed, or a Python one-liner to overwrite /etc/shadow, the kernel processes the system call, but your logs usually ignore it.

We need granular visibility. We need to know who opened a file, which process they used, and exactly what they executed. This data must persist even if the user deletes their history file. By hooking into kernel system calls (syscalls), we can capture this activity in real-time.

Comparing the Solutions

When searching for a way to close this visibility gap, I weighed three main options:

  • Shell History (HISTFILE): This is for convenience, not security. Anyone can bypass it in seconds by running unset HISTFILE or kill -9 $$.
  • EDR (Endpoint Detection and Response): Tools like CrowdStrike or Wazuh are powerful. However, they often carry a heavy resource footprint and high licensing costs that don’t make sense for every single specialized server.
  • Auditd (The Linux Audit Framework): This is the native, lightweight choice. It is built into almost every major distribution and hooks directly into the kernel's audit subsystem, which makes it virtually impossible for a standard user to bypass.

Auditd is the clear winner for forensic depth. It doesn’t just report that a file changed; it provides the PID, the UID, the executable path, and the precise timestamp of the syscall. On most systems, it adds less than 3% CPU overhead, making it a no-brainer for production.

The Better Way: Implementing Auditd

Installation is the easy part. On Ubuntu or Debian, you can get it running with one command:

sudo apt update && sudo apt install auditd audispd-plugins -y

Before diving into configuration, I ensured my own access was locked down. I used the password generator at toolcraft.app/en/tools/security/password-generator to rotate my server credentials. It generates high-entropy strings locally in the browser, which is a simple way to prevent credential leaks from the start.

1. Monitoring Sensitive Files

To prevent another 2 AM fire drill, I set up “watches” on critical files. You define a path, the permissions to monitor (read, write, execute, or attribute change), and a “key” for easy filtering.

Add these rules to /etc/audit/rules.d/audit.rules:

# Monitor changes to the audit rules themselves
-w /etc/audit/ -p wa -k auditconfig

# Watch for changes to Nginx configs
-w /etc/nginx/nginx.conf -p wa -k nginx_change

# Watch for access to sensitive credential files
-w /etc/shadow -p wa -k password_mod
-w /etc/passwd -p wa -k user_mod

The -p wa flag triggers a log on write and attribute changes. The -k (key) acts as a custom tag so you don’t have to sift through thousands of lines of noise later.
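
One gap worth closing: an attacker who gains root can simply delete these watches with auditctl. If your threat model calls for it, you can append the immutable flag as the very last line of the rules file. The sketch below assumes you are comfortable losing the ability to hot-patch rules, because once loaded, the configuration cannot be changed again until a reboot (test this in staging first):

```
# Lock the audit configuration: no further rule changes are
# accepted until the next reboot. Must be the LAST line of the file.
-e 2
```

This is a trade-off between flexibility and tamper resistance; on a hardened production host, the trade is usually worth it.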

2. Tracking Every Executed Command

Watching files is only half the battle. To see every command run on the system, you must monitor the execve system call. This is the kernel’s way of starting a new process. If someone runs a script, Auditd will catch it here.

# Track 64-bit execution
-a always,exit -F arch=b64 -S execve -k cmd_track

# Track 32-bit execution
-a always,exit -F arch=b32 -S execve -k cmd_track
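
On a busy server, logging every execve from system daemons and cron generates an enormous amount of noise. A common refinement, sketched below, is to record only commands run in real login sessions. This assumes human accounts start at UID 1000 (the Debian/Ubuntu default; adjust the threshold for your distribution):

```
# Log execve only for real user sessions (auid >= 1000),
# skipping processes whose audit ID was never set (4294967295 = unset).
-a always,exit -F arch=b64 -S execve -F auid>=1000 -F auid!=4294967295 -k cmd_track
-a always,exit -F arch=b32 -S execve -F auid>=1000 -F auid!=4294967295 -k cmd_track
```

The auid filter works because the Audit User ID is stamped at login and survives sudo, so daemon activity (which has no login session) is excluded cleanly.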

Apply these changes by restarting the service:

sudo service auditd restart

Investigating the Logs

Auditd logs live in /var/log/audit/audit.log. By default, they look like a dense wall of text. Don’t waste your time with grep. Use ausearch instead.

To identify who touched the Nginx config, run:

sudo ausearch -k nginx_change -i

The -i flag is essential. It translates numeric IDs into human-readable names, like turning UID 0 into “root.”

If you need a high-level summary of every command executed today, use aureport with a start time:

sudo aureport -x --summary -ts today

A Real-World Log Entry

When a user runs sudo vi /etc/nginx/nginx.conf and saves the file, the watch generates an entry like this (syscall 257 is openat on x86_64; a file watch fires on file-access syscalls, not on execve):

type=SYSCALL msg=audit(1692812400.123:456): arch=c000003e syscall=257 success=yes exit=3 
items=2 ppid=1234 pid=5678 auid=1001 uid=0 gid=0 euid=0 exe="/usr/bin/vi" key="nginx_change"

Notice the auid=1001. This is the Audit User ID. Even if a user runs sudo to become root, Auditd remembers the original ID they used to log in. There is no more guessing who was behind the keyboard.
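
Field extraction from these records is easy to script during triage. The sketch below uses plain sed on a hypothetical sample record (same key=value format as real auditd output); the timestamp conversion assumes GNU date:

```shell
#!/bin/sh
# A hypothetical SYSCALL record in auditd's key=value format.
record='type=SYSCALL msg=audit(1692812400.123:456): arch=c000003e syscall=257 success=yes exit=3 ppid=1234 pid=5678 auid=1001 uid=0 gid=0 euid=0 exe="/usr/bin/vi" key="nginx_change"'

# Pull the fields that matter for an investigation.
auid=$(printf '%s\n' "$record" | sed -n 's/.* auid=\([0-9]*\).*/\1/p')
exe=$(printf '%s\n' "$record" | sed -n 's/.* exe="\([^"]*\)".*/\1/p')
key=$(printf '%s\n' "$record" | sed -n 's/.* key="\([^"]*\)".*/\1/p')

# The epoch timestamp lives inside msg=audit(EPOCH.MS:SERIAL).
epoch=$(printf '%s\n' "$record" | sed -n 's/.*msg=audit(\([0-9]*\)\..*/\1/p')

echo "user $auid ran $exe (key: $key) at $(date -u -d "@$epoch" '+%Y-%m-%d %H:%M:%S') UTC"
```

In practice you would pipe matching lines out of ausearch into a loop like this to build a quick timeline of who did what, and when.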

Final Thoughts for the Night Shift

Auditd isn’t just a tool for catching bad actors. It is about accountability. When a configuration drifts or a cron job goes rogue, you need a verifiable trail of evidence to fix it quickly.

Don’t wait for your next outage to start logging. Install Auditd today, set up watches on your /etc/ directory, and start tracking execve calls. Your future, well-rested self will thank you.
