The Problem with Standard Python Logging
Python’s built-in logging library is reliable, but its API feels like a relic from the early 2000s. Setting up simple file rotation or custom formatting often requires 20 to 30 lines of boilerplate before you even write your first log. When I was building my first microservice, I spent more time wrestling with logging.basicConfig and handlers than I did on the actual business logic.
In production, logs are your only lifeline. If they are messy or lack context, you are essentially flying blind. Combining Loguru and Sentry fixes this. Loguru removes the configuration headache with a “no-boilerplate” approach, while Sentry catches exceptions before your users even have a chance to email support.
Quick Start: Better Logs in Under 5 Minutes
You don’t need a complex configuration file to get professional results. Start by installing the libraries:
pip install loguru sentry-sdk
Loguru works immediately without the usual multi-line setup. Here is a production-ready snippet that handles both console output and file rotation:
from loguru import logger
import sys
# Clean up the default configuration
logger.remove()
logger.add(sys.stderr, level="INFO")
logger.add("logs/app_{time}.log", rotation="500 MB", retention="10 days", compression="zip")
def calculate_division(a, b):
    try:
        return a / b
    except Exception:
        # Logs the message plus the full traceback of the active exception
        logger.exception("Calculation failed unexpectedly")

calculate_division(10, 0)
This setup is surprisingly powerful. It automatically rotates files once they hit 500MB, archives old logs into ZIP files, and purges anything older than 10 days. The logger.exception method is a standout feature; it captures the entire stack trace so you don’t have to manually inspect the error object.
Why Loguru Outperforms the Standard Library
Loguru is built to be intuitive and thread-safe. You no longer need to instantiate a named logger in every single file using logger = logging.getLogger(__name__). Instead, you simply import the global logger object and start writing. It handles the underlying complexity of concurrent writes for you.
1. The @logger.catch Decorator
The @logger.catch decorator is a massive time-saver. It ensures that if a function crashes, the error is logged with a full, colorized backtrace before the process dies. This is perfect for background workers or scheduled cron jobs that might otherwise fail silently.
@logger.catch
def main_process():
    # Any crash here is automatically caught and logged in detail
    result = 1 / 0
2. Native Structured Logging (JSON)
Modern observability tools like Datadog or the ELK stack prefer JSON over plain text. Loguru handles this with a single argument. By setting serialize=True, every log entry becomes a structured object. This makes it easy for aggregators to index fields like timestamps, severity levels, and custom metadata without complex regex patterns.
Advanced Usage: Real-time Alerts with Sentry
Local logs are great for debugging, but they won’t wake you up when a critical service goes down at 3:00 AM. Sentry fills this gap. It tracks exceptions, groups identical errors together, and shows you exactly which commit caused the regression.
In a recent project handling roughly 15,000 requests per hour, this integration reduced our mean time to recovery (MTTR) by nearly 40%. We achieved this by piping Loguru errors directly into Sentry.
Connecting the Two Tools
import sentry_sdk
from loguru import logger

sentry_sdk.init(
    dsn="https://[email protected]/project_id",
    traces_sample_rate=0.1,  # Monitor 10% of transactions to save on costs
    environment="production",
)

# Create a bridge between Loguru and Sentry
def sentry_sink(message):
    record = message.record
    level = record["level"].name.lower()
    if record["exception"] is not None:
        # record["exception"] is a (type, value, traceback) namedtuple;
        # pass the actual exception instance to Sentry
        sentry_sdk.capture_exception(record["exception"].value)
    else:
        sentry_sdk.capture_message(record["message"], level=level)

# Only send ERROR or CRITICAL logs to Sentry
logger.add(sentry_sink, level="ERROR")
With this “sink” in place, calling logger.error() does two things at once. It writes a detailed line to your local disk and triggers an immediate alert on your Sentry dashboard.
Lessons from 6 Months in Production
Running this setup in high-traffic environments has taught me a few ways to keep logs clean and costs manageable.
1. Scrub Sensitive Data
Logging request.body is helpful until you realize you’ve saved 5,000 plain-text passwords to your disk. Use Loguru’s filtering or a custom sink to redact PII (Personally Identifiable Information) before it leaves your application.
2. Use Contextual Logging with .bind()
When debugging a specific user’s issue, you need to see only their logs. Loguru’s bind() method lets you attach a user_id or request_id to a logger instance without affecting other parts of the app.
# Attach context to a local logger instance
context_logger = logger.bind(user_id="88", task_id="upload-01")
context_logger.info("Processing file")
# Every log from context_logger now includes those IDs
3. Respect the Levels
Avoid the temptation to log everything as INFO. Use DEBUG for verbose developer notes and reserve WARNING for non-critical issues like a slow API response. I typically set my production console to INFO and my Sentry alerts to ERROR. This strategy cuts down on “alert fatigue” and ensures that when your phone buzzes, it actually matters.
Moving from the standard library to Loguru and Sentry shifts your focus from “searching through text files” to proactive incident management. It’s a small change that makes your Python applications significantly more resilient.

