The Django Logging Guide: Debug Production Like an Expert
From print() to enterprise-grade observability
I’ve debugged 10+ production issues in Django apps.
Here’s the pattern: Most developers have no idea what’s happening in production. They use print() in development, delete it for production, then when things break, they’re blind.
The truth:
80% of production issues could be solved in 5 minutes with proper logging
90% of teams have inadequate logging
99% of developers learn logging AFTER their first production fire
Today, I’ll show you:
How to set up comprehensive logging
What to log (and what NOT to log)
Production debugging strategies
Monitoring and alerting setup
Real-world debugging scenarios
This isn’t theory. This is the logging system that’s saved me countless hours.
Let’s never be blind in production again.
The Logging Levels
import logging
# From least to most severe:
logging.DEBUG # Detailed info for diagnosing (development only)
logging.INFO # General informational messages
logging.WARNING # Something unexpected but handled
logging.ERROR # Serious problem, function failed
logging.CRITICAL # System-level failure
When to use each:
# DEBUG - Development only
logger.debug(f'User {user.id} accessed profile page')
logger.debug(f'Query executed: {query}')
# INFO - Important business events
logger.info(f'User {user.id} registered')
logger.info(f'Payment processed: ${amount}')
logger.info(f'Email sent to {email}')
# WARNING - Recoverable issues
logger.warning(f'API rate limit approaching: {calls}/1000')
logger.warning(f'Slow query detected: {duration}ms')
logger.warning(f'Cache miss for key: {key}')
# ERROR - Function failures
logger.error(f'Failed to send email to {email}', exc_info=True)
logger.error('Database connection failed', exc_info=True)
logger.error(f'External API error: {response.status_code}')
# CRITICAL - System failures
logger.critical('Database unreachable')
logger.critical('Disk space < 5%')
logger.critical('Memory exhausted')
Django Logging Architecture
┌──────────────┐
│ Loggers │ What to log
└──────┬───────┘
│
↓
┌──────────────┐
│ Filters │ Which logs to process
└──────┬───────┘
│
↓
┌──────────────┐
│ Handlers │ Where to send logs
└──────┬───────┘
│
↓
┌──────────────┐
│ Formatters │ How to format logs
└──────────────┘
Complete Logging Configuration
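The four stages above can be wired together with `logging.config.dictConfig` — the same mechanism Django uses for the `LOGGING` setting. A minimal, framework-free sketch (the logger name `pipeline_demo` and the health-check filter are illustrative):

```python
import io
import logging
import logging.config

buffer = io.StringIO()  # stand-in destination so the example is self-contained

class DropHealthChecks(logging.Filter):
    """Filter stage: discard noisy health-check records."""
    def filter(self, record):
        return 'health' not in record.getMessage()

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'simple': {'format': '{levelname} {message}', 'style': '{'},
    },
    'filters': {
        'drop_health': {'()': DropHealthChecks},
    },
    'handlers': {
        'memory': {
            'class': 'logging.StreamHandler',
            'stream': buffer,
            'filters': ['drop_health'],
            'formatter': 'simple',
        },
    },
    'loggers': {
        'pipeline_demo': {
            'handlers': ['memory'],
            'level': 'INFO',
            'propagate': False,
        },
    },
})

log = logging.getLogger('pipeline_demo')
log.info('health check ping')   # dropped by the filter stage
log.info('user registered')     # passes the filter, formatted, written by handler
output = buffer.getvalue()
```

The handler decides where records go (here an in-memory buffer), the filter drops the health-check line, and the formatter renders what remains.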
Development Settings
# settings/development.py
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': '{levelname} {asctime} {module} {process:d} {thread:d} {message}',
            'style': '{',
        },
        'simple': {
            'format': '{levelname} {message}',
            'style': '{',
        },
    },
    'filters': {
        'require_debug_true': {
            '()': 'django.utils.log.RequireDebugTrue',
        },
    },
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'filters': ['require_debug_true'],
            'class': 'logging.StreamHandler',
            'formatter': 'simple',
        },
        'file': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': 'logs/debug.log',
            'formatter': 'verbose',
        },
    },
    'loggers': {
        'django': {
            'handlers': ['console'],
            'level': 'INFO',
        },
        'django.db.backends': {
            'handlers': ['console'],
            'level': 'DEBUG',  # Show SQL queries (only emitted when DEBUG=True)
        },
        'myapp': {
            'handlers': ['console', 'file'],
            'level': 'DEBUG',
            'propagate': False,
        },
    },
}
Production Settings
# settings/production.py
import os

# Create the log directory before handlers try to open files in it
LOG_DIR = '/var/log/django'
os.makedirs(LOG_DIR, exist_ok=True)

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': '{levelname} {asctime} {module} {process:d} {thread:d} {message}',
            'style': '{',
        },
        'json': {
            '()': 'pythonjsonlogger.jsonlogger.JsonFormatter',
            'format': '%(asctime)s %(name)s %(levelname)s %(message)s',
        },
    },
    'filters': {
        'require_debug_false': {
            '()': 'django.utils.log.RequireDebugFalse',
        },
    },
    'handlers': {
        'console': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'verbose',
        },
        'file': {
            'level': 'INFO',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': '/var/log/django/app.log',
            'maxBytes': 1024 * 1024 * 10,  # 10 MB
            'backupCount': 10,
            'formatter': 'verbose',
        },
        'error_file': {
            'level': 'ERROR',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': '/var/log/django/error.log',
            'maxBytes': 1024 * 1024 * 10,  # 10 MB
            'backupCount': 10,
            'formatter': 'verbose',
        },
        'mail_admins': {
            'level': 'ERROR',
            'class': 'django.utils.log.AdminEmailHandler',
            'filters': ['require_debug_false'],
        },
        # Legacy Raven client; with the modern sentry-sdk (set up later in
        # this post), DjangoIntegration captures errors without a log handler
        'sentry': {
            'level': 'WARNING',
            'class': 'raven.contrib.django.raven_compat.handlers.SentryHandler',
        },
    },
    'loggers': {
        'django': {
            'handlers': ['console', 'file'],
            'level': 'INFO',
        },
        'django.request': {
            'handlers': ['error_file', 'mail_admins', 'sentry'],
            'level': 'ERROR',
            'propagate': False,
        },
        'django.security': {
            'handlers': ['error_file', 'sentry'],
            'level': 'ERROR',
            'propagate': False,
        },
        'myapp': {
            'handlers': ['console', 'file', 'error_file'],
            'level': 'INFO',
            'propagate': False,
        },
        'myapp.critical': {
            'handlers': ['error_file', 'mail_admins', 'sentry'],
            'level': 'ERROR',
            'propagate': False,
        },
    },
    'root': {
        'handlers': ['console', 'file', 'sentry'],
        'level': 'INFO',
    },
}
Using Loggers in Code
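One thing worth seeing before the Django examples: `getLogger(__name__)` works because logger names form a dot-separated hierarchy, and records propagate up to parent handlers. A quick standalone sketch (the names `shop` and `shop.views` are illustrative):

```python
import io
import logging

stream = io.StringIO()

# Parent logger, like the 'myapp' logger configured in LOGGING above
parent = logging.getLogger('shop')
parent.setLevel(logging.INFO)
parent.addHandler(logging.StreamHandler(stream))
parent.propagate = False

# In shop/views.py, logging.getLogger(__name__) would return 'shop.views'
child = logging.getLogger('shop.views')
child.info('order placed')  # propagates up to the parent's handler

output = stream.getvalue()
```

This is why configuring the `'myapp'` logger is enough: every module under `myapp` inherits its handlers through propagation.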
Basic Usage
# blog/views.py
import logging

# Get logger for this module
logger = logging.getLogger(__name__)

def create_post(request):
    logger.info(f'User {request.user.id} creating post')
    try:
        post = Post.objects.create(
            title=request.POST['title'],
            author=request.user
        )
        logger.info(f'Post created: {post.id}')
        return redirect('post_detail', post.id)
    except KeyError as e:
        logger.error(f'Missing required field: {e}', exc_info=True)
        return render(request, 'error.html')
    except Exception:
        logger.critical('Unexpected error creating post', exc_info=True)
        return render(request, 'error.html')
Structured Logging
# Use the extra parameter for structured data
logger.info(
    'User registered',
    extra={
        'user_id': user.id,
        'email': user.email,
        'source': 'web',
        'ip_address': request.META.get('REMOTE_ADDR'),
    }
)
logger.warning(
    'Slow query detected',
    extra={
        'query': str(queryset.query),
        'duration_ms': duration,
        'table': 'posts',
    }
)
logger.error(
    'Payment failed',
    extra={
        'user_id': user.id,
        'amount': amount,
        'error_code': error.code,
        'payment_method': method,
    },
    exc_info=True
)
Context Managers
import logging
import time
from contextlib import contextmanager

logger = logging.getLogger(__name__)

@contextmanager
def log_execution_time(operation):
    """Context manager to log execution time"""
    start = time.time()
    logger.info(f'Starting: {operation}')
    try:
        yield
    finally:
        duration = time.time() - start
        logger.info(
            f'Completed: {operation}',
            extra={'duration_ms': duration * 1000}
        )

# Usage
with log_execution_time('generate_report'):
    report = generate_complex_report()
Decorator for Logging
import logging
from functools import wraps

logger = logging.getLogger(__name__)

def log_function_call(func):
    """Decorator to log function calls"""
    @wraps(func)
    def wrapper(*args, **kwargs):
        # Note: 'args' is a reserved LogRecord attribute, so the extra keys
        # are prefixed to avoid a KeyError from the logging module
        logger.debug(
            f'Calling {func.__name__}',
            extra={
                'call_args': args,
                'call_kwargs': kwargs,
            }
        )
        try:
            result = func(*args, **kwargs)
            logger.debug(f'{func.__name__} succeeded')
            return result
        except Exception:
            logger.error(
                f'{func.__name__} failed',
                exc_info=True,
                extra={
                    'call_args': args,
                    'call_kwargs': kwargs,
                }
            )
            raise
    return wrapper

# Usage
@log_function_call
def process_payment(user_id, amount):
    # Function implementation
    pass
What to Log
✅ DO Log
1. Business Events
logger.info(f'User {user.id} registered')
logger.info(f'Payment processed: ${amount}')
logger.info(f'Order {order.id} shipped')
logger.info(f'Report generated for user {user.id}')
2. Errors and Exceptions
logger.error('Database query failed', exc_info=True)
logger.error(f'API call failed: {response.status_code}')
logger.error('Email sending failed', exc_info=True)
3. Security Events
logger.warning(f'Failed login attempt for {username}')
logger.warning(f'Rate limit exceeded for IP {ip}')
logger.critical('Unauthorized access attempt detected')
logger.critical('SQL injection attempt blocked')
4. Performance Issues
logger.warning(f'Slow query: {duration}ms')
logger.warning(f'Memory usage high: {memory_percent}%')
logger.warning(f'API response slow: {response_time}ms')
5. External Service Calls
logger.info(f'Calling external API: {url}')
logger.info(f'API response: {response.status_code}')
logger.error(f'API timeout after {timeout}s')
❌ DON'T Log
1. Sensitive Information
# ❌ NEVER log passwords
logger.info(f'User login: {username} / {password}')  # NO!
# ❌ NEVER log credit card numbers
logger.info(f'Payment: {card_number}')  # NO!
# ❌ NEVER log API keys
logger.info(f'API call with key: {api_key}')  # NO!
# ❌ NEVER log PII in plain text
logger.info(f'SSN: {ssn}')  # NO!
# ✅ DO log safely
logger.info(f'User login: {username}')  # OK
logger.info(f'Payment processed with card ending {card_last4}')  # OK
logger.info(f'API call to {service_name}')  # OK
2. High-Frequency Events
# ❌ Don't log every request (use middleware/access logs)
for item in items:  # 10,000 items
    logger.debug(f'Processing {item}')  # Floods logs!
# ✅ Log summaries instead
logger.info(f'Processing {len(items)} items')
# Process items
logger.info(f'Completed processing {len(items)} items')
3. Debugging Code in Production
# ❌ Don't leave debug logs in production
logger.debug(f'Variable x = {x}')  # Remove before deploy
# ✅ Use appropriate level
if settings.DEBUG:
    logger.debug(f'Debug info: {x}')
Middleware for Request Logging
# myapp/middleware.py
import logging
import time

logger = logging.getLogger(__name__)

class RequestLoggingMiddleware:
    """Log all requests and responses"""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # Before view
        start_time = time.time()
        # Log request
        logger.info(
            'Request started',
            extra={
                'method': request.method,
                'path': request.path,
                'user_id': request.user.id if request.user.is_authenticated else None,
                'ip': self.get_client_ip(request),
                'user_agent': request.META.get('HTTP_USER_AGENT', ''),
            }
        )
        # Process request
        response = self.get_response(request)
        # After view
        duration = time.time() - start_time
        # Log response
        logger.info(
            'Request completed',
            extra={
                'method': request.method,
                'path': request.path,
                'status_code': response.status_code,
                'duration_ms': duration * 1000,
                'user_id': request.user.id if request.user.is_authenticated else None,
            }
        )
        # Warn on slow requests
        if duration > 1.0:
            logger.warning(
                'Slow request detected',
                extra={
                    'path': request.path,
                    'duration_ms': duration * 1000,
                }
            )
        return response

    def get_client_ip(self, request):
        x_forwarded_for = request.META.get('HTTP_X_FORWARDED_FOR')
        if x_forwarded_for:
            ip = x_forwarded_for.split(',')[0]
        else:
            ip = request.META.get('REMOTE_ADDR')
        return ip

class ErrorLoggingMiddleware:
    """Log all errors with context"""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        return self.get_response(request)

    def process_exception(self, request, exception):
        # Careful: POST data may contain passwords — scrub before logging
        logger.error(
            'Unhandled exception',
            exc_info=True,
            extra={
                'url': request.build_absolute_uri(),
                'method': request.method,
                'user_id': request.user.id if request.user.is_authenticated else None,
                'POST_data': request.POST.dict() if request.method == 'POST' else None,
                'GET_data': request.GET.dict(),
            }
        )
Add to settings:
MIDDLEWARE = [
    'myapp.middleware.RequestLoggingMiddleware',
    'myapp.middleware.ErrorLoggingMiddleware',
    # ... other middleware
]
Database Query Logging
Log Slow Queries
# myapp/middleware.py
import logging

from django.db import connection
from django.utils.deprecation import MiddlewareMixin

logger = logging.getLogger(__name__)

class QueryLoggingMiddleware(MiddlewareMixin):
    """Log slow database queries"""

    def process_response(self, request, response):
        # Note: connection.queries is only populated when DEBUG=True
        queries = connection.queries
        # Log query count
        if len(queries) > 50:
            logger.warning(
                'High query count',
                extra={
                    'path': request.path,
                    'query_count': len(queries),
                }
            )
        # Log slow queries
        for query in queries:
            duration = float(query['time'])
            if duration > 0.1:  # 100 ms
                logger.warning(
                    'Slow query detected',
                    extra={
                        'duration_ms': duration * 1000,
                        'sql': query['sql'][:200],  # First 200 chars
                        'path': request.path,
                    }
                )
        return response
Custom Database Backend
# myapp/db_backend.py
# Note: Django's DatabaseWrapper has no execute() method to override; since
# Django 2.0 the supported hook for intercepting every query is
# connection.execute_wrapper()
import logging
import time

from django.db import connection

logger = logging.getLogger('django.db.backends')

def slow_query_logger(execute, sql, params, many, context):
    """Query wrapper that logs slow and failing queries"""
    start = time.time()
    try:
        result = execute(sql, params, many, context)
    except Exception:
        logger.error(
            'Query failed',
            exc_info=True,
            extra={'sql': sql[:200]}
        )
        raise
    duration = time.time() - start
    if duration > 0.1:
        logger.warning(
            'Slow query',
            extra={
                'duration_ms': duration * 1000,
                'sql': sql[:200],
            }
        )
    return result

# Usage: wrap the view call (e.g. in middleware)
# with connection.execute_wrapper(slow_query_logger):
#     response = self.get_response(request)
Sentry Integration (Error Tracking)
Setup Sentry
pip install sentry-sdk
# settings/production.py
import os

import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration
from django.core.exceptions import PermissionDenied
from django.http import Http404

def filter_sentry_events(event, hint):
    """Filter out unwanted errors (defined before init so before_send can reference it)"""
    # Ignore specific errors
    if 'exc_info' in hint:
        exc_type, exc_value, tb = hint['exc_info']
        # Ignore 404s
        if isinstance(exc_value, Http404):
            return None
        # Ignore permission denied
        if isinstance(exc_value, PermissionDenied):
            return None
    # Scrub sensitive data
    if 'request' in event:
        request = event['request']
        # Remove passwords from POST data
        if 'data' in request and isinstance(request['data'], dict):
            if 'password' in request['data']:
                request['data']['password'] = '[FILTERED]'
            if 'credit_card' in request['data']:
                request['data']['credit_card'] = '[FILTERED]'
    return event

sentry_sdk.init(
    dsn=os.environ.get('SENTRY_DSN'),
    integrations=[DjangoIntegration()],
    # Performance monitoring
    traces_sample_rate=0.1,  # 10% of transactions
    # Profiles
    profiles_sample_rate=0.1,
    # Environment
    environment='production',
    # Release tracking
    release=os.environ.get('GIT_COMMIT', 'unknown'),
    # Don't send PII
    send_default_pii=False,
    # Filter errors
    before_send=filter_sentry_events,
)
Capture Custom Events
import sentry_sdk

# Capture exception
try:
    risky_operation()
except Exception as e:
    sentry_sdk.capture_exception(e)

# Capture message
sentry_sdk.capture_message('Something went wrong', level='error')

# Add context
with sentry_sdk.configure_scope() as scope:
    scope.set_user({
        'id': user.id,
        'email': user.email,
    })
    scope.set_tag('payment_method', 'credit_card')
    scope.set_context('payment', {
        'amount': 99.99,
        'currency': 'USD',
    })
    sentry_sdk.capture_message('Payment processed')

# Add breadcrumbs
sentry_sdk.add_breadcrumb(
    category='auth',
    message='User logged in',
    level='info',
)
Custom Logging Handlers
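All a custom handler needs is a `logging.Handler` subclass that overrides `emit()`. Before the Slack and Telegram versions, here's a minimal in-memory sketch (the `ListHandler` name is illustrative):

```python
import logging

class ListHandler(logging.Handler):
    """Minimal custom handler: collect formatted records in a list."""
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        # emit() is the one method every custom handler must implement
        self.records.append(self.format(record))

list_handler = ListHandler()
list_handler.setFormatter(logging.Formatter('{levelname}: {message}', style='{'))

demo = logging.getLogger('handler_demo')
demo.setLevel(logging.INFO)
demo.addHandler(list_handler)
demo.propagate = False

demo.error('disk full')
demo.info('user registered')
```

Swap the list append for an HTTP call and you have the Slack handler below; everything else (level checks, formatting) stays the same.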
Slack Handler
# myapp/handlers.py
import logging

import requests
from django.conf import settings

class SlackHandler(logging.Handler):
    """Send critical errors to Slack"""

    def emit(self, record):
        # Only send ERROR and above
        if record.levelno < logging.ERROR:
            return
        log_entry = self.format(record)
        # Build Slack message
        message = {
            'text': f'🚨 {record.levelname}: {record.getMessage()}',
            'attachments': [{
                'color': 'danger',
                'fields': [
                    {
                        'title': 'Module',
                        'value': record.module,
                        'short': True
                    },
                    {
                        'title': 'Function',
                        'value': record.funcName,
                        'short': True
                    },
                    {
                        'title': 'Details',
                        'value': f'```{log_entry}```',
                        'short': False
                    }
                ]
            }]
        }
        # Send to Slack
        try:
            requests.post(
                settings.SLACK_WEBHOOK_URL,
                json=message,
                timeout=3
            )
        except Exception:
            # Don't fail if Slack is down
            pass
Add to LOGGING:
'handlers': {
    'slack': {
        'level': 'ERROR',
        'class': 'myapp.handlers.SlackHandler',
    },
},
'loggers': {
    'myapp.critical': {
        'handlers': ['slack'],
        'level': 'ERROR',
    },
},
Telegram Handler
class TelegramHandler(logging.Handler):
    """Send critical errors to Telegram"""

    def emit(self, record):
        if record.levelno < logging.ERROR:
            return
        message = f'🚨 *{record.levelname}*\n\n{record.getMessage()}'
        try:
            requests.post(
                f'https://api.telegram.org/bot{settings.TELEGRAM_BOT_TOKEN}/sendMessage',
                json={
                    'chat_id': settings.TELEGRAM_CHAT_ID,
                    'text': message,
                    'parse_mode': 'Markdown'
                },
                timeout=3
            )
        except Exception:
            pass
Monitoring Dashboard
Custom Metrics Logger
# myapp/metrics.py
import logging

from django.core.cache import cache

logger = logging.getLogger('metrics')

class Metrics:
    """Track application metrics"""

    @staticmethod
    def increment(metric_name, value=1):
        """Increment a counter"""
        key = f'metric:{metric_name}'
        # cache.incr raises ValueError if the key doesn't exist yet
        try:
            cache.incr(key, value)
        except ValueError:
            cache.set(key, value)
        logger.info(
            f'Metric incremented: {metric_name}',
            extra={'metric': metric_name, 'value': value}
        )

    @staticmethod
    def timing(metric_name, duration_ms):
        """Record timing"""
        logger.info(
            f'Timing: {metric_name}',
            extra={
                'metric': metric_name,
                'duration_ms': duration_ms,
                'type': 'timing'
            }
        )

    @staticmethod
    def gauge(metric_name, value):
        """Record gauge value"""
        logger.info(
            f'Gauge: {metric_name}',
            extra={
                'metric': metric_name,
                'value': value,
                'type': 'gauge'
            }
        )

# Usage
import time

from myapp.metrics import Metrics

def create_post(request):
    Metrics.increment('posts.created')
    start = time.time()
    post = Post.objects.create(...)
    duration = (time.time() - start) * 1000
    Metrics.timing('posts.create_duration', duration)
    return redirect('post_detail', post.id)
Health Check Endpoint
# myapp/views.py
import logging
import shutil

from django.core.cache import cache
from django.db import connection
from django.http import JsonResponse

logger = logging.getLogger(__name__)

def health_check(request):
    """Health check endpoint for monitoring"""
    status = {
        'status': 'healthy',
        'checks': {}
    }
    # Check database
    try:
        with connection.cursor() as cursor:
            cursor.execute('SELECT 1')
        status['checks']['database'] = 'ok'
    except Exception:
        status['status'] = 'unhealthy'
        status['checks']['database'] = 'failed'
        logger.error('Health check: database failed', exc_info=True)
    # Check cache
    try:
        cache.set('health_check', 'ok', 10)
        cache.get('health_check')
        status['checks']['cache'] = 'ok'
    except Exception:
        status['status'] = 'unhealthy'
        status['checks']['cache'] = 'failed'
        logger.error('Health check: cache failed', exc_info=True)
    # Check disk space
    stat = shutil.disk_usage('/')
    free_percent = (stat.free / stat.total) * 100
    if free_percent < 10:
        status['status'] = 'unhealthy'
        status['checks']['disk'] = f'low ({free_percent:.1f}%)'
        logger.critical(f'Low disk space: {free_percent:.1f}%')
    else:
        status['checks']['disk'] = 'ok'
    # HTTP status code
    http_status = 200 if status['status'] == 'healthy' else 503
    return JsonResponse(status, status=http_status)
Production Debugging Scenarios
Scenario 1: Intermittent 500 Errors
Problem: Users occasionally see 500 errors, but you can’t reproduce.
Solution:
# Add detailed error logging
# Careful: sessions and headers can contain tokens — scrub before logging
logger.error(
    'View error',
    exc_info=True,
    extra={
        'user_id': request.user.id,
        'url': request.build_absolute_uri(),
        'method': request.method,
        'POST': request.POST.dict(),
        'GET': request.GET.dict(),
        'session': dict(request.session),
        'headers': dict(request.headers),
    }
)
Check logs:
# Find patterns
grep "ERROR" /var/log/django/error.log | grep "500" | less
# Find a specific user's errors
grep "user_id.*123" /var/log/django/error.log
Scenario 2: Slow Performance
Problem: App suddenly slow, not sure why.
Solution:
# Log all slow requests
import time

from django.db import connection

class PerformanceLoggingMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        start = time.time()
        response = self.get_response(request)
        duration = time.time() - start
        if duration > 1.0:  # Slow request
            logger.warning(
                'Slow request',
                extra={
                    'path': request.path,
                    'duration_ms': duration * 1000,
                    'query_count': len(connection.queries),
                    'queries': [q['sql'][:100] for q in connection.queries],
                }
            )
        return response
Check logs:
# Find slowest endpoints
grep "Slow request" /var/log/django/app.log | \
  jq '.path' | sort | uniq -c | sort -rn | head -10
Scenario 3: Memory Leak
Problem: Memory usage growing over time.
Solution:
import tracemalloc

# Start tracking
tracemalloc.start()

def view_with_memory_tracking(request):
    # Your view code
    # Take snapshot
    snapshot = tracemalloc.take_snapshot()
    top_stats = snapshot.statistics('lineno')
    # Log top 10 memory consumers
    for stat in top_stats[:10]:
        logger.debug(
            f'Memory: {stat.traceback} - {stat.size / 1024 / 1024:.1f} MB'
        )
Scenario 4: Database Deadlocks
Problem: Occasional database deadlocks.
Solution:
from django.db import transaction

@transaction.atomic
def transfer_money(from_account, to_account, amount):
    try:
        # Always lock rows in the same order to prevent deadlocks;
        # select_for_update() is what actually takes the row locks
        for account in sorted([from_account, to_account], key=lambda a: a.id):
            type(account).objects.select_for_update().get(pk=account.pk)
        from_account.refresh_from_db()
        to_account.refresh_from_db()
        from_account.balance -= amount
        to_account.balance += amount
        from_account.save()
        to_account.save()
        logger.info(f'Transfer complete: {amount}')
    except Exception:
        logger.error('Transfer failed', exc_info=True, extra={
            'from_account': from_account.id,
            'to_account': to_account.id,
            'amount': amount,
        })
        raise
Log Analysis Tools
1. grep and jq
# Find errors in last hour
grep "ERROR" /var/log/django/error.log | tail -100
# Parse JSON logs
cat /var/log/django/app.log | jq 'select(.level == "ERROR")'
# Count errors by type
cat /var/log/django/app.log | \
  jq -r 'select(.level == "ERROR") | .message' | \
  sort | uniq -c | sort -rn
# Find slow queries
cat /var/log/django/app.log | \
  jq 'select(.duration_ms > 1000)'
2. Log Management Tools
ELK Stack (Elasticsearch, Logstash, Kibana):
Centralized logging
Powerful search
Visualization
Alerting
Graylog:
Open source
Simple setup
Good for medium projects
Datadog / New Relic:
Commercial
All-in-one
Great visualizations
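These aggregators all expect one JSON object per line. If you'd rather not depend on python-json-logger, a stdlib-only formatter is a few lines — a sketch, with illustrative field names:

```python
import io
import json
import logging

class JsonLineFormatter(logging.Formatter):
    """Render each record as one JSON object per line."""
    def format(self, record):
        return json.dumps({
            'level': record.levelname,
            'name': record.name,
            'message': record.getMessage(),
        })

buf = io.StringIO()  # stand-in for a log file or stdout
handler = logging.StreamHandler(buf)
handler.setFormatter(JsonLineFormatter())

log = logging.getLogger('json_demo')
log.setLevel(logging.INFO)
log.addHandler(handler)
log.propagate = False

log.error('Payment failed')
parsed = json.loads(buf.getvalue())
```

Each line is then directly queryable with `jq` or ingestible by Logstash without a grok pattern.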
3. Python Log Analysis
# analyze_logs.py
import json
from collections import Counter

def analyze_logs(log_file):
    errors = []
    slow_requests = []
    with open(log_file) as f:
        for line in f:
            try:
                log = json.loads(line)
                if log.get('level') == 'ERROR':
                    errors.append(log)
                if log.get('duration_ms', 0) > 1000:
                    slow_requests.append(log)
            except json.JSONDecodeError:
                continue
    # Error summary
    error_types = Counter(e.get('message') for e in errors)
    print('\nTop 10 Errors:')
    for error, count in error_types.most_common(10):
        print(f'  {count:4d} - {error[:80]}')
    # Slow endpoints
    slow_paths = Counter(r.get('path') for r in slow_requests)
    print('\nTop 10 Slow Endpoints:')
    for path, count in slow_paths.most_common(10):
        print(f'  {count:4d} - {path}')

if __name__ == '__main__':
    analyze_logs('/var/log/django/app.log')
Logging Best Practices
1. Use Appropriate Levels
# ✅ GOOD
logger.debug('Query executed')      # Development detail
logger.info('User registered')      # Important event
logger.warning('API rate limited')  # Recoverable issue
logger.error('Database failed')     # Function failed
logger.critical('Disk full')        # System failure
# ❌ BAD
logger.error('User registered')  # Wrong level
logger.info('Database failed')   # Wrong level
2. Include Context
# ❌ BAD - No context
logger.error('Failed')
# ✅ GOOD - With context
logger.error(
    'Payment processing failed',
    exc_info=True,
    extra={
        'user_id': user.id,
        'amount': amount,
        'error_code': error.code,
    }
)
3. Don't Log in Loops
# ❌ BAD - Floods logs
for item in items:  # 10,000 items
    logger.info(f'Processing {item}')
# ✅ GOOD - Log summary
logger.info(f'Processing {len(items)} items')
process_items(items)
logger.info(f'Processed {len(items)} items')
4. Use exc_info for Exceptions
# ❌ BAD - No stack trace
try:
    risky_operation()
except Exception as e:
    logger.error(f'Failed: {e}')
# ✅ GOOD - Full stack trace
try:
    risky_operation()
except Exception:
    logger.error('Operation failed', exc_info=True)
5. Sanitize Logs
def sanitize_for_logging(data):
    """Remove sensitive data before logging"""
    if isinstance(data, dict):
        sanitized = data.copy()
        for key in ['password', 'credit_card', 'ssn', 'api_key']:
            if key in sanitized:
                sanitized[key] = '[REDACTED]'
        return sanitized
    return data

# Usage
logger.info('User data', extra=sanitize_for_logging(user_data))
Logging Checklist
Development ✅
Console logging enabled
SQL queries visible (DEBUG)
Debug level for app loggers
File logging for reference
Production ✅
No DEBUG level logs
Rotating file handlers (size limits)
Error tracking (Sentry)
Request/response logging
Slow query logging
Security event logging
No sensitive data in logs
Alerting configured
Monitoring ✅
Health check endpoint
Metrics tracking
Log aggregation (ELK, Datadog)
Dashboards created
Alerts configured
On-call rotation
Conclusion
The truth: You can’t fix what you can’t see.
Logging strategy:
1. Development:
Log everything (DEBUG level)
See SQL queries
Console output
2. Production:
Log business events (INFO)
Log errors with context (ERROR)
Track performance (WARNING for slow)
Never log sensitive data
3. Monitoring:
Aggregate logs (ELK, Datadog)
Set up alerts
Create dashboards
Review regularly
Remember:
✅ Log events, not states
✅ Include context
✅ Use structured logging
✅ Set up monitoring BEFORE you need it
❌ Never log passwords/PII
❌ Don’t log in loops
❌ Don’t leave DEBUG in production
Start today:
Configure LOGGING in settings
Add request logging middleware
Set up Sentry
Create health check endpoint
Review logs weekly
The goal: Never be blind in production again.
Thanks for reading Build Smart Engineering!
If this post helped you think or build better, consider subscribing, restacking, or sharing it with someone who’d benefit.
A publication without readers is just notes in the void — your time and attention truly matter.
Let’s keep building smarter, together. 💙

