Enhanced Logging and Monitoring
Introduction
Effective logging is crucial for maintaining, debugging, and monitoring production applications. The Hypermodern Enhanced Logging Module extends the basic logging capabilities with comprehensive features including multiple output destinations, intelligent error notifications, performance tracking, and structured logging with correlation IDs.
This chapter covers the complete logging solution that transforms simple log statements into a powerful observability platform for your Hypermodern applications.
Why Enhanced Logging Matters
Traditional Logging Limitations
// Traditional approach - limited and scattered
print('User login failed');
logger.error('Database connection timeout');
// Where do these logs go? How do we track them? How do we get alerted?
Enhanced Logging Benefits
// Enhanced approach - comprehensive and structured
await logger.error(
  'User login failed',
  component: 'authentication',
  context: {
    'username': 'john.doe',
    'ip_address': '192.168.1.100',
    'attempt_number': 3,
    'failure_reason': 'invalid_password',
  },
  stackTrace: stackTrace,
);
// Automatically: logged to file, database, sent to Slack, correlated with request ID
Core Architecture
Log Entry Structure
The enhanced logging system uses a rich LogEntry structure that captures comprehensive context:
class LogEntry {
  LogEntry({
    required this.level,     // DEBUG, INFO, WARNING, ERROR, FATAL
    required this.message,   // Human-readable message
    required this.timestamp, // Precise timestamp
    this.component,          // Application component (auth, database, etc.)
    this.operation,          // Specific operation being performed
    this.context,            // Additional structured data
    this.stackTrace,         // Full stack trace for errors
    this.errorSource,        // Automatically detected file:line
    this.userId,             // Current user (if available)
    this.requestId,          // Request correlation ID
    this.sessionId,          // Session correlation ID
  });

  final LogLevel level;
  final String message;
  final DateTime timestamp;
  final String? component;
  final String? operation;
  final Map<String, dynamic>? context;
  final StackTrace? stackTrace;
  final String? errorSource;
  final String? userId;
  final String? requestId;
  final String? sessionId;

  Map<String, dynamic> toJson() {
    return {
      'level': level.name,
      'message': message,
      'timestamp': timestamp.toIso8601String(),
      'component': component,
      'operation': operation,
      'context': context,
      'stackTrace': stackTrace?.toString(),
      'errorSource': errorSource,
      'userId': userId,
      'requestId': requestId,
      'sessionId': sessionId,
    };
  }
}
Multiple Output Destinations
The logging module supports multiple simultaneous output destinations:
abstract class LogDestination {
  Future<void> initialize();
  Future<void> write(LogEntry entry);
  Future<void> dispose();
}
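Conceptually, the logger fans each entry out to every configured destination and isolates failures, so one broken sink cannot block the others. A minimal sketch of that fan-out, assuming a list of destinations (the class and field names here are illustrative, not the module's actual internals):

// Illustrative fan-out sketch: every entry goes to each destination,
// and a failing destination never blocks the others.
class MultiDestinationWriter {
  MultiDestinationWriter(this._destinations);

  final List<LogDestination> _destinations;

  Future<void> write(LogEntry entry) async {
    for (final destination in _destinations) {
      try {
        await destination.write(entry);
      } catch (e) {
        // Never let one broken destination take down logging as a whole.
        print('Log destination ${destination.runtimeType} failed: $e');
      }
    }
  }
}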
File Logging with Rotation
Automatic File Rotation
class FileLogDestination implements LogDestination {
  FileLogDestination({
    required this.filePath,
    this.maxFileSize = 10 * 1024 * 1024, // 10MB default
    this.maxFiles = 5,                   // Keep 5 rotated files
    this.rotateDaily = true,             // Daily rotation
  });

  final String filePath;
  final int maxFileSize;
  final int maxFiles;
  final bool rotateDaily;

  IOSink? _sink;           // opened in _openLogFile()
  String? _currentFilePath;

  @override
  Future<void> write(LogEntry entry) async {
    // Check if rotation is needed
    if (await _needsRotation()) {
      await _rotateLogFile();
    }
    final jsonEntry = jsonEncode(entry.toJson());
    _sink?.writeln(jsonEntry);
    await _sink?.flush();
  }

  Future<void> _rotateLogFile() async {
    await _sink?.close();

    // Rotate existing files: app.log.1 -> app.log.2, etc.
    for (int i = maxFiles - 1; i > 0; i--) {
      final oldFile = File('${_currentFilePath!}.$i');
      final newFile = File('${_currentFilePath!}.${i + 1}');
      if (await oldFile.exists()) {
        await oldFile.rename(newFile.path);
      }
    }

    // Move current file to .1
    final currentFile = File(_currentFilePath!);
    if (await currentFile.exists()) {
      await currentFile.rename('${_currentFilePath!}.1');
    }

    await _openLogFile();
  }
}
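The excerpt above references a _needsRotation() helper without showing it. A plausible sketch, assuming the size and daily-rotation rules described in the constructor (not the module's verbatim code):

// Plausible rotation check: rotate when the current file exceeds
// maxFileSize, or when rotateDaily is set and the file was last
// modified on an earlier calendar day. FileStat comes from dart:io.
Future<bool> _needsRotation() async {
  final file = File(_currentFilePath ?? filePath);
  if (!await file.exists()) return false;

  final stat = await file.stat();
  if (stat.size >= maxFileSize) return true;

  if (rotateDaily) {
    final now = DateTime.now();
    final modified = stat.modified;
    return modified.year != now.year ||
        modified.month != now.month ||
        modified.day != now.day;
  }
  return false;
}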
Usage Example
final logger = LoggerFactory.createDevelopmentLogger(
  logFilePath: 'logs/app.log',
  level: LogLevel.debug,
);
await logger.initialize();

// Logs will automatically rotate when they reach size limits
await logger.info('Application started', component: 'main');
await logger.debug(
  'Processing user request',
  component: 'api',
  context: {'endpoint': '/users', 'method': 'GET'},
);
Database Logging
Structured Storage
Database logging provides structured, queryable log storage with batch processing for performance:
class DatabaseLogDestination implements LogDestination {
  DatabaseLogDestination({
    required this.connectionString,
    this.tableName = 'logs',
    this.batchSize = 100, // Batch writes for performance
    this.flushInterval = const Duration(seconds: 30),
  });

  final String connectionString;
  final String tableName;
  final int batchSize;
  final Duration flushInterval;

  final List<LogEntry> _buffer = [];

  @override
  Future<void> write(LogEntry entry) async {
    _buffer.add(entry);
    if (_buffer.length >= batchSize) {
      await _flush();
    }
  }

  Future<void> _flush() async {
    if (_buffer.isEmpty) return;
    final entries = List<LogEntry>.from(_buffer);
    _buffer.clear();
    // Batch insert for efficiency
    await _batchInsertEntries(entries);
  }
}
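The excerpt shows size-triggered flushes; the flushInterval parameter implies a periodic timer as well, so a partially filled buffer is never held indefinitely. A minimal sketch of that wiring, using Timer from dart:async (the connection helpers are assumptions, not the module's actual names):

// Sketch: time-based flushing to complement the size trigger above.
Timer? _flushTimer;

@override
Future<void> initialize() async {
  await _openConnection(); // assumed helper that connects to the database
  _flushTimer = Timer.periodic(flushInterval, (_) => _flush());
}

@override
Future<void> dispose() async {
  _flushTimer?.cancel();
  await _flush();           // drain any remaining buffered entries
  await _closeConnection(); // assumed helper
}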
Database Schema
The logging module expects this table structure:
CREATE TABLE logs (
  id SERIAL PRIMARY KEY,
  level VARCHAR(10) NOT NULL,
  message TEXT NOT NULL,
  timestamp TIMESTAMP NOT NULL,
  component VARCHAR(100),
  operation VARCHAR(100),
  context JSONB,             -- Structured context data
  stack_trace TEXT,
  error_source VARCHAR(255), -- file:line information
  user_id VARCHAR(100),
  request_id VARCHAR(100),   -- Request correlation
  session_id VARCHAR(100),   -- Session correlation
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Indexes for common queries (PostgreSQL defines these separately,
-- not inline in CREATE TABLE)
CREATE INDEX idx_logs_level ON logs (level);
CREATE INDEX idx_logs_timestamp ON logs (timestamp);
CREATE INDEX idx_logs_component ON logs (component);
CREATE INDEX idx_logs_user_id ON logs (user_id);
CREATE INDEX idx_logs_request_id ON logs (request_id);
Querying Logs
With structured database storage, you can perform powerful queries:
-- Find all errors for a specific user in the last hour
SELECT * FROM logs
WHERE user_id = 'user-123'
  AND level = 'ERROR'
  AND timestamp > NOW() - INTERVAL '1 hour'
ORDER BY timestamp DESC;

-- Find all requests that took longer than 1 second
-- (cast the JSONB text value to a number before comparing)
SELECT request_id, COUNT(*) AS log_count,
       MIN(timestamp) AS start_time,
       MAX(timestamp) AS end_time
FROM logs
WHERE (context->>'duration_ms')::int > 1000
GROUP BY request_id
ORDER BY start_time DESC;

-- Error frequency by component
SELECT component, COUNT(*) AS error_count
FROM logs
WHERE level IN ('ERROR', 'FATAL')
  AND timestamp > NOW() - INTERVAL '24 hours'
GROUP BY component
ORDER BY error_count DESC;
Redis Streaming
Real-time Log Streaming
Redis streams provide real-time log distribution for monitoring dashboards and alerting systems:
class RedisLogDestination implements LogDestination {
  RedisLogDestination({
    required this.host,
    required this.port,
    this.password,
    this.streamKey = 'hypermodern:logs',
    this.maxStreamLength = 10000,
  });

  final String host;
  final int port;
  final String? password;
  final String streamKey;
  final int maxStreamLength;

  // _redisClient is created in initialize(); its type depends on the
  // Redis package in use.

  @override
  Future<void> write(LogEntry entry) async {
    // Redis stream fields must be flat strings, so complex values
    // (such as the context map) are JSON-encoded first.
    final fields = entry.toJson().map(
      (key, value) =>
          MapEntry(key, value is String ? value : jsonEncode(value)),
    );

    // Add to Redis stream with automatic trimming
    await _redisClient.xAdd(
      streamKey,
      fields: fields,
      maxLength: maxStreamLength,
    );
  }
}
Consuming Log Streams
// Real-time log monitoring
class LogStreamConsumer {
  Future<void> startConsuming() async {
    await for (final entry in _redisClient.xRead(['hypermodern:logs'])) {
      await _processLogEntry(entry);
    }
  }

  Future<void> _processLogEntry(StreamEntry entry) async {
    final logEntry = LogEntry.fromJson(entry.fields);

    // Real-time processing
    if (logEntry.level == LogLevel.error) {
      await _triggerAlert(logEntry);
    }
    if (logEntry.component == 'payment') {
      await _updatePaymentDashboard(logEntry);
    }
  }
}
Intelligent Notifications
Multi-Channel Alerting
The notification system supports multiple channels with intelligent rate limiting:
class NotificationConfig {
  NotificationConfig({
    this.emailRecipients = const [],
    this.webhookUrls = const [],
    this.slackWebhooks = const [],
    this.minLevel = LogLevel.error,    // Only alert on errors and above
    this.rateLimitMinutes = 5,         // Don't repeat the same error
    this.maxNotificationsPerHour = 10, // Maximum alerts per hour
  });
}
Email Notifications
Future<void> _sendEmailNotifications(LogEntry entry) async {
  if (config.emailRecipients.isEmpty) return;

  final subject = '[${entry.level.name}] ${entry.component ?? 'System'} Error';
  final body = '''
Error Details:
Level: ${entry.level.name}
Message: ${entry.message}
Component: ${entry.component ?? 'Unknown'}
Timestamp: ${entry.timestamp}
Request ID: ${entry.requestId ?? 'N/A'}
${entry.context != null ? 'Context: ${jsonEncode(entry.context)}' : ''}
${entry.stackTrace != null ? 'Stack Trace:\n${entry.stackTrace}' : ''}
''';

  await mailService.send(
    to: config.emailRecipients,
    subject: subject,
    body: body,
  );
}
Slack Integration
Future<void> _sendSlackNotifications(LogEntry entry) async {
  for (final webhookUrl in config.slackWebhooks) {
    final payload = {
      'text': '🚨 ${entry.level.name} Alert',
      'attachments': [
        {
          'color': _getSlackColor(entry.level),
          'fields': [
            {'title': 'Component', 'value': entry.component ?? 'Unknown', 'short': true},
            {'title': 'Message', 'value': entry.message, 'short': false},
            {'title': 'Timestamp', 'value': entry.timestamp.toIso8601String(), 'short': true},
            if (entry.errorSource != null)
              {'title': 'Source', 'value': entry.errorSource, 'short': true},
            if (entry.requestId != null)
              {'title': 'Request ID', 'value': entry.requestId, 'short': true},
          ],
        }
      ],
    };
    await _sendWebhookPayload(webhookUrl, payload);
  }
}
Rate Limiting
class LogNotificationService {
  bool _shouldRateLimit(String key) {
    final lastTime = _lastNotificationTimes[key];
    if (lastTime == null) return false;

    final now = DateTime.now();
    final timeDiff = now.difference(lastTime);

    // Don't send the same error type within the rate limit window
    if (timeDiff.inMinutes < config.rateLimitMinutes) return true;

    // Check hourly limit
    final hourlyCount = _notificationCounts[key] ?? 0;
    return hourlyCount >= config.maxNotificationsPerHour;
  }
}
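Putting the pieces together, the notify flow checks the severity threshold, consults the rate limiter, and then fans out to every configured channel. A sketch of that flow; the key-building scheme and bookkeeping maps shown here are assumptions, not the module's verbatim code:

// Sketch of the overall notification flow: derive a rate-limit key from
// the error's identity, bail out if limited, then fan out and record the send.
Future<void> notify(LogEntry entry) async {
  if (entry.level.value < config.minLevel.value) return;

  // Key the rate limiter on "what kind of error" rather than the full message.
  final key = '${entry.component ?? 'unknown'}:${entry.level.name}';
  if (_shouldRateLimit(key)) return;

  await _sendEmailNotifications(entry);
  await _sendSlackNotifications(entry);

  _lastNotificationTimes[key] = DateTime.now();
  _notificationCounts[key] = (_notificationCounts[key] ?? 0) + 1;
}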
Performance Tracking
Automatic Operation Timing
// Wrap any operation to automatically track performance
final result = await logger.trackPerformance(
  'database-query',
  () async {
    return await database.query('SELECT * FROM users WHERE active = true');
  },
  component: 'database',
  context: {
    'table': 'users',
    'operation': 'select',
    'filter': 'active=true',
  },
);

// Automatically logs:
// - Operation start time
// - Duration in milliseconds
// - Success/failure status
// - Any errors with stack traces
Performance Metrics
Future<T> trackPerformance<T>(
  String operation,
  Future<T> Function() function, {
  String? component,
  Map<String, dynamic>? context,
}) async {
  final stopwatch = Stopwatch()..start();
  final startTime = DateTime.now();

  try {
    final result = await function();
    stopwatch.stop();

    await logEnhanced(
      LogLevel.info,
      'Performance: $operation completed',
      component: component,
      operation: operation,
      context: {
        ...?context,
        'duration_ms': stopwatch.elapsedMilliseconds,
        'start_time': startTime.toIso8601String(),
        'end_time': DateTime.now().toIso8601String(),
        'status': 'success',
      },
    );
    return result;
  } catch (error, stackTrace) {
    stopwatch.stop();

    await logEnhanced(
      LogLevel.error,
      'Performance: $operation failed',
      component: component,
      operation: operation,
      context: {
        ...?context,
        'duration_ms': stopwatch.elapsedMilliseconds,
        'error': error.toString(),
        'status': 'error',
      },
      stackTrace: stackTrace,
    );
    rethrow;
  }
}
Correlation IDs and Request Tracking
Setting Correlation Context
class RequestMiddleware {
  Future<void> handleRequest(Request request) async {
    // Generate or extract correlation IDs
    final requestId = request.headers['x-request-id'] ?? _generateRequestId();
    final sessionId = request.headers['x-session-id'];
    final userId = await _extractUserId(request);

    // Set correlation context for all subsequent logs
    logger.setCorrelationIds(
      requestId: requestId,
      sessionId: sessionId,
      userId: userId,
    );

    try {
      await _processRequest(request);
    } finally {
      // Clear correlation context
      logger.setCorrelationIds();
    }
  }
}
Automatic Correlation
// All logs within the request context automatically include correlation IDs
await logger.info(
  'Processing user registration',
  component: 'auth',
  context: {'email': user.email},
);

// Results in a log entry with:
// {
//   "message": "Processing user registration",
//   "component": "auth",
//   "context": {"email": "user@example.com"},
//   "requestId": "req-abc123",
//   "sessionId": "sess-def456",
//   "userId": "user-789012",
//   "timestamp": "2023-12-01T10:30:45.123Z"
// }
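Internally, this only requires the logger to hold the current IDs and merge them into every outgoing entry. A minimal sketch of that mechanism; the field names are assumptions, and a real implementation would likely use zone-local storage so concurrent requests don't overwrite each other's context:

// Sketch: correlation state on the logger (assumed internals).
// Calling setCorrelationIds() with no arguments clears the context.
String? _currentRequestId;
String? _currentSessionId;
String? _currentUserId;

void setCorrelationIds({String? requestId, String? sessionId, String? userId}) {
  _currentRequestId = requestId;
  _currentSessionId = sessionId;
  _currentUserId = userId;
}

LogEntry _withCorrelation(LogEntry entry) {
  // Merge the current correlation context into every outgoing entry.
  return LogEntry(
    level: entry.level,
    message: entry.message,
    timestamp: entry.timestamp,
    component: entry.component,
    operation: entry.operation,
    context: entry.context,
    stackTrace: entry.stackTrace,
    errorSource: entry.errorSource,
    userId: entry.userId ?? _currentUserId,
    requestId: entry.requestId ?? _currentRequestId,
    sessionId: entry.sessionId ?? _currentSessionId,
  );
}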
Cross-Service Correlation
class ServiceClient {
  Future<Response> makeRequest(String endpoint, Map<String, dynamic> data) async {
    // Propagate correlation IDs to downstream services
    final headers = {
      'x-request-id': logger.currentRequestId,
      'x-session-id': logger.currentSessionId,
      'x-user-id': logger.currentUserId,
    };

    await logger.info(
      'Making downstream request',
      component: 'service-client',
      context: {
        'endpoint': endpoint,
        'downstream_service': _getServiceName(endpoint),
      },
    );

    return await httpClient.post(endpoint, data: data, headers: headers);
  }
}
Configuration Management
JSON Configuration
{
  "level": "info",
  "enableNotifications": true,
  "enableStructuredLogging": true,
  "enableCorrelationIds": true,
  "enablePerformanceTracking": true,
  "destinations": [
    {
      "type": "file",
      "filePath": "logs/app.log",
      "maxFileSize": 52428800,
      "maxFiles": 10,
      "rotateDaily": true
    },
    {
      "type": "database",
      "connectionString": "postgresql://user:pass@localhost:5432/myapp",
      "tableName": "application_logs",
      "batchSize": 200,
      "flushIntervalSeconds": 15
    },
    {
      "type": "redis",
      "host": "redis.myapp.com",
      "port": 6379,
      "streamKey": "myapp:logs",
      "maxStreamLength": 50000
    }
  ],
  "notifications": {
    "emailRecipients": [
      "alerts@myapp.com",
      "dev-team@myapp.com"
    ],
    "slackWebhooks": [
      "https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK"
    ],
    "webhookUrls": [
      "https://monitoring.myapp.com/webhook/logs"
    ],
    "minLevel": "error",
    "rateLimitMinutes": 10,
    "maxNotificationsPerHour": 5
  }
}
Loading Configuration
// Load from configuration file
final config = await LoggingConfigLoader.fromFile('config/logging.json');
final logger = EnhancedHypermodernLogger(config);
await logger.initialize();

// Or use environment-specific presets
final prodLogger = EnhancedHypermodernLogger(
  LoggingPresets.production(
    logPath: 'logs/production.log',
    databaseConnectionString: Platform.environment['DATABASE_URL']!,
    alertEmails: ['alerts@myapp.com'],
    redisHost: 'redis.myapp.com',
    redisPort: 6379,
  ),
);
Environment Presets
class LoggingPresets {
  // Development: verbose logging, file only, no notifications
  static EnhancedLoggerConfig development({String logPath = 'logs/dev.log'}) {
    return EnhancedLoggerConfig(
      level: LogLevel.debug,
      destinations: [FileLogDestination(filePath: logPath)],
      enableNotifications: false,
    );
  }

  // Testing: warnings and errors only, minimal overhead
  static EnhancedLoggerConfig testing({String logPath = 'logs/test.log'}) {
    return EnhancedLoggerConfig(
      level: LogLevel.warning,
      destinations: [FileLogDestination(filePath: logPath)],
      enableNotifications: false,
      enableCorrelationIds: false,
      enablePerformanceTracking: false,
    );
  }

  // Production: comprehensive logging with notifications
  static EnhancedLoggerConfig production({
    required String logPath,
    required String databaseConnectionString,
    required List<String> alertEmails,
    String? redisHost,
    int? redisPort,
  }) {
    final destinations = <LogDestination>[
      FileLogDestination(
        filePath: logPath,
        maxFileSize: 50 * 1024 * 1024, // 50MB
        maxFiles: 10,
        rotateDaily: true,
      ),
      DatabaseLogDestination(
        connectionString: databaseConnectionString,
        batchSize: 200,
        flushInterval: const Duration(seconds: 15),
      ),
    ];

    if (redisHost != null && redisPort != null) {
      destinations.add(RedisLogDestination(
        host: redisHost,
        port: redisPort,
      ));
    }

    return EnhancedLoggerConfig(
      level: LogLevel.info,
      destinations: destinations,
      enableNotifications: true,
      notificationConfig: NotificationConfig(
        emailRecipients: alertEmails,
        minLevel: LogLevel.error,
        rateLimitMinutes: 10,
        maxNotificationsPerHour: 5,
      ),
    );
  }
}
Integration Patterns
Middleware Integration
class LoggingMiddleware implements Middleware {
  final EnhancedHypermodernLogger logger;

  LoggingMiddleware(this.logger);

  @override
  Future<Response> handle(Request request, RequestHandler next) async {
    final requestId = _generateRequestId();
    final startTime = DateTime.now();

    // Set correlation context
    logger.setCorrelationIds(
      requestId: requestId,
      sessionId: request.headers['x-session-id'],
      userId: await _extractUserId(request),
    );

    // Log request start
    await logger.info(
      'Request started',
      component: 'http-middleware',
      context: {
        'method': request.method,
        'path': request.path,
        'user_agent': request.headers['user-agent'],
        'ip_address': request.clientIp,
      },
    );

    try {
      final response = await next(request);
      final duration = DateTime.now().difference(startTime);

      // Log successful request
      await logger.info(
        'Request completed',
        component: 'http-middleware',
        context: {
          'status_code': response.statusCode,
          'duration_ms': duration.inMilliseconds,
          'response_size': response.contentLength,
        },
      );
      return response;
    } catch (error, stackTrace) {
      final duration = DateTime.now().difference(startTime);

      // Log failed request
      await logger.error(
        'Request failed',
        component: 'http-middleware',
        context: {
          'duration_ms': duration.inMilliseconds,
          'error_type': error.runtimeType.toString(),
        },
        stackTrace: stackTrace,
      );
      rethrow;
    } finally {
      // Clear correlation context
      logger.setCorrelationIds();
    }
  }
}
Error Handler Integration
class EnhancedErrorHandler {
  final EnhancedHypermodernLogger logger;

  EnhancedErrorHandler(this.logger);

  Future<void> handleError(
    Object error,
    StackTrace stackTrace, {
    String? component,
    Map<String, dynamic>? context,
  }) async {
    // Determine error severity
    final level = _determineLogLevel(error);

    // Extract error source from stack trace
    final errorSource = _extractErrorSource(stackTrace);

    await logger.logEnhanced(
      level,
      'Unhandled error: ${error.toString()}',
      component: component ?? 'error-handler',
      context: {
        'error_type': error.runtimeType.toString(),
        'error_message': error.toString(),
        ...?context,
      },
      stackTrace: stackTrace,
      errorSource: errorSource,
    );

    // Additional error-specific handling
    if (error is DatabaseException) {
      await _handleDatabaseError(error, stackTrace);
    } else if (error is NetworkException) {
      await _handleNetworkError(error, stackTrace);
    }
  }

  LogLevel _determineLogLevel(Object error) {
    if (error is ValidationException) return LogLevel.warning;
    if (error is AuthenticationException) return LogLevel.warning;
    if (error is NetworkTimeoutException) return LogLevel.warning;
    if (error is DatabaseConnectionException) return LogLevel.fatal;
    return LogLevel.error;
  }
}
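To catch errors application-wide, the handler can be wired into Dart's zone-based error handling. A sketch using the standard runZonedGuarded from dart:async; the startApp() call is a placeholder for your application's entry point, not part of the module:

import 'dart:async';

// Route every uncaught error through the enhanced handler.
void main() {
  final errorHandler = EnhancedErrorHandler(logger);

  runZonedGuarded(
    () async {
      await startApp(); // hypothetical application bootstrap
    },
    (error, stackTrace) {
      // Fire-and-forget: logging failures must not crash error handling.
      unawaited(errorHandler.handleError(error, stackTrace));
    },
  );
}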
Service Integration
class UserService {
  final EnhancedHypermodernLogger logger;
  final Database database;

  UserService(this.logger, this.database);

  Future<User> createUser(CreateUserRequest request) async {
    return await logger.trackPerformance(
      'create-user',
      () async {
        await logger.info(
          'Creating new user',
          component: 'user-service',
          context: {
            'email': request.email,
            'registration_source': request.source,
          },
        );

        // Validate request
        final validation = await _validateUserRequest(request);
        if (!validation.isValid) {
          await logger.warning(
            'User creation validation failed',
            component: 'user-service',
            context: {
              'email': request.email,
              'validation_errors': validation.errors,
            },
          );
          throw ValidationException(validation.errors);
        }

        // Create user
        final user = await database.users.create(request.toUser());

        await logger.info(
          'User created successfully',
          component: 'user-service',
          context: {
            'user_id': user.id,
            'email': user.email,
          },
        );
        return user;
      },
      component: 'user-service',
      context: {'operation': 'create-user'},
    );
  }
}
Custom Destinations
Creating Custom Destinations
class ElasticsearchLogDestination implements LogDestination {
  final String host;
  final int port;
  final String index;

  late ElasticsearchClient _client;

  ElasticsearchLogDestination({
    required this.host,
    required this.port,
    this.index = 'hypermodern-logs',
  });

  @override
  Future<void> initialize() async {
    // Initialize Elasticsearch client
    _client = ElasticsearchClient(host: host, port: port);
    // Create index if it doesn't exist
    await _createIndexIfNotExists();
  }

  @override
  Future<void> write(LogEntry entry) async {
    await _client.index(
      index: _getIndexName(entry.timestamp),
      document: entry.toJson(),
    );
  }

  @override
  Future<void> dispose() async {
    await _client.close();
  }

  String _getIndexName(DateTime timestamp) {
    // Create daily indices: hypermodern-logs-2023-12-01
    final dateStr = timestamp.toIso8601String().split('T')[0];
    return '$index-$dateStr';
  }
}
Webhook Destination
class WebhookLogDestination implements LogDestination {
  final String webhookUrl;
  final Map<String, String> headers;
  final LogLevel minLevel;

  WebhookLogDestination({
    required this.webhookUrl,
    this.headers = const {},
    this.minLevel = LogLevel.warning,
  });

  @override
  Future<void> initialize() async {} // No setup needed for plain HTTP

  @override
  Future<void> dispose() async {}

  @override
  Future<void> write(LogEntry entry) async {
    if (entry.level.value < minLevel.value) return;

    final payload = {
      'timestamp': entry.timestamp.toIso8601String(),
      'level': entry.level.name,
      'message': entry.message,
      'component': entry.component,
      'context': entry.context,
      'requestId': entry.requestId,
    };
    await _sendWebhook(payload);
  }

  Future<void> _sendWebhook(Map<String, dynamic> payload) async {
    final client = HttpClient();
    try {
      final request = await client.postUrl(Uri.parse(webhookUrl));

      // Set headers
      request.headers.contentType = ContentType.json;
      headers.forEach((key, value) {
        request.headers.set(key, value);
      });

      request.write(jsonEncode(payload));
      final response = await request.close();

      if (response.statusCode >= 400) {
        print('Webhook failed with status: ${response.statusCode}');
      }
    } catch (e) {
      print('Webhook error: $e');
    } finally {
      client.close();
    }
  }
}
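Custom destinations plug into the logger the same way as the built-in ones: add them to the destinations list in the configuration. A brief wiring example, reusing the two classes above (the host names are placeholders):

// Register custom destinations alongside the built-in ones.
final config = EnhancedLoggerConfig(
  level: LogLevel.info,
  destinations: [
    FileLogDestination(filePath: 'logs/app.log'),
    ElasticsearchLogDestination(host: 'es.myapp.com', port: 9200),
    WebhookLogDestination(
      webhookUrl: 'https://monitoring.myapp.com/webhook/logs',
      minLevel: LogLevel.error, // only forward errors to the webhook
    ),
  ],
);

final logger = EnhancedHypermodernLogger(config);
await logger.initialize();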
Production Best Practices
Log Level Strategy
// Development: Debug everything
final devConfig = LoggingPresets.development();

// Staging: Info and above, with notifications
final stagingConfig = EnhancedLoggerConfig(
  level: LogLevel.info,
  destinations: [
    FileLogDestination(filePath: 'logs/staging.log'),
    DatabaseLogDestination(connectionString: stagingDbUrl),
  ],
  enableNotifications: true,
  notificationConfig: NotificationConfig(
    emailRecipients: ['dev-team@myapp.com'],
    minLevel: LogLevel.error,
  ),
);

// Production: Info and above, full monitoring
final prodConfig = LoggingPresets.production(
  logPath: 'logs/production.log',
  databaseConnectionString: prodDbUrl,
  alertEmails: ['alerts@myapp.com', 'oncall@myapp.com'],
  redisHost: 'redis.prod.myapp.com',
  redisPort: 6379,
);
Performance Considerations
// Use appropriate batch sizes for database logging
DatabaseLogDestination(
  connectionString: dbUrl,
  batchSize: 500,                       // Larger batches for high-volume apps
  flushInterval: Duration(seconds: 10), // More frequent flushes
);

// Configure file rotation to prevent disk space issues
FileLogDestination(
  filePath: 'logs/app.log',
  maxFileSize: 100 * 1024 * 1024, // 100MB per file
  maxFiles: 20,                   // Keep 20 files (2GB total)
  rotateDaily: true,              // Daily rotation regardless of size
);

// Limit Redis stream length to prevent memory issues
RedisLogDestination(
  host: 'redis.myapp.com',
  port: 6379,
  maxStreamLength: 100000, // Keep last 100k entries
);
Security Considerations
// Sanitize sensitive data in logs
class SecureLogger extends EnhancedHypermodernLogger {
  @override
  Future<void> logEnhanced(
    LogLevel level,
    String message, {
    Map<String, dynamic>? context,
    // ... other parameters
  }) async {
    // Sanitize context data
    final sanitizedContext = _sanitizeContext(context);

    await super.logEnhanced(
      level,
      message,
      context: sanitizedContext,
      // ... other parameters
    );
  }

  Map<String, dynamic>? _sanitizeContext(Map<String, dynamic>? context) {
    if (context == null) return null;

    final sanitized = <String, dynamic>{};
    for (final entry in context.entries) {
      if (_isSensitiveField(entry.key)) {
        sanitized[entry.key] = '[REDACTED]';
      } else {
        sanitized[entry.key] = entry.value;
      }
    }
    return sanitized;
  }

  bool _isSensitiveField(String fieldName) {
    const sensitiveFields = {
      'password', 'token', 'secret', 'key', 'credit_card',
      'ssn', 'social_security', 'api_key', 'private_key',
    };
    return sensitiveFields.any(
      (field) => fieldName.toLowerCase().contains(field),
    );
  }
}
Monitoring and Alerting
// Set up comprehensive alerting rules
final alertConfig = NotificationConfig(
  emailRecipients: [
    'alerts@myapp.com', // Primary alerts
    'oncall@myapp.com', // On-call engineer
  ],
  slackWebhooks: [
    'https://hooks.slack.com/services/.../alerts', // #alerts channel
    'https://hooks.slack.com/services/.../oncall', // #oncall channel
  ],
  minLevel: LogLevel.error,
  rateLimitMinutes: 5,         // Don't spam the same error
  maxNotificationsPerHour: 20, // Allow more alerts in production
);

// Different alert levels for different components
class ComponentAwareNotificationService extends LogNotificationService {
  @override
  Future<void> notify(LogEntry entry) async {
    // Critical components get immediate alerts
    if (_isCriticalComponent(entry.component)) {
      await _sendImmediateAlert(entry);
    }

    // Payment errors always alert regardless of rate limits
    if (entry.component == 'payment' && entry.level == LogLevel.error) {
      await _sendPaymentAlert(entry);
    }

    // Standard notification flow
    await super.notify(entry);
  }

  bool _isCriticalComponent(String? component) {
    const criticalComponents = {
      'payment', 'authentication', 'database', 'security',
    };
    return criticalComponents.contains(component);
  }
}
Troubleshooting and Debugging
Common Issues
High Memory Usage
// Problem: Large log buffers consuming memory
DatabaseLogDestination(
  connectionString: dbUrl,
  batchSize: 10000,                     // Too large!
  flushInterval: Duration(minutes: 10), // Too infrequent!
);

// Solution: Reduce batch size and increase flush frequency
DatabaseLogDestination(
  connectionString: dbUrl,
  batchSize: 100,                       // Reasonable batch size
  flushInterval: Duration(seconds: 30), // Frequent flushing
);
Missing Log Entries
// Problem: Not calling dispose() properly
final logger = EnhancedHypermodernLogger(config);
await logger.initialize();
// ... use logger
// Missing: await logger.dispose(); // Buffered entries may be lost!

// Solution: Always dispose properly (declare the logger outside the try
// block so it is still in scope in finally)
final logger = EnhancedHypermodernLogger(config);
await logger.initialize();
try {
  // ... use logger
} finally {
  await logger.dispose(); // Ensures all buffered entries are written
}
Notification Spam
// Problem: No rate limiting
NotificationConfig(
  rateLimitMinutes: 0,           // No rate limiting!
  maxNotificationsPerHour: 1000, // Too high!
);

// Solution: Appropriate rate limiting
NotificationConfig(
  rateLimitMinutes: 5,         // Don't repeat the same error within 5 minutes
  maxNotificationsPerHour: 10, // Maximum 10 alerts per hour
);
Debug Mode
// Enable debug logging for the logging module itself
final config = EnhancedLoggerConfig(
  level: LogLevel.debug,
  destinations: [
    FileLogDestination(filePath: 'logs/debug.log'),
  ],
  enableDebugMode: true, // Shows internal logging operations
);
Health Checks
class LoggingHealthCheck {
  final EnhancedHypermodernLogger logger;

  LoggingHealthCheck(this.logger);

  Future<HealthCheckResult> checkHealth() async {
    final results = <String, bool>{};
    final errors = <String>[];

    // Test each destination
    for (final destination in logger.config.destinations) {
      try {
        await _testDestination(destination);
        results[destination.runtimeType.toString()] = true;
      } catch (e) {
        results[destination.runtimeType.toString()] = false;
        errors.add('${destination.runtimeType}: $e');
      }
    }

    // Test notifications
    if (logger.config.enableNotifications) {
      try {
        await _testNotifications();
        results['notifications'] = true;
      } catch (e) {
        results['notifications'] = false;
        errors.add('Notifications: $e');
      }
    }

    return HealthCheckResult(
      healthy: results.values.every((healthy) => healthy),
      results: results,
      errors: errors,
    );
  }

  Future<void> _testDestination(LogDestination destination) async {
    final testEntry = LogEntry(
      level: LogLevel.info,
      message: 'Health check test',
      timestamp: DateTime.now(),
      component: 'health-check',
    );
    await destination.write(testEntry);
  }
}
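In production you would typically run this check on a schedule and surface failures through a path that does not depend on the logging pipeline itself. A brief usage sketch; the interval and the print-based reporting are illustrative choices (Timer comes from dart:async):

// Run the health check every five minutes and report failures.
final healthCheck = LoggingHealthCheck(logger);

Timer.periodic(const Duration(minutes: 5), (_) async {
  final result = await healthCheck.checkHealth();
  if (!result.healthy) {
    // Surface the failing destinations without relying on the broken ones.
    print('Logging health check failed: ${result.errors.join('; ')}');
  }
});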
Migration from Basic Logging
Gradual Migration Strategy
// Phase 1: Parallel logging (both old and new)
class MigrationLogger {
  final HypermodernLogger oldLogger;
  final EnhancedHypermodernLogger newLogger;

  MigrationLogger(this.oldLogger, this.newLogger);

  Future<void> info(String message, {String? component, Map<String, dynamic>? context}) async {
    // Log to old system
    oldLogger.info(message);
    // Log to new system
    await newLogger.info(message, component: component, context: context);
  }

  // Similar methods for other log levels...
}

// Phase 2: Feature flag controlled
class FeatureFlagLogger {
  final HypermodernLogger oldLogger;
  final EnhancedHypermodernLogger newLogger;
  final FeatureFlags featureFlags;

  FeatureFlagLogger(this.oldLogger, this.newLogger, this.featureFlags);

  Future<void> info(String message, {String? component, Map<String, dynamic>? context}) async {
    if (featureFlags.isEnabled('enhanced-logging')) {
      await newLogger.info(message, component: component, context: context);
    } else {
      oldLogger.info(message);
    }
  }
}

// Phase 3: Complete migration
// Replace all instances with EnhancedHypermodernLogger
Configuration Migration
// Convert old logger config to the new format
class ConfigMigrator {
  static EnhancedLoggerConfig migrateFromOldConfig(LoggerConfig oldConfig) {
    return EnhancedLoggerConfig(
      level: oldConfig.level,
      destinations: [
        if (oldConfig.outputToFile && oldConfig.logFilePath != null)
          FileLogDestination(filePath: oldConfig.logFilePath!),
      ],
      enableNotifications: false,       // Start conservative
      enableStructuredLogging: true,
      enableCorrelationIds: true,
      enablePerformanceTracking: false, // Enable gradually
    );
  }
}
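A short usage sketch tying the migration together; loadExistingLoggerConfig() is a hypothetical stand-in for however your current setup produces its LoggerConfig:

// Migrate the existing config, then run the enhanced logger with it.
final oldConfig = loadExistingLoggerConfig(); // hypothetical existing loader
final newConfig = ConfigMigrator.migrateFromOldConfig(oldConfig);

final logger = EnhancedHypermodernLogger(newConfig);
await logger.initialize();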
Conclusion
The Enhanced Logging Module transforms basic logging into a comprehensive observability platform. With multiple output destinations, intelligent notifications, performance tracking, and correlation IDs, it provides the foundation for maintaining and monitoring production Hypermodern applications.
Key benefits include:
- Comprehensive Context: Rich log entries with automatic correlation
- Multiple Destinations: File, database, Redis, and custom outputs
- Intelligent Alerting: Multi-channel notifications with rate limiting
- Performance Insights: Automatic operation timing and metrics
- Production Ready: Proper error handling, resource management, and security
The module integrates seamlessly with existing Hypermodern applications and provides a clear migration path from basic logging. Whether you're building a simple API or a complex distributed system, the Enhanced Logging Module scales to meet your observability needs.
In the next chapter, we'll explore how to integrate the logging module with monitoring dashboards and alerting systems for complete production observability.