Logging
File-based logging system for tracking application events, errors, and queries.
Overview
- File-based logging with multiple streams
- Log levels for different severities
- Separate log files for different purposes
- Query logging for database monitoring
- Console output in development
- Singleton pattern for logger instance
Logger Architecture
FileLogger Implementation
// shared/logging/file.ts
import { createWriteStream, mkdirSync, WriteStream } from 'fs';
import { join } from 'path';
export class FileLogger {
private static instance: FileLogger;
private combinedStream: WriteStream;
private errorStream: WriteStream;
private queryStream: WriteStream;
private constructor() {
const logsDir = process.env.LOG_DIR || 'logs';
// Ensure the log directory exists before opening write streams
mkdirSync(logsDir, { recursive: true });
// Create write streams for different log files
this.combinedStream = createWriteStream(
join(logsDir, 'combined.log'),
{ flags: 'a' }
);
this.errorStream = createWriteStream(
join(logsDir, 'error.log'),
{ flags: 'a' }
);
this.queryStream = createWriteStream(
join(logsDir, 'query.log'),
{ flags: 'a' }
);
}
static getInstance(): FileLogger {
if (!FileLogger.instance) {
FileLogger.instance = new FileLogger();
}
return FileLogger.instance;
}
private formatMessage(level: string, message: string): string {
const timestamp = new Date().toISOString();
return `[${timestamp}] ${level.toUpperCase()}: ${message}\n`;
}
info(message: string): void {
const formatted = this.formatMessage('info', message);
this.combinedStream.write(formatted);
if (process.env.NODE_ENV === 'development') {
console.log(formatted.trim());
}
}
error(message: string, error?: Error): void {
const errorDetails = error
? `${message}\nStack: ${error.stack}`
: message;
const formatted = this.formatMessage('error', errorDetails);
this.combinedStream.write(formatted);
this.errorStream.write(formatted);
if (process.env.NODE_ENV === 'development') {
console.error(formatted.trim());
}
}
warn(message: string): void {
const formatted = this.formatMessage('warn', message);
this.combinedStream.write(formatted);
if (process.env.NODE_ENV === 'development') {
console.warn(formatted.trim());
}
}
debug(message: string): void {
if (process.env.LOG_LEVEL === 'debug') {
const formatted = this.formatMessage('debug', message);
this.combinedStream.write(formatted);
if (process.env.NODE_ENV === 'development') {
console.debug(formatted.trim());
}
}
}
query(sql: string, params?: any[]): void {
const message = params
? `SQL: ${sql}\nParams: ${JSON.stringify(params)}`
: `SQL: ${sql}`;
const formatted = this.formatMessage('query', message);
this.queryStream.write(formatted);
if (process.env.LOG_QUERIES === 'true') {
console.log(formatted.trim());
}
}
}
// Export singleton instance
export const logger = FileLogger.getInstance();
Log Levels
Severity Hierarchy
- ERROR – Critical failures requiring immediate attention
- WARN – Warning conditions that should be monitored
- INFO – General informational messages
- DEBUG – Detailed debugging information
Usage Examples
import { logger } from '@/shared/logging/file';
// Info logs
logger.info('Application started');
logger.info('User logged in: user-123');
// Error logs
try {
await processPayment(orderId);
} catch (error) {
logger.error('Payment processing failed', error);
}
// Warning logs
if (service.stock < 5) {
logger.warn(`Low stock for service: ${service.id}`);
}
// Debug logs
logger.debug(`Processing order: ${JSON.stringify(order)}`);
// Query logs
logger.query('SELECT * FROM users WHERE id = ?', [userId]);
Log Files
combined.log
Contains all log messages:
[2024-01-15T10:30:00.000Z] INFO: Application started
[2024-01-15T10:30:05.123Z] INFO: Database connected
[2024-01-15T10:31:22.456Z] WARN: Low stock for service: svc-123
[2024-01-15T10:32:10.789Z] ERROR: Payment failed for order: ord-456
Stack: Error: Insufficient funds
at processPayment (payment.ts:45)
at OrderWorkflow.handlePayment (workflow.ts:120)
error.log
Contains only error messages:
[2024-01-15T10:32:10.789Z] ERROR: Payment failed for order: ord-456
Stack: Error: Insufficient funds
at processPayment (payment.ts:45)
at OrderWorkflow.handlePayment (workflow.ts:120)
[2024-01-15T10:45:33.112Z] ERROR: Database connection lost
Stack: Error: Connection terminated
at Sequelize.authenticate (sequelize.ts:234)
query.log
Contains database queries:
[2024-01-15T10:30:15.234Z] QUERY: SELECT * FROM users WHERE id = ?
Params: ["user-123"]
[2024-01-15T10:30:20.567Z] QUERY: INSERT INTO orders (id, userId, serviceId, status) VALUES (?, ?, ?, ?)
Params: ["ord-456", "user-123", "svc-789", "pending"]
[2024-01-15T10:30:25.890Z] QUERY: UPDATE services SET stock = stock - 1 WHERE id = ?
Params: ["svc-789"]
Database Query Logging
Sequelize Integration
// adapters/shared/db.ts
import { Sequelize } from 'sequelize';
import { logger } from '@/shared/logging/file';
const sequelize = new Sequelize({
// ... config
logging: (sql: string) => {
logger.query(sql);
},
});
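If query timing is useful as well, Sequelize's benchmark option passes the elapsed milliseconds to the logging callback as a second argument. A minimal sketch of that variant, reusing the same logger:
// adapters/shared/db.ts (variant that also records query duration)
import { Sequelize } from 'sequelize';
import { logger } from '@/shared/logging/file';

const sequelize = new Sequelize({
  // ... config
  benchmark: true, // ask Sequelize to time each query
  logging: (sql: string, timingMs?: number) => {
    // timingMs is only provided when benchmark is enabled
    logger.query(timingMs !== undefined ? `${sql} -- ${timingMs}ms` : sql);
  },
});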
Mongoose Integration
// adapters/shared/db.ts
import mongoose from 'mongoose';
import { logger } from '@/shared/logging/file';
mongoose.set('debug', (collectionName: string, method: string, ...args: any[]) => {
logger.query(`${collectionName}.${method}(${JSON.stringify(args)})`);
});
Structured Logging
For production, consider structured JSON logs:
import { createWriteStream, WriteStream } from 'fs';
import { join } from 'path';
export class StructuredLogger {
private combinedStream: WriteStream;
constructor(logsDir = process.env.LOG_DIR || 'logs') {
this.combinedStream = createWriteStream(join(logsDir, 'combined.log'), { flags: 'a' });
}
private log(level: string, message: string, meta?: object): void {
const logEntry = {
timestamp: new Date().toISOString(),
level,
message,
...meta,
};
const formatted = JSON.stringify(logEntry) + '\n';
this.combinedStream.write(formatted);
}
info(message: string, meta?: object): void {
this.log('info', message, meta);
}
error(message: string, error?: Error, meta?: object): void {
this.log('error', message, {
...meta,
error: {
message: error?.message,
stack: error?.stack,
name: error?.name,
},
});
}
}
Example output:
{"timestamp":"2024-01-15T10:30:00.000Z","level":"info","message":"User logged in","userId":"user-123","ip":"192.168.1.1"}
{"timestamp":"2024-01-15T10:32:10.789Z","level":"error","message":"Payment failed","orderId":"ord-456","error":{"message":"Insufficient funds","stack":"Error: Insufficient funds\n at...","name":"PaymentError"}}
Log Rotation
Prevent log files from growing too large:
Using logrotate (Linux)
# /etc/logrotate.d/2krika
/home/parfaitd/Work/2krika/logs/*.log {
daily
rotate 7
compress
delaycompress
notifempty
create 0644 parfaitd parfaitd
sharedscripts
postrotate
# Signal the application to reopen its log files
systemctl reload 2krika
endscript
}
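For the postrotate step to matter, the application must reopen its write streams when reloaded; the existing file descriptors keep pointing at the rotated files. FileLogger does not currently expose such a hook, so the reopen() method and signal handler below are a hypothetical sketch (it assumes the service's reload delivers SIGHUP and that it lives in the same file as FileLogger, where createWriteStream and join are already imported):
// Hypothetical addition to FileLogger: recreate the write streams in place
reopen(): void {
  for (const stream of [this.combinedStream, this.errorStream, this.queryStream]) {
    stream.end();
  }
  const logsDir = process.env.LOG_DIR || 'logs';
  this.combinedStream = createWriteStream(join(logsDir, 'combined.log'), { flags: 'a' });
  this.errorStream = createWriteStream(join(logsDir, 'error.log'), { flags: 'a' });
  this.queryStream = createWriteStream(join(logsDir, 'query.log'), { flags: 'a' });
}

// In the application bootstrap: reopen log files when the process is reloaded
process.on('SIGHUP', () => logger.reopen());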
Manual Rotation
import { promises as fs } from 'fs';
import { join } from 'path';
async function rotateLogs(): Promise<void> {
const logsDir = 'logs';
const timestamp = new Date().toISOString().split('T')[0];
// Rotate each log file
const files = ['combined.log', 'error.log', 'query.log'];
for (const file of files) {
const oldPath = join(logsDir, file);
const newPath = join(logsDir, `${file}.${timestamp}`);
try {
await fs.rename(oldPath, newPath);
// Recreate an empty log file at the original path
// (the running process must reopen its streams before this file receives logs)
await fs.writeFile(oldPath, '');
logger.info(`Rotated log file: ${file}`);
} catch (error) {
logger.error(`Failed to rotate log file: ${file}`, error);
}
}
}
// Run daily at midnight
import { CronJob } from 'cron';
const job = new CronJob('0 0 * * *', rotateLogs);
job.start();
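The rotated files can also be compressed, mirroring logrotate's compress option. A minimal sketch using Node's built-in zlib; the path passed in is the newPath produced by rotateLogs above:
import { createReadStream, createWriteStream, promises as fs } from 'fs';
import { createGzip } from 'zlib';
import { pipeline } from 'stream/promises';

// Compress a rotated log file and remove the uncompressed copy
async function compressRotatedLog(path: string): Promise<void> {
  await pipeline(
    createReadStream(path),
    createGzip(),
    createWriteStream(`${path}.gz`)
  );
  await fs.unlink(path);
}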
Error Tracking
Custom Error Logging
export class ErrorLogger {
static logException(error: Error, context?: string): void {
const errorInfo = {
name: error.name,
message: error.message,
stack: error.stack,
context,
timestamp: new Date().toISOString(),
};
logger.error(
`Exception in ${context || 'unknown context'}`,
error
);
// errorInfo could also be sent to an external service (Sentry, etc.)
// await Sentry.captureException(error);
}
}
// Usage
try {
await orderService.createOrder(data);
} catch (error) {
ErrorLogger.logException(error, 'OrderService.createOrder');
throw error;
}
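Errors that never reach a try/catch can be routed through the same logger. A sketch of process-level handlers, typically registered once at startup:
// In the application bootstrap: capture errors that escape all try/catch blocks
process.on('uncaughtException', (error: Error) => {
  ErrorLogger.logException(error, 'uncaughtException');
  process.exit(1); // application state is unknown after an uncaught exception
});

process.on('unhandledRejection', (reason: unknown) => {
  const error = reason instanceof Error ? reason : new Error(String(reason));
  ErrorLogger.logException(error, 'unhandledRejection');
});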
HTTP Request Logging
// web/middleware/logger.middleware.ts
import { Injectable, NestMiddleware } from '@nestjs/common';
import { Request, Response, NextFunction } from 'express';
import { logger } from '@/shared/logging/file';
@Injectable()
export class LoggerMiddleware implements NestMiddleware {
use(req: Request, res: Response, next: NextFunction) {
const { method, originalUrl, ip } = req;
const userAgent = req.get('user-agent') || '';
// Log request
logger.info(`${method} ${originalUrl} - ${ip} - ${userAgent}`);
// Log response
res.on('finish', () => {
const { statusCode } = res;
logger.info(
`${method} ${originalUrl} - ${statusCode} - ${res.get('content-length') || 0}b`
);
});
next();
}
}
Apply middleware:
// web/app.module.ts
import { MiddlewareConsumer, Module, NestModule } from '@nestjs/common';
import { LoggerMiddleware } from './middleware/logger.middleware';
@Module({ /* ... imports, controllers, providers ... */ })
export class AppModule implements NestModule {
configure(consumer: MiddlewareConsumer) {
consumer.apply(LoggerMiddleware).forRoutes('*');
}
}
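The middleware can also record response latency by capturing a start time and logging the elapsed milliseconds in the finish handler. A sketch of that variant (the RequestTimingMiddleware name is hypothetical):
// web/middleware/request-timing.middleware.ts (hypothetical variant)
import { Injectable, NestMiddleware } from '@nestjs/common';
import { Request, Response, NextFunction } from 'express';
import { logger } from '@/shared/logging/file';

@Injectable()
export class RequestTimingMiddleware implements NestMiddleware {
  use(req: Request, res: Response, next: NextFunction) {
    const { method, originalUrl } = req;
    const start = Date.now();

    // Log status and elapsed time once the response has been sent
    res.on('finish', () => {
      const duration = Date.now() - start;
      logger.info(`${method} ${originalUrl} - ${res.statusCode} - ${duration}ms`);
    });

    next();
  }
}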
Performance Monitoring
Log Execution Time
export function logExecutionTime(
target: any,
propertyName: string,
descriptor: PropertyDescriptor
) {
const method = descriptor.value;
descriptor.value = async function (...args: any[]) {
const start = Date.now();
try {
const result = await method.apply(this, args);
const duration = Date.now() - start;
logger.info(
`${target.constructor.name}.${propertyName} executed in ${duration}ms`
);
return result;
} catch (error) {
const duration = Date.now() - start;
logger.error(
`${target.constructor.name}.${propertyName} failed after ${duration}ms`,
error
);
throw error;
}
};
return descriptor;
}
// Usage
export class OrderService {
@logExecutionTime
async createOrder(data: CreateOrderDTO) {
// ... implementation
}
}
Environment Configuration
# Logging
NODE_ENV=production
LOG_LEVEL=info # info, warn, error, debug
LOG_QUERIES=false # Set to true to also echo queries to the console
LOG_DIR=logs
# Development
NODE_ENV=development
LOG_LEVEL=debug
LOG_QUERIES=true
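Note that FileLogger only consults LOG_LEVEL for debug messages; info, warn, and error always write. If the full severity hierarchy should be honoured, each method could be gated by a small level check. A sketch of that idea:
// Hypothetical level filter honouring the LOG_LEVEL hierarchy
const LEVELS: Record<string, number> = { error: 0, warn: 1, info: 2, debug: 3 };

function shouldLog(level: string): boolean {
  const configured = process.env.LOG_LEVEL || 'info';
  return (LEVELS[level] ?? LEVELS.info) <= (LEVELS[configured] ?? LEVELS.info);
}

// e.g. at the top of FileLogger.info():
// if (!shouldLog('info')) return;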
Best Practices
✅ Do
- Use appropriate levels – ERROR for failures, INFO for events
- Include context – Add relevant IDs and metadata
- Log exceptions – Always log errors with stack traces
- Rotate logs – Prevent disk space issues
- Monitor logs – Set up alerts for errors
- Sanitize data – Don't log sensitive information (passwords, tokens); see the redaction sketch after this list
- Use structured logging – JSON format for parsing
❌ Don't
- Don't log passwords – Or any sensitive data
- Don't log in tight loops – Causes performance issues
- Don't ignore log levels – Respect LOG_LEVEL setting
- Don't leave debug logs – In production code
- Don't log everything – Be selective and meaningful
- Don't forget timestamps – Essential for debugging
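A simple way to enforce the sanitization rule is to redact known sensitive keys before anything reaches the logger. A minimal sketch; the field list is an assumption and should match the application's own models:
// Hypothetical helper: replace sensitive fields before logging an object
const SENSITIVE_KEYS = ['password', 'token', 'authorization', 'creditCard'];

function redact<T extends Record<string, unknown>>(data: T): Record<string, unknown> {
  const copy: Record<string, unknown> = { ...data };
  for (const key of SENSITIVE_KEYS) {
    if (key in copy) {
      copy[key] = '[REDACTED]';
    }
  }
  return copy;
}

// Usage
logger.info(`Login attempt: ${JSON.stringify(redact({ email: 'a@b.com', password: 'hunter2' }))}`);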
Log Analysis
Search Logs
# Find errors
grep "ERROR" logs/error.log
# Find specific user activity
grep "user-123" logs/combined.log
# Find slow queries (>1000ms)
grep "executed in [0-9]\{4,\}ms" logs/combined.log
# Count errors per day
grep "ERROR" logs/error.log | cut -d'T' -f1 | sort | uniq -c
Log Aggregation
For production, consider log aggregation tools:
- ELK Stack (Elasticsearch, Logstash, Kibana)
- Graylog
- Splunk
- Datadog
- CloudWatch (AWS)
Testing
Mock Logger
// tests/mocks/logger.mock.ts
export class MockLogger {
private logs: Array<{ level: string; message: string }> = [];
info(message: string): void {
this.logs.push({ level: 'info', message });
}
error(message: string, error?: Error): void {
this.logs.push({ level: 'error', message });
}
warn(message: string): void {
this.logs.push({ level: 'warn', message });
}
debug(message: string): void {
this.logs.push({ level: 'debug', message });
}
query(sql: string, params?: any[]): void {
this.logs.push({ level: 'query', message: sql });
}
getLogs() {
return this.logs;
}
clear() {
this.logs = [];
}
}
Logger Tests
describe('Logger', () => {
let mockLogger: MockLogger;
beforeEach(() => {
mockLogger = new MockLogger();
});
it('should log info messages', () => {
mockLogger.info('Test message');
const logs = mockLogger.getLogs();
expect(logs).toHaveLength(1);
expect(logs[0].level).toBe('info');
expect(logs[0].message).toBe('Test message');
});
it('should log errors', () => {
const error = new Error('Test error');
mockLogger.error('Error occurred', error);
const logs = mockLogger.getLogs();
expect(logs).toHaveLength(1);
expect(logs[0].level).toBe('error');
});
});
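The mock is most useful when injected into the code under test to assert that important events get logged. A sketch, assuming a service that receives its logger through the constructor; the OrderService shape here is hypothetical:
// Hypothetical service that takes a logger dependency
class OrderService {
  constructor(private readonly logger: MockLogger) {}

  async createOrder(_data: unknown): Promise<void> {
    try {
      throw new Error('Out of stock'); // stand-in for real order logic
    } catch (error) {
      this.logger.error('Order creation failed', error as Error);
      throw error;
    }
  }
}

describe('OrderService logging', () => {
  it('should log when order creation fails', async () => {
    const mockLogger = new MockLogger();
    const service = new OrderService(mockLogger);

    await expect(service.createOrder({})).rejects.toThrow('Out of stock');
    expect(mockLogger.getLogs()).toContainEqual({
      level: 'error',
      message: 'Order creation failed',
    });
  });
});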
Summary
The logging system provides:
- Multiple log files for different purposes
- Singleton pattern for consistent logging
- Console output in development
- Query logging for database monitoring
- Error tracking with stack traces
- Structured logging support
- Log rotation capabilities
This comprehensive logging infrastructure helps with debugging, monitoring, and maintaining the application in production.