Architecture
Understanding LogWard's system architecture and design decisions.
System Overview
LogWard follows a modern microservices architecture with clear separation of concerns. The data model forms a simple hierarchy:
User → Organizations (1:N) → Projects (1:N) → API Keys → Logs
- Organizations - Top-level isolation for companies/teams. Each user can belong to multiple organizations.
- Projects - Logical grouping within organizations (e.g., "production", "staging"). Complete data isolation.
- API Keys - Project-scoped keys for secure log ingestion and query. Prefixed with lp_.
- Logs - Time-series data stored in TimescaleDB with automatic compression and retention policies.
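The hierarchy above can be sketched as TypeScript types. Field names here are illustrative assumptions, not LogWard's actual schema; only the lp_ key prefix comes from the text:

```typescript
// Illustrative shapes for the User → Organization → Project → API Key → Log chain.
// Field names are assumptions for this sketch, not LogWard's real schema.
interface Organization { id: string; name: string; memberIds: string[] }
interface Project { id: string; organizationId: string; name: string }
interface ApiKey { id: string; projectId: string; key: `lp_${string}` }
interface LogEntry { projectId: string; timestamp: Date; level: string; message: string }

// Because API keys are prefixed with lp_, a cheap shape pre-check is possible
// before hitting the database:
function looksLikeApiKey(key: string): key is `lp_${string}` {
  return key.startsWith("lp_");
}
```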
Technology Stack
Backend
- Runtime: Node.js 20+
- Framework: Fastify
- Language: TypeScript 5
- ORM: Kysely (type-safe SQL)
- Queue: BullMQ + Redis
- Validation: Zod schemas

Frontend
- Framework: SvelteKit 5 (Runes)
- Language: TypeScript 5
- Styling: TailwindCSS
- Components: shadcn-svelte
- Charts: ECharts
- State: Svelte stores

Database
- RDBMS: PostgreSQL 16
- Extension: TimescaleDB
- Time-series: Hypertables
- Compression: Automatic
- Retention: Configurable policies
- Cache: Redis 7

Infrastructure
- Proxy: Nginx
- Container: Docker
- Orchestration: Docker Compose
- Monorepo: pnpm workspaces
Core Components
Backend Server (Fastify)
High-performance API server handling log ingestion, query, and management endpoints. Modular architecture with feature-based modules:
- auth/ - Authentication and user management
- ingestion/ - Log ingestion with batch support
- query/ - Log search and filtering
- alerts/ - Alert rule management
- dashboard/ - Statistics and aggregations
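A minimal, self-contained sketch of the feature-based module pattern (this is illustrative plumbing, not LogWard's actual code or the Fastify API — each feature directory exports one module that owns its routes under a URL prefix):

```typescript
// Hypothetical sketch: a tiny router where each feature module registers
// its routes under its own prefix, mirroring the auth/ ingestion/ query/
// alerts/ dashboard/ layout described above.
type Handler = (body: unknown) => unknown;

interface FeatureModule {
  prefix: string;                   // e.g. "/api/v1/ingest"
  routes: Record<string, Handler>;  // "METHOD path" → handler
}

class Router {
  private table = new Map<string, Handler>();

  register(mod: FeatureModule): void {
    for (const [route, handler] of Object.entries(mod.routes)) {
      const [method, path] = route.split(" ");
      this.table.set(`${method} ${mod.prefix}${path}`, handler);
    }
  }

  dispatch(method: string, url: string, body: unknown): unknown {
    const handler = this.table.get(`${method} ${url}`);
    if (!handler) throw new Error(`no route for ${method} ${url}`);
    return handler(body);
  }
}

// One feature module, e.g. ingestion/:
const ingestionModule: FeatureModule = {
  prefix: "/api/v1/ingest",
  routes: {
    "POST /": (body) => ({ accepted: Array.isArray(body) ? body.length : 1 }),
  },
};
```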
Worker Process (BullMQ)
Background job processor for alert evaluation, notifications, and data retention. Runs independently from the main API server.
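The worker's job-dispatch shape can be sketched without BullMQ itself (job names here are assumptions; in the real system BullMQ schedules these as repeatable jobs against Redis):

```typescript
// Sketch of the worker's named-job dispatch. The job names and handler
// bodies are illustrative, not LogWard's actual queue definitions.
type JobHandler = () => Promise<void>;

const handlers: Record<string, JobHandler> = {
  "evaluate-alerts": async () => { /* query enabled rules, check thresholds */ },
  "apply-retention": async () => { /* drop chunks past the retention window */ },
};

async function runJob(name: string): Promise<string> {
  const handler = handlers[name];
  if (!handler) throw new Error(`unknown job: ${name}`);
  await handler();
  return `${name}: ok`;
}
```

Running the worker as a separate process keeps slow jobs (alert scans, retention sweeps) from competing with latency-sensitive ingestion requests.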
Frontend Dashboard (SvelteKit)
Modern, reactive UI with real-time log streaming, search, alerts management, and organization administration. Server-side rendering for optimal performance.
TimescaleDB
PostgreSQL extension optimized for time-series data. Automatic partitioning, compression, and retention policies for efficient long-term log storage.
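The three TimescaleDB features named above each map to one SQL call. The table name, column name, and intervals below are assumptions for illustration, not LogWard's real settings:

```typescript
// Migration-style SQL showing hypertable creation, compression, and
// retention. Names and intervals are placeholders.
const timescaleMigration = `
  -- Turn the logs table into a hypertable (automatic time partitioning):
  SELECT create_hypertable('logs', 'timestamp');

  -- Enable compression, then compress chunks older than 7 days:
  ALTER TABLE logs SET (timescaledb.compress);
  SELECT add_compression_policy('logs', INTERVAL '7 days');

  -- Drop chunks older than 30 days (the configurable retention policy):
  SELECT add_retention_policy('logs', INTERVAL '30 days');
`;
```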
Data Flow
Log Ingestion Flow
1. Client sends logs via POST /api/v1/ingest with an API key
2. Backend validates the API key and extracts the project ID
3. Logs are validated against the Zod schema
4. Batch insert into the TimescaleDB hypertable
5. Alert evaluator job is triggered (BullMQ)
6. Logs are broadcast to active SSE streams
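The validation step (step 3) can be condensed into a pure function. The schema is hand-rolled here for brevity (LogWard uses Zod), and the level names and default-timestamp behavior are assumptions:

```typescript
// Illustrative batch validation for the ingestion endpoint.
// Accepted level names and the timestamp default are assumptions.
interface IncomingLog { level: string; message: string; timestamp?: string }

const LEVELS = new Set(["trace", "debug", "info", "warn", "error", "fatal"]);

function validateBatch(batch: unknown): IncomingLog[] {
  if (!Array.isArray(batch)) throw new Error("body must be an array of logs");
  return batch.map((entry, i) => {
    const log = entry as IncomingLog;
    if (typeof log.message !== "string") throw new Error(`log ${i}: message required`);
    if (!LEVELS.has(log.level)) throw new Error(`log ${i}: invalid level`);
    // Validated logs are then batch-inserted into the hypertable in one round trip.
    return { ...log, timestamp: log.timestamp ?? new Date().toISOString() };
  });
}
```

Validating the whole batch before inserting means a single malformed entry rejects the request atomically rather than leaving a partial write.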
Alert Processing Flow
1. Worker evaluates all enabled alert rules (every minute)
2. For each rule, query logs matching its conditions
3. If the threshold is exceeded, create an alert instance
4. Send notifications (email/webhook) via configured channels
5. Update alert status and last-triggered timestamp
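The threshold check at the heart of this flow reduces to a pure function. The rule shape below is illustrative; only the "threshold exceeded" semantics come from the text:

```typescript
// Hypothetical rule shape for the alert evaluator sketch.
interface AlertRule { id: string; threshold: number; windowMinutes: number }

// An alert instance is created once the count of logs matching the rule's
// conditions within its evaluation window exceeds the threshold.
function shouldTrigger(rule: AlertRule, matchingLogCount: number): boolean {
  return matchingLogCount > rule.threshold;
}
```

Keeping the decision pure makes the evaluator easy to unit-test independently of the log query and the notification channels.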