OpenTelemetry Integration
Native OTLP Support
LogWard natively supports the OpenTelemetry Protocol (OTLP) for log ingestion, allowing you to send logs from any OpenTelemetry-instrumented application.
Overview
The OTLP endpoint accepts logs in both JSON and Protobuf formats, making it compatible with all OpenTelemetry SDKs and collectors.
- Endpoint: POST /api/v1/otlp/logs
- Content types: application/json, application/x-protobuf
- Authentication: X-API-Key header
Data Mapping
OTLP log records are automatically mapped to LogWard's format:
| OTLP Field | LogWard Field | Notes |
|---|---|---|
| timeUnixNano | time | Converted to ISO 8601 |
| severityNumber | level | Mapped to 5 levels (see below) |
| body.stringValue | message | Log message content |
| traceId | trace_id | Converted to UUID format |
| spanId | span_id | 16-character hex string |
| attributes | metadata | Stored as JSON object |
| resource.service.name | service | Extracted from resource |
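For reference, the snippet below is a minimal sketch of a raw OTLP/JSON request to the endpoint, annotated with how the fields above are mapped. The instance URL and API key are placeholders, and in practice the SDK exporters shown in the following sections build this payload for you.

// Minimal sketch of a direct OTLP/JSON request (Node.js 18+, global fetch).
// The URL and API key are placeholders; SDK exporters normally build this payload for you.
const payload = {
  resourceLogs: [{
    resource: {
      attributes: [{ key: 'service.name', value: { stringValue: 'my-service' } }], // -> service
    },
    scopeLogs: [{
      scope: { name: 'manual-example' },
      logRecords: [{
        timeUnixNano: (BigInt(Date.now()) * 1_000_000n).toString(), // -> time (ISO 8601)
        severityNumber: 9,                                          // INFO -> level "info"
        severityText: 'INFO',
        body: { stringValue: 'User logged in successfully' },       // -> message
        attributes: [{ key: 'user.id', value: { stringValue: '12345' } }], // -> metadata
      }],
    }],
  }],
};

const response = await fetch('https://your-logward-instance.com/api/v1/otlp/logs', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json', 'X-API-Key': 'your-api-key-here' },
  body: JSON.stringify(payload),
});
console.log(response.status); // 200 on success (see Response Codes under Troubleshooting)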
Severity Mapping
OTLP severity numbers (0-24) are mapped to LogWard levels:
| LogWard Level | OTLP Severity |
|---|---|
| debug | TRACE/DEBUG |
| info | INFO |
| warn | WARN |
| error | ERROR |
| critical | FATAL |
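As an illustration only (not LogWard's server-side implementation), the mapping over the standard OTLP severity number ranges can be sketched like this:

// Sketch of the severity mapping using the standard OTLP severity number ranges.
// Treating 0 (UNSPECIFIED) as "debug" here is an assumption, not documented behavior.
type LogWardLevel = 'debug' | 'info' | 'warn' | 'error' | 'critical';

function mapSeverity(severityNumber: number): LogWardLevel {
  if (severityNumber <= 8) return 'debug';   // TRACE (1-4), DEBUG (5-8)
  if (severityNumber <= 12) return 'info';   // INFO (9-12)
  if (severityNumber <= 16) return 'warn';   // WARN (13-16)
  if (severityNumber <= 20) return 'error';  // ERROR (17-20)
  return 'critical';                         // FATAL (21-24)
}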
Node.js Example
Install the required packages:
npm install @opentelemetry/sdk-node @opentelemetry/api-logs \
  @opentelemetry/sdk-logs @opentelemetry/exporter-logs-otlp-http

Configure the OpenTelemetry SDK:
import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPLogExporter } from '@opentelemetry/exporter-logs-otlp-http';
import { BatchLogRecordProcessor } from '@opentelemetry/sdk-logs';
import { Resource } from '@opentelemetry/resources';
import { ATTR_SERVICE_NAME } from '@opentelemetry/semantic-conventions';
import { logs, SeverityNumber } from '@opentelemetry/api-logs';
// Configure the OTLP exporter
const logExporter = new OTLPLogExporter({
url: 'https://your-logward-instance.com/api/v1/otlp/logs',
headers: {
'X-API-Key': 'your-api-key-here',
},
});
// Initialize the SDK
const sdk = new NodeSDK({
resource: new Resource({
[ATTR_SERVICE_NAME]: 'my-service',
}),
logRecordProcessor: new BatchLogRecordProcessor(logExporter),
});
sdk.start();
// Get a logger and emit logs
const logger = logs.getLogger('my-logger');
logger.emit({
severityNumber: SeverityNumber.INFO,
severityText: 'INFO',
body: 'User logged in successfully',
attributes: {
'user.id': '12345',
'user.email': 'user@example.com',
},
});

Python Example
Install the required packages:
pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-http

Configure the OpenTelemetry SDK:
from opentelemetry import _logs
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter
from opentelemetry.sdk.resources import Resource, SERVICE_NAME
import logging
# Configure the resource
resource = Resource.create({
SERVICE_NAME: "my-python-service"
})
# Configure the OTLP exporter
exporter = OTLPLogExporter(
endpoint="https://your-logward-instance.com/api/v1/otlp/logs",
headers={"X-API-Key": "your-api-key-here"},
)
# Set up the logger provider
logger_provider = LoggerProvider(resource=resource)
logger_provider.add_log_record_processor(BatchLogRecordProcessor(exporter))
_logs.set_logger_provider(logger_provider)
# Attach to Python's logging module
handler = LoggingHandler(
level=logging.DEBUG,
logger_provider=logger_provider,
)
logging.getLogger().addHandler(handler)
logging.getLogger().setLevel(logging.DEBUG)
# Now all logs will be sent to LogWard
logging.info("Application started", extra={"user.id": "12345"})
logging.warning("High memory usage", extra={"memory.percent": 85})
logging.error("Database connection failed", extra={"db.host": "localhost"})

Go Example
Install the required packages:
go get go.opentelemetry.io/otel
go get go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp
go get go.opentelemetry.io/otel/sdk/log

Configure the OpenTelemetry SDK:
package main
import (
"context"
"log"
"go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp"
"go.opentelemetry.io/otel/log/global"
sdklog "go.opentelemetry.io/otel/sdk/log"
"go.opentelemetry.io/otel/sdk/resource"
semconv "go.opentelemetry.io/otel/semconv/v1.24.0"
)
func main() {
ctx := context.Background()
// Create the OTLP exporter
exporter, err := otlploghttp.New(ctx,
otlploghttp.WithEndpoint("your-logward-instance.com"),
otlploghttp.WithURLPath("/api/v1/otlp/logs"),
otlploghttp.WithHeaders(map[string]string{
"X-API-Key": "your-api-key-here",
}),
)
if err != nil {
log.Fatalf("failed to create exporter: %v", err)
}
// Create resource
res, _ := resource.New(ctx,
resource.WithAttributes(
semconv.ServiceName("my-go-service"),
),
)
// Create logger provider
provider := sdklog.NewLoggerProvider(
sdklog.WithProcessor(sdklog.NewBatchProcessor(exporter)),
sdklog.WithResource(res),
)
defer provider.Shutdown(ctx)
global.SetLoggerProvider(provider)
}

OpenTelemetry Collector
You can use the OpenTelemetry Collector to aggregate logs from multiple services before sending to LogWard.
# otel-collector-config.yaml
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
processors:
batch:
timeout: 1s
send_batch_size: 100
exporters:
otlphttp/logward:
    logs_endpoint: https://your-logward-instance.com/api/v1/otlp/logs  # logs_endpoint keeps the full path; a plain endpoint would have /v1/logs appended
headers:
X-API-Key: your-api-key-here
service:
pipelines:
logs:
receivers: [otlp]
processors: [batch]
      exporters: [otlphttp/logward]

Docker Compose configuration:
version: '3.8'
services:
otel-collector:
image: otel/opentelemetry-collector-contrib:latest
command: ['--config=/etc/otel-collector-config.yaml']
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
ports:
- '4317:4317' # OTLP gRPC
      - '4318:4318' # OTLP HTTP

Fluent Bit Integration
Fluent Bit can forward logs to LogWard using the OpenTelemetry output plugin:
[SERVICE]
Flush 1
Log_Level info
[INPUT]
Name tail
Path /var/log/app/*.log
Tag app.*
[OUTPUT]
Name opentelemetry
Match *
Host your-logward-instance.com
Port 443
Logs_uri /api/v1/otlp/logs
Log_response_payload True
Tls On
Header X-API-Key your-api-key-here

Trace Correlation
When sending logs with trace context, LogWard automatically extracts
and indexes trace_id and span_id fields.
This enables:
- Trace-to-logs correlation: click on a trace ID to see all related logs
- Distributed tracing: follow requests across multiple services
- Context filtering: search logs by trace ID or span ID

For example, logs emitted while a span is active automatically pick up its trace context:
import { trace, context } from '@opentelemetry/api';
import { logs, SeverityNumber } from '@opentelemetry/api-logs';
const tracer = trace.getTracer('my-tracer');
const logger = logs.getLogger('my-logger');
// Create a span
const span = tracer.startSpan('process-request');
// Log within the span context - trace_id is auto-propagated
context.with(trace.setSpan(context.active(), span), () => {
logger.emit({
severityNumber: SeverityNumber.INFO,
body: 'Processing user request',
attributes: { 'request.id': 'req-123' },
});
});
span.end();

Troubleshooting
Response Codes
| Code | Meaning |
|---|---|
| 200 | Success |
| 400 | Invalid request format |
| 401 | Missing or invalid API key |
| 429 | Rate limit exceeded |
| 500 | Server error |
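SDK exporters and the Collector typically retry retryable responses for you; if you post to the endpoint directly, a simple approach is to back off and retry on 429. A sketch, with the URL, API key, attempt count, and fallback delay as placeholder values:

// Sketch: retry a direct OTLP request when rate limited (HTTP 429), honoring Retry-After if present.
// The URL, API key, attempt count, and 1 s fallback delay are illustrative placeholders.
async function postWithRetry(body: string, attempts = 3): Promise<Response> {
  for (let attempt = 1; attempt <= attempts; attempt++) {
    const response = await fetch('https://your-logward-instance.com/api/v1/otlp/logs', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'X-API-Key': 'your-api-key-here' },
      body,
    });
    // Return everything except 429 to the caller; retry 429 until attempts run out.
    if (response.status !== 429 || attempt === attempts) return response;
    const retryAfterSeconds = Number(response.headers.get('Retry-After')) || 1;
    await new Promise((resolve) => setTimeout(resolve, retryAfterSeconds * 1000));
  }
  throw new Error('unreachable'); // the loop always returns or retries
}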
Common Issues
- Check that your API key is valid and has ingestion permissions
- Verify you're sending to /api/v1/otlp/logs
- Use the application/json content type for best compatibility
- Check rate limits (default: 200 req/min per API key)
- If messages arrive empty, make sure each log record has a body field with content; body.stringValue is used as the message
- Set the service.name resource attribute in your SDK configuration so the service field is populated correctly
Enable Debug Logging
Enable debug logging in your OpenTelemetry SDK to see request details:
# Node.js
export OTEL_LOG_LEVEL=debug
# Python
import logging
logging.basicConfig(level=logging.DEBUG)

Migration from Custom SDKs
If you're currently using LogWard's custom SDKs, you can migrate to OpenTelemetry for standardized instrumentation. Before, with the LogWard SDK:
import { LogWardClient } from '@logward-dev/sdk-node';
const client = new LogWardClient({
apiKey: 'your-key'
});
client.info('api', 'User logged in', {
userId: '123'
});

After, with OpenTelemetry:

import { logs, SeverityNumber } from '@opentelemetry/api-logs';
const logger = logs.getLogger('my-logger');
logger.emit({
severityNumber: SeverityNumber.INFO,
body: 'User logged in',
attributes: { 'user.id': '123' },
});

Benefits of migrating:
- Vendor-neutral: Switch backends without code changes
- Auto-instrumentation: Automatic logging for popular frameworks
- Trace correlation: Built-in distributed tracing support
- Large ecosystem: Extensive integrations and community support