OTLP Integration Guide
LogClaw uses OTLP (the OpenTelemetry Protocol) as its sole log ingestion protocol. OTLP is the CNCF industry standard, and every major observability vendor — Datadog, Splunk, Grafana, AWS CloudWatch, GCP Cloud Logging, and Azure Monitor — speaks it natively, so no custom integrations are needed. If your app already uses OpenTelemetry, point it at LogClaw and you're done.

Endpoints
| Transport | Port | Path | Use Case |
|---|---|---|---|
| gRPC | 4317 | — | High-throughput, binary Protobuf. Recommended for production SDKs and OTel agents. |
| HTTP/JSON | 4318 | /v1/logs | Human-readable JSON. Good for curl, scripts, and debugging. |
Inside the cluster, both endpoints are exposed by the logclaw-otel-collector Kubernetes service.
Quick Start — curl
The simplest way to send a log to LogClaw is a plain HTTP POST to the JSON endpoint. A successful request returns 200 OK with {"partialSuccess":{}} in the body.
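For example, a minimal OTLP/JSON payload looks like this (localhost stands in for your collector host):

```shell
# Send one OTLP/JSON log record to the HTTP endpoint.
curl -s -X POST http://localhost:4318/v1/logs \
  -H "Content-Type: application/json" \
  -d '{
    "resourceLogs": [{
      "resource": {
        "attributes": [{
          "key": "service.name",
          "value": { "stringValue": "curl-test" }
        }]
      },
      "scopeLogs": [{
        "logRecords": [{
          "timeUnixNano": "1700000000000000000",
          "severityText": "INFO",
          "body": { "stringValue": "hello from curl" }
        }]
      }]
    }]
  }'
```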
SDK Integration
Python
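A minimal setup is sketched below. Import paths follow the opentelemetry-python 1.x SDK (the logs API is still marked experimental, hence the `_logs` modules); the service name and endpoint are illustrative.

```python
import logging

from opentelemetry._logs import set_logger_provider
from opentelemetry.exporter.otlp.proto.grpc._log_exporter import OTLPLogExporter
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.sdk.resources import Resource

# Identify the service; LogClaw maps service.name to its "service" field.
provider = LoggerProvider(resource=Resource.create({"service.name": "checkout"}))
set_logger_provider(provider)

# Batch-export over gRPC to the collector (endpoint is illustrative).
exporter = OTLPLogExporter(endpoint="localhost:4317", insecure=True)
provider.add_log_record_processor(BatchLogRecordProcessor(exporter))

# Route the stdlib logging module through OpenTelemetry.
logging.getLogger().addHandler(LoggingHandler(logger_provider=provider))
logging.getLogger(__name__).error("payment failed", extra={"order_id": "o-123"})
```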
Install the SDK and exporter with `pip install opentelemetry-sdk opentelemetry-exporter-otlp`, then attach the OTLP `LoggingHandler` to Python's standard `logging` module.

Java (Log4j2 + OTel Appender)
Add the OTel Log4j2 appender to your pom.xml:
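Something like the following, using the upstream `opentelemetry-log4j-appender` artifact (the version shown is illustrative; check the latest release):

```xml
<dependency>
  <groupId>io.opentelemetry.instrumentation</groupId>
  <artifactId>opentelemetry-log4j-appender-2.17</artifactId>
  <!-- Illustrative version; the appender is published as alpha -->
  <version>2.5.0-alpha</version>
</dependency>
```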
Then declare the appender in log4j2.xml:
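A minimal configuration sketch (appender and logger names are illustrative):

```xml
<Configuration>
  <Appenders>
    <!-- "OpenTelemetry" is the plugin name provided by the OTel appender -->
    <OpenTelemetry name="OtelAppender"/>
    <Console name="Console">
      <PatternLayout pattern="%d %p %m%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="OtelAppender"/>
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>
```

Note that the appender must also be wired to an OpenTelemetry instance at startup — either via the OTel Java agent or by calling `OpenTelemetryAppender.install(...)` with a configured SDK.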
Node.js (Winston + OTel)
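A sketch using the OTel Winston transport. Package names follow the upstream `@opentelemetry/sdk-logs`, `@opentelemetry/winston-transport`, and related distributions (the JS logs SDK has evolved quickly, so check your installed versions); the endpoint and service name are illustrative.

```javascript
const { LoggerProvider, BatchLogRecordProcessor } = require('@opentelemetry/sdk-logs');
const { OTLPLogExporter } = require('@opentelemetry/exporter-logs-otlp-http');
const { OpenTelemetryTransportV3 } = require('@opentelemetry/winston-transport');
const { Resource } = require('@opentelemetry/resources');
const logsAPI = require('@opentelemetry/api-logs');
const winston = require('winston');

// Identify the service; LogClaw maps service.name to its "service" field.
const provider = new LoggerProvider({
  resource: new Resource({ 'service.name': 'checkout' }),
});
provider.addLogRecordProcessor(new BatchLogRecordProcessor(
  new OTLPLogExporter({ url: 'http://localhost:4318/v1/logs' })
));
logsAPI.logs.setGlobalLoggerProvider(provider);

// Winston logs now flow to LogClaw through the OTel transport.
const logger = winston.createLogger({
  transports: [new OpenTelemetryTransportV3()],
});
logger.error('payment failed', { order_id: 'o-123' });
```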
Go
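A sketch using the Go SDK's logs signal with the slog bridge. Module paths follow recent otel-go releases (the logs SDK is newer than traces/metrics, so check your versions); endpoint and names are illustrative.

```go
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/contrib/bridges/otelslog"
	"go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc"
	sdklog "go.opentelemetry.io/otel/sdk/log"
)

func main() {
	ctx := context.Background()

	// gRPC exporter pointed at the collector (endpoint is illustrative).
	exp, err := otlploggrpc.New(ctx,
		otlploggrpc.WithEndpoint("localhost:4317"),
		otlploggrpc.WithInsecure(),
	)
	if err != nil {
		log.Fatal(err)
	}

	provider := sdklog.NewLoggerProvider(
		sdklog.WithProcessor(sdklog.NewBatchProcessor(exp)),
	)
	defer provider.Shutdown(ctx)

	// Bridge slog to OTel; records flow to LogClaw via OTLP.
	logger := otelslog.NewLogger("checkout", otelslog.WithLoggerProvider(provider))
	logger.Error("payment failed", "order_id", "o-123")
}
```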
Using an OTel Collector as a Sidecar / Agent
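One way to realize this pattern is an agent-mode Collector config like the sketch below — the filelog receiver ships in the collector-contrib distribution, and the log paths and endpoint are illustrative:

```yaml
receivers:
  filelog:
    include: [/var/log/pods/*/*/*.log]
processors:
  batch: {}
exporters:
  otlp:
    endpoint: logclaw-otel-collector:4317
    tls:
      insecure: true
service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [batch]
      exporters: [otlp]
```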
For production deployments, the recommended pattern is to run an OTel Collector as a DaemonSet or sidecar that collects logs from your pods and forwards them to LogClaw's collector.

OTLP Field Mapping
LogClaw's Bridge flattens the nested OTLP structure into canonical flat documents for OpenSearch:

| OTLP Field | LogClaw Field | Description |
|---|---|---|
| resource.attributes["service.name"] | service | Service name |
| logRecord.body.stringValue | message | Log message |
| logRecord.severityText | level | Log level (INFO, WARN, ERROR, etc.) |
| logRecord.timeUnixNano | timestamp | Converted to an ISO-8601 timestamp |
| logRecord.traceId | trace_id | Distributed trace ID |
| logRecord.spanId | span_id | Span ID |
| resource.attributes["host.name"] | host | Hostname |
| resource.attributes["tenant_id"] | tenant_id | Tenant (injected by collector) |
| logRecord.attributes[*] | (flattened as top-level fields) | Custom attributes |
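To make the mapping concrete, here is an illustrative sketch (not the Bridge's actual code) of flattening one decoded OTLP record into a LogClaw document:

```python
from datetime import datetime, timezone


def flatten_otlp(resource_attrs: dict, log_record: dict) -> dict:
    """Sketch of the Bridge mapping: one OTLP log record -> one flat document."""
    doc = {
        "service": resource_attrs.get("service.name"),
        "host": resource_attrs.get("host.name"),
        "tenant_id": resource_attrs.get("tenant_id"),
        "message": log_record.get("body", {}).get("stringValue"),
        "level": log_record.get("severityText"),
        "trace_id": log_record.get("traceId"),
        "span_id": log_record.get("spanId"),
        # timeUnixNano (nanoseconds since epoch) -> ISO-8601
        "timestamp": datetime.fromtimestamp(
            int(log_record["timeUnixNano"]) / 1e9, tz=timezone.utc
        ).isoformat(),
    }
    # Custom logRecord attributes become top-level fields
    # (string values only, for brevity).
    for attr in log_record.get("attributes", []):
        doc[attr["key"]] = attr["value"].get("stringValue")
    return doc
```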
Dashboard File Upload
The LogClaw dashboard supports drag-and-drop log file upload. When you upload a JSON file through the UI, the dashboard automatically converts each log entry to OTLP format using the built-in logsToOtlp() converter and sends it to the OTel Collector via the /api/otel/v1/logs proxy.
Supported file formats:
- JSON — an array of log objects, e.g. [{"message": "...", "level": "ERROR", ...}]
- NDJSON — newline-delimited JSON (one log object per line)
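For reference, an illustrative sketch of what such a conversion does (this is not the dashboard's actual logsToOtlp implementation, and the default service name is hypothetical):

```javascript
// Convert flat log objects into an OTLP/JSON payload (sketch).
function logsToOtlpSketch(logs, serviceName = 'dashboard-upload') {
  return {
    resourceLogs: [{
      resource: {
        attributes: [
          { key: 'service.name', value: { stringValue: serviceName } },
        ],
      },
      scopeLogs: [{
        logRecords: logs.map((log) => ({
          // Missing timestamps fall back to "now"; OTLP wants nanoseconds.
          timeUnixNano: String(
            BigInt(Date.parse(log.timestamp ?? new Date().toISOString())) * 1000000n
          ),
          severityText: log.level ?? 'INFO',
          body: { stringValue: log.message ?? '' },
        })),
      }],
    }],
  };
}
```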
Troubleshooting
Logs not appearing in OpenSearch? Work through the pipeline in order:
1. Check OTel Collector health.
2. Check the Collector logs for export errors.
3. Verify the Kafka topic has messages.
4. Check the Bridge OTLP ETL thread.
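On a Kubernetes deployment, those checks might look like the following (deployment names, ports, and the topic name are illustrative — substitute your own):

```shell
# 1. Collector health (assumes the health_check extension on its default port)
kubectl exec deploy/logclaw-otel-collector -- wget -qO- http://localhost:13133

# 2. Export errors in the Collector logs
kubectl logs deploy/logclaw-otel-collector | grep -i error

# 3. Messages on the Kafka topic (topic name illustrative)
kafka-console-consumer --bootstrap-server localhost:9092 \
  --topic logs.otlp --from-beginning --max-messages 5

# 4. Bridge OTLP ETL activity
kubectl logs deploy/logclaw-bridge | grep -i otlp
```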
Getting connection refused?
Ensure you're using the correct port:
- gRPC → :4317
- HTTP/JSON → :4318/v1/logs