nodejs_opentelemetry_tracing 6 Q&As

Node.js OpenTelemetry Tracing FAQ & Answers

6 expert Node.js OpenTelemetry tracing answers researched from official OpenTelemetry documentation.

6 questions
Q

How do I set up OpenTelemetry tracing in a Node.js application?

A

Install packages: npm install @opentelemetry/sdk-node @opentelemetry/auto-instrumentations-node @opentelemetry/exporter-trace-otlp-http.
Initialize at the app entry point, before any application code is loaded (e.g. via node --require ./tracing.js): const {NodeSDK} = require('@opentelemetry/sdk-node'); const {getNodeAutoInstrumentations} = require('@opentelemetry/auto-instrumentations-node'); const {OTLPTraceExporter} = require('@opentelemetry/exporter-trace-otlp-http'); const sdk = new NodeSDK({traceExporter: new OTLPTraceExporter({url: 'http://localhost:4318/v1/traces'}), instrumentations: [getNodeAutoInstrumentations()]}); sdk.start().
Auto-instrumentations cover: HTTP/HTTPS, Express, databases (PostgreSQL, MongoDB), Redis, gRPC.
Export to Jaeger (local): docker run -d -p 4318:4318 -p 16686:16686 jaegertracing/all-in-one (older Jaeger versions also need -e COLLECTOR_OTLP_ENABLED=true).
View traces: http://localhost:16686.
Traces show: request flow across services, timing per operation, errors with context.
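
A minimal bootstrap sketch of the steps above, assuming a recent @opentelemetry/sdk-node and a local Jaeger all-in-one listening on the OTLP/HTTP port; the file name tracing.js and the service name are illustrative, not prescribed by the SDK:

```js
// tracing.js -- load before application code, e.g.: node --require ./tracing.js app.js
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');

const sdk = new NodeSDK({
  serviceName: 'my-service', // illustrative; shows up as the service name in Jaeger
  traceExporter: new OTLPTraceExporter({
    url: 'http://localhost:4318/v1/traces', // Jaeger all-in-one OTLP/HTTP endpoint
  }),
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();

// Flush buffered spans on shutdown so the last traces are not lost.
process.on('SIGTERM', () => {
  sdk.shutdown().finally(() => process.exit(0));
});
```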

99% confidence
Q

How does trace context propagate across services in a distributed trace?

A

OpenTelemetry automatically injects trace context into HTTP headers using the W3C Trace Context format.
Headers: traceparent: 00-{trace-id}-{parent-span-id}-{trace-flags} (version-traceId-spanId-flags, e.g. 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01), tracestate: vendor-specific data.
Outgoing requests: auto-instrumentation injects the headers. Manual injection uses the propagation API: const {context, propagation} = require('@opentelemetry/api'); const headers = {}; propagation.inject(context.active(), headers); fetch(url, {headers}).
Incoming requests: auto-instrumentation extracts the headers and continues the trace. Service A (span A) → calls Service B → creates span B (parent: span A) → single trace.
Benefits: end-to-end visibility, correlate logs with trace IDs, identify slow services.
Works across: HTTP/HTTPS, gRPC, message queues (RabbitMQ, Kafka with plugins).
Context propagation is automatic with auto-instrumentations. Manual control: context.with(trace.setSpan(context.active(), span), fn) to run code under a specific active span.
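
A sketch of manual propagation with the @opentelemetry/api propagation helpers, for a client or server that auto-instrumentation does not cover; the downstream URL and the handler wiring are illustrative assumptions:

```js
const { context, propagation } = require('@opentelemetry/api');

async function callDownstream(payload) {
  // Inject the active trace context (traceparent/tracestate) into an outgoing carrier.
  const headers = { 'content-type': 'application/json' };
  propagation.inject(context.active(), headers);

  // fetch is global in Node 18+; the URL is a hypothetical downstream service.
  return fetch('http://downstream.internal/orders', {
    method: 'POST',
    headers,
    body: JSON.stringify(payload),
  });
}

// On the receiving side, extract the context and continue the trace under it.
function handleIncoming(req, runHandler) {
  const extracted = propagation.extract(context.active(), req.headers);
  return context.with(extracted, runHandler);
}
```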

99% confidence
Q

How do I create custom spans for my own operations?

A

Use a tracer to create custom spans for important operations.
Pattern: const {trace, SpanStatusCode} = require('@opentelemetry/api'); const tracer = trace.getTracer('my-service', '1.0.0'); async function processOrder(orderId) {return tracer.startActiveSpan('process-order', async (span) => {span.setAttribute('order.id', orderId); try {const result = await heavyComputation(); span.setStatus({code: SpanStatusCode.OK}); return result;} catch (error) {span.recordException(error); span.setStatus({code: SpanStatusCode.ERROR, message: error.message}); throw error;} finally {span.end();}});}.
Attributes: add business context (user ID, order ID).
Status: OK or ERROR (SpanStatusCode comes from @opentelemetry/api).
Exception recording: recordException captures the stack trace as a span event.
Use for: database queries, external API calls, CPU-intensive operations.
Spans appear in the Jaeger UI with their attributes.
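
A runnable version of the pattern above; heavyComputation is a placeholder stub standing in for the real work being traced:

```js
const { trace, SpanStatusCode } = require('@opentelemetry/api');

const tracer = trace.getTracer('my-service', '1.0.0');

// Placeholder for the real operation being traced.
async function heavyComputation(orderId) {
  return { orderId, status: 'processed' };
}

async function processOrder(orderId) {
  return tracer.startActiveSpan('process-order', async (span) => {
    span.setAttribute('order.id', orderId); // business context on the span
    try {
      const result = await heavyComputation(orderId);
      span.setStatus({ code: SpanStatusCode.OK });
      return result;
    } catch (error) {
      span.recordException(error); // records the stack trace as a span event
      span.setStatus({ code: SpanStatusCode.ERROR, message: error.message });
      throw error;
    } finally {
      span.end(); // always end the span, even on errors
    }
  });
}
```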

99% confidence
Q

What is the difference between BatchSpanProcessor and SimpleSpanProcessor?

A

BatchSpanProcessor batches spans before exporting; SimpleSpanProcessor exports immediately.
SimpleSpanProcessor: triggers an export (a network call) for every single span as soon as it ends, which adds significant overhead.
BatchSpanProcessor: collects spans in an in-memory buffer and exports them in batches, periodically or when the buffer fills; exports are asynchronous and non-blocking.
Configure: const {BatchSpanProcessor} = require('@opentelemetry/sdk-trace-base'); pass the processor when constructing the SDK, e.g. new NodeSDK({spanProcessors: [new BatchSpanProcessor(exporter, {maxQueueSize: 2048, scheduledDelayMillis: 5000, maxExportBatchSize: 512})]}) (NodeSDK has no addSpanProcessor method; older sdk-node releases use the singular spanProcessor option).
Settings: maxQueueSize (buffer size), scheduledDelayMillis (export interval), maxExportBatchSize (spans per batch).
Production: ALWAYS use BatchSpanProcessor; SimpleSpanProcessor is for debugging/development only.
Performance: batching reduces export overhead from 30-80% to <5%.
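
A configuration sketch assuming a recent @opentelemetry/sdk-node that accepts a spanProcessors array (older releases take a singular spanProcessor option); the tuning values are the ones quoted above, not a recommendation for every workload:

```js
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { BatchSpanProcessor } = require('@opentelemetry/sdk-trace-base');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');

const exporter = new OTLPTraceExporter({ url: 'http://localhost:4318/v1/traces' });

const sdk = new NodeSDK({
  spanProcessors: [
    new BatchSpanProcessor(exporter, {
      maxQueueSize: 2048,         // spans buffered in memory before new ones are dropped
      scheduledDelayMillis: 5000, // how often a batch is flushed
      maxExportBatchSize: 512,    // spans sent per export call
    }),
  ],
});

sdk.start();
```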

99% confidence
Q

Should I use sampling in production?

A

YES, use sampling in production to reduce performance overhead and costs; 100% tracing causes 30-80% performance degradation.
Sampling strategies: (1) TraceIdRatioBasedSampler: samples a fixed percentage, e.g. new TraceIdRatioBasedSampler(0.01) keeps 1% of traces. (2) ParentBasedSampler: follows the parent span's sampling decision, which keeps traces complete across services; wrap the ratio sampler as its root. (3) AlwaysOnSampler: 100% sampling (development only).
Recommended rates: development 100%, staging 10-20%, production 1-5%, high-traffic production 0.1-1%.
Pattern: const {ParentBasedSampler, TraceIdRatioBasedSampler} = require('@opentelemetry/sdk-trace-base'); sampler: new ParentBasedSampler({root: new TraceIdRatioBasedSampler(0.01)}).
Always sample errors: keeping 100% of error traces while sampling successes requires tail-based sampling (e.g. in the OpenTelemetry Collector), because the SDK's head sampler decides at span start, before the outcome is known.
Benefits: <5% performance overhead, reduced backend costs (Jaeger/Datadog storage), still catch issues.
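
A sampling sketch that wraps the ratio sampler in a ParentBasedSampler so downstream services follow the upstream decision; the 0.01 rate is the example rate from above, not a universal recommendation:

```js
const { NodeSDK } = require('@opentelemetry/sdk-node');
const {
  ParentBasedSampler,
  TraceIdRatioBasedSampler,
} = require('@opentelemetry/sdk-trace-base');

const sdk = new NodeSDK({
  sampler: new ParentBasedSampler({
    // Root spans: keep 1% of traces. Child spans follow the parent's decision,
    // so sampled traces stay complete across services.
    root: new TraceIdRatioBasedSampler(0.01),
  }),
  // ...traceExporter and instrumentations as in the setup answer above
});

sdk.start();
```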

99% confidence
Q

How do I correlate logs with traces?

A

Inject the trace ID and span ID into log statements for correlation.
Pattern: const {trace} = require('@opentelemetry/api'); const winston = require('winston'); const logger = winston.createLogger({format: winston.format.combine(winston.format.timestamp(), winston.format.printf(info => {const span = trace.getActiveSpan(); const traceId = span?.spanContext().traceId || 'no-trace'; const spanId = span?.spanContext().spanId || 'no-span'; return `${info.timestamp} [${traceId}] [${spanId}] ${info.level}: ${info.message}`;}))}).
Log with context: logger.info('Processing order', {orderId: 123}).
In Jaeger: click a span → see linked logs with the same trace ID.
In log aggregation (ELK): search by trace ID → see all logs for the request.
Benefits: debug production issues (logs + traces), correlate errors with slow requests, follow the full request journey.
Auto-injection: use @opentelemetry/instrumentation-winston for automatic injection.
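
A sketch of the manual winston pattern above with a console transport added so it runs standalone; in practice @opentelemetry/instrumentation-winston can inject the IDs automatically:

```js
const { trace } = require('@opentelemetry/api');
const winston = require('winston');

const logger = winston.createLogger({
  transports: [new winston.transports.Console()],
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.printf((info) => {
      // Pull the current trace/span IDs from the active span, if any.
      const span = trace.getActiveSpan();
      const traceId = span?.spanContext().traceId || 'no-trace';
      const spanId = span?.spanContext().spanId || 'no-span';
      return `${info.timestamp} [${traceId}] [${spanId}] ${info.level}: ${info.message}`;
    })
  ),
});

// Inside a traced request handler, this line carries the request's trace ID.
logger.info('Processing order', { orderId: 123 });
```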

99% confidence