Prometheus Remote Write

Sends metrics to any Prometheus remote write compatible endpoint.

Details

The Prometheus Remote Write output converts pipeline records into Prometheus time series and pushes them to any endpoint that implements the Prometheus remote write protocol.

Each record is translated into a single TimeSeries with one sample. The metric name, numeric value, timestamp, and labels are all configurable via field mappings.

Data is encoded using the Prometheus protobuf wire format and compressed with Snappy before transmission, matching the standard remote write specification.
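To make the wire format concrete, the sketch below hand-encodes the remote write protobuf messages (Label, Sample, TimeSeries, WriteRequest, with the field numbers from the Prometheus remote write schema) using only the Python standard library. The function names are illustrative, not part of this output's implementation, and the final Snappy compression step is omitted — in practice the returned bytes would be compressed (e.g., with the third-party python-snappy package) before being sent.

```python
import struct

def varint(n):
    # Protobuf base-128 varint (unsigned).
    out = bytearray()
    while True:
        lo, n = n & 0x7F, n >> 7
        out.append(lo | 0x80 if n else lo)
        if not n:
            return bytes(out)

def len_field(field_no, payload):
    # Length-delimited field (wire type 2): tag, length, payload.
    return varint(field_no << 3 | 2) + varint(len(payload)) + payload

def encode_label(name, value):
    # Label { string name = 1; string value = 2; }
    return len_field(1, name.encode()) + len_field(2, value.encode())

def encode_sample(value, ts_ms):
    # Sample { double value = 1 (wire type 1); int64 timestamp = 2 (varint); }
    return bytes([1 << 3 | 1]) + struct.pack("<d", value) + bytes([2 << 3]) + varint(ts_ms)

def encode_write_request(labels, value, ts_ms):
    # TimeSeries { repeated Label labels = 1; repeated Sample samples = 2; }
    ts = b"".join(len_field(1, encode_label(n, v)) for n, v in labels)
    ts += len_field(2, encode_sample(value, ts_ms))
    # WriteRequest { repeated TimeSeries timeseries = 1; }
    # In practice this payload is snappy-compressed before transmission.
    return len_field(1, ts)
```

Each record maps to one `encode_write_request` TimeSeries entry; a real sender batches many series into a single WriteRequest.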

Batching

Records are batched before sending — up to 500 records or 1 MiB of data per request, flushed at least every 5 seconds.
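The three flush triggers above can be sketched as a small buffering helper. This is an illustration of the batching policy (the class and method names are invented for the example), not the actual implementation:

```python
import time

MAX_RECORDS = 500        # flush after 500 records...
MAX_BYTES = 1 << 20      # ...or 1 MiB of buffered data...
MAX_AGE_SECONDS = 5.0    # ...or 5 seconds after the first buffered record

class Batcher:
    def __init__(self):
        self.records, self.size, self.first_at = [], 0, None

    def add(self, encoded):
        """Buffer one encoded record; return True when the batch should flush."""
        if self.first_at is None:
            self.first_at = time.monotonic()
        self.records.append(encoded)
        self.size += len(encoded)
        return self.should_flush()

    def should_flush(self):
        if not self.records:
            return False
        return (len(self.records) >= MAX_RECORDS
                or self.size >= MAX_BYTES
                or time.monotonic() - self.first_at >= MAX_AGE_SECONDS)

    def drain(self):
        """Hand off the current batch and reset the buffer."""
        batch, self.records, self.size, self.first_at = self.records, [], 0, None
        return batch
```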

Metric name

The metric name (Prometheus __name__ label) can be set in two ways:

  • Static: a fixed string applied to every record (e.g., security_events_total)
  • From Field: the value of a JSON field in each record becomes the metric name
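The two modes can be summarized in a small resolver. A sketch only — the function name and parameters are invented for illustration:

```python
def resolve_metric_name(record, mode, static_name="", field_name=""):
    """Return the metric name for a record, or None to reject the record.

    mode="static" mirrors the Static option; mode="field" mirrors From Field.
    """
    if mode == "static":
        return static_name
    # From Field: the record is rejected if the field is missing or not a string.
    value = record.get(field_name)
    return value if isinstance(value, str) and value else None
```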

Value

If the value field is left empty, every record uses a fixed value of 1.0, which makes this output useful for counting events.

When a value field is configured, the field can contain a number (float64, int) or a numeric string. If the field is missing or cannot be parsed as a number, the record is rejected and will not be sent.
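The value-resolution rules above can be sketched as follows (an illustrative helper, not the actual implementation):

```python
def parse_value(record, value_field):
    """Return the sample value, or None when the record must be rejected."""
    if not value_field:
        return 1.0  # no field configured: count each record as one event
    raw = record.get(value_field)
    if isinstance(raw, bool):
        return None  # bool is an int subclass in Python; treat it as unparseable
    if isinstance(raw, (int, float)):
        return float(raw)
    if isinstance(raw, str):
        try:
            return float(raw)  # numeric strings are accepted
        except ValueError:
            return None
    return None  # missing or non-numeric: reject
```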

Timestamp

If the timestamp field is left empty, the current time is used for every record.

When a timestamp field is configured, a wide range of formats is accepted. Common examples include:

  • Unix milliseconds (integer or numeric string)
  • RFC3339 / RFC3339Nano (2006-01-02T15:04:05Z)
  • SQL format with or without fractional seconds (2006-01-02 15:04:05)
  • Common Log Format (02/Jan/2006:15:04:05 -0700)
  • RFC1123Z (Mon, 02 Jan 2006 15:04:05 -0700)

If the field is missing or unparseable, the record is rejected and will not be sent.
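The parsing fallthrough for the formats listed above can be sketched in Python (the layout list is an illustrative subset, and the function name is invented for the example; Python 3.7+ is assumed for `%z` accepting `Z`):

```python
from datetime import datetime, timezone

_LAYOUTS = [
    "%Y-%m-%dT%H:%M:%S%z",       # RFC3339
    "%Y-%m-%dT%H:%M:%S.%f%z",    # RFC3339Nano (fractional seconds)
    "%Y-%m-%d %H:%M:%S",         # SQL format
    "%Y-%m-%d %H:%M:%S.%f",      # SQL format with fractional seconds
    "%d/%b/%Y:%H:%M:%S %z",      # Common Log Format
    "%a, %d %b %Y %H:%M:%S %z",  # RFC1123Z
]

def parse_timestamp_ms(raw):
    """Return Unix milliseconds, or None when the record must be rejected."""
    if isinstance(raw, (int, float)):
        return int(raw)  # already Unix milliseconds
    if not isinstance(raw, str):
        return None
    if raw.isdigit():
        return int(raw)  # numeric string of milliseconds
    for layout in _LAYOUTS:
        try:
            dt = datetime.strptime(raw, layout)
        except ValueError:
            continue
        if dt.tzinfo is None:
            dt = dt.replace(tzinfo=timezone.utc)  # naive formats assumed UTC here
        return int(dt.timestamp() * 1000)
    return None  # unparseable: reject
```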

Labels

Any number of JSON fields can be promoted to Prometheus labels. Field values are converted to strings. The __name__ label is always set first and cannot be overridden via label_fields.
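Label assembly can be sketched as below: __name__ is placed first, label_fields cannot override it, and every value is coerced to a string. The function name is invented for illustration:

```python
def build_labels(record, metric_name, label_fields):
    """Assemble the Prometheus label set for one record."""
    labels = [("__name__", metric_name)]  # always set first
    for field in label_fields:
        if field == "__name__":
            continue  # cannot be overridden via label_fields
        if field in record:
            labels.append((field, str(record[field])))  # values coerced to strings
    return labels
```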

Requirements

The remote write endpoint must be reachable from Monad and must accept the standard Prometheus remote write protocol (protobuf + Snappy). Ensure the endpoint URL includes the full path (e.g., /api/v1/write for Prometheus, /api/v1/push for Grafana Mimir).

For authenticated endpoints, obtain either a bearer token or basic auth credentials from your metrics backend before configuring this output.
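For reference, the two authenticated options translate into standard HTTP Authorization headers. A minimal sketch (the helper name and signature are invented for illustration):

```python
import base64

def auth_header(kind, token="", username="", password=""):
    """Build the Authorization header for the configured auth method."""
    if kind == "bearer":
        return {"Authorization": f"Bearer {token}"}
    if kind == "basic":
        # HTTP basic auth: base64("username:password")
        cred = base64.b64encode(f"{username}:{password}".encode()).decode()
        return {"Authorization": f"Basic {cred}"}
    return {}  # kind == "none": no header sent
```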

Configuration

The following configuration defines the output parameters.

Settings

| Setting | Type | Required | Description |
| --- | --- | --- | --- |
| Endpoint | string | Yes | The Prometheus remote write endpoint URL (e.g., https://prometheus.example.com/api/v1/write). |
| Metric Name | oneOf | Yes | How the Prometheus metric name (__name__) is determined for each record. Options: static (fixed name) or field (extracted from a JSON field). |
| └── (Static) Metric Name | string | Yes | The fixed metric name applied to all records (e.g., http_requests_total). |
| └── (From Field) Field Name | string | Yes | The JSON field whose value becomes the metric name. If the field is missing, the record is rejected. |
| Value Field | string | No | JSON field containing the numeric sample value. If left empty, each record is counted as 1.0. If specified, records with a missing or unparseable value are rejected. |
| Timestamp Field | string | No | JSON field containing the event timestamp. If left empty, the current time is used. If specified, records with a missing or unparseable timestamp are rejected. |
| Label Fields | array of strings | No | JSON field names to extract as Prometheus labels. Values are converted to strings. |
| Skip TLS Verification | bool | No | Skip TLS certificate verification when connecting to the remote write endpoint. Not recommended for production. |
| Authentication | oneOf | Yes | Authentication method. Options: none, bearer (token), or basic (username + password). |
| └── (Bearer Token) Bearer Token | secret | Yes | Bearer token sent in the Authorization: Bearer <token> header. |
| └── (Basic Auth) Username | secret | Yes | Username for HTTP basic authentication. |
| └── (Basic Auth) Password | secret | Yes | Password for HTTP basic authentication. |

API Examples

The metric_name and auth fields are discriminated unions. Set type to select the variant, then provide the matching nested object. Secrets are referenced by their ID ({"id": "<secret-id>"}).

Static metric name, no authentication

{
  "settings": {
    "endpoint": "https://prometheus.example.com/api/v1/write",
    "metric_name": {
      "type": "static",
      "static": {
        "value": "security_events_total"
      }
    },
    "value_field": "count",
    "timestamp_field": "timestamp",
    "label_fields": ["severity", "source", "host"],
    "tls_skip_verify": false,
    "auth": {
      "type": "none"
    }
  }
}

Metric name from field, bearer token authentication

{
  "settings": {
    "endpoint": "https://mimir.example.com/api/v1/push",
    "metric_name": {
      "type": "field",
      "field": {
        "field_name": "metric_name"
      }
    },
    "value_field": "value",
    "timestamp_field": "ts",
    "label_fields": ["region", "env"],
    "tls_skip_verify": false,
    "auth": {
      "type": "bearer",
      "bearer": {
        "bearer_token": {
          "id": "<secret-id>"
        }
      }
    }
  }
}

Basic auth

{
  "settings": {
    "endpoint": "https://metrics.example.com/api/v1/write",
    "metric_name": {
      "type": "static",
      "static": {
        "value": "monad_events_total"
      }
    },
    "label_fields": ["severity", "pipeline"],
    "tls_skip_verify": false,
    "auth": {
      "type": "basic",
      "basic": {
        "username": {
          "id": "<secret-id>"
        },
        "password": {
          "id": "<secret-id>"
        }
      }
    }
  }
}

Best Practices

  1. Follow Prometheus naming conventions: Metric names should use lowercase letters, digits, and underscores, and end with a unit suffix where appropriate (e.g., _total, _bytes, _seconds). Invalid characters in metric names from a field source may cause the remote write endpoint to reject the request.

  2. Use static metric names for homogeneous pipelines: If all records in a pipeline represent the same kind of event, a static metric name is simpler and less error-prone than field extraction.

  3. Keep label cardinality low: Each unique combination of label values creates a new time series in your metrics backend. Avoid using high-cardinality fields (e.g., user IDs, request IDs, IP addresses) as labels, as this can degrade query performance and increase storage costs.

  4. Use value_field for measurements, omit it for counts: If your records carry a measured quantity (latency, bytes, score), map it to value_field. If you just want to count occurrences, leave value_field empty — each record contributes a value of 1.0. Note that when a field is configured, records where the field is missing or unparseable will be rejected rather than silently defaulted.

  5. Match the endpoint path to your backend: Different Prometheus-compatible backends use different remote write paths — consult your backend's documentation for the correct URL.
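The naming convention in point 1 can be checked against the standard Prometheus metric name pattern before records reach the remote write endpoint. A small illustrative check:

```python
import re

# Valid Prometheus metric names: letters, digits, underscores, and colons,
# not starting with a digit.
METRIC_NAME_RE = re.compile(r"[a-zA-Z_:][a-zA-Z0-9_:]*")

def is_valid_metric_name(name):
    return bool(METRIC_NAME_RE.fullmatch(name))
```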

Limitations

  • Each record produces exactly one time series with one sample. There is no aggregation — if you need aggregated metrics, consider aggregating at the source, or writing to a collector.
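One way to aggregate at the source, as suggested above, is to collapse records that share the same label values into a single counted record before they enter the pipeline. A sketch under that assumption (the helper name and the "count" output field are invented for illustration):

```python
from collections import Counter

def preaggregate(records, label_fields):
    """Collapse records with identical label values into one record with a count."""
    counts = Counter(
        tuple(str(r.get(f, "")) for f in label_fields) for r in records
    )
    return [
        {**dict(zip(label_fields, key)), "count": n}
        for key, n in counts.items()
    ]
```

The aggregated records can then use value_field: "count" with a static metric name, producing one sample per label combination instead of one per event.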