CloudWatch Logs Stream
Sync Type: Streaming
Overview
Receives CloudWatch Logs events in real time via Amazon Data Firehose HTTP endpoint delivery. Instead of polling the CloudWatch Logs API, this input receives log events as they are pushed from AWS through a Firehose delivery stream configured with an HTTP endpoint destination.
Requirements
- An AWS account with CloudWatch Logs
- A Monad organization API key with the pipeline:data:write permission
- AWS permissions to create:
- Amazon Data Firehose delivery stream
- CloudWatch Logs subscription filter
Setup Instructions
Step 1: Create the Monad Pipeline
- In the Monad platform, create a new pipeline with the AWS CloudWatch Logs Stream input
- Note the Pipeline ID from the pipeline settings - you'll need this for the Firehose configuration
Step 2: Create a Monad API Key
- In the Monad platform, navigate to Settings > API Keys
- Click Create new API key
- Give it a descriptive name (e.g., "AWS Firehose CloudWatch Logs")
- Ensure the API key has the pipeline:data:write permission (included in Contributor and System Administrator roles)
- Copy and securely store the API key value - it won't be visible again
Step 3: Create an Amazon Data Firehose Delivery Stream
- Navigate to the Amazon Data Firehose console in AWS
- Click Create Firehose stream
- Configure the stream:
- Source: Select Direct PUT
- Destination: Select HTTP Endpoint
- Configure the HTTP endpoint destination:
- HTTP endpoint URL: https://app.monad.com/api/v2/http/send/{PIPELINE_ID} - replace {PIPELINE_ID} with your Monad pipeline ID
- Access key: Enter your Monad API key (this is sent as the X-Amz-Firehose-Access-Key header)
- Content encoding: Select GZIP (recommended for reduced bandwidth)
- Configure buffering hints:
- Buffer size: 1-5 MiB (recommended: 1 MiB for lower latency)
- Buffer interval: 60-300 seconds (recommended: 60 seconds for near real-time)
- Configure backup settings as needed
- Create the delivery stream and note the Firehose ARN
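Firehose's HTTP endpoint delivery wraps each batch in a JSON body containing a request ID, a timestamp in epoch milliseconds, and base64-encoded records, and expects the endpoint to echo the request ID back. A minimal sketch of that shape (the request ID and log line below are made-up example values):

```python
import base64
import json

# Sketch of the JSON body Amazon Data Firehose POSTs to the HTTP endpoint.
# The requestId and record contents are made-up example values.
record_payload = b'{"level": "INFO", "msg": "hello"}'
request_body = {
    "requestId": "ed4acda5-example-request-id",  # also sent as X-Amz-Firehose-Request-Id
    "timestamp": 1706540800000,                  # epoch milliseconds
    "records": [
        {"data": base64.b64encode(record_payload).decode("ascii")},
    ],
}

# The endpoint is expected to reply with the same requestId plus its own timestamp.
response_body = {"requestId": request_body["requestId"], "timestamp": 1706540800123}

# Each record's "data" field round-trips back to the original bytes.
decoded = base64.b64decode(request_body["records"][0]["data"])
print(decoded.decode())
```

With GZIP content encoding selected in Step 3, the whole request body above is additionally gzip-compressed on the wire.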
Step 4: Create an IAM Role for CloudWatch Logs
CloudWatch Logs needs permission to write to your Firehose stream. Create an IAM role with the following configuration:
Trust policy (allows CloudWatch Logs to assume the role):
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "logs.amazonaws.com"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringLike": {
          "aws:SourceArn": "arn:aws:logs:*:YOUR_ACCOUNT_ID:*"
        }
      }
    }
  ]
}
```
Replace YOUR_ACCOUNT_ID with your AWS account ID.
Permissions policy (allows writing to Firehose):
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "firehose:PutRecord",
        "firehose:PutRecordBatch"
      ],
      "Resource": "arn:aws:firehose:*:YOUR_ACCOUNT_ID:deliverystream/*"
    }
  ]
}
```
Replace YOUR_ACCOUNT_ID with your AWS account ID. You can also scope this down to a specific delivery stream ARN for tighter security.
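The two policies above can also be generated programmatically, scoped down to a single account and delivery stream as suggested. A sketch (the account ID, region, and stream name are placeholders):

```python
import json

def cloudwatch_to_firehose_policies(account_id: str, region: str, stream_name: str):
    """Build the Step 4 trust and permissions policies, with the permissions
    policy scoped to one delivery stream instead of the wildcard resource."""
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "logs.amazonaws.com"},
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringLike": {"aws:SourceArn": f"arn:aws:logs:*:{account_id}:*"}
            },
        }],
    }
    permissions_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["firehose:PutRecord", "firehose:PutRecordBatch"],
            "Resource": f"arn:aws:firehose:{region}:{account_id}:deliverystream/{stream_name}",
        }],
    }
    return trust_policy, permissions_policy

# Placeholder values for illustration only.
trust, perms = cloudwatch_to_firehose_policies(
    "123456789012", "us-east-1", "monad-cloudwatch-stream")
print(json.dumps(perms, indent=2))
```

The resulting JSON documents can be passed to `aws iam create-role` and `aws iam put-role-policy`, or to your infrastructure-as-code tool of choice.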
Step 5: Create a CloudWatch Logs Subscription Filter
- Navigate to the CloudWatch Logs console
- Select the log group you want to stream
- Click the Subscription filters tab
- Click Create → Create Amazon Data Firehose subscription filter
- Configure the subscription filter:
- Firehose stream: Select the delivery stream created in Step 3
- IAM role: Select the role created in Step 4
- Filter pattern: Enter a filter pattern (leave empty to send all logs)
- Filter name: Give it a descriptive name
- Create the subscription filter
Configuration
This input requires no configuration in Monad beyond creating the pipeline. All configuration is done on the AWS side.
Settings
| Setting | Type | Required | Description |
|---|---|---|---|
| None | - | - | No settings required |
Secrets
| Secret | Type | Required | Description |
|---|---|---|---|
| None | - | - | No secrets required |
Authentication
Authentication is handled via the Monad API key that you configure in the Firehose HTTP endpoint destination. The API key is sent by Firehose in the X-Amz-Firehose-Access-Key header.
API Key Requirements:
- Must be an organization API key (not a personal API key)
- Must have the pipeline:data:write permission
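On every delivery, Firehose sends the configured access key verbatim in the X-Amz-Firehose-Access-Key header, and the receiving endpoint compares it against the expected key. A minimal sketch of that check (the key value is a placeholder), using a constant-time comparison:

```python
import hmac

EXPECTED_API_KEY = "monad-api-key-placeholder"  # placeholder, not a real key

def is_authorized(headers: dict) -> bool:
    """Validate the access key Firehose includes with every HTTP endpoint
    delivery. hmac.compare_digest avoids leaking key contents via timing."""
    presented = headers.get("X-Amz-Firehose-Access-Key", "")
    return hmac.compare_digest(presented, EXPECTED_API_KEY)

print(is_authorized({"X-Amz-Firehose-Access-Key": EXPECTED_API_KEY}))  # True
print(is_authorized({}))  # False
```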
Data Format
Each log event is enriched with metadata from both CloudWatch and Firehose before being delivered to the pipeline:
| Field | Type | Description |
|---|---|---|
| id | string | Unique identifier for the log event |
| timestamp | integer | Log event timestamp in Unix milliseconds |
| message | object/string | The log message (preserved as JSON if valid, otherwise as string) |
| owner | string | AWS account ID that owns the log group |
| logGroup | string | Name of the CloudWatch log group |
| logStream | string | Name of the CloudWatch log stream |
| subscriptionFilters | array | Names of subscription filters that matched |
| firehoseRequestId | string | Firehose request ID for tracing |
| firehoseTimestamp | integer | Firehose delivery timestamp in Unix milliseconds |
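CloudWatch Logs delivers each Firehose record as a gzipped JSON envelope, and the fields above come from flattening each entry in its logEvents array together with the envelope metadata (the firehose* fields come from the HTTP request itself). A sketch of that decode step, with made-up sample values:

```python
import gzip
import json

# A made-up CloudWatch Logs subscription envelope, gzipped the way it
# arrives inside a Firehose record's base64 "data" field.
envelope = {
    "messageType": "DATA_MESSAGE",
    "owner": "123456789012",
    "logGroup": "/aws/lambda/my-function",
    "logStream": "2024/01/29/[$LATEST]abc123def456",
    "subscriptionFilters": ["monad-subscription"],
    "logEvents": [
        {"id": "evt-1", "timestamp": 1706540800000,
         "message": "{\"level\": \"INFO\", \"action\": \"Request processed\"}"},
    ],
}
raw = gzip.compress(json.dumps(envelope).encode())

def decode_records(gzipped: bytes) -> list:
    """Flatten a CloudWatch Logs envelope into one dict per log event,
    parsing the message as JSON when possible (mirroring the table above)."""
    data = json.loads(gzip.decompress(gzipped))
    out = []
    for event in data["logEvents"]:
        try:
            message = json.loads(event["message"])
        except json.JSONDecodeError:
            message = event["message"]  # keep non-JSON messages as plain strings
        out.append({
            "id": event["id"],
            "timestamp": event["timestamp"],
            "message": message,
            "owner": data["owner"],
            "logGroup": data["logGroup"],
            "logStream": data["logStream"],
            "subscriptionFilters": data["subscriptionFilters"],
        })
    return out

print(decode_records(raw)[0]["message"]["level"])  # INFO
```

Note that CONTROL_MESSAGE envelopes (Firehose connectivity probes) carry no log data and are dropped rather than flattened.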
Troubleshooting
Common Issues
Issue: Logs are not appearing in Monad
Possible Causes:
- Firehose delivery stream is not active
- API key is invalid or lacks permissions
- Subscription filter pattern is too restrictive
Solution:
- Check the Firehose stream monitoring metrics in AWS console
- Verify the API key has the pipeline:data:write permission
- Review the subscription filter pattern
Issue: Firehose shows delivery failures
Possible Causes:
- Incorrect HTTP endpoint URL
- Invalid API key in the access key field
- Network connectivity issues
Solution:
- Verify the endpoint URL matches https://app.monad.com/api/v2/http/send/{PIPELINE_ID}
- Regenerate and update the API key
- Check AWS VPC and security group configurations
Related Articles
- Amazon Data Firehose HTTP Endpoint Destinations
- CloudWatch Logs Subscription Filters
- Using Subscription Filters with Amazon Data Firehose
Sample Record
```json
{
  "id": "38504832190534723847234234234234234234",
  "timestamp": 1706540800000,
  "message": {
    "level": "INFO",
    "action": "Request processed",
    "requestId": "abc123-def456-ghi789",
    "duration_ms": 127
  },
  "owner": "123456789012",
  "logGroup": "/aws/lambda/my-function",
  "logStream": "2024/01/29/[$LATEST]abc123def456",
  "subscriptionFilters": ["monad-subscription"],
  "firehoseRequestId": "firehose-req-xyz789",
  "firehoseTimestamp": 1706540799000
}
```