Search Results
Ingests results from pre-existing Splunk saved searches for security monitoring and analysis.
Sync Type: Full Synchronisation
Requirements
Before configuring this input, you need:
- A running Splunk instance — either Splunk Cloud (`yourinstance.splunkcloud.com`) or Splunk Enterprise (self-hosted).
- An API token — used to authenticate all REST API requests:
  - Log in to your Splunk instance
  - Navigate to Settings → Tokens
  - Click New Token
  - Set an expiry date and any audience restrictions
  - Copy and securely store the generated token
- One or more saved searches with a schedule — Monad reads results from job runs of pre-existing saved searches. Each saved search must have a schedule attached so that Splunk runs it automatically and produces job history:
  - In Splunk Web, go to Search & Reporting → Searches, Reports and Alerts
  - Open or create a saved search
  - Under the Schedule tab, enable Schedule Search and set a cron expression
  - Save the search
Tip: The search must have run at least once before Monad can fetch results. You can trigger a manual run by clicking the Run button on the saved search.
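When building requests against the history endpoint, remember that saved search names are case-sensitive and may contain spaces, so they must be percent-encoded in the URL path. A minimal sketch in Python (the helper name is illustrative, not part of any client library):

```python
from urllib.parse import quote

def history_path(name: str) -> str:
    # Saved search names are case-sensitive and may contain spaces,
    # so percent-encode them before placing them in the URL path.
    return f"/services/saved/searches/{quote(name, safe='')}/history"

print(history_path("Errors Last Hour"))
# /services/saved/searches/Errors%20Last%20Hour/history
```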
Details
Monad reads results from pre-existing saved searches — it does not create new search jobs. On each cron run:
- For each name in Saved Search Names, calls `GET /services/saved/searches/{name}/history` to list completed job runs (100 jobs per page).
- Filters to jobs where `isDone=true` and `published` is strictly after the last processed job's published time (stored in state).
- For each new completed job, fetches paginated results from `GET /services/search/jobs/{sid}/results` (1,000 records per page).
- After all results for a job are emitted, advances the state cursor to that job's `published` time and clears any in-progress resume markers.
Records are deduplicated by hashing the raw record bytes with SHA-256 and keeping the first 8 bytes of the digest. This fingerprint also serves as a resume marker, so a run interrupted mid-job can skip already-emitted records on the next execution.
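A minimal sketch of that fingerprint: SHA-256 over the raw record bytes, truncated to the first 8 bytes of the digest (the hex encoding shown here is an assumption for readability; only the hash-and-truncate scheme comes from the description above):

```python
import hashlib

def record_fingerprint(raw: bytes) -> str:
    # SHA-256 the raw record bytes and keep the first 8 bytes
    # of the digest as the dedup / resume fingerprint.
    return hashlib.sha256(raw).digest()[:8].hex()
```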
Data Retrieval Flow
For each saved search name (configured):
GET /services/saved/searches/{name}/history?count=100&offset=N
→ filter: isDone=true AND published > lastPublishedJobTime
→ for each new job:
GET /services/search/jobs/{sid}/results?count=1000&offset=N
→ repeat until page returns fewer than 1,000 records
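The pagination loop in the flow above can be sketched as follows. `get_page` is an assumed callable standing in for the results request, not a real client method:

```python
def fetch_all_results(get_page, page_size=1000):
    # Drain a job's results by paging until a short page comes back.
    # get_page(offset, count) stands in for
    # GET /services/search/jobs/{sid}/results?count=...&offset=...
    offset = 0
    while True:
        page = get_page(offset, page_size)
        yield from page
        if len(page) < page_size:
            break
        offset += page_size
```

A page shorter than `page_size` signals the final page, matching the "repeat until page returns fewer than 1,000 records" step.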
Incremental Sync
Monad stores the published timestamp of the most recently fully processed job. On each subsequent run, only jobs published after this timestamp are ingested. This cursor is shared across all configured saved searches.
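The cursor filter can be sketched like so, assuming `published` is an ISO-8601 timestamp (Splunk's actual field format may differ; field names follow the Details section above):

```python
from datetime import datetime

def new_completed_jobs(history, cursor):
    # Keep finished jobs published strictly after the stored cursor;
    # a None cursor means a first run, so every completed job qualifies.
    picked = [
        job for job in history
        if job["isDone"]
        and (cursor is None
             or datetime.fromisoformat(job["published"]) > datetime.fromisoformat(cursor))
    ]
    return sorted(picked, key=lambda job: job["published"])
```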
⚠️ Warning: Splunk job results expire after a configurable TTL, which can be as short as 10 minutes or as long as 7 days after completion. Make sure the interval between cron runs is always shorter than the configured job TTL; running this input at intervals longer than the TTL will cause job results to be missed.
Mid-job Resume
If a run is interrupted while streaming results for a job, Monad stores the job's SID and the hash of the last successfully emitted record. On the next run, Monad resumes from the record immediately following that hash, so no records are re-emitted or skipped.
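A sketch of that resume logic, with `fingerprint` standing in for the record hash described under Details (all names here are illustrative):

```python
def resume_after(records, last_hash, fingerprint):
    # Skip everything up to and including the record matching the
    # stored marker. If no marker is stored, or the marker isn't
    # found, emit the full list; content-hash dedup catches repeats.
    if last_hash is not None:
        for i, rec in enumerate(records):
            if fingerprint(rec) == last_hash:
                return records[i + 1:]
    return records
```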
Configuration
The following configuration defines the input parameters. Each field's specifications, such as type, requirements, and descriptions, are detailed below.
Settings
| Setting | Type | Required | Default | Description |
|---|---|---|---|---|
| Host | string | Yes | — | Splunk hostname without scheme or port. For Splunk Cloud use yourinstance.splunkcloud.com; for self-hosted Splunk Enterprise use the server hostname or IP. |
| Port | integer | No | 8089 | Splunk REST API port. Defaults to 8089. |
| Saved Search Names | string array | Yes | — | One or more names of saved searches to ingest results from. Names are case-sensitive and must match exactly as they appear in Splunk. |
| Cron Schedule | string | Yes | */5 * * * * | Schedule for this input. Make sure the interval between runs is always shorter than the configured TTL of the Splunk search jobs. |
Secrets
| Secret | Type | Required | Description |
|---|---|---|---|
| API Token | string | Yes | Splunk authentication token. Sent as the Authorization: Bearer <token> request header. Generate one under Settings → Tokens in Splunk Web. |
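The Authorization header can be attached as shown below (a stdlib sketch; the hostname and token are placeholders, and `output_mode=json` is Splunk's standard parameter for JSON responses):

```python
import urllib.request

def splunk_request(host, path, token, port=8089):
    # Build an authenticated REST request; the token travels in the
    # Authorization: Bearer <token> header described above.
    url = f"https://{host}:{port}{path}?output_mode=json"
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
```

Pass the returned request to `urllib.request.urlopen` (or any HTTP client) to perform the call.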
Rate Limits
| Scope | Limit | Window | Notes |
|---|---|---|---|
| API Requests | 10 | Per second | Conservative limit; Splunk does not publicly document hard rate limits for the REST API. |
Source: Splunk REST API Documentation
Limitations
- The cursor (`lastPublishedJobTime`) is shared across all configured saved searches. Jobs are processed in the order they appear in each search's history, and the cursor advances to the globally latest published time seen across all searches.
- Records are deduplicated by content hash. Records with identical raw bytes across separate job runs will be treated as duplicates.