Search Results

Ingests results from pre-existing Splunk saved searches for security monitoring and analysis.

Sync Type: Full Synchronisation

Requirements

Before configuring this input, you need:

  1. A running Splunk instance — either Splunk Cloud (yourinstance.splunkcloud.com) or Splunk Enterprise (self-hosted).

  2. An API token — used to authenticate all REST API requests.

    • Log in to your Splunk instance
    • Navigate to Settings → Tokens
    • Click New Token
    • Set an expiry date and any audience restrictions
    • Copy and securely store the generated token
  3. One or more saved searches with a schedule — Monad reads results from job runs of pre-existing saved searches. Each saved search must have a schedule attached so that Splunk runs it automatically and produces job history.

    • In Splunk Web, go to Search & Reporting → Searches, Reports and Alerts
    • Open or create a saved search
    • Under the Schedule tab, enable Schedule Search and set a cron expression
    • Save the search

    Tip: The search must have run at least once before Monad can fetch results. You can trigger a manual run by clicking the Run button on the saved search.

Details

Monad reads results from pre-existing saved searches — it does not create new search jobs. On each cron run:

  1. For each name in Saved Search Names, calls GET /services/saved/searches/{name}/history to list completed job runs (100 jobs per page).
  2. Filters to jobs where isDone=true and published is strictly after the last processed job's published time (stored in state).
  3. For each new completed job, fetches paginated results from GET /services/search/jobs/{sid}/results (1,000 records per page).
  4. After all results for a job are emitted, advances the state cursor to that job's published time and clears any in-progress resume markers.

Records are deduplicated by SHA-256-hashing the raw record bytes (first 8 bytes of the digest). This fingerprint is also used as a resume marker so that a run interrupted mid-job can skip already-emitted records on the next execution.
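
That fingerprinting step can be sketched in Python (the function name is illustrative, not Monad's actual code):

```python
import hashlib

def fingerprint(raw: bytes) -> bytes:
    """SHA-256 the raw record bytes and keep the first 8 bytes of the
    digest as a compact dedup/resume marker."""
    return hashlib.sha256(raw).digest()[:8]
```

Identical raw bytes always yield identical fingerprints, which is what makes the marker usable both for deduplication and as a mid-job resume point.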

Data Retrieval Flow

For each configured saved search name:
  GET /services/saved/searches/{name}/history?count=100&offset=N
    → filter: isDone=true AND published > lastPublishedJobTime
    → for each new job:
        GET /services/search/jobs/{sid}/results?count=1000&offset=N
        → repeat until a page returns fewer than 1,000 records
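
The results-pagination loop in this flow can be sketched as follows, with the HTTP call abstracted behind a caller-supplied fetch_page function (all names here are illustrative assumptions, not Monad's implementation):

```python
from typing import Callable, Iterator

PAGE_SIZE = 1000  # results page size from the flow above

def iter_job_results(sid: str,
                     fetch_page: Callable[[str, int, int], list]) -> Iterator[dict]:
    """Page through a job's results, advancing the offset until a page
    comes back with fewer than PAGE_SIZE records."""
    offset = 0
    while True:
        page = fetch_page(sid, PAGE_SIZE, offset)  # (sid, count, offset)
        yield from page
        if len(page) < PAGE_SIZE:
            break
        offset += PAGE_SIZE
```

A page shorter than PAGE_SIZE (including an empty page) signals the end of the job's result set, so no extra "total count" request is needed.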

Incremental Sync

Monad stores the published timestamp of the most recently fully processed job. On each subsequent run, only jobs published after this timestamp are ingested. This cursor is shared across all configured saved searches.
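
The job-selection logic can be illustrated with a small sketch (hypothetical field names mirroring the isDone and published attributes described above):

```python
from datetime import datetime

def select_new_jobs(jobs: list[dict], cursor: datetime) -> list[dict]:
    """Keep completed jobs published strictly after the cursor, ordered
    oldest-first so the cursor can advance one job at a time."""
    fresh = [j for j in jobs if j["isDone"] and j["published"] > cursor]
    return sorted(fresh, key=lambda j: j["published"])
```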

⚠️ Warning: Splunk job results expire after a configurable TTL, which can be as short as 10 minutes or as long as 7 days after completion. Make sure the cron interval is always shorter than the configured job TTL; running this input at intervals longer than the TTL will miss job results.

Mid-job Resume

If a run is interrupted while streaming results for a job, Monad stores the job's SID and the hash of the last successfully emitted record. On the next run, Monad resumes from the record immediately following that hash, so no records are re-emitted or skipped.

Configuration

The following settings define this input's parameters. Each field's type, whether it is required, its default, and a description are listed below.

Settings

Host (string, required)
  Splunk hostname without scheme or port. For Splunk Cloud use yourinstance.splunkcloud.com; for self-hosted Splunk Enterprise use the server hostname or IP.

Port (integer, optional; default: 8089)
  Splunk REST API port.

Saved Search Names (string array, required)
  One or more names of saved searches to ingest results from. Names are case-sensitive and must match exactly as they appear in Splunk.

Cron Schedule (string, required; default: */5 * * * *)
  Schedule for this input. Make sure the interval is always shorter than Splunk's minimum job TTL window.

Secrets

API Token (string, required)
  Splunk authentication token, sent as the Authorization: Bearer <token> request header. Generate one under Settings → Tokens in Splunk Web.
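
For illustration, the base URL and header construction might look like this (helper names are assumptions, not part of Monad):

```python
def api_url(host: str, port: int = 8089) -> str:
    """Build the REST base URL; 8089 is Splunk's default management port."""
    return f"https://{host}:{port}"

def auth_header(token: str) -> dict[str, str]:
    """API tokens are sent as a Bearer token in the Authorization header."""
    return {"Authorization": f"Bearer {token}"}
```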

Rate Limits

API Requests: 10 per second
  Conservative limit; Splunk does not publicly document hard rate limits for the REST API.

Source: Splunk REST API Documentation
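
A simple sliding-window limiter is enough to stay under such a limit (a sketch under the 10-per-second assumption, not Monad's actual implementation):

```python
import time

class SlidingWindowLimiter:
    """Allow at most `limit` calls per `window` seconds by sleeping
    whenever the window is full."""

    def __init__(self, limit: int = 10, window: float = 1.0):
        self.limit, self.window = limit, window
        self._calls: list[float] = []

    def wait(self) -> None:
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        self._calls = [t for t in self._calls if now - t < self.window]
        if len(self._calls) >= self.limit:
            time.sleep(self.window - (now - self._calls[0]))
        self._calls.append(time.monotonic())
```

Calling wait() before each REST request keeps the request rate at or below the configured limit.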

Limitations

  • The cursor (lastPublishedJobTime) is shared across all configured saved searches. Jobs are processed in the order they appear in each search's history, and the cursor advances to the globally latest published time seen across all searches.
  • Records are deduplicated by content hash. Records with identical raw bytes across separate job runs will be treated as duplicates.