How to use Lurelit.
Getting Started
Lurelit is an agentic phishing & smishing screenshot analyzer powered by Elastic Security Workflows and Agent Builder. Upload a suspicious screenshot, and Lurelit runs a multi-step AI pipeline — vision analysis, IOC extraction, VirusTotal & urlscan.io enrichment, and autonomous ES|QL threat hunting across your environment logs — then delivers a structured verdict.
What You Need
- Elastic Stack 9.4+ with Security Workflows (GA) and Agent Builder
- API keys for Anthropic (Claude vision), VirusTotal, and urlscan.io
- Node.js 20+ (for running from source) or Docker
Quick Start
- Install Lurelit (from source or Docker — see next section)
- Complete the first-time setup wizard (connects to Kibana, imports workflow)
- Login with your Kibana credentials
- Upload a screenshot and get a verdict
Installation & Deployment
Lurelit can be installed from source or run as a Docker container. Both methods are equally supported and run on port 5001 by default.
From Source
git clone https://github.com/jamesspi/lurelit.git
cd lurelit
npm install
npm run dev
# Open http://localhost:5001
# Admin key shown in terminal
Requires Node.js 20+. Use this for local development or when you want hot-reload during customization.
With Docker
git clone https://github.com/jamesspi/lurelit.git
cd lurelit
docker compose up
# Open http://localhost:5001
# Admin key shown in container logs: docker compose logs lurelit
The container uses a multi-stage build with node:22-alpine and runs as a non-root user. No Node.js installation required on the host.
With Docker + Env Vars (Skip Setup Wizard)
KIBANA_URL=https://your-kibana:5601 \
WORKFLOW_ID=your-workflow-id \
CONFIG_SECRET=your-secret \
docker compose up
# Goes straight to login, no setup wizard needed
Note that docker compose up has no -e flag; pass the variables through your shell as above (and reference them in docker-compose.yml, e.g. KIBANA_URL=${KIBANA_URL}), or set them under environment: directly. When KIBANA_URL and WORKFLOW_ID are set, the app bypasses the setup wizard entirely and goes straight to the login page. Ideal for pre-configured deployments and CI/CD.
Environment Variables
Set these in your docker-compose.yml, .env.local file, or pass via -e flags:
services:
  lurelit:
    build: .
    ports:
      - "5001:5001"
    environment:
      - KIBANA_URL=http://host.docker.internal:5601
      - WORKFLOW_ID=your-workflow-id-here
      - CONFIG_SECRET=change-me-to-a-random-string
Use host.docker.internal to reach a Kibana instance running on your host machine. For remote Kibana deployments, use the full URL.
Admin Key
On first startup, Lurelit generates an admin key and prints it to the server terminal / Docker logs. This key is required to unlock the setup wizard. Copy it from the terminal output or from docker compose logs lurelit.
Standalone Build Optimization
The Next.js config sets output: 'standalone', which produces a minimal production build including only the files needed to run. The Docker image copies just the standalone output (.next/standalone + .next/static + public/) into the final layer, resulting in a small image size with fast cold starts.
Image Details
- Base image: node:22-alpine
- Port: 5001 (configurable via PORT env var)
- User: Runs as non-root nextjs user (UID 1001)
- Restart policy: unless-stopped
Securing with HTTPS
Lurelit runs on HTTP by default. For production deployments, TLS should be configured via a reverse proxy.
Recommended: Nginx Reverse Proxy
server {
    listen 443 ssl;
    server_name lurelit.yourdomain.com;

    ssl_certificate /etc/ssl/certs/lurelit.crt;
    ssl_certificate_key /etc/ssl/private/lurelit.key;

    location / {
        proxy_pass http://localhost:5001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Alternative: Caddy (Auto-TLS with Let's Encrypt)
lurelit.yourdomain.com {
    reverse_proxy localhost:5001
}
Caddy automatically provisions and renews certificates.
Docker with TLS
# docker-compose.yml with Caddy
services:
  lurelit:
    build: .
    ports:
      - "5001:5001"
    environment:
      - KIBANA_URL=https://your-kibana:5601
      - WORKFLOW_ID=your-workflow-id
  caddy:
    image: caddy:alpine
    ports:
      - "443:443"
      - "80:80"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
    depends_on:
      - lurelit
Important Notes
- When running behind a proxy, session cookies are marked secure: true in production (NODE_ENV=production)
- Set NODE_ENV=production when deploying with TLS
- The X-Forwarded-Proto header ensures Lurelit knows it's behind HTTPS
- Native TLS support may be added in a future release
First-Time Setup
On first launch, Lurelit auto-detects that no configuration exists and redirects you to a 5-step guided setup wizard at /setup. The middleware checks for environment variables (KIBANA_URL + WORKFLOW_ID) and the smish_configured cookie — if neither is present, all routes redirect to setup.
First-Time Setup (Step by Step)
1. On first startup, Lurelit generates an admin key and prints it to the server terminal / Docker logs. Copy it.
2. Navigate to http://localhost:5001/setup (the app redirects here automatically when unconfigured).
3. Paste the admin key to unlock the wizard.
4. Enter your Kibana URL (e.g., http://localhost:5601) and credentials (username/password with workflow execution privileges).
5. The wizard validates connectivity and checks prerequisites: Kibana 9.4+, Workflows API, Agent Builder, Security solution.
6. Review connectors: the wizard scans for required HTTP connectors (Anthropic, VirusTotal, urlscan.io) and optional ones (Slack). Create missing connectors by entering API keys directly in the wizard.
7. Select AI models: choose inference endpoints for enrichment/hunting (primary model) and report formatting (secondary model).
8. Import or select the Lurelit workflow: if a matching workflow exists, select it; otherwise import the bundled workflow with your configured connectors.
9. Configuration is encrypted and saved. You're redirected to the login page.
10. Login with your Kibana credentials (same ones from step 4).
11. Upload a screenshot and start analyzing!
If you set KIBANA_URL and WORKFLOW_ID as environment variables (or in docker-compose.yml), the setup wizard is bypassed entirely and the app goes straight to the login page. The wizard is only needed for interactive first-time configuration.
Auto-Detection
The middleware (src/middleware.ts) runs on every request. If environment variables are missing and no smish_configured cookie exists, the user is redirected to /setup. API routes return 503 with { needsSetup: true }.
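The routing decision described above can be sketched as a pure function. This is an illustrative model of the behavior, not Lurelit's actual src/middleware.ts; the names and shapes are assumptions.

```typescript
// Illustrative model of the setup-detection decision; names are hypothetical.
type SetupCheck =
  | { action: "proceed" }
  | { action: "redirect"; to: string }                     // page routes
  | { action: "respond503"; body: { needsSetup: true } };  // API routes

function checkSetup(
  env: { KIBANA_URL?: string; WORKFLOW_ID?: string },
  cookies: Record<string, string>,
  path: string
): SetupCheck {
  // Configured when both env vars are set, or the setup cookie exists.
  const configured =
    (Boolean(env.KIBANA_URL) && Boolean(env.WORKFLOW_ID)) ||
    cookies["smish_configured"] !== undefined;
  if (configured || path === "/setup") return { action: "proceed" };
  // API routes get a 503 payload; page routes get redirected to /setup.
  if (path.startsWith("/api/")) {
    return { action: "respond503", body: { needsSetup: true } };
  }
  return { action: "redirect", to: "/setup" };
}
```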
Re-Running Setup
To re-run the wizard after initial configuration, navigate directly to /setup (it's always accessible as a public path). You can also clear your stored configuration via DELETE /api/settings which removes the encrypted config file and the smish_configured cookie, forcing the redirect again on next page load.
Configuration
Environment Variables
Set these in your environment or .env.local file:
# Required
KIBANA_URL=https://your-kibana.elastic.cloud
WORKFLOW_ID=your-workflow-id-from-kibana
# Optional (defaults to insecure dev key if omitted)
CONFIG_SECRET=a-random-secret-for-encryption
- KIBANA_URL — Full URL to your Kibana instance (no trailing slash). Append /s/space-name if using a non-default space.
- WORKFLOW_ID — The auto-generated ID of your imported workflow (see Finding Your Workflow ID).
- CONFIG_SECRET — Secret key used to encrypt stored config and session cookies (AES-256-GCM).
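If you need a value for CONFIG_SECRET, any strong random string works. A minimal Node.js sketch (the required length is not documented, so 32 random bytes is an assumption):

```typescript
import { randomBytes } from "node:crypto";

// 32 random bytes, hex-encoded: a 64-character secret.
const secret = randomBytes(32).toString("hex");
console.log(`CONFIG_SECRET=${secret}`);
```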
Settings Modal
Click the gear icon in the nav bar to open the settings modal. It allows you to:
- View and edit the Kibana URL and Workflow ID
- See whether configuration is managed via environment variables (displays an info banner when env vars are active)
- Run a "Test" button that saves current settings and validates the Kibana connection via POST /api/settings/test
- UI changes override environment variables when saved
UI-Based Setup
If environment variables are not set, you can configure Lurelit through the Settings modal in the navigation bar. Click the "Setup" / "Configured" button to open it. The settings are encrypted and persisted to .smish-config.enc on disk.
Authentication
Lurelit uses Kibana native credentials. When you log in, your username and password are validated against the Kibana /api/status endpoint. On success, an encrypted session cookie (smish_session) is set with a 24-hour TTL. All subsequent API calls to Kibana use Basic auth derived from the stored session.
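A minimal sketch of such a session-cookie scheme, assuming AES-256-GCM with an scrypt-derived key as described. The payload layout, salt handling, and function names are assumptions for illustration, not Lurelit's exact implementation:

```typescript
import { scryptSync, randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// Encrypt a session payload into an opaque cookie value.
function seal(payload: object, secret: string): string {
  const salt = randomBytes(16);
  const key = scryptSync(secret, salt, 32);  // 256-bit key from CONFIG_SECRET
  const iv = randomBytes(12);                // GCM nonce
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(JSON.stringify(payload), "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  // Cookie value = salt | iv | auth tag | ciphertext
  return Buffer.concat([salt, iv, tag, ct]).toString("base64url");
}

// Decrypt and authenticate a cookie value back into the payload.
function open(token: string, secret: string): object {
  const buf = Buffer.from(token, "base64url");
  const salt = buf.subarray(0, 16), iv = buf.subarray(16, 28);
  const tag = buf.subarray(28, 44), ct = buf.subarray(44);
  const key = scryptSync(secret, salt, 32);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // tampering makes final() throw
  const pt = Buffer.concat([decipher.update(ct), decipher.final()]);
  return JSON.parse(pt.toString("utf8"));
}
```

Because GCM is authenticated, a tampered or re-keyed cookie fails decryption outright rather than yielding garbage.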
Kibana Prerequisites
Before deploying Lurelit, ensure the following components are available and configured in your environment.
Infrastructure
- Elastic Stack 9.4+ with Security Workflows (GA) and Agent Builder enabled
API Keys Required
- Anthropic (Claude vision), VirusTotal, and urlscan.io; a Slack bot token is optional
Kibana User Privileges
- Workflow privileges: workflowsManagement (execute, read, readExecution)
- Connector execution: actions:execute
- Read access to logs-*, filebeat-*, and .alerts-security.* indices
Version Compatibility
- Elastic Stack: 9.4+
- Kibana: 9.4+
- Security Workflows: GA (9.4+)
Connectors
Anthropic API Connector
Powers the AI screenshot analysis using Claude Opus 4.7 with vision capabilities.
- Open your workflow in the YAML editor
- For the analyze_screenshot step, create an HTTP connector
- Set the URL to: https://api.anthropic.com/v1/messages
Authentication headers:
x-api-key: <your-anthropic-api-key>
anthropic-version: 2023-06-01
content-type: application/json
VirusTotal Connectors
Three separate HTTP connectors are needed for full VirusTotal coverage. All share the same API key but target different endpoints.
1. VT URL Submit — submits URLs for scanning
URL: https://www.virustotal.com/api/v3/urls
Method: POST
Header: x-apikey: <your-virustotal-api-key>
2. VT Base — polls analysis results and general queries
URL: https://www.virustotal.com/api/v3
Method: GET
Header: x-apikey: <your-virustotal-api-key>
3. VT Files — file hash lookups
URL: https://www.virustotal.com/api/v3/files
Method: GET
Header: x-apikey: <your-virustotal-api-key>
IP lookups use Kibana's native VirusTotal connector (the virustotal.getIpReport action type). If you have a native VirusTotal connector already configured in Stack Management → Connectors, you can reference its auto-generated ID in the workflow.
urlscan.io Connector
Searches urlscan.io for domain/URL/IP reputation data.
URL: https://urlscan.io/api/v1/search/
Method: GET
Header: API-Key: <your-urlscan-api-key>
The workflow sends GET requests with query parameters q (search query) and size (result limit).
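The request shape above can be sketched as a small URL builder; the example query in the test is hypothetical, and the default size is an assumption:

```typescript
// Build a urlscan.io search URL with q and size query parameters.
function urlscanSearchUrl(q: string, size = 10): string {
  const u = new URL("https://urlscan.io/api/v1/search/");
  u.searchParams.set("q", q);          // e.g. "domain:example.com"
  u.searchParams.set("size", String(size));
  return u.toString();
}
```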
Slack Connector (Optional)
If you want analysis reports posted to a Slack channel, create an HTTP connector for the Slack API.
URL: https://slack.com/api/chat.postMessage
Method: POST
Header: Authorization: Bearer <your-slack-bot-token>
Header: Content-Type: application/json
The workflow posts Block Kit formatted messages. Update the channel field in the send_slack_report step to your channel ID.
AI Inference Endpoints
The ai.agent steps use Kibana's managed inference connectors. These are pre-configured by Elastic and route through inference infrastructure to the Anthropic API. You should not need to create these manually — they appear automatically when you have an Anthropic integration configured.
- .anthropic-claude-4.6-opus-chat_completion — Used by the summarize_enrichment AI agent step
- .anthropic-claude-4.6-sonnet-chat_completion — Used by the Slack message formatting step
Workflow Setup
A ready-to-use workflow definition ships in the workflow/ directory. You do NOT need to build it yourself — simply import it into your Kibana instance.
What the Workflow Does
The workflow executes a multi-step agentic pipeline on each submitted screenshot: AI vision analysis and classification, IOC extraction, VirusTotal and urlscan.io enrichment, an enrichment summary, an optional human-approval gate, an ES|QL hunt across your environment logs, and final report generation.
Importing the Workflow
- Navigate to Security → Workflows in Kibana
- Click Create new workflow
- Switch to the YAML editor
- Paste the contents of workflow/phishing-smishing-screenshot-analyzer.yaml from this project
- Kibana will assign the workflow a unique ID automatically
- Update all connector-id references in the YAML to match YOUR connector IDs (see Connectors section)
- Ensure the trigger is set to manual
- Save and enable the workflow
Updating Connector References
After creating your connectors within the Workflows UI, update these connector-id fields in the YAML:
# Replace these with YOUR connector IDs:
analyze_screenshot: connector-id: <your-anthropic-connector-id>
vt_url_submit: connector-id: <your-vt-url-connector-id>
vt_url_poll_check: connector-id: <your-vt-base-connector-id>
vt_hash_lookup: connector-id: <your-vt-files-connector-id>
vt_ip_lookup: connector-id: <your-vt-native-connector-id>
urlscan_*_search: connector-id: <your-urlscan-connector-id>
send_slack_report: connector-id: <your-slack-connector-id>
format_slack_message: connector-id: <your-inference-endpoint>
Workflow Inputs
The workflow accepts two inputs when triggered:
- image_base64 (string, required) — The base64-encoded screenshot data
- media_type (string, default: "image/png") — MIME type of the image
Finding Your Workflow ID
After importing the workflow into Kibana, you need to find the auto-generated workflow ID and configure it in Lurelit.
Method 1: From the URL
- Navigate to Security → Workflows
- Click on your imported workflow to open it
- Look at the browser URL — it will contain the workflow ID:
https://your-kibana.example.com/app/security/workflows/<workflow-id>
# Example:
https://localhost:5601/app/security/workflows/phishing-smishing-screenshot-analyzer
Method 2: From the Workflow Details Panel
- Open the workflow in the editor
- Click the Info or Details panel
- The workflow ID is displayed in the metadata section
Method 3: Via API
# List all workflows and find yours:
curl -u 'user:pass' -H 'kbn-xsrf: true' \
'https://your-kibana/api/workflows/workflows' | jq '.[]'
Configuring in Lurelit
Once you have the workflow ID, enter it in one of two ways:
- Environment variable: Set WORKFLOW_ID=your-workflow-id in .env.local
- Settings UI: Click the Settings button in the nav bar and paste the workflow ID into the "Workflow ID" field
Permissions
The user account used by Lurelit to connect to Kibana needs specific privileges to execute workflows, run the AI agent, and access security data.
Kibana Space Privileges
Feature Privilege Purpose
─────────────────────────────────────────────────────────────────────
Workflows All Create, read, execute workflows
Security Read Access security alerts for hunt step
Actions All Execute connectors (VT, urlscan, etc.)
AI Agent All Execute ai.agent steps via Agent Builder
Workflow Execution Privileges
- workflowsManagement:execute — Trigger workflow executions via the API
- workflowsManagement:read — Read workflow definitions (needed to verify workflow exists)
- workflowsManagement:readExecution — Poll execution status, read step logs and output
Agent Builder Execution
The ai.agent steps in the workflow execute as the Elastic AI Agent. The user running the workflow must have:
- Permission to invoke Agent Builder agents
- The elastic-ai-agent must be set to public visibility
- ES|QL execution privileges (the agent runs queries autonomously)
Index Read Access (Hunt Step)
The environment hunt step runs ES|QL queries across your data. The executing user needs read access to:
logs-* Network, DNS, HTTP, and endpoint logs
filebeat-* Filebeat ingested log data
.alerts-security.* Security alerts (detection rules)
packetbeat-* Network packet data (if available)
winlogbeat-* Windows event logs (if available)
Connector Execution
The user must have the actions:execute privilege to call connectors during workflow execution. This is typically granted via the Actions and Connectors → All feature privilege in the Kibana space.
Using Lurelit
Uploading Screenshots
From the home page, drag-and-drop or click to select one or more screenshot files. Supported formats: PNG, JPG/JPEG, WEBP. Images are base64-encoded and sent to the workflow as input.
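The input preparation can be sketched as follows. The function name is illustrative; in the browser the app would read the file via FileReader, so a Buffer stands in here:

```typescript
// Build the workflow inputs (image_base64, media_type) from raw image bytes.
function buildWorkflowInput(image: Buffer, mediaType = "image/png") {
  return {
    image_base64: image.toString("base64"), // base64-encode the raw bytes
    media_type: mediaType,                  // MIME type of the screenshot
  };
}
```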
Single vs. Bulk Upload
- Single file — Redirects directly to the results page for that analysis
- Multiple files — All are submitted in parallel; you're redirected to the History page to track them
Real-Time Analysis Progress
The results page polls the execution status every 3 seconds. A live elapsed timer counts up during analysis. Each workflow step appears in a timeline as it completes, with a progress bar showing overall completion percentage.
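The polling loop can be sketched generically. The status shape and names are assumptions; the fetcher is injected so the loop itself contains no network code:

```typescript
type ExecStatus = { status: "running" | "completed" | "failed"; steps?: unknown[] };

// Poll an execution until it leaves the "running" state, pausing
// intervalMs between attempts (the UI uses 3000 ms).
async function pollUntilDone(
  fetchStatus: () => Promise<ExecStatus>,
  intervalMs = 3000,
  maxPolls = 200
): Promise<ExecStatus> {
  for (let i = 0; i < maxPolls; i++) {
    const s = await fetchStatus();
    if (s.status !== "running") return s;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error("polling timed out");
}
```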
Understanding the Verdict
When the workflow completes, you get a structured verdict including:
- Classification — Threat or Safe, with confidence percentage
- Attack type — Smishing, phishing, credential harvest, etc.
- Red flags — Specific indicators the AI identified
- IOCs extracted — URLs, domains, IPs, hashes found in the image
- Enrichment results — VirusTotal stats and urlscan.io data per IOC
- Environment hunt — Whether any IOCs were seen in your org's logs
- Attack chain — Step-by-step timeline of the attack progression
Cost Tracking
Each analysis displays an AI cost breakdown showing token usage and estimated cost for the LLM calls made during the workflow.
Features
Human-in-the-Loop Approval
When the workflow isn't confident enough to automatically proceed with environment hunting, it pauses at a waitForInput step and asks for human approval. The UI displays a yellow "Human Approval Required" card with:
- Context summary — The classification, enrichment findings, and why approval is needed (rendered as markdown)
- Approve Hunt — Proceeds with the environment hunt step using the Elastic AI Agent
- Skip Hunt — Finalizes the report without running the hunt
- Optional reason field — Add a note explaining your decision (stored with the execution)
After approval, the workflow resumes via POST /api/resume/[executionId] with the proceed_with_hunt boolean and optional reason.
Cancel Analysis ("Cut the Line")
Running analyses can be cancelled at any time. The cancel button appears in multiple locations:
- Small cancel icon in the Active Analyses Bar (bottom bar, next to each running execution)
- Cancel button on the results page during running analyses
- Cancel option on the History page for active executions
Cancelling calls Kibana's POST /api/workflows/executions/[id]/cancel API to abort the workflow execution server-side.
Active Analyses Bar
A persistent bottom bar appears whenever analyses are running or waiting for input. It polls /api/history every 15 seconds and displays:
- A pulsing teal dot with the count of active analyses
- Each running execution as a clickable chip that navigates to its results page
- Live elapsed time counter for running executions, or a "WAITING" badge for HITL-paused ones
- A cancel button per execution (with a scissors icon, titled "Cut the line")
- A "View All" link to the History page and a dismiss button
The bar auto-reappears when new analyses start. It uses a 1-second tick interval for elapsed time updates.
Cost Tracking
Each completed analysis shows an estimated AI cost breakdown, expandable via a $X.XX est. cost summary line. The breakdown includes:
- Per-step detail — Input/output token counts, model used, number of LLM calls, and cost for each step (e.g., Analyze Screenshot, Summarize Enrichment, Hunt in Environment)
- Model auto-detection — The system resolves the model from connector IDs (e.g., .anthropic-claude-4.6-opus-chat_completion → Opus 4.6) or falls back to known AI agent step defaults
- Multi-provider pricing — Supports Anthropic (Opus 4.7/4.6, Sonnet 4.5/4, Haiku 3.5/3), OpenAI (GPT-4o, GPT-4o Mini, GPT-4 Turbo, GPT-4), and Google (Gemini 2.5 Pro/Flash, 2.0 Flash)
- Total summary — Aggregated token count and total estimated cost across all steps
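The per-step math reduces to token counts scaled by per-model rates. A sketch with placeholder prices (the rates below are assumptions, not Lurelit's actual pricing table):

```typescript
// Placeholder USD prices per million tokens; illustrative only.
const PRICES: Record<string, { input: number; output: number }> = {
  opus: { input: 15, output: 75 },
  sonnet: { input: 3, output: 15 },
};

// Cost of one step: tokens scaled by the model's per-million-token rates.
function stepCost(model: string, inputTokens: number, outputTokens: number): number {
  const p = PRICES[model];
  if (!p) return 0; // unknown model: no estimate
  return (inputTokens * p.input + outputTokens * p.output) / 1_000_000;
}
```

The total summary is then just the sum of stepCost over all steps.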
User Avatars
Click your avatar circle in the nav bar to upload a profile photo. The built-in avatar editor provides:
- Circular crop preview — 200px preview with a teal-glowing border
- Drag to reposition — Click and drag to pan the image within the circle
- Zoom slider — Scale from 0.5x to 3x to frame your photo
- Server-side storage — Saved as a 256px JPEG data URL in the .avatars/ directory, keyed by username
Avatars appear in the nav bar, on the results page ("Submitted by" attribution), and in history analytics.
Multi-File Parallel Analysis
Select multiple screenshots and submit them all at once. Each is processed independently by the workflow in parallel.
Attack Chain Rendering
When the workflow reconstructs an attack chain (step-by-step progression of the attack), the results page renders it as a visual timeline. Each step in the chain shows the attacker's actions in sequence, helping analysts understand the full attack flow at a glance.
IOC Enrichment Results
Each extracted IOC is enriched with VirusTotal detection stats (malicious, suspicious, harmless, undetected engine counts) and urlscan.io search results. Results are displayed inline with color-coded severity indicators.
Environment Threat Hunting
The workflow runs ES|QL queries against your Elasticsearch data to determine if any extracted IOCs have been observed in DNS logs, HTTP connections, TLS handshakes, or other network events in your environment.
Markdown Rendering
AI-generated analysis summaries and reports are rendered as styled markdown with headings, lists, code blocks, and emphasis.
Screenshot Persistence & Lightbox
Uploaded screenshots are stored in localStorage (keyed by execution ID) so they persist across page navigations. A lightbox component allows full-screen viewing.
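The persistence pattern can be sketched against a Storage-like interface (the key prefix and function names are hypothetical; in the app this would run against window.localStorage):

```typescript
interface StorageLike {
  getItem(k: string): string | null;
  setItem(k: string, v: string): void;
}

const KEY_PREFIX = "lurelit:screenshot:"; // hypothetical key prefix

// Persist a screenshot data URL keyed by its execution ID.
function saveScreenshot(store: StorageLike, executionId: string, dataUrl: string): void {
  store.setItem(KEY_PREFIX + executionId, dataUrl);
}

// Retrieve it on a later page load; null if never saved.
function loadScreenshot(store: StorageLike, executionId: string): string | null {
  return store.getItem(KEY_PREFIX + executionId);
}
```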
Landing Page
A showcase page is available at /landing. It provides a marketing-style overview of Lurelit's capabilities, the three-step analysis flow, feature highlights, and links to the documentation and login pages. Useful for sharing with stakeholders or embedding in internal portals.
History & Analytics
History Dashboard
The History page shows all past executions with filtering by status (completed, failed, running, threats, safe). Stats cards show totals, threat counts, and average analysis time. A metrics dashboard renders historical trend data.
Sankey Diagram
The metrics view includes a Sankey diagram that visualizes the flow of submissions through classification stages — from upload through AI analysis, enrichment, and final verdict — showing how many analyses resulted in threats vs. safe verdicts.
Metrics & Filters
Aggregate metrics are available via GET /api/metrics, providing total counts, threat/safe breakdowns, and timing data. The History page supports filtering by status, date range, and user.
Architecture
Tech Stack
- Next.js (App Router)
- React (client components)
- TypeScript (strict mode)
- Styling: utility classes + custom props
API Routes
POST /api/auth/login — Validate credentials, create session
POST /api/auth/logout — Destroy session cookie
GET /api/auth/me — Check authentication status
POST /api/submit — Submit screenshot for analysis
GET /api/status/[id] — Poll execution status & results
POST /api/cancel/[id] — Cancel a running execution
POST /api/resume/[id] — Resume a HITL-paused execution
GET /api/history — List past executions (paginated)
GET /api/metrics — Aggregate threat/safe counts
GET /api/settings — Read config state
POST /api/settings — Save Kibana URL + Workflow ID
DELETE /api/settings — Clear stored config
POST /api/settings/test — Test Kibana connectivity
GET /api/avatar — Get current user avatar
POST /api/avatar — Upload/update avatar
DELETE /api/avatar — Remove avatar
GET /api/avatar/[user] — Get another user's avatar
POST /api/setup/check — Validate Kibana connection (setup wizard)
POST /api/setup/save — Save setup config
POST /api/setup/validate-workflow — Verify workflow ID exists
Authentication Flow
Login validates credentials against the Kibana API. On success, an encrypted session cookie is set (AES-256-GCM, scrypt-derived key from CONFIG_SECRET). The middleware checks this cookie on every request and redirects unauthenticated users to /login. Sessions expire after 24 hours.
Data Flow
User uploads screenshot
→ POST /api/submit (base64 image)
→ Kibana POST /api/workflows/workflow/{id}/run
→ Returns executionId
Results page polls:
→ GET /api/status/{executionId}
→ Kibana GET /api/workflows/executions/{id}
→ Returns steps[], output, status
Human-in-the-Loop (if triggered):
→ Status returns "waiting_for_input"
→ UI shows HumanApproval card
→ POST /api/resume/{executionId} { proceed_with_hunt, reason }
→ Kibana POST /api/workflows/executions/{id}/input
Cancel (if requested):
→ POST /api/cancel/{executionId}
→ Kibana POST /api/workflows/executions/{id}/cancel
Workflow pipeline (in Kibana):
1. AI analyzes screenshot (Anthropic Claude)
2. Parse structured classification + IOCs
3. For each IOC: VirusTotal + urlscan.io enrichment
4. Summarize enrichment findings
5. [HITL gate] waitForInput if confidence is ambiguous
6. ES|QL hunt in environment logs
7. Generate final report with verdict
Testing
Test Data Seeding
The scripts/seed-test-data.sh script injects 9 test documents into Elasticsearch that match IOCs from the test screenshots. This makes the "Environment Threat Hunt" step find real hits.
# Usage:
./scripts/seed-test-data.sh [ELASTICSEARCH_URL] [USERNAME] [PASSWORD]
# Defaults:
./scripts/seed-test-data.sh http://localhost:9200 elastic changeme
The script creates network event documents for DNS lookups, HTTP requests, and TLS connections matching known-bad IOCs (usps-redelivery.info, ezpass.com-licy.win, loginmicrosoftonline.uk, etc.).
Example Screenshots
The examples/screenshots/ directory contains ready-to-use PNG screenshots for testing all analysis paths. Upload these directly to Lurelit to validate each scenario.
Benign (safe messages for testing false-positive rates):
- benign-email-microsoft-signin.png — Legitimate Microsoft sign-in notification
- benign-sms-amazon-delivery.png — Real Amazon delivery SMS
Phishing/Smishing (threats):
- phishing-email-microsoft365.png — M365 credential phishing
- phishing-email-real-ioc-microsoft365.png — Same with real VT-flagged IOCs
- smishing-sms-usps-delivery.png — USPS package delivery smish
- smishing-sms-ezpass-toll.png — E-ZPass toll scam smish
- smishing-real-ioc-usps.png — USPS smish with real VT-flagged URL
- smishing-real-ioc-ezpass.png — E-ZPass smish with real VT-flagged URL
Spam:
- spam-email-cold-outreach.png — B2B cold outreach email
- spam-sms-marketing.png — SMS marketing spam
HITL Testing:
- hitl-trigger-no-hunt-loyalty-reward.png — Triggers HITL approval, analyst can skip the hunt
- hitl-trigger-approved-hunt.png — Triggers HITL approval, analyst approves the hunt
Screenshots with real-ioc in the filename contain IOCs known to be flagged by VirusTotal. When combined with the seed data script (scripts/seed-test-data.sh), these produce both VT enrichment hits AND environment hunt hits — providing an end-to-end test of the full analysis pipeline including threat hunting.
Troubleshooting
Authentication Failures
- 401 Unauthorized on login — Verify your Kibana credentials are correct. Lurelit authenticates against /api/status using Basic auth. Ensure the user exists in the Kibana native realm (not just SAML/OIDC).
- Session expired mid-analysis — Sessions have a 24-hour TTL. If an analysis takes longer than expected, re-login and resubmit.
- CORS errors in browser console — Lurelit's API routes proxy to Kibana server-side. If you see CORS errors, ensure KIBANA_URL is reachable from the Next.js server, not just the browser.
Workflow Not Found
- 404 when submitting — The workflow ID in Lurelit settings must exactly match the ID in Kibana. Check with: GET /api/workflows/workflow/your-workflow-id
- Workflow exists but won't trigger — Ensure the workflow is enabled and has a manual trigger type. Disabled workflows return 400.
- Wrong Kibana space — Workflows are space-scoped. Ensure Lurelit is pointed at the correct Kibana space (append /s/space-name to your Kibana URL if using a non-default space).
Enrichment Returning Empty
- VirusTotal returns no data — Check that your VirusTotal connectors have valid API keys. Free VT accounts have rate limits (4 req/min).
- urlscan.io returns empty results — New/uncommon domains may not be in urlscan's database. This is expected. The workflow uses on-failure: continue so this won't break the pipeline.
- All enrichment steps skipped — If the AI analysis finds no IOCs (legitimate message), enrichment is conditionally skipped. This is working as intended.
Hunt Step Skipped or Empty
- Hunt step not executed — The hunt only runs when has_malicious_iocs: true from the summarize step. If the message is legitimate or IOCs are not deemed malicious, the hunt is intentionally skipped.
- Hunt runs but finds nothing — This usually means the malicious IOCs have not appeared in your environment's logs. Verify the user has read access to logs-*, filebeat-*, and .alerts-security.* indices.
- Hunt step times out (500s) — The Elastic AI Agent may be running complex ES|QL queries across large indices. Consider adding time bounds to your data retention or check cluster health.
- Agent Builder not available — If the ai.agent steps fail with agent-not-found errors, ensure the elastic-ai-agent is visible in Agent Builder (accessible from the Agents item in the left-hand navigation) and has public visibility.
Connector Errors
- Connector not found — Workflow references connectors by their auto-generated ID. Verify your connector IDs match those in the workflow YAML. Open the workflow editor to check for validation errors.
- Anthropic 529/overloaded — The Anthropic connector has a 120s timeout. During high load, the API may be slow. The step will fail and mark the execution as failed.
- Rate limiting — VirusTotal free API allows 4 requests per minute. With multiple IOCs, you may hit this limit. Consider a premium VT API key for production use.
Common Error Codes
HTTP 401 → Kibana credentials invalid or session expired
HTTP 403 → User lacks required privileges (check Roles)
HTTP 404 → Workflow ID mismatch or wrong Kibana space
HTTP 400 → Workflow disabled, or invalid input format
HTTP 500 → Workflow execution error (check Kibana logs)
TIMEOUT → Analysis step exceeded timeout (default 120s for AI, 500s for hunt)