# Quick Start
## Prerequisites

- Docker (recommended), or
- Rust toolchain (1.75+) for building from source
## Run with Docker

```bash
docker run -d --name bloop \
  -p 5332:5332 \
  -v bloop_data:/data \
  -e BLOOP__AUTH__HMAC_SECRET=your-secret-here \
  ghcr.io/jaikoo/bloop:latest
```
## Build from Source

```bash
git clone https://github.com/jaikoo/bloop.git
cd bloop
cargo build --release
./target/release/bloop --config config.toml
```
To include the optional features (DuckDB-powered analytics and/or LLM tracing):

```bash
# Analytics only
cargo build --release --features analytics

# LLM tracing only
cargo build --release --features llm-tracing

# Both
cargo build --release --features "analytics,llm-tracing"
```
## Send Your First Error

```bash
# Your project API key (from Settings → Projects)
API_KEY="bloop_abc123..."

# Compute the HMAC-SHA256 signature of the body using the project key
BODY='{"timestamp":1700000000,"source":"api","environment":"production","release":"1.0.0","error_type":"RuntimeError","message":"Something went wrong"}'
SIG=$(echo -n "$BODY" | openssl dgst -sha256 -hmac "$API_KEY" | awk '{print $2}')

# Send to bloop
curl -X POST http://localhost:5332/v1/ingest \
  -H "Content-Type: application/json" \
  -H "X-Project-Key: $API_KEY" \
  -H "X-Signature: $SIG" \
  -d "$BODY"
```
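The signing steps above can be wrapped in a small helper. `bloop_send` here is a hypothetical convenience function for your own scripts, not part of bloop itself; it targets the documented `/v1/ingest` endpoint and uses `printf '%s'` instead of `echo -n` to avoid shell portability quirks:

```shell
# Hypothetical helper: HMAC-sign a JSON body with the project key and
# POST it to bloop's documented /v1/ingest endpoint.
bloop_send() {
  local api_key="$1" body="$2" sig
  # openssl prints "NAME(stdin)= <hex>"; the second field is the digest
  sig=$(printf '%s' "$body" | openssl dgst -sha256 -hmac "$api_key" | awk '{print $2}')
  curl -sS -X POST http://localhost:5332/v1/ingest \
    -H "Content-Type: application/json" \
    -H "X-Project-Key: $api_key" \
    -H "X-Signature: $sig" \
    -d "$body"
}

# Usage:
# bloop_send "$API_KEY" '{"timestamp":1700000000,"source":"api","error_type":"RuntimeError","message":"boom"}'
```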
Open http://localhost:5332 in your browser, register a passkey, and view your error on the dashboard.
# Configuration
Bloop reads from `config.toml` in the working directory. Every value can be overridden via environment variables using double-underscore separators: `BLOOP__SECTION__KEY`.
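The mapping is mechanical. A quick sketch of the naming scheme (illustrative only, not bloop's actual parsing code):

```shell
# BLOOP__SECTION__KEY → section.key: strip the prefix, turn double
# underscores into dots, lowercase the rest. Single underscores inside
# a key (e.g. HMAC_SECRET) are preserved.
env_to_key() {
  printf '%s' "$1" | sed -e 's/^BLOOP__//' -e 's/__/./g' | tr '[:upper:]' '[:lower:]'
}

env_to_key BLOOP__SERVER__PORT; echo        # → server.port
env_to_key BLOOP__AUTH__HMAC_SECRET; echo   # → auth.hmac_secret
```

So `docker run -e BLOOP__SERVER__PORT=8080 ...` overrides `server.port` without touching the file.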
## Full Reference
```toml
# ── Server ──
[server]
host = "0.0.0.0"
port = 5332

# ── Database ──
[database]
path = "bloop.db"            # SQLite file path
pool_size = 4                # deadpool-sqlite connections

# ── Ingestion ──
[ingest]
max_payload_bytes = 32768    # Max single request body
max_stack_bytes = 8192       # Max stack trace length
max_metadata_bytes = 4096    # Max metadata JSON size
max_message_bytes = 2048     # Max error message length
max_batch_size = 50          # Max events per batch request
channel_capacity = 8192      # MPSC channel buffer size

# ── Pipeline ──
[pipeline]
flush_interval_secs = 2      # Flush timer
flush_batch_size = 500       # Events per batch write
sample_reservoir_size = 5    # Sample occurrences kept per fingerprint

# ── Retention ──
[retention]
raw_events_days = 7          # Raw event TTL
prune_interval_secs = 3600   # How often to run cleanup

# ── Auth ──
[auth]
hmac_secret = "change-me-in-production"
rp_id = "localhost"                  # WebAuthn relying party ID
rp_origin = "http://localhost:5332"  # WebAuthn origin
session_ttl_secs = 604800            # Session lifetime (7 days)

# ── Rate Limiting ──
[rate_limit]
per_second = 100
burst_size = 200

# ── Alerting ──
[alerting]
cooldown_secs = 900          # Min seconds between re-fires

# ── SMTP (for email alerts) ──
[smtp]
enabled = false
host = "smtp.example.com"
port = 587
username = ""
password = ""
from = "[email protected]"
starttls = true

# ── Analytics (optional, requires --features analytics) ──
[analytics]
enabled = true
cache_ttl_secs = 60
zscore_threshold = 2.5

# ── LLM Tracing (optional, requires --features llm-tracing) ──
[llm_tracing]
enabled = true                    # Runtime toggle
channel_capacity = 4096           # Bounded channel size
flush_interval_secs = 2           # Time-based flush trigger
flush_batch_size = 200            # Count-based flush trigger
max_spans_per_trace = 100         # Validation limit
max_batch_size = 50               # Max traces per batch POST
default_content_storage = "none"  # none | metadata_only | full
cache_ttl_secs = 30               # Query result cache TTL

# ── LLM Proxy (optional, requires --features llm-tracing) ──
# The proxy enables zero-instrumentation tracing by acting as a reverse proxy.
# Your app points its LLM client at bloop instead of the provider directly.
[llm_tracing.proxy]
enabled = true                                       # Enable proxy endpoints
providers = ["openai", "anthropic"]                  # Supported providers
openai_base_url = "https://api.openai.com/v1"        # OpenAI API base
anthropic_base_url = "https://api.anthropic.com/v1"  # Anthropic API base
capture_prompts = true       # Store full prompts
capture_completions = true   # Store completion text
capture_streaming = true     # Capture streaming responses
```
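With the proxy enabled, pointing a client at bloop is usually just a base-URL change. The exact proxy route prefix below is an assumption for illustration; check your deployment for the real path:

```shell
# Zero-instrumentation tracing: aim an OpenAI-compatible client at bloop's
# proxy instead of api.openai.com. The "/v1/proxy/openai" path is a
# hypothetical example — consult your bloop deployment for the actual route.
export OPENAI_BASE_URL="http://localhost:5332/v1/proxy/openai"
export OPENAI_API_KEY="sk-..."   # your provider key, presumably forwarded upstream by the proxy
```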
## Environment Variables
| Variable | Overrides | Example |
|---|---|---|
| `BLOOP__SERVER__PORT` | `server.port` | `8080` |
| `BLOOP__DATABASE__PATH` | `database.path` | `/data/bloop.db` |
| `BLOOP__AUTH__HMAC_SECRET` | `auth.hmac_secret` | `my-production-secret` |
| `BLOOP__AUTH__RP_ID` | `auth.rp_id` | `errors.myapp.com` |
| `BLOOP__AUTH__RP_ORIGIN` | `auth.rp_origin` | `https://errors.myapp.com` |
| `BLOOP__LLM_TRACING__ENABLED` | `llm_tracing.enabled` | `true` |
| `BLOOP__LLM_TRACING__DEFAULT_CONTENT_STORAGE` | `llm_tracing.default_content_storage` | `full` |
| `BLOOP_SLACK_WEBHOOK_URL` | (direct) | Slack incoming webhook URL |
| `BLOOP_WEBHOOK_URL` | (direct) | Generic webhook URL |
Note: `BLOOP_SLACK_WEBHOOK_URL` and `BLOOP_WEBHOOK_URL` are read directly from the environment (not through the config system), so they use single underscores.
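Since `auth.hmac_secret` ships with a placeholder, generate a random value before deploying. One way, using openssl and the environment override:

```shell
# Generate a random 32-byte secret (64 hex characters) for HMAC signing
# and supply it via the config override.
export BLOOP__AUTH__HMAC_SECRET="$(openssl rand -hex 32)"
echo "${#BLOOP__AUTH__HMAC_SECRET}"   # → 64
```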
# Architecture
Bloop is a single async Rust process. All components run as Tokio tasks within one binary.
## Storage Layers
| Layer | Retention | Purpose |
|---|---|---|
| Raw events | 7 days (configurable) | Full event payloads for debugging |
| Aggregates | Indefinite | Error counts, first/last seen, status |
| Sample reservoir | Indefinite | 5 sample occurrences per fingerprint |
## Fingerprinting
Every ingested error gets a deterministic fingerprint. The algorithm:
- Normalize the message: strip UUIDs → strip IPs → strip all numbers → lowercase
- Extract top stack frame: skip framework frames (UIKitCore, node_modules, etc.), strip line numbers
- Hash: `xxhash3(source + error_type + route + normalized_message + top_frame)`
This means `"Connection refused at 10.0.0.1:5432"` and `"Connection refused at 192.168.1.2:3306"` produce the same fingerprint. You can also supply your own `fingerprint` field to override the computed value.
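A rough sketch of the normalization step in shell (the real implementation is Rust; the regexes and `<placeholder>` tokens here are illustrative, not bloop's actual ones):

```shell
# Sketch of bloop's message normalization, in the documented order:
# strip UUIDs, then IPs, then all remaining numbers, then lowercase.
normalize() {
  printf '%s' "$1" \
    | sed -E 's/[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}/<uuid>/g' \
    | sed -E 's/([0-9]{1,3}\.){3}[0-9]{1,3}/<ip>/g' \
    | sed -E 's/[0-9]+/<n>/g' \
    | tr '[:upper:]' '[:lower:]'
}

normalize "Connection refused at 10.0.0.1:5432"; echo
# → connection refused at <ip>:<n>
normalize "Connection refused at 192.168.1.2:3306"; echo
# → connection refused at <ip>:<n>   (identical input to the hash)
```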
## Backpressure
The ingestion handler pushes events into a bounded MPSC channel (default capacity: 8192). If the channel is full:
- The event is dropped
- The client still receives `200 OK`
- A warning is logged server-side

Bloop never returns `429` to your clients. Mobile apps and APIs should not retry errors: if the buffer is full, the event wasn't critical enough to block on.