SDK: Rust
Install
Add to your Cargo.toml:
[dependencies]
bloop-client = "0.1"
use bloop_client::{BloopClient, SpanType, SpanStatus, TraceStatus};
// Configure once at startup
let client = BloopClient::builder()
.endpoint("https://errors.myapp.com")
.project_key("bloop_abc123...")
.environment("production")
.release("0.1.0")
.build()?;
// Capture an error manually
client.capture_error("TypeError", "Cannot read property 'id' of undefined")
.route("/api/users")
.http_status(500)
.send()
.await?;
// Capture from any std::error::Error
// (here risky_operation() is assumed to return Result<_, Box<dyn std::error::Error>>)
if let Err(e) = risky_operation() {
client.capture_exception(&*e)
.route("/api/process")
.send()
.await?;
}
// Flush on shutdown
client.shutdown().await?;
LLM Tracing
Track LLM calls with token usage, cost, and latency. Requires the llm-tracing feature on your bloop server.
// Manual trace + span
let mut trace = client.start_trace("chat-completion")
.session_id("session-abc")
.user_id("user-123")
.input("What is the weather?")
.build();
let span = trace.start_span(SpanType::Generation)
.name("gpt-4o call")
.model("gpt-4o")
.provider("openai")
.build();
// ... make LLM call ...
span.end()
.input_tokens(100)
.output_tokens(50)
.cost(0.0025)
.status(SpanStatus::Ok)
.output("The weather is sunny.")
.finish();
trace.end(TraceStatus::Completed)
.output("The weather is sunny.")
.finish();
// Closure-based convenience
let result = client.trace_generation(
"chat-completion", "gpt-4o", "openai",
|span| async {
let response = call_llm().await?;
span.set_usage(100, 50, 0.0025);
Ok(response)
},
).await?;
// RAII guard — auto-ends on drop
{
let _trace = client.trace_guard("chat-completion");
let _span = _trace.span_guard(SpanType::Generation, "gpt-4o", "openai");
// completed on drop, error if unwinding from panic
}
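The guard mechanics are ordinary Rust RAII. A toy sketch of how such a guard might work (illustrative only, not the SDK's actual types) combines `Drop` with `std::thread::panicking()` to pick the status:

```rust
// Toy sketch of an auto-ending trace guard (NOT the SDK's real type).
// On drop it reports "completed" on normal exit, "error" while unwinding.
struct TraceGuard {
    name: String,
}

impl TraceGuard {
    fn new(name: &str) -> Self {
        TraceGuard { name: name.to_string() }
    }
}

impl Drop for TraceGuard {
    fn drop(&mut self) {
        // std::thread::panicking() is true while unwinding from a panic
        let status = if std::thread::panicking() { "error" } else { "completed" };
        println!("trace {} ended with status {}", self.name, status);
    }
}

fn main() {
    {
        let _trace = TraceGuard::new("chat-completion");
        // ... work happens here ...
    } // prints: trace chat-completion ended with status completed
}
```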
Features
- Builder pattern — Idiomatic Rust builder for client, trace, and span construction.
- capture_exception — Takes `&dyn std::error::Error`, extracts type name and error chain for the stack trace.
- install_panic_hook — Optional global panic hook that captures panics as error events with full backtrace.
- RAII guards — `trace_guard()` and `span_guard()` auto-end on drop. Status is `completed` on normal exit, `error` if unwinding.
- Tower middleware — Optional `tower` feature provides an error-capturing layer for axum and other Tower-based frameworks.
- Async batch flush — Events and traces are buffered and flushed via `reqwest` on a background tokio task (default 5s interval).
- Minimal dependencies — `reqwest`, `hmac`/`sha2`, `serde`, `uuid`, `tokio`. No heavyweight frameworks.
Feature Flags
| Flag | Default | Description |
|---|---|---|
| `tracing` | Yes | LLM trace/span API |
| `tower` | No | Tower/axum error middleware layer |
| `panic-hook` | No | Global panic hook for error capture |
The Rust SDK uses the same HMAC-SHA256 crates (hmac, sha2) as the bloop server itself. Cost values are specified in dollars (float) and converted to microdollars on the server side.
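The microdollar conversion is a straight multiply: `0.0025` dollars becomes `2500` microdollars. A sketch of the server-side arithmetic (illustrative, not bloop's actual code):

```rust
// Sketch of the dollars -> microdollars conversion the server performs
// (illustrative; not bloop's actual implementation).
fn to_microdollars(cost_dollars: f64) -> i64 {
    (cost_dollars * 1_000_000.0).round() as i64
}

fn main() {
    assert_eq!(to_microdollars(0.0025), 2500);
    assert_eq!(to_microdollars(1.5), 1_500_000);
    println!("0.0025 dollars = {} microdollars", to_microdollars(0.0025));
}
```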
SDK: TypeScript / Node.js
IngestEvent Payload
interface IngestEvent {
timestamp: number; // Unix epoch seconds
source: "ios" | "android" | "api";
environment: string; // "production", "staging", etc.
release: string; // Semver or build ID
error_type: string; // Exception class name
message: string; // Error message
app_version?: string; // Display version
build_number?: string; // Build number
route_or_procedure?: string; // API route or RPC method
screen?: string; // Mobile screen name
stack?: string; // Stack trace
http_status?: number; // HTTP status code
request_id?: string; // Correlation ID
user_id_hash?: string; // Hashed user identifier
device_id_hash?: string; // Hashed device identifier
fingerprint?: string; // Custom fingerprint (overrides auto)
metadata?: Record<string, unknown>; // Arbitrary extra data
}
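Only the six fields without a `?` are required: `timestamp`, `source`, `environment`, `release`, `error_type`, and `message`. A minimal payload can be built like this (the type below is trimmed from the interface above for illustration):

```typescript
// Minimal required IngestEvent fields (optional fields omitted for brevity)
type MinimalIngestEvent = {
  timestamp: number;
  source: "ios" | "android" | "api";
  environment: string;
  release: string;
  error_type: string;
  message: string;
};

const event: MinimalIngestEvent = {
  timestamp: Math.floor(Date.now() / 1000),
  source: "api",
  environment: "production",
  release: "1.2.0",
  error_type: "TypeError",
  message: "Cannot read property 'id' of undefined",
};

console.log(JSON.stringify(event));
```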
Option A: Install the SDK
npm install @dthink/bloop-sdk
import { BloopClient } from "@dthink/bloop-sdk";
const bloop = new BloopClient({
endpoint: "https://errors.myapp.com",
projectKey: "bloop_abc123...", // From project settings
environment: "production",
release: "1.2.0",
});
// Install global handlers to catch uncaught exceptions & unhandled rejections
bloop.installGlobalHandlers();
// Capture an Error object
try {
riskyOperation();
} catch (err) {
bloop.captureError(err, {
route: "POST /api/users",
httpStatus: 500,
});
}
// Capture a structured event
bloop.capture({
errorType: "ValidationError",
message: "Invalid email format",
route: "POST /api/users",
httpStatus: 422,
});
// Express middleware
import express from "express";
const app = express();
app.use(bloop.errorMiddleware());
// Flush on shutdown
await bloop.shutdown();
Option B: Minimal Example (Zero Dependencies)
async function sendToBloop(
endpoint: string,
projectKey: string,
event: Record<string, unknown>,
) {
const body = JSON.stringify(event);
const encoder = new TextEncoder();
const key = await crypto.subtle.importKey(
"raw", encoder.encode(projectKey),
{ name: "HMAC", hash: "SHA-256" }, false, ["sign"],
);
const sig = await crypto.subtle.sign("HMAC", key, encoder.encode(body));
const hex = [...new Uint8Array(sig)]
.map(b => b.toString(16).padStart(2, "0")).join("");
await fetch(`${endpoint}/v1/ingest`, {
method: "POST",
headers: {
"Content-Type": "application/json",
"X-Project-Key": projectKey,
"X-Signature": hex,
},
body,
});
}
// Usage
await sendToBloop("https://errors.myapp.com", "bloop_abc123...", {
timestamp: Math.floor(Date.now() / 1000),
source: "api",
environment: "production",
release: "1.2.0",
error_type: "TypeError",
message: "Cannot read property of undefined",
});
Features (SDK)
- installGlobalHandlers — Captures uncaught exceptions and unhandled promise rejections automatically. Works in both Node.js (`process.on("uncaughtException")`) and browser (`window.addEventListener("error")`) environments.
- Express middleware — `errorMiddleware()` drops into your Express app and captures all unhandled request errors with route and status metadata.
- Batched delivery — Events are buffered and sent in batches automatically.
- Web Crypto HMAC — Works in both Node.js and browser environments.
Events captured by global handlers are tagged with extra metadata so you can distinguish them from manually captured errors:
// Automatically added to metadata:
{
metadata: {
unhandled: true,
mechanism: "uncaughtException" // or "unhandledRejection"
}
}
@dthink/bloop-sdk uses the Web Crypto API internally, so it works in both Node.js and browser environments. Use installGlobalHandlers() to catch errors you might otherwise miss.
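On the receiving end, the signature is recomputed over the raw body and compared in constant time. A hedged sketch of how a server might verify the `X-Signature` header using Node's built-in `crypto` module (illustrative; not bloop's actual handler):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Recompute HMAC-SHA256 over the raw body and compare to the received hex
// signature in constant time (sketch; not bloop's actual verification code).
function verifySignature(projectKey: string, rawBody: string, signatureHex: string): boolean {
  const expected = createHmac("sha256", projectKey).update(rawBody).digest("hex");
  if (expected.length !== signatureHex.length) return false;
  return timingSafeEqual(Buffer.from(expected), Buffer.from(signatureHex));
}

// A signature produced the same way always verifies:
const body = JSON.stringify({ error_type: "TypeError", message: "boom" });
const sig = createHmac("sha256", "bloop_abc123").update(body).digest("hex");
console.log(verifySignature("bloop_abc123", body, sig)); // true
```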
LLM Tracing
Track LLM calls with token usage, cost, and latency. Requires the llm-tracing feature on your bloop server.
import { BloopClient } from "@dthink/bloop-sdk";
const bloop = new BloopClient({
endpoint: "https://errors.myapp.com",
projectKey: "bloop_abc123...",
environment: "production",
release: "1.2.0",
});
// Manual trace + span
const trace = bloop.startTrace("chat-completion", {
sessionId: "session-abc",
userId: "user-123",
input: "What is the weather?",
});
const span = trace.startSpan("generation", {
name: "gpt-4o call",
model: "gpt-4o",
provider: "openai",
});
// ... make LLM call ...
span.end({
inputTokens: 100,
outputTokens: 50,
cost: 0.0025, // dollars — converted to microdollars server-side
status: "ok",
output: "The weather is sunny.",
});
trace.end({ status: "completed", output: "The weather is sunny." });
// Convenience wrapper — auto-creates trace + generation span
const result = await bloop.traceGeneration(
"chat-completion", "gpt-4o", "openai",
async (span) => {
const response = await callLlm();
span.setUsage(100, 50, 0.0025);
return response;
},
);
Span types: generation, tool, retrieval, custom. A trace can contain multiple spans of any type.
SDK: Swift (iOS)
Option A: Install the SDK
Add to your Package.swift dependencies or via Xcode → File → Add Package Dependencies:
.package(url: "https://github.com/jaikoo/bloop-swift.git", from: "0.4.0")
import Bloop
// Configure once at app launch (e.g. in AppDelegate or @main App.init)
BloopClient.configure(
endpoint: "https://errors.myapp.com",
secret: "your-hmac-secret",
projectKey: "bloop_abc123...", // From Settings → Projects
environment: "production",
release: "2.1.0"
)
// Install crash handler (captures SIGABRT, SIGSEGV, etc.)
BloopClient.shared?.installCrashHandler()
// Install lifecycle handlers (flush on background/terminate)
BloopClient.shared?.installLifecycleHandlers()
// Capture an error manually
do {
try riskyOperation()
} catch {
BloopClient.shared?.capture(
error: error,
screen: "HomeViewController"
)
}
// Capture a structured event
BloopClient.shared?.capture(
errorType: "NetworkError",
message: "Request timed out",
screen: "HomeViewController"
)
// Synchronous flush (e.g. before a crash report is sent)
BloopClient.shared?.flushSync()
// Close the client (flushes remaining events)
BloopClient.shared?.close()
Option B: Minimal Example (Zero Dependencies)
import Foundation
import CommonCrypto
struct BloopClient {
let url: URL
let projectKey: String // From Settings → Projects
func send(event: [String: Any]) async throws {
let body = try JSONSerialization.data(withJSONObject: event)
let signature = hmacSHA256(data: body, key: projectKey)
var request = URLRequest(url: url.appendingPathComponent("/v1/ingest"))
request.httpMethod = "POST"
request.setValue("application/json", forHTTPHeaderField: "Content-Type")
request.setValue(projectKey, forHTTPHeaderField: "X-Project-Key")
request.setValue(signature, forHTTPHeaderField: "X-Signature")
request.httpBody = body
let (_, response) = try await URLSession.shared.data(for: request)
guard let http = response as? HTTPURLResponse,
http.statusCode == 200 else {
return // Fire and forget — don't crash the app
}
}
private func hmacSHA256(data: Data, key: String) -> String {
let keyData = key.data(using: .utf8)!
var digest = [UInt8](repeating: 0, count: Int(CC_SHA256_DIGEST_LENGTH))
keyData.withUnsafeBytes { keyBytes in
data.withUnsafeBytes { dataBytes in
CCHmac(CCHmacAlgorithm(kCCHmacAlgSHA256),
keyBytes.baseAddress, keyData.count,
dataBytes.baseAddress, data.count,
&digest)
}
}
return digest.map { String(format: "%02x", $0) }.joined()
}
}
// Usage
let client = BloopClient(
url: URL(string: "https://errors.myapp.com")!,
projectKey: "bloop_abc123..."
)
try await client.send(event: [
"timestamp": Int(Date().timeIntervalSince1970),
"source": "ios",
"environment": "production",
"release": "2.1.0",
"error_type": "NetworkError",
"message": "Request timed out",
"screen": "HomeViewController",
])
Features (SDK)
- Crash handler — `installCrashHandler()` registers signal handlers for `SIGABRT`, `SIGSEGV`, `SIGBUS`, `SIGFPE`, and `SIGTRAP`. Captured crashes are persisted to disk and sent on the next launch.
- Lifecycle handlers — `installLifecycleHandlers()` observes `UIApplication` notifications to automatically flush events when the app enters the background or is about to terminate.
- Synchronous flush — `flushSync()` blocks the calling thread until all buffered events are sent. Useful in crash handlers or `applicationWillTerminate`.
- Device info — Automatically enriches events with device model, OS version, and app version/build number from the main bundle.
- HMAC signing — All requests are signed with HMAC-SHA256 using `CryptoKit`.
- Close — `close()` flushes any remaining events and releases resources. Safe to call multiple times.
Call installCrashHandler() as early as possible in your app launch sequence — before any other crash reporting SDKs. Only one signal handler can be active per signal.
LLM Tracing
Track LLM calls with token usage, cost, and latency. Requires the llm-tracing feature on your bloop server.
import Bloop
// Manual trace + span
let trace = BloopClient.shared?.startTrace(
name: "chat-completion",
sessionId: "session-abc",
userId: "user-123",
input: "What is the weather?"
)
let span = trace?.startSpan(.generation,
name: "gpt-4o call",
model: "gpt-4o",
provider: "openai"
)
// ... make LLM call ...
span?.end(
inputTokens: 100,
outputTokens: 50,
cost: 0.0025,
status: .ok,
output: "The weather is sunny."
)
trace?.end(status: .completed, output: "The weather is sunny.")
// Closure-based convenience
let result = try await BloopClient.shared?.traceGeneration(
name: "chat-completion",
model: "gpt-4o",
provider: "openai"
) { span in
let response = try await callLlm()
span.setUsage(inputTokens: 100, outputTokens: 50, cost: 0.0025)
return response
}
Span types: .generation, .tool, .retrieval, .custom. Cost is specified in dollars (float) and converted to microdollars server-side.
SDK: Kotlin (Android)
github.com/jaikoo/bloop-kotlin
Option A: Install the SDK
implementation("io.github.jaikoo:bloop-client:0.3.0")
import com.bloop.sdk.BloopClient
// Configure once in Application.onCreate()
BloopClient.configure(
endpoint = "https://errors.myapp.com",
secret = "your-hmac-secret",
projectKey = "bloop_abc123...", // From Settings → Projects
environment = "production",
release = "3.0.1",
)
// Install uncaught exception handler
BloopClient.shared?.installUncaughtExceptionHandler()
// Capture a throwable
try {
riskyOperation()
} catch (e: Exception) {
BloopClient.shared?.capture(e, screen = "ProfileFragment")
}
// Capture a structured event
BloopClient.shared?.capture(
errorType = "IllegalStateException",
message = "Fragment not attached to activity",
screen = "ProfileFragment",
)
// Synchronous flush (e.g. before process death)
BloopClient.shared?.flushSync()
// Async flush
BloopClient.shared?.flush()
Option B: Minimal Example (OkHttp only)
import okhttp3.*
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.RequestBody.Companion.toRequestBody
import org.json.JSONObject
import java.io.IOException
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec
class BloopClient(
private val baseUrl: String,
private val projectKey: String, // From Settings → Projects
) {
private val client = OkHttpClient()
private val json = "application/json".toMediaType()
fun send(event: JSONObject) {
val body = event.toString()
val signature = hmacSha256(body, projectKey)
val request = Request.Builder()
.url("$baseUrl/v1/ingest")
.post(body.toRequestBody(json))
.addHeader("X-Project-Key", projectKey)
.addHeader("X-Signature", signature)
.build()
// Fire and forget on background thread
client.newCall(request).enqueue(object : Callback {
override fun onFailure(call: Call, e: IOException) {}
override fun onResponse(call: Call, response: Response) {
response.close()
}
})
}
private fun hmacSha256(data: String, key: String): String {
val mac = Mac.getInstance("HmacSHA256")
mac.init(SecretKeySpec(key.toByteArray(), "HmacSHA256"))
return mac.doFinal(data.toByteArray())
.joinToString("") { "%02x".format(it) }
}
}
// Usage
val bloop = BloopClient("https://errors.myapp.com", "bloop_abc123...")
bloop.send(JSONObject().apply {
put("timestamp", System.currentTimeMillis() / 1000)
put("source", "android")
put("environment", "production")
put("release", "3.0.1")
put("error_type", "IllegalStateException")
put("message", "Fragment not attached to activity")
put("screen", "ProfileFragment")
})
Features (SDK)
- Device enrichment — Automatically adds device model (`Build.MODEL`), manufacturer, brand, OS version (`Build.VERSION.RELEASE`), and API level via reflection. Falls back to JVM system properties on non-Android platforms. Opt out with `enrichDevice = false`.
- Uncaught exception handler — `installUncaughtExceptionHandler()` wraps `Thread.setDefaultUncaughtExceptionHandler` to capture crashes. Chains to any previously installed handler so other crash reporters still work.
- flush / flushSync — `flush()` sends buffered events asynchronously. `flushSync()` blocks until all events are sent — use it in `onTrimMemory` or before calling the previous uncaught exception handler.
- HMAC signing — All requests are signed with HMAC-SHA256 via `javax.crypto.Mac`.
- Thread-safe buffering — Events are buffered in a `ConcurrentLinkedQueue` and flushed on a scheduled background thread (default: 5 seconds).
Call installUncaughtExceptionHandler() after any other crash SDKs (e.g. Firebase Crashlytics) so bloop captures first and then chains to the previous handler.
LLM Tracing
Track LLM calls with token usage, cost, and latency. Requires the llm-tracing feature on your bloop server.
import com.bloop.sdk.BloopClient
import com.bloop.sdk.SpanType
import com.bloop.sdk.SpanStatus
import com.bloop.sdk.TraceStatus
// Manual trace + span
val trace = BloopClient.shared?.startTrace(
name = "chat-completion",
sessionId = "session-abc",
userId = "user-123",
input = "What is the weather?",
)
val span = trace?.startSpan(SpanType.GENERATION,
name = "gpt-4o call",
model = "gpt-4o",
provider = "openai",
)
// ... make LLM call ...
span?.end(
inputTokens = 100,
outputTokens = 50,
cost = 0.0025,
status = SpanStatus.OK,
output = "The weather is sunny.",
)
trace?.end(status = TraceStatus.COMPLETED, output = "The weather is sunny.")
// Lambda-based convenience
val result = BloopClient.shared?.traceGeneration(
name = "chat-completion",
model = "gpt-4o",
provider = "openai",
) { span ->
val response = callLlm()
span.setUsage(inputTokens = 100, outputTokens = 50, cost = 0.0025)
response
}
Span types: GENERATION, TOOL, RETRIEVAL, CUSTOM. Cost is specified in dollars and converted to microdollars server-side. Traces are flushed with regular error events on the background thread.
SDK: Python
github.com/jaikoo/bloop-python
Option A: Install the SDK
pip install bloop-sdk
from bloop import BloopClient
# Initialize — auto-captures uncaught exceptions via sys.excepthook
client = BloopClient(
endpoint="https://errors.myapp.com",
project_key="bloop_abc123...", # From Settings → Projects
environment="production",
release="1.0.0",
)
# Capture an exception with full traceback
try:
risky_operation()
except Exception as e:
client.capture_exception(e,
route_or_procedure="POST /api/process",
)
# Extracts: error_type, message, and full stack trace automatically
# Capture a structured event (no exception object needed)
client.capture(
error_type="ValidationError",
message="Invalid email format",
route_or_procedure="POST /api/users",
)
# Context manager for graceful shutdown
with BloopClient(endpoint="...", project_key="...") as bloop:
bloop.capture(error_type="TestError", message="Hello")
# flush + close happen automatically
Option B: Minimal Example (Zero Dependencies)
import hmac, hashlib, json, time
from urllib.request import Request, urlopen
def send_to_bloop(endpoint, project_key, event):
body = json.dumps(event).encode()
sig = hmac.new(
project_key.encode(), body, hashlib.sha256
).hexdigest()
req = Request(
f"{endpoint}/v1/ingest",
data=body,
headers={
"Content-Type": "application/json",
"X-Project-Key": project_key,
"X-Signature": sig,
},
)
urlopen(req)
# Usage
send_to_bloop("https://errors.myapp.com", "bloop_abc123...", {
"timestamp": int(time.time()),
"source": "api",
"environment": "production",
"release": "1.0.0",
"error_type": "ValueError",
"message": "Invalid input",
})
Features (SDK)
- Zero dependencies — Uses only the Python stdlib (`hmac`, `hashlib`, `json`, `urllib.request`, `threading`)
- capture_exception — Pass any exception object to `capture_exception(e)` and it automatically extracts the error type, message, and full stack trace via `traceback.format_exception`
- Auto-capture (sys.excepthook) — Installs a `sys.excepthook` handler to capture uncaught exceptions in the main thread automatically
- Thread crash capture — Installs a `threading.excepthook` handler to capture uncaught exceptions in spawned threads (Python 3.8+)
- atexit auto-flush — Registers an `atexit` handler to flush all buffered events before the process exits, so nothing is lost on clean shutdown
- Thread-safe buffering — Events are buffered and flushed in batches on a background timer (default: 5 seconds)
- Context manager — Use `with BloopClient(...) as bloop:` for automatic flush and close on exit
- HMAC signing — All requests are signed with HMAC-SHA256 using your project key
LLM Tracing
Track LLM calls with token usage, cost, and latency. Requires the llm-tracing feature on your bloop server.
from bloop import BloopClient
client = BloopClient(
endpoint="https://errors.myapp.com",
project_key="bloop_abc123...",
environment="production",
release="1.0.0",
)
# Manual trace + span
trace = client.start_trace("chat-completion",
session_id="session-abc",
user_id="user-123",
input="What is the weather?",
)
span = trace.start_span("generation",
name="gpt-4o call",
model="gpt-4o",
provider="openai",
)
# ... make LLM call ...
span.end(
input_tokens=100,
output_tokens=50,
cost=0.0025,
status="ok",
output="The weather is sunny.",
)
trace.end(status="completed", output="The weather is sunny.")
# Context manager — auto-ends trace and span
with client.trace("chat-completion", session_id="session-abc") as trace:
with trace.span("generation", model="gpt-4o", provider="openai") as span:
response = call_llm()
span.set_usage(input_tokens=100, output_tokens=50, cost=0.0025)
# Decorator for automatic tracing
@client.trace_function("process-query", model="gpt-4o", provider="openai")
def process_query(prompt: str) -> str:
return call_llm(prompt)
Span types: generation, tool, retrieval, custom. The context manager and decorator patterns auto-set status to error if an exception is raised.
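The auto-error behavior is standard context-manager mechanics: `__exit__` receives the exception, records an error status, and returns `False` so the exception propagates. A toy illustration (hypothetical classes, not the SDK's internals):

```python
# Toy illustration of how a tracing context manager can auto-set status
# on exceptions (hypothetical; not the bloop SDK's actual classes).
class ToySpan:
    def __init__(self, name):
        self.name = name
        self.status = None

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # An exception inside the block marks the span as "error";
        # returning False lets it propagate to the caller.
        self.status = "error" if exc_type is not None else "ok"
        return False

span = ToySpan("generation")
try:
    with span:
        raise RuntimeError("LLM call failed")
except RuntimeError:
    pass

print(span.status)  # error
```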
SDK: Ruby
Option A: Install the SDK
gem install bloop-sdk
require "bloop"
client = Bloop::Client.new(
endpoint: "https://errors.myapp.com",
project_key: "bloop_abc123...", # From Settings → Projects
environment: "production",
release: "1.0.0",
)
# Capture an exception
begin
risky_operation
rescue => e
client.capture_exception(e, route_or_procedure: "POST /api/orders")
end
# Structured event capture
client.capture(
error_type: "ValidationError",
message: "Invalid email format",
route_or_procedure: "POST /api/users",
)
# Block-based error capture
client.with_error_capture(route_or_procedure: "POST /api/checkout") do
process_payment
end
# If process_payment raises, the exception is captured and re-raised
# Graceful shutdown
client.close
Rack Middleware
Automatically capture all unhandled exceptions in your web application:
# Rails — config/application.rb
require "bloop"
BLOOP_CLIENT = Bloop::Client.new(
endpoint: "https://errors.myapp.com",
project_key: "bloop_abc123...",
environment: Rails.env,
release: "1.0.0",
)
config.middleware.use Bloop::RackMiddleware, client: BLOOP_CLIENT
# Sinatra
require "bloop"
client = Bloop::Client.new(
endpoint: "https://errors.myapp.com",
project_key: "bloop_abc123...",
environment: "production",
release: "1.0.0",
)
use Bloop::RackMiddleware, client: client
The middleware captures exceptions, enriches them with the request path as route_or_procedure and the HTTP status as http_status, then re-raises the exception so your normal error handling still works.
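Conceptually, the middleware is a thin wrapper around the app's `call`. A simplified sketch of the capture-then-re-raise flow (hypothetical; not the gem's actual source):

```ruby
# Simplified sketch of a capture-and-re-raise Rack middleware
# (hypothetical; not Bloop::RackMiddleware's actual source).
class SketchMiddleware
  def initialize(app, client:)
    @app = app
    @client = client
  end

  def call(env)
    @app.call(env)
  rescue => e
    # Enrich with the request path, then re-raise so the framework's
    # normal error handling still runs.
    @client.capture_exception(e, route_or_procedure: env["PATH_INFO"], http_status: 500)
    raise
  end
end

# Tiny demo with a stub client and an app that raises
captured = []
client = Object.new
client.define_singleton_method(:capture_exception) { |e, **kw| captured << [e.class, kw] }

app = ->(env) { raise "boom" }
mw = SketchMiddleware.new(app, client: client)

begin
  mw.call({ "PATH_INFO" => "/api/orders" })
rescue RuntimeError
  puts "re-raised; captured #{captured.length} event(s)"  # re-raised; captured 1 event(s)
end
```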
Option B: Minimal Example (Zero Dependencies)
require "net/http"
require "json"
require "openssl"
def send_to_bloop(endpoint, project_key, event)
body = event.to_json
sig = OpenSSL::HMAC.hexdigest("SHA256", project_key, body)
uri = URI("#{endpoint}/v1/ingest")
req = Net::HTTP::Post.new(uri)
req["Content-Type"] = "application/json"
req["X-Project-Key"] = project_key
req["X-Signature"] = sig
req.body = body
Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == "https") do |http|
http.request(req)
end
end
# Usage
send_to_bloop("https://errors.myapp.com", "bloop_abc123...", {
timestamp: Time.now.to_i,
source: "api",
environment: "production",
release: "1.0.0",
error_type: "RuntimeError",
message: "Something went wrong",
})
Features (SDK)
- Zero dependencies — Uses only the Ruby stdlib (`openssl`, `net/http`, `json`)
- Auto-capture — Registers an `at_exit` hook to flush events and capture unhandled exceptions
- Thread-safe buffering — Events are buffered with `Mutex` protection and flushed on a background thread (default: 5 seconds)
- capture_exception — Helper method that extracts the error type, message, and backtrace from a Ruby exception
- with_error_capture — Block-based helper that wraps a block in a `begin`/`rescue`, captures the exception, and re-raises it. Useful for wrapping individual operations without cluttering your code with rescue blocks.
- Rack middleware — `Bloop::RackMiddleware` drops into any Rack-compatible app (Rails, Sinatra, Hanami, etc.) and captures all unhandled request exceptions with route and status metadata
- HMAC signing — All requests are signed with HMAC-SHA256 via `OpenSSL::HMAC`
LLM Tracing
Track LLM calls with token usage, cost, and latency. Requires the llm-tracing feature on your bloop server.
require "bloop"
client = Bloop::Client.new(
endpoint: "https://errors.myapp.com",
project_key: "bloop_abc123...",
environment: "production",
release: "1.0.0",
)
# Manual trace + span
trace = client.start_trace("chat-completion",
session_id: "session-abc",
user_id: "user-123",
input: "What is the weather?",
)
span = trace.start_span(:generation,
name: "gpt-4o call",
model: "gpt-4o",
provider: "openai",
)
# ... make LLM call ...
span.finish(
input_tokens: 100,
output_tokens: 50,
cost: 0.0025,
status: :ok,
output: "The weather is sunny.",
)
trace.finish(status: :completed, output: "The weather is sunny.")
# Block-based convenience — auto-ends on block exit
client.with_trace("chat-completion", session_id: "session-abc") do |trace|
trace.with_span(:generation, model: "gpt-4o", provider: "openai") do |span|
response = call_llm
span.set_usage(input_tokens: 100, output_tokens: 50, cost: 0.0025)
end
end
Span types: :generation, :tool, :retrieval, :custom. Block-based helpers auto-set status to :error if an exception is raised, then re-raise the exception.
SDK: React Native
github.com/jaikoo/bloop-react-native
Option A: Install the SDK
npm install @dthink/bloop-react-native
Setup with Hook
import { useBloop, BloopErrorBoundary } from "@dthink/bloop-react-native";
function App() {
const bloop = useBloop({
endpoint: "https://errors.myapp.com",
projectKey: "bloop_abc123...",
environment: "production",
release: "1.0.0",
appVersion: "1.0.0",
buildNumber: "42",
});
// useBloop automatically installs all three global handlers:
// 1. ErrorUtils global handler (uncaught JS exceptions)
// 2. Promise rejection handler
// 3. AppState handler (flushes on background/inactive)
return (
<BloopErrorBoundary
client={bloop}
fallback={<ErrorScreen />}
>
<Navigation />
</BloopErrorBoundary>
);
}
Production Handlers
The useBloop hook installs three handlers automatically on mount (and cleans them up on unmount). You can also install them manually if you are not using the hook:
import { BloopRNClient } from "@dthink/bloop-react-native";
const bloop = new BloopRNClient({ /* ... */ });
// Flushes buffered events when the app goes to background or inactive
bloop.installAppStateHandler();
// Captures unhandled promise rejections as error events
bloop.installPromiseRejectionHandler();
// Clean up all handlers when done
await bloop.shutdown();
- installAppStateHandler — Listens to React Native `AppState` changes. When the app transitions to `"background"` or `"inactive"`, all buffered events are flushed immediately so nothing is lost if the OS kills the process.
- installPromiseRejectionHandler — Hooks into React Native's global `Promise` rejection tracking. Unhandled rejections are captured with `metadata.mechanism: "unhandledRejection"` and `metadata.unhandled: true`.
Option B: Inline Alternative
@dthink/bloop-react-native wraps @dthink/bloop-sdk with React Native-specific features (platform detection, ErrorUtils handler, AppState flush, promise rejection tracking). A zero-dependency inline approach is not practical here — install @dthink/bloop-sdk directly if you only need the core client without React Native hooks.
Features (SDK)
- useBloop hook — Creates and manages the client lifecycle. Automatically installs all three global handlers (ErrorUtils, promise rejection, AppState) on mount and cleans up on unmount.
- BloopErrorBoundary — React error boundary that automatically captures render errors. Supports a `fallback` prop (component or render function) and an `onError` callback.
- AppState flush — Automatically flushes events when the app backgrounds, so events are not lost to OS process termination.
- Promise rejection capture — Unhandled promise rejections are captured automatically with full error details.
- Platform detection — Automatically sets `source` to `"ios"` or `"android"` based on `Platform.OS`.
- Manual capture — Use `bloop.captureError(error)` for caught exceptions.
@dthink/bloop-react-native wraps @dthink/bloop-sdk with React Native-specific features: automatic platform detection, ErrorUtils global handler, promise rejection tracking, AppState-based flush, and app version/build number metadata.
LLM Tracing
@dthink/bloop-react-native re-exports all tracing types and methods from @dthink/bloop-sdk. The API is identical to the TypeScript SDK:
import { useBloop } from "@dthink/bloop-react-native";
function ChatScreen() {
const bloop = useBloop({
endpoint: "https://errors.myapp.com",
projectKey: "bloop_abc123...",
environment: "production",
release: "1.0.0",
});
async function handleSend(prompt: string) {
// Convenience wrapper — auto-creates trace + generation span
const result = await bloop.traceGeneration(
"chat-completion", "gpt-4o", "openai",
async (span) => {
const response = await callLlm(prompt);
span.setUsage(
response.usage.input_tokens,
response.usage.output_tokens,
response.usage.cost,
);
return response.text;
},
);
return result;
}
}
See the TypeScript SDK section for the full tracing API, including manual trace/span management and all span types.