SDK: Rust

github.com/jaikoo/bloop-rust

Install

Add to your Cargo.toml:

toml
[dependencies]
bloop-client = "0.1"
rust
use bloop_client::{BloopClient, SpanType, SpanStatus, TraceStatus};

// Configure once at startup
let client = BloopClient::builder()
    .endpoint("https://errors.myapp.com")
    .project_key("bloop_abc123...")
    .environment("production")
    .release("0.1.0")
    .build()?;

// Capture an error manually
client.capture_error("TypeError", "Cannot read property 'id' of undefined")
    .route("/api/users")
    .http_status(500)
    .send()
    .await?;

// Capture from any std::error::Error
if let Err(e) = risky_operation() {
    client.capture_exception(&*e)
        .route("/api/process")
        .send()
        .await?;
}

// Flush on shutdown
client.shutdown().await?;

LLM Tracing

Track LLM calls with token usage, cost, and latency. Requires the llm-tracing feature on your bloop server.

rust
// Manual trace + span
let mut trace = client.start_trace("chat-completion")
    .session_id("session-abc")
    .user_id("user-123")
    .input("What is the weather?")
    .build();

let span = trace.start_span(SpanType::Generation)
    .name("gpt-4o call")
    .model("gpt-4o")
    .provider("openai")
    .build();

// ... make LLM call ...

span.end()
    .input_tokens(100)
    .output_tokens(50)
    .cost(0.0025)
    .status(SpanStatus::Ok)
    .output("The weather is sunny.")
    .finish();

trace.end(TraceStatus::Completed)
    .output("The weather is sunny.")
    .finish();

// Closure-based convenience
let result = client.trace_generation(
    "chat-completion", "gpt-4o", "openai",
    |span| async move {
        let response = call_llm().await?;
        span.set_usage(100, 50, 0.0025);
        Ok(response)
    },
).await?;

// RAII guard — auto-ends on drop
{
    let _trace = client.trace_guard("chat-completion");
    let _span = _trace.span_guard(SpanType::Generation, "gpt-4o", "openai");
    // trace and span complete on drop; marked as error if dropped during a panic unwind
}

Features

Feature Flags

Flag        Default  Description
tracing     Yes      LLM trace/span API
tower       No       Tower/axum error middleware layer
panic-hook  No       Global panic hook for error capture
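
By way of example, enabling the opt-in features in Cargo.toml would look like this (a sketch; pin the version your project actually uses):

```toml
[dependencies]
# "tracing" is on by default; "tower" and "panic-hook" are opt-in
bloop-client = { version = "0.1", features = ["tower", "panic-hook"] }
```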

The Rust SDK uses the same HMAC-SHA256 crates (hmac, sha2) as the bloop server itself. Cost values are specified in dollars (float) and converted to microdollars on the server side.
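
The signature scheme is easy to reproduce outside any SDK. A command-line sketch using openssl, assuming (as in the zero-dependency examples in the other SDK sections) that the project key doubles as the HMAC secret; the endpoint and key here are placeholders:

```shell
# Build the event payload exactly as it will be sent (byte-for-byte)
BODY='{"timestamp":1700000000,"source":"api","environment":"production","release":"0.1.0","error_type":"TypeError","message":"example"}'
PROJECT_KEY="bloop_abc123"

# HMAC-SHA256 over the raw body, hex-encoded: this is the X-Signature header value
SIG=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac "$PROJECT_KEY" | awk '{print $NF}')
echo "$SIG"  # 64 hex characters

# curl -X POST "https://errors.myapp.com/v1/ingest" \
#   -H "Content-Type: application/json" \
#   -H "X-Project-Key: $PROJECT_KEY" \
#   -H "X-Signature: $SIG" \
#   -d "$BODY"
```

Any byte-level change to the body (reordered keys, added whitespace) produces a different signature, so sign exactly what you send.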

SDK: TypeScript / Node.js

github.com/jaikoo/bloop-js

IngestEvent Payload

typescript
interface IngestEvent {
  timestamp: number;          // Unix epoch seconds
  source: "ios" | "android" | "api";
  environment: string;        // "production", "staging", etc.
  release: string;            // Semver or build ID
  error_type: string;         // Exception class name
  message: string;            // Error message
  app_version?: string;       // Display version
  build_number?: string;      // Build number
  route_or_procedure?: string; // API route or RPC method
  screen?: string;            // Mobile screen name
  stack?: string;             // Stack trace
  http_status?: number;       // HTTP status code
  request_id?: string;        // Correlation ID
  user_id_hash?: string;      // Hashed user identifier
  device_id_hash?: string;    // Hashed device identifier
  fingerprint?: string;       // Custom fingerprint (overrides auto)
  metadata?: Record<string, unknown>; // Arbitrary extra data
}
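
When building events by hand (as in Option B below), a lightweight runtime check over the required fields can catch malformed payloads before they are signed. This is a sketch, not part of the SDK:

```typescript
// Minimal runtime guard mirroring IngestEvent's required fields (sketch only)
function isValidIngestEvent(e: Record<string, unknown>): boolean {
  const sources = ["ios", "android", "api"];
  return (
    typeof e.timestamp === "number" &&
    Number.isInteger(e.timestamp) &&  // Unix epoch seconds
    typeof e.source === "string" && sources.includes(e.source) &&
    typeof e.environment === "string" &&
    typeof e.release === "string" &&
    typeof e.error_type === "string" &&
    typeof e.message === "string"
  );
}
```

Optional fields need no check here; this sketch ignores unknown keys.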

Option A: Install the SDK

bash
npm install @dthink/bloop-sdk
typescript
import { BloopClient } from "@dthink/bloop-sdk";

const bloop = new BloopClient({
  endpoint: "https://errors.myapp.com",
  projectKey: "bloop_abc123...",   // From project settings
  environment: "production",
  release: "1.2.0",
});

// Install global handlers to catch uncaught exceptions & unhandled rejections
bloop.installGlobalHandlers();

// Capture an Error object
try {
  riskyOperation();
} catch (err) {
  bloop.captureError(err, {
    route: "POST /api/users",
    httpStatus: 500,
  });
}

// Capture a structured event
bloop.capture({
  errorType: "ValidationError",
  message: "Invalid email format",
  route: "POST /api/users",
  httpStatus: 422,
});

// Express middleware
import express from "express";

const app = express();
app.use(bloop.errorMiddleware());

// Flush on shutdown
await bloop.shutdown();

Option B: Minimal Example (Zero Dependencies)

typescript
async function sendToBloop(
  endpoint: string,
  projectKey: string,
  event: Record<string, unknown>,
) {
  const body = JSON.stringify(event);
  const encoder = new TextEncoder();
  const key = await crypto.subtle.importKey(
    "raw", encoder.encode(projectKey),
    { name: "HMAC", hash: "SHA-256" }, false, ["sign"],
  );
  const sig = await crypto.subtle.sign("HMAC", key, encoder.encode(body));
  const hex = [...new Uint8Array(sig)]
    .map(b => b.toString(16).padStart(2, "0")).join("");

  await fetch(`${endpoint}/v1/ingest`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Project-Key": projectKey,
      "X-Signature": hex,
    },
    body,
  });
}

// Usage
await sendToBloop("https://errors.myapp.com", "bloop_abc123...", {
  timestamp: Math.floor(Date.now() / 1000),
  source: "api",
  environment: "production",
  release: "1.2.0",
  error_type: "TypeError",
  message: "Cannot read property of undefined",
});

Features (SDK)

Events captured by global handlers are tagged with extra metadata so you can distinguish them from manually captured errors:

typescript
// Automatically added to metadata:
{
  metadata: {
    unhandled: true,
    mechanism: "uncaughtException"  // or "unhandledRejection"
  }
}

@dthink/bloop-sdk uses the Web Crypto API internally, so it works in both Node.js and browser environments. Use installGlobalHandlers() to catch errors you might otherwise miss.
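
The same scheme is straightforward to check on the receiving side. A Node.js sketch using the built-in node:crypto module (the header names match the minimal client above; the bloop server's actual implementation may differ):

```javascript
import { createHmac, timingSafeEqual } from "node:crypto";

// Recompute HMAC-SHA256 over the raw request body and compare it
// to the hex value sent in the X-Signature header.
function verifySignature(rawBody, projectKey, signatureHex) {
  const expected = createHmac("sha256", projectKey).update(rawBody).digest("hex");
  const a = Buffer.from(expected, "hex");
  const b = Buffer.from(signatureHex, "hex");
  // timingSafeEqual throws on length mismatch, so guard first
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Always verify against the raw body bytes, not a re-serialized copy, since key order and whitespace change the digest.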

LLM Tracing

Track LLM calls with token usage, cost, and latency. Requires the llm-tracing feature on your bloop server.

typescript
import { BloopClient } from "@dthink/bloop-sdk";

const bloop = new BloopClient({
  endpoint: "https://errors.myapp.com",
  projectKey: "bloop_abc123...",
  environment: "production",
  release: "1.2.0",
});

// Manual trace + span
const trace = bloop.startTrace("chat-completion", {
  sessionId: "session-abc",
  userId: "user-123",
  input: "What is the weather?",
});

const span = trace.startSpan("generation", {
  name: "gpt-4o call",
  model: "gpt-4o",
  provider: "openai",
});

// ... make LLM call ...

span.end({
  inputTokens: 100,
  outputTokens: 50,
  cost: 0.0025,          // dollars — converted to microdollars server-side
  status: "ok",
  output: "The weather is sunny.",
});

trace.end({ status: "completed", output: "The weather is sunny." });

// Convenience wrapper — auto-creates trace + generation span
const result = await bloop.traceGeneration(
  "chat-completion", "gpt-4o", "openai",
  async (span) => {
    const response = await callLlm();
    span.setUsage(100, 50, 0.0025);
    return response;
  },
);

Span types: generation, tool, retrieval, custom. A trace can contain multiple spans of any type.

SDK: Swift (iOS)

github.com/jaikoo/bloop-swift

Option A: Install the SDK

Add to your Package.swift dependencies or via Xcode → File → Add Package Dependencies:

swift
.package(url: "https://github.com/jaikoo/bloop-swift.git", from: "0.4.0")
swift
import Bloop

// Configure once at app launch (e.g. in AppDelegate or @main App.init)
BloopClient.configure(
    endpoint: "https://errors.myapp.com",
    secret: "your-hmac-secret",
    projectKey: "bloop_abc123...",  // From Settings → Projects
    environment: "production",
    release: "2.1.0"
)

// Install crash handler (captures SIGABRT, SIGSEGV, etc.)
BloopClient.shared?.installCrashHandler()

// Install lifecycle handlers (flush on background/terminate)
BloopClient.shared?.installLifecycleHandlers()

// Capture an error manually
do {
    try riskyOperation()
} catch {
    BloopClient.shared?.capture(
        error: error,
        screen: "HomeViewController"
    )
}

// Capture a structured event
BloopClient.shared?.capture(
    errorType: "NetworkError",
    message: "Request timed out",
    screen: "HomeViewController"
)

// Synchronous flush (e.g. before a crash report is sent)
BloopClient.shared?.flushSync()

// Close the client (flushes remaining events)
BloopClient.shared?.close()

Option B: Minimal Example (Zero Dependencies)

swift
import Foundation
import CommonCrypto

struct BloopClient {
    let url: URL
    let projectKey: String  // From Settings → Projects

    func send(event: [String: Any]) async throws {
        let body = try JSONSerialization.data(withJSONObject: event)
        let signature = hmacSHA256(data: body, key: projectKey)

        var request = URLRequest(url: url.appendingPathComponent("v1/ingest"))
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        request.setValue(projectKey, forHTTPHeaderField: "X-Project-Key")
        request.setValue(signature, forHTTPHeaderField: "X-Signature")
        request.httpBody = body

        let (_, response) = try await URLSession.shared.data(for: request)
        guard let http = response as? HTTPURLResponse,
              http.statusCode == 200 else {
            return // Fire and forget — don't crash the app
        }
    }

    private func hmacSHA256(data: Data, key: String) -> String {
        let keyData = key.data(using: .utf8)!
        var digest = [UInt8](repeating: 0, count: Int(CC_SHA256_DIGEST_LENGTH))
        keyData.withUnsafeBytes { keyBytes in
            data.withUnsafeBytes { dataBytes in
                CCHmac(CCHmacAlgorithm(kCCHmacAlgSHA256),
                       keyBytes.baseAddress, keyData.count,
                       dataBytes.baseAddress, data.count,
                       &digest)
            }
        }
        return digest.map { String(format: "%02x", $0) }.joined()
    }
}

// Usage
let client = BloopClient(
    url: URL(string: "https://errors.myapp.com")!,
    projectKey: "bloop_abc123..."
)

try await client.send(event: [
    "timestamp": Int(Date().timeIntervalSince1970),
    "source": "ios",
    "environment": "production",
    "release": "2.1.0",
    "error_type": "NetworkError",
    "message": "Request timed out",
    "screen": "HomeViewController",
])

Features (SDK)

Call installCrashHandler() as early as possible in your app launch sequence — before any other crash reporting SDKs. Only one signal handler can be active per signal.

LLM Tracing

Track LLM calls with token usage, cost, and latency. Requires the llm-tracing feature on your bloop server.

swift
import Bloop

// Manual trace + span
let trace = BloopClient.shared?.startTrace(
    name: "chat-completion",
    sessionId: "session-abc",
    userId: "user-123",
    input: "What is the weather?"
)

let span = trace?.startSpan(.generation,
    name: "gpt-4o call",
    model: "gpt-4o",
    provider: "openai"
)

// ... make LLM call ...

span?.end(
    inputTokens: 100,
    outputTokens: 50,
    cost: 0.0025,
    status: .ok,
    output: "The weather is sunny."
)

trace?.end(status: .completed, output: "The weather is sunny.")

// Closure-based convenience
let result = try await BloopClient.shared?.traceGeneration(
    name: "chat-completion",
    model: "gpt-4o",
    provider: "openai"
) { span in
    let response = try await callLlm()
    span.setUsage(inputTokens: 100, outputTokens: 50, cost: 0.0025)
    return response
}

Span types: .generation, .tool, .retrieval, .custom. Cost is specified in dollars (float) and converted to microdollars server-side.

SDK: Kotlin (Android)

github.com/jaikoo/bloop-kotlin

Option A: Install the SDK

kotlin
implementation("io.github.jaikoo:bloop-client:0.3.0")
kotlin
import com.bloop.sdk.BloopClient

// Configure once in Application.onCreate()
BloopClient.configure(
    endpoint = "https://errors.myapp.com",
    secret = "your-hmac-secret",
    projectKey = "bloop_abc123...",  // From Settings → Projects
    environment = "production",
    release = "3.0.1",
)

// Install uncaught exception handler
BloopClient.shared?.installUncaughtExceptionHandler()

// Capture a throwable
try {
    riskyOperation()
} catch (e: Exception) {
    BloopClient.shared?.capture(e, screen = "ProfileFragment")
}

// Capture a structured event
BloopClient.shared?.capture(
    errorType = "IllegalStateException",
    message = "Fragment not attached to activity",
    screen = "ProfileFragment",
)

// Synchronous flush (e.g. before process death)
BloopClient.shared?.flushSync()

// Async flush
BloopClient.shared?.flush()

Option B: Minimal Example (OkHttp Only)

kotlin
import okhttp3.*
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.RequestBody.Companion.toRequestBody
import org.json.JSONObject
import java.io.IOException
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

class BloopClient(
    private val baseUrl: String,
    private val projectKey: String,  // From Settings → Projects
) {
    private val client = OkHttpClient()
    private val json = "application/json".toMediaType()

    fun send(event: JSONObject) {
        val body = event.toString()
        val signature = hmacSha256(body, projectKey)

        val request = Request.Builder()
            .url("$baseUrl/v1/ingest")
            .post(body.toRequestBody(json))
            .addHeader("X-Project-Key", projectKey)
            .addHeader("X-Signature", signature)
            .build()

        // Fire and forget on background thread
        client.newCall(request).enqueue(object : Callback {
            override fun onFailure(call: Call, e: IOException) {}
            override fun onResponse(call: Call, response: Response) {
                response.close()
            }
        })
    }

    private fun hmacSha256(data: String, key: String): String {
        val mac = Mac.getInstance("HmacSHA256")
        mac.init(SecretKeySpec(key.toByteArray(), "HmacSHA256"))
        return mac.doFinal(data.toByteArray())
            .joinToString("") { "%02x".format(it) }
    }
}

// Usage
val bloop = BloopClient("https://errors.myapp.com", "bloop_abc123...")
bloop.send(JSONObject().apply {
    put("timestamp", System.currentTimeMillis() / 1000)
    put("source", "android")
    put("environment", "production")
    put("release", "3.0.1")
    put("error_type", "IllegalStateException")
    put("message", "Fragment not attached to activity")
    put("screen", "ProfileFragment")
})

Features (SDK)

Call installUncaughtExceptionHandler() after any other crash SDKs (e.g. Firebase Crashlytics) so bloop captures first and then chains to the previous handler.

LLM Tracing

Track LLM calls with token usage, cost, and latency. Requires the llm-tracing feature on your bloop server.

kotlin
import com.bloop.sdk.BloopClient
import com.bloop.sdk.SpanType
import com.bloop.sdk.SpanStatus
import com.bloop.sdk.TraceStatus

// Manual trace + span
val trace = BloopClient.shared?.startTrace(
    name = "chat-completion",
    sessionId = "session-abc",
    userId = "user-123",
    input = "What is the weather?",
)

val span = trace?.startSpan(SpanType.GENERATION,
    name = "gpt-4o call",
    model = "gpt-4o",
    provider = "openai",
)

// ... make LLM call ...

span?.end(
    inputTokens = 100,
    outputTokens = 50,
    cost = 0.0025,
    status = SpanStatus.OK,
    output = "The weather is sunny.",
)

trace?.end(status = TraceStatus.COMPLETED, output = "The weather is sunny.")

// Lambda-based convenience
val result = BloopClient.shared?.traceGeneration(
    name = "chat-completion",
    model = "gpt-4o",
    provider = "openai",
) { span ->
    val response = callLlm()
    span.setUsage(inputTokens = 100, outputTokens = 50, cost = 0.0025)
    response
}

Span types: GENERATION, TOOL, RETRIEVAL, CUSTOM. Cost is specified in dollars and converted to microdollars server-side. Traces are flushed with regular error events on the background thread.

SDK: Python

github.com/jaikoo/bloop-python

Option A: Install the SDK

bash
pip install bloop-sdk
python
from bloop import BloopClient

# Initialize — auto-captures uncaught exceptions via sys.excepthook
client = BloopClient(
    endpoint="https://errors.myapp.com",
    project_key="bloop_abc123...",  # From Settings → Projects
    environment="production",
    release="1.0.0",
)

# Capture an exception with full traceback
try:
    risky_operation()
except Exception as e:
    client.capture_exception(e,
        route_or_procedure="POST /api/process",
    )
    # Extracts: error_type, message, and full stack trace automatically

# Capture a structured event (no exception object needed)
client.capture(
    error_type="ValidationError",
    message="Invalid email format",
    route_or_procedure="POST /api/users",
)

# Context manager for graceful shutdown
with BloopClient(endpoint="...", project_key="...") as bloop:
    bloop.capture(error_type="TestError", message="Hello")
# flush + close happen automatically

Option B: Minimal Example (Zero Dependencies)

python
import hmac, hashlib, json, time
from urllib.request import Request, urlopen

def send_to_bloop(endpoint, project_key, event):
    body = json.dumps(event).encode()
    sig = hmac.new(
        project_key.encode(), body, hashlib.sha256
    ).hexdigest()

    req = Request(
        f"{endpoint}/v1/ingest",
        data=body,
        headers={
            "Content-Type": "application/json",
            "X-Project-Key": project_key,
            "X-Signature": sig,
        },
    )
    urlopen(req)

# Usage
send_to_bloop("https://errors.myapp.com", "bloop_abc123...", {
    "timestamp": int(time.time()),
    "source": "api",
    "environment": "production",
    "release": "1.0.0",
    "error_type": "ValueError",
    "message": "Invalid input",
})

Features (SDK)

LLM Tracing

Track LLM calls with token usage, cost, and latency. Requires the llm-tracing feature on your bloop server.

python
from bloop import BloopClient

client = BloopClient(
    endpoint="https://errors.myapp.com",
    project_key="bloop_abc123...",
    environment="production",
    release="1.0.0",
)

# Manual trace + span
trace = client.start_trace("chat-completion",
    session_id="session-abc",
    user_id="user-123",
    input="What is the weather?",
)

span = trace.start_span("generation",
    name="gpt-4o call",
    model="gpt-4o",
    provider="openai",
)

# ... make LLM call ...

span.end(
    input_tokens=100,
    output_tokens=50,
    cost=0.0025,
    status="ok",
    output="The weather is sunny.",
)

trace.end(status="completed", output="The weather is sunny.")

# Context manager — auto-ends trace and span
with client.trace("chat-completion", session_id="session-abc") as trace:
    with trace.span("generation", model="gpt-4o", provider="openai") as span:
        response = call_llm()
        span.set_usage(input_tokens=100, output_tokens=50, cost=0.0025)

# Decorator for automatic tracing
@client.trace_function("process-query", model="gpt-4o", provider="openai")
def process_query(prompt: str) -> str:
    return call_llm(prompt)

Span types: generation, tool, retrieval, custom. The context manager and decorator patterns auto-set status to error if an exception is raised.
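
The auto-error behavior is essentially a try/except around the block. A self-contained sketch of the pattern (a toy illustration, not the SDK's actual implementation):

```python
from contextlib import contextmanager

class Span:
    """Toy span used only to illustrate the status-setting pattern."""
    def __init__(self):
        self.status = None

@contextmanager
def span():
    s = Span()
    try:
        yield s
        s.status = s.status or "ok"  # spans left untouched end as "ok"
    except Exception:
        s.status = "error"           # any exception marks the span ...
        raise                        # ... and propagates to the caller
```

The real trace()/span() context managers additionally record timing and enqueue the span for delivery.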

SDK: Ruby

github.com/jaikoo/bloop-ruby

Option A: Install the SDK

bash
gem install bloop-sdk
ruby
require "bloop"

client = Bloop::Client.new(
  endpoint: "https://errors.myapp.com",
  project_key: "bloop_abc123...",  # From Settings → Projects
  environment: "production",
  release: "1.0.0",
)

# Capture an exception
begin
  risky_operation
rescue => e
  client.capture_exception(e, route_or_procedure: "POST /api/orders")
end

# Structured event capture
client.capture(
  error_type: "ValidationError",
  message: "Invalid email format",
  route_or_procedure: "POST /api/users",
)

# Block-based error capture
client.with_error_capture(route_or_procedure: "POST /api/checkout") do
  process_payment
end
# If process_payment raises, the exception is captured and re-raised

# Graceful shutdown
client.close

Rack Middleware

Automatically capture all unhandled exceptions in your web application:

ruby
# Rails — config/application.rb
require "bloop"

BLOOP_CLIENT = Bloop::Client.new(
  endpoint: "https://errors.myapp.com",
  project_key: "bloop_abc123...",
  environment: Rails.env,
  release: "1.0.0",
)
config.middleware.use Bloop::RackMiddleware, client: BLOOP_CLIENT
ruby
# Sinatra
require "bloop"

client = Bloop::Client.new(
  endpoint: "https://errors.myapp.com",
  project_key: "bloop_abc123...",
  environment: "production",
  release: "1.0.0",
)
use Bloop::RackMiddleware, client: client

The middleware captures exceptions, enriches them with the request path as route_or_procedure and the HTTP status as http_status, then re-raises the exception so your normal error handling still works.
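
In outline, this is a standard Rack wrapper. A simplified sketch of the capture-and-re-raise behavior described above (not the shipped Bloop::RackMiddleware, which also records the HTTP status):

```ruby
# Simplified sketch of a capture-and-re-raise Rack middleware
class CaptureMiddleware
  def initialize(app, client:)
    @app = app
    @client = client
  end

  def call(env)
    @app.call(env)
  rescue => e
    # Enrich with the request path, then let the framework's
    # normal error handling take over
    @client.capture_exception(e, route_or_procedure: env["PATH_INFO"])
    raise
  end
end
```

Because the exception is re-raised, error pages, logging, and any downstream rescue handlers behave exactly as before.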

Option B: Minimal Example (Zero Dependencies)

ruby
require "net/http"
require "json"
require "openssl"

def send_to_bloop(endpoint, project_key, event)
  body = event.to_json
  sig  = OpenSSL::HMAC.hexdigest("SHA256", project_key, body)

  uri = URI("#{endpoint}/v1/ingest")
  req = Net::HTTP::Post.new(uri)
  req["Content-Type"]  = "application/json"
  req["X-Project-Key"] = project_key
  req["X-Signature"]   = sig
  req.body = body

  Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == "https") do |http|
    http.request(req)
  end
end

# Usage
send_to_bloop("https://errors.myapp.com", "bloop_abc123...", {
  timestamp: Time.now.to_i,
  source: "api",
  environment: "production",
  release: "1.0.0",
  error_type: "RuntimeError",
  message: "Something went wrong",
})

Features (SDK)

LLM Tracing

Track LLM calls with token usage, cost, and latency. Requires the llm-tracing feature on your bloop server.

ruby
require "bloop"

client = Bloop::Client.new(
  endpoint: "https://errors.myapp.com",
  project_key: "bloop_abc123...",
  environment: "production",
  release: "1.0.0",
)

# Manual trace + span
trace = client.start_trace("chat-completion",
  session_id: "session-abc",
  user_id: "user-123",
  input: "What is the weather?",
)

span = trace.start_span(:generation,
  name: "gpt-4o call",
  model: "gpt-4o",
  provider: "openai",
)

# ... make LLM call ...

span.finish(
  input_tokens: 100,
  output_tokens: 50,
  cost: 0.0025,
  status: :ok,
  output: "The weather is sunny.",
)

trace.finish(status: :completed, output: "The weather is sunny.")

# Block-based convenience — auto-ends on block exit
client.with_trace("chat-completion", session_id: "session-abc") do |trace|
  trace.with_span(:generation, model: "gpt-4o", provider: "openai") do |span|
    response = call_llm
    span.set_usage(input_tokens: 100, output_tokens: 50, cost: 0.0025)
  end
end

Span types: :generation, :tool, :retrieval, :custom. Block-based helpers auto-set status to :error if an exception is raised, then re-raise the exception.

SDK: React Native

github.com/jaikoo/bloop-react-native

Option A: Install the SDK

bash
npm install @dthink/bloop-react-native

Setup with Hook

tsx
import { useBloop, BloopErrorBoundary } from "@dthink/bloop-react-native";

function App() {
  const bloop = useBloop({
    endpoint: "https://errors.myapp.com",
    projectKey: "bloop_abc123...",
    environment: "production",
    release: "1.0.0",
    appVersion: "1.0.0",
    buildNumber: "42",
  });
  // useBloop automatically installs all three global handlers:
  // 1. ErrorUtils global handler (uncaught JS exceptions)
  // 2. Promise rejection handler
  // 3. AppState handler (flushes on background/inactive)

  return (
    <BloopErrorBoundary
      client={bloop}
      fallback={<ErrorScreen />}
    >
      <Navigation />
    </BloopErrorBoundary>
  );
}

Production Handlers

The useBloop hook installs three handlers automatically on mount (and cleans them up on unmount). You can also install them manually if you are not using the hook:

typescript
import { BloopRNClient } from "@dthink/bloop-react-native";

const bloop = new BloopRNClient({ /* ... */ });

// Flushes buffered events when the app goes to background or inactive
bloop.installAppStateHandler();

// Captures unhandled promise rejections as error events
bloop.installPromiseRejectionHandler();

// Clean up all handlers when done
await bloop.shutdown();

Option B: Inline Alternative

A zero-dependency inline approach is not practical here: @dthink/bloop-react-native is a thin wrapper over @dthink/bloop-sdk that adds the React Native specifics (platform detection, ErrorUtils handler, AppState flush, promise rejection tracking). If you only need the core client without the React Native hooks, install @dthink/bloop-sdk directly.

Features (SDK)

@dthink/bloop-react-native wraps @dthink/bloop-sdk with React Native-specific features: automatic platform detection, ErrorUtils global handler, promise rejection tracking, AppState-based flush, and app version/build number metadata.

LLM Tracing

@dthink/bloop-react-native re-exports all tracing types and methods from @dthink/bloop-sdk. The API is identical to the TypeScript SDK:

typescript
import { useBloop } from "@dthink/bloop-react-native";

function ChatScreen() {
  const bloop = useBloop({
    endpoint: "https://errors.myapp.com",
    projectKey: "bloop_abc123...",
    environment: "production",
    release: "1.0.0",
  });

  async function handleSend(prompt: string) {
    // Convenience wrapper — auto-creates trace + generation span
    const result = await bloop.traceGeneration(
      "chat-completion", "gpt-4o", "openai",
      async (span) => {
        const response = await callLlm(prompt);
        span.setUsage(
          response.usage.input_tokens,
          response.usage.output_tokens,
          response.usage.cost,
        );
        return response.text;
      },
    );
    return result;
  }
}

See the TypeScript SDK section for the full tracing API, including manual trace/span management and all span types.