Atomspect API Documentation • v1.0

The Hub for Atomspect Protocols.

Atomspect provides a high-fidelity proxy bridge across multiple LLM providers, giving you full visibility into every token your AI agents consume.

Base URL

https://api.atomspect.com/v1

Auth Header

Authorization: Bearer <key>

Endpoints

POST /chat, POST /audit
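The endpoints above can be called with any HTTP client. The sketch below builds (but does not send) a `POST /chat` request with Python's standard library, assuming the request-body fields documented in the Technical Reference section; the key and message values are placeholders:

```python
import json
import urllib.request

BASE_URL = "https://api.atomspect.com/v1"
API_KEY = "YOUR_ATOMSPECT_API_KEY"  # placeholder

def build_chat_request(provider, messages, temperature=0.7):
    """Assemble an urllib Request for POST /chat (constructed, not sent)."""
    body = json.dumps({
        "provider": provider,      # openai, anthropic, gemini, or openrouter
        "messages": messages,
        "temperature": temperature,
    }).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/chat",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

req = build_chat_request("openai", [{"role": "user", "content": "Audit my usage"}])
# response = urllib.request.urlopen(req)  # uncomment to actually send
print(req.get_method(), req.full_url)
```

In practice you would use one of the SDK integrations below; this only makes the wire format explicit.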

SDK Integrations

Because Atomspect is fully compatible with the OpenAI API specification, you can use your favorite SDKs by simply changing the base_url.

OpenAI Python SDK
python
from openai import OpenAI

# Initialize with Atomspect Base URL
client = OpenAI(
    api_key="YOUR_ATOMSPECT_API_KEY",
    base_url="https://api.atomspect.com/v1"
)

response = client.chat.completions.create(
    model="ignored-set-provider-below",  # Atomspect ignores this field; routing is set via "provider"
    messages=[{"role": "user", "content": "Audit my usage"}],
    extra_body={
        "provider": "openai"  # Options: openai, anthropic, gemini, openrouter
    }
)

Supports all standard chat features

OpenAI Node/TS SDK
javascript
import OpenAI from 'openai';

// Initialize with Atomspect Base URL
const client = new OpenAI({
  apiKey: 'YOUR_ATOMSPECT_API_KEY',
  baseURL: 'https://api.atomspect.com/v1'
});

const response = await client.chat.completions.create({
  model: 'ignored-set-provider-below', // Atomspect ignores this field; routing is set via "provider"
  messages: [{ role: 'user', content: 'Sync my data' }],
  // Atomspect extension: specify provider in the request body
  provider: 'anthropic'
});

Universal browser & server support

Automation First

Using Atomspect in n8n Workflows

Connect your n8n workflows to Atomspect for automated auditing. Simply use the standard OpenAI Node with our parameters.

Configuration

  • Resource: OpenAI Chat Node
  • Base URL Override: https://api.atomspect.com/v1
  • Method: POST

Body Parameters

Use the "Body Parameters" or "Additional Fields" in the OpenAI node to add the provider:

"provider": "openrouter"

Technical Reference

Comprehensive breakdown of the request and response lifecycle, specified precisely enough to be consumed by LLM assistants.

Request Body

FIELD         TYPE     DESCRIPTION
provider      enum     REQUIRED. One of: 'openai', 'anthropic', 'gemini', 'openrouter'.
messages      array    REQUIRED. Array of {role, content} objects representing the conversation.
temperature   float    OPTIONAL. Default: 0.7. Controls randomness of output.
stream        bool     OPTIONAL. Enable Server-Sent Events for real-time delivery.
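When stream is enabled, output arrives as Server-Sent Events. The sketch below parses a sample stream; the per-chunk schema shown is an assumption, mirroring OpenAI's chat.completion.chunk convention rather than a documented Atomspect format:

```python
import json

def iter_sse_content(lines):
    """Yield content deltas from 'data:' SSE lines, stopping at [DONE]."""
    for line in lines:
        if not line.startswith("data:"):
            continue  # skip blank keep-alives and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        # Assumed chunk shape (OpenAI-style chat.completion.chunk)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]

# Sample stream for illustration only
sample = [
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "Audit "}}]}',
    'data: {"choices": [{"delta": {"content": "complete."}}]}',
    "data: [DONE]",
]
print("".join(iter_sse_content(sample)))  # Audit complete.
```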

Response Object

{
  "id": "atomspect-resp-5f82k...",
  "object": "chat.completion",
  "created": 1708703218,
  "model": "gpt-4-atomspect-proxied",
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Strategic analysis complete..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 1240,
    "completion_tokens": 421,
    "total_tokens": 1661
  }
}
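Because the usage block is returned natively, token accounting can be checked client-side. A small sketch against the usage figures from the sample response above:

```python
import json

# Usage block from the sample response above
sample_response = """
{
  "usage": {"prompt_tokens": 1240, "completion_tokens": 421, "total_tokens": 1661}
}
"""

def verify_usage(resp):
    """Return total tokens after checking prompt + completion arithmetic."""
    u = resp["usage"]
    assert u["prompt_tokens"] + u["completion_tokens"] == u["total_tokens"]
    return u["total_tokens"]

print(verify_usage(json.loads(sample_response)))  # 1661
```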

Key Outcomes

  • Standard OpenAI-compliant object structure
  • usage results included natively
  • Audit metadata added to headers
  • Compatible with all JSON parsers

Build the future of Atomspect protocols.

Connect to Atomspect and get absolute clarity on your LLM spending. Our APIs are built for precision and transparency.