The Hub for
Atomspect Protocols.
Atomspect provides a high-fidelity proxy bridge across multiple LLM providers. Gain absolute transparency into every token consumed by your AI agents.
Base URL
https://api.atomspect.com/v1
Auth Header
Authorization: Bearer <key>
Endpoints
POST /chat, POST /audit
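To make the wire format concrete, here is a small sketch in Python of the pieces a client assembles for a `POST /chat` call. The helper name and its return shape are ours, purely illustrative; only the base URL, the Bearer auth header, and the body fields come from this page.

```python
import json

BASE_URL = "https://api.atomspect.com/v1"

def build_chat_request(api_key, provider, messages, **options):
    """Assemble (url, headers, body_json) for a POST /chat call.

    Illustrative helper: the URL shape, Authorization header, and body
    fields (provider, messages, optional temperature/stream) are from
    the docs; everything else is a local convenience.
    """
    url = f"{BASE_URL}/chat"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {"provider": provider, "messages": messages, **options}
    return url, headers, json.dumps(body)
```

The result can be handed to any HTTP client, e.g. `requests.post(url, headers=headers, data=body)`.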
SDK Integrations
Because Atomspect is fully compatible with the OpenAI API specification, you can use your favorite SDKs by simply changing the base_url.
from openai import OpenAI

# Initialize with the Atomspect base URL
client = OpenAI(
    api_key="YOUR_ATOMSPECT_API_KEY",
    base_url="https://api.atomspect.com/v1"
)

response = client.chat.completions.create(
    model="ignored-set-provider-below",
    messages=[{"role": "user", "content": "Audit my usage"}],
    extra_body={
        "provider": "openai"  # Options: openai, anthropic, gemini, openrouter
    }
)
Supports all standard chat features
import OpenAI from 'openai';

// Initialize with the Atomspect base URL
const client = new OpenAI({
  apiKey: 'YOUR_ATOMSPECT_API_KEY',
  baseURL: 'https://api.atomspect.com/v1'
});

const response = await client.chat.completions.create({
  model: 'ignored-set-provider-below',
  messages: [{ role: 'user', content: 'Sync my data' }],
  // Atomspect extension: specify provider in body
  provider: 'anthropic'
});
Universal browser & server support
Using Atomspect in
n8n Workflows
Connect your n8n workflows to Atomspect for automated auditing. Simply use the standard OpenAI Node with our parameters.
Configuration
- Resource: OpenAI Chat Node
- Base URL Override: https://api.atomspect.com/v1
- Method: POST
Body Parameters
Use the "Body Parameters" or "Additional Fields" in the OpenAI node to add the provider:
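Assuming the node accepts raw JSON in that field, the extra entry would look like the following (the `provider` field name is from the reference table below; the `messages` shape is standard OpenAI chat):

```json
{
  "provider": "anthropic",
  "messages": [{ "role": "user", "content": "Audit my usage" }]
}
```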
Technical Reference
A comprehensive breakdown of the request and response lifecycle, written precisely enough for LLM-assisted integration.
Request Body
| FIELD | TYPE | DESCRIPTION |
|---|---|---|
| provider | enum | REQUIRED. One of: 'openai', 'anthropic', 'gemini', 'openrouter'. |
| messages | array | REQUIRED. Array of {role, content} objects representing the conversation. |
| temperature | float | OPTIONAL. Default: 0.7. Controls randomness of output. |
| stream | bool | OPTIONAL. Enable Server-Sent Events for real-time delivery. |
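When `stream` is enabled, tokens arrive as Server-Sent Events. A minimal sketch of decoding one SSE line in Python, assuming the `data: …` / `[DONE]` framing used by the OpenAI-compatible streaming format (the function name is ours):

```python
import json

def parse_sse_line(line: str):
    """Decode one Server-Sent Events line.

    Returns the JSON payload of a `data:` line, or None for blank
    keep-alive lines, comments, and the `[DONE]` end-of-stream sentinel.
    """
    line = line.strip()
    if not line.startswith("data:"):
        return None
    data = line[len("data:"):].strip()
    if data == "[DONE]":
        return None
    return json.loads(data)
```

In practice you would feed each line of the streamed response through this function and append the `choices[0].delta.content` fragments as they arrive.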
Response Object
{
"id": "atomspect-resp-5f82k...",
"object": "chat.completion",
"created": 1708703218,
"model": "gpt-4-atomspect-proxied",
"choices": [
{
"message": {
"role": "assistant",
"content": "Strategic analysis complete..."
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 1240,
"completion_tokens": 421,
"total_tokens": 1661
}
}
Key Outcomes
Build the future of
Atomspect protocols.
Connect to Atomspect and get absolute clarity on your LLM spending. Our APIs are built for precision and transparency.