Common Issues
Authentication Problems
Problem: Getting authentication errors when making API calls.
Solutions:
Check your API key:
# Verify your API key is set correctly
echo $ADAPTIVE_API_KEY
Ensure correct format:
// Correct - no 'Bearer' prefix needed
const openai = new OpenAI({
  apiKey: 'your-adaptive-api-key',
  baseURL: 'https://api.llmadaptive.uk/v1'
});
Verify API key validity:
Check if your API key has expired
Ensure you’re using the correct key for your environment
Try regenerating your API key in the dashboard
Test with curl:
curl -H "Authorization: Bearer apk_123456" \
-H "Content-Type: application/json" \
https://api.llmadaptive.uk/v1/chat/completions \
-d '{"model":"adaptive/auto","messages":[{"role":"user","content":"test"}]}'
Problem: Environment variable not being loaded.
Solutions:
Check environment variable:
# In terminal (no spaces around =)
export ADAPTIVE_API_KEY=your-key-here

# Or in .env file
echo "ADAPTIVE_API_KEY=your-key-here" >> .env
Load environment variables:
// Node.js (CommonJS)
require('dotenv').config();

// Or using ES modules
import 'dotenv/config';
Python environment:
import os
from dotenv import load_dotenv

load_dotenv()
api_key = os.getenv("ADAPTIVE_API_KEY")
Configuration Issues
Problem: Using an incorrect base URL, causing connection failures.
Correct base URL: https://api.llmadaptive.uk/v1
Common mistakes:
// ❌ Wrong
baseURL: 'https://api.openai.com/v1'
baseURL: 'https://adaptive.ai/api/v1'
baseURL: 'https://www.llmadaptive.uk/v1'

// ✅ Correct
baseURL: 'https://api.llmadaptive.uk/v1'
Problem: Intelligent routing not working, or model errors.
Solutions:
Use "adaptive/auto" for intelligent routing:
// ✅ Correct - enables intelligent routing
model: "adaptive/auto"

// ❌ Wrong - pins a specific model and bypasses routing
model: "gpt-4"
model: "claude-3-opus"
model: "gpt-3.5-turbo"
TypeScript type issues:
// Option 1: Type assertion
model: "adaptive/auto" as any

// Option 2: Disable strict checking for this parameter
// @ts-ignore
model: "adaptive/auto"
SSL/TLS Certificate Errors
Problem: Certificate validation errors in some environments.
Solutions:
Update certificates:
# Ubuntu/Debian
sudo apt-get update && sudo apt-get install ca-certificates
# macOS
brew install ca-certificates
Node.js certificate issues:
// Temporary workaround (not recommended for production)
process.env["NODE_TLS_REJECT_UNAUTHORIZED"] = "0"; // env vars are strings
// Better solution: update Node.js or certificates
Python certificate issues:
import ssl
import certifi

# Ensure certificates are up to date
ssl.create_default_context(cafile=certifi.where())
Request/Response Issues
Problem: Getting empty responses or no content.
Diagnostic steps:
Check request format:
const completion = await openai.chat.completions.create({
  model: "adaptive/auto",
  messages: [
    { role: "user", content: "Hello" } // Ensure content is not empty
  ]
});
Verify response handling:
console . log ( "Full response:" , completion );
console . log ( "Content:" , completion . choices [ 0 ]?. message ?. content );
Check for API errors:
try {
  const completion = await openai.chat.completions.create({ ... });
} catch (error) {
  console.log("Error details:", error);
  console.log("Status:", error.status);
  console.log("Message:", error.message);
}
Problem: Streaming responses not appearing or failing.
Solutions:
Check streaming syntax:
// ✅ Correct streaming setup
const stream = await openai.chat.completions.create({
  model: "adaptive/auto",
  messages: [ ... ],
  stream: true
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
Browser streaming with fetch:
const response = await fetch('/api/stream-chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ message })
});

const reader = response.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  const chunk = decoder.decode(value);
  // Process chunk...
}
Server-sent events setup:
// Server
res.writeHead(200, {
  'Content-Type': 'text/event-stream',
  'Cache-Control': 'no-cache',
  'Connection': 'keep-alive'
});
Problem: Getting 429 errors (rate limit exceeded).
Solutions:
Implement exponential backoff:
async function retryWithBackoff(fn, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await fn();
    } catch (error) {
      if (error.status === 429 && i < maxRetries - 1) {
        const delay = Math.pow(2, i) * 1000; // 1s, 2s, 4s
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }
      throw error;
    }
  }
}
Check your rate limits (a header-check sketch follows this list):
Free tier: 100 requests/minute, 10,000 tokens/minute
Pro tier: 1,000 requests/minute, 100,000 tokens/minute
Enterprise: Custom limits
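To see how much headroom you have at runtime, you can inspect the raw response headers. A minimal sketch using openai-node's .withResponse() helper; the x-ratelimit-* header names are an assumption, so verify them against real Adaptive responses:
// Hedged sketch: read remaining-quota headers from the raw response.
// The x-ratelimit-* names below are assumptions, not confirmed Adaptive headers.
const { data: completion, response } = await openai.chat.completions
  .create({
    model: "adaptive/auto",
    messages: [{ role: "user", content: "Hello" }]
  })
  .withResponse(); // openai-node helper exposing the raw Response

console.log("Remaining requests:", response.headers.get("x-ratelimit-remaining-requests"));
console.log("Remaining tokens:", response.headers.get("x-ratelimit-remaining-tokens"));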
Implement request queuing:
class RequestQueue {
  constructor(maxPerMinute = 100) {
    this.queue = [];
    this.maxPerMinute = maxPerMinute;
    this.requestTimes = [];
  }

  async enqueue(requestFn) {
    return new Promise((resolve, reject) => {
      this.queue.push({ requestFn, resolve, reject });
      this.processQueue();
    });
  }

  async processQueue() {
    if (this.queue.length === 0) return;

    const now = Date.now();
    // Keep only requests from the last 60 seconds
    this.requestTimes = this.requestTimes.filter(time => now - time < 60000);

    if (this.requestTimes.length < this.maxPerMinute) {
      const { requestFn, resolve, reject } = this.queue.shift();
      this.requestTimes.push(now);
      try {
        const result = await requestFn();
        resolve(result);
      } catch (error) {
        reject(error);
      }
      // Process next request
      setTimeout(() => this.processQueue(), 100);
    } else {
      // At the limit - wait and try again
      setTimeout(() => this.processQueue(), 1000);
    }
  }
}
Integration-Specific Issues
LangChain Integration Problems
Problem: LangChain not working with Adaptive.
Solutions:
Correct LangChain setup:
# Python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    api_key=os.getenv("ADAPTIVE_API_KEY"),
    base_url="https://api.llmadaptive.uk/v1",
    model="adaptive/auto"  # Important: enables intelligent routing
)
// JavaScript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  apiKey: process.env.ADAPTIVE_API_KEY,
  configuration: {
    baseURL: "https://api.llmadaptive.uk/v1"
  },
  model: "adaptive/auto"
});
Handle LangChain-specific errors:
from openai import APIError

try:
    response = llm.invoke("Hello")
except APIError as e:
    print(f"API Error: {e}")
except Exception as e:
    print(f"Other error: {e}")
Vercel AI SDK Integration Problems
Problem: Vercel AI SDK not connecting properly.
Solutions:
Use the OpenAI provider with a custom base URL:
import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';

const adaptiveOpenAI = createOpenAI({
  apiKey: process.env.ADAPTIVE_API_KEY,
  baseURL: 'https://api.llmadaptive.uk/v1',
});

const { text } = await generateText({
  model: adaptiveOpenAI('adaptive/auto'), // enables intelligent routing
  prompt: 'Hello'
});
TypeScript issues:
// If you get type errors on the model id
const model = adaptiveOpenAI('adaptive/auto' as any);
Environment variables in Next.js:
// next.config.js
module.exports = {
  env: {
    // Note: only reference this on the server; never expose the key to client code
    ADAPTIVE_API_KEY: process.env.ADAPTIVE_API_KEY,
  },
};
Adaptive-Specific Error Scenarios
🔄 Unique to Adaptive: These errors provide multi-provider failure insights not available from other AI APIs.
All Providers Failed (503)
Scenario: FallbackError in Sequential Mode
Symptom:
{
  "error": {
    "type": "fallback_failed",
    "message": "all 3 providers failed (sequential mode)"
  }
}
Diagnosis:
Check details.failures array for per-provider errors
Look for patterns (all rate limits? All service unavailable?)
Check duration_ms - were requests timing out?
Solutions (a diagnostic sketch follows this list):
All rate limits: Implement request queuing or reduce rate
Mixed errors: Retry with exponential backoff
All unavailable: Check status page
High duration_ms: Check network latency
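The diagnosis above can be automated. A minimal sketch, assuming the SDK surfaces the JSON error body (including details.failures) on the caught error's error property; adjust that access path to match your client:
// Hedged sketch: classify a fallback_failed (503) error by its per-provider failures.
// Field names follow the symptom payloads in this section; the error.error access
// path is an assumption about how your SDK exposes the response body.
try {
  await openai.chat.completions.create({
    model: "adaptive/auto",
    messages: [{ role: "user", content: "Hello" }]
  });
} catch (error) {
  const details = error.error?.details; // assumed access path
  if (error.status === 503 && details?.failures) {
    for (const f of details.failures) {
      console.log(`${f.model}: ${f.error} (${f.duration_ms}ms)`);
    }
    const allRateLimited = details.failures.every(f => /rate limit/i.test(f.error));
    console.log(allRateLimited
      ? "All providers rate limited - queue requests or reduce rate"
      : "Mixed failures - retry with exponential backoff");
  } else {
    throw error;
  }
}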
Prevention:
Implement circuit breaker pattern
Cache successful responses
Set up fallback to local models
Scenario: FallbackError in Race Mode
Symptom:
{
  "error": {
    "type": "fallback_failed",
    "message": "all 2 providers failed (race mode)",
    "details": {
      "mode": "race",
      "attempts": 2,
      "failures": [
        {
          "model": "gpt-4",
          "error": "Rate limit exceeded",
          "duration_ms": 850
        },
        {
          "model": "claude-3-opus",
          "error": "Service temporarily unavailable",
          "duration_ms": 1200
        }
      ]
    }
  }
}
Race mode characteristics:
Faster failure detection (the error returns as soon as every attempt has failed)
All requests happen simultaneously
duration_ms shows how long each attempt took
Solutions:
Switch to sequential mode for slower but more controlled fallback (a hedged sketch follows this list)
Implement request throttling
Use cached responses during outages
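If fallback behavior is configurable per request, switching modes might look like the sketch below. The fallback parameter here is an assumption inferred from the "mode": "race" value in the error payload; confirm the actual parameter name in the API reference:
// Hypothetical sketch - the `fallback` request field is an assumption,
// inferred from the error payload's "mode" value; verify before relying on it.
const completion = await openai.chat.completions.create({
  model: "adaptive/auto",
  messages: [{ role: "user", content: "Hello" }],
  fallback: { mode: "sequential" } // assumed parameter name
});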
Model Registry Errors (404)
Scenario: Model Not Found
Symptom:
{
  "error": {
    "type": "model_registry_error",
    "message": "Model 'invalid-model' not found"
  }
}
Common causes:
Typo in model name
Model not available in your region
Model temporarily disabled
Solutions:
Check available models:
curl -H "Authorization: Bearer apk_123456" \
https://api.llmadaptive.uk/v1/models
Use intelligent routing:
// Instead of a specific model
model: "gpt-4" // ❌ May not be available

// Use intelligent routing
model: "adaptive/auto" // ✅ Adaptive chooses the best available model
Verify model support on supported models page
Model Router Errors (503)
Scenario: Unable to Route Request
Symptom:
{
  "error": {
    "type": "model_router_error",
    "message": "Unable to route request to appropriate model"
  }
}
Common causes:
Complex prompt that doesn’t match any model’s capabilities
All matching models are currently unavailable
Routing service temporarily overloaded
Solutions:
Specify a model explicitly:
// Instead of intelligent routing
model: "adaptive/auto" // ❌ May fail to route

// Specify explicitly
model: "gpt-3.5-turbo" // ✅ Direct routing
Simplify your prompt if using intelligent routing
Retry with backoff - often transient
Provider-Specific Errors
Scenario: Pass-Through Provider Errors
Symptom:
{
  "error": {
    "type": "provider_failure",
    "message": "OpenAI API error: Rate limit exceeded",
    "details": {
      "model": "gpt-4",
      "original_error": {
        "type": "rate_limit_exceeded",
        "message": "Rate limit exceeded"
      }
    }
  }
}
Understanding pass-through errors:
Adaptive tried one specific provider (not fallback)
Provider returned an error that Adaptive passes through
Includes original provider error details
Solutions:
Check provider-specific documentation
Implement provider-specific retry logic (see the sketch after this list)
Consider switching providers or using fallback
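A minimal retry sketch keyed off the pass-through details; the error.error access path is an assumption about how your SDK exposes the response body:
// Hedged sketch: retry only when the passed-through provider error is a
// rate limit; other provider failures are re-thrown immediately.
async function createWithProviderRetry(params, maxRetries = 3) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await openai.chat.completions.create(params);
    } catch (error) {
      const original = error.error?.details?.original_error; // assumed access path
      const retryable = original?.type === "rate_limit_exceeded";
      if (!retryable || attempt === maxRetries) throw error;
      await new Promise(resolve => setTimeout(resolve, 1000 * attempt)); // linear backoff
    }
  }
}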
Error Investigation Checklist
When encountering errors:
Capture Context
Check Error Details
Verify Configuration
Review Documentation
Contact Support (if needed), including:
The request_id (a capture sketch follows this checklist)
Error reproduction steps
A redacted request/response
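To capture a request id for a support ticket, you can read the raw response. A sketch using openai-node's .withResponse() helper; the x-request-id header name is an assumption, so also check error bodies for a request_id field:
// Hedged sketch: capture the request id for support tickets.
// The x-request-id header name is an assumption - verify against real responses.
const { data: completion, response } = await openai.chat.completions
  .create({
    model: "adaptive/auto",
    messages: [{ role: "user", content: "Hello" }]
  })
  .withResponse();

console.log("Request ID:", response.headers.get("x-request-id"));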
Performance Issues
Problem: Responses taking longer than expected.
Diagnostic steps:
Check routing decisions:
const completion = await openai.chat.completions.create({
  model: "adaptive/auto",
  messages: [ ... ]
});
console.log("Selected model:", completion.model);
Optimize with cost_bias:
// Prefer faster, cheaper models
const completion = await openai.chat.completions.create({
  model: "adaptive/auto",
  messages: [ ... ],
  cost_bias: 0.2 // 0 = cheapest/fastest, 1 = best quality
});
Use provider constraints for speed:
// Route only to fast providers
const completion = await openai.chat.completions.create({
  model: "adaptive/auto",
  messages: [ ... ],
  provider_constraint: ["groq", "gemini"] // Fast providers
});
Problem: Network latency issues.
Solutions:
Check your network:
# Test connectivity
ping llmadaptive.uk
# Test TLS handshake
curl -w "@curl-format.txt" -o /dev/null https://api.llmadaptive.uk/v1/
Implement timeout handling:
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), 30000); // 30s timeout

try {
  const completion = await openai.chat.completions.create({
    model: "adaptive/auto",
    messages: [ ... ]
  }, {
    signal: controller.signal
  });
} catch (error) {
  if (error.name === 'AbortError') {
    console.log('Request timed out');
  }
} finally {
  clearTimeout(timeoutId);
}
Use connection pooling:
import https from 'https';

const agent = new https.Agent({
  keepAlive: true,
  maxSockets: 10
});

const openai = new OpenAI({
  apiKey: process.env.ADAPTIVE_API_KEY,
  baseURL: 'https://api.llmadaptive.uk/v1',
  httpAgent: agent
});
Development Environment Issues
CORS Errors
Problem: Cross-origin resource sharing errors when calling the API from a browser.
Solutions:
Never call API directly from browser:
// ❌ Wrong - exposes your API key in the browser
// const completion = await openai.chat.completions.create({...});

// ✅ Correct - route through your backend
const response = await fetch('/api/chat', {
  method: 'POST',
  body: JSON.stringify({ message })
});
Set up proxy in development:
// Next.js API route
// pages/api/chat.js
export default async function handler(req, res) {
  const completion = await openai.chat.completions.create({
    model: "adaptive/auto",
    messages: req.body.messages
  });
  res.json({ response: completion.choices[0].message.content });
}
Configure CORS for your backend:
// Express.js
app.use(cors({
  origin: ['http://localhost:3000', 'https://yourdomain.com'],
  credentials: true
}));
TypeScript Compilation Errors
Problem: TypeScript errors with the Adaptive integration.
Solutions:
Install correct types:
npm install --save-dev @types/node
npm install openai # Latest version includes types
Type assertion for model parameter:
const completion = await openai.chat.completions.create({
  model: "adaptive/auto" as any, // Type assertion
  messages: [ ... ]
});
Create custom types if needed:
interface AdaptiveCompletion extends ChatCompletion {
  provider: string;
}
Problem: ES modules vs CommonJS issues.
Solutions:
Use correct imports:
// ES modules
import OpenAI from 'openai';

// CommonJS
const OpenAI = require('openai');
Package.json configuration:
{
  "type": "module",
  "dependencies": {
    "openai": "^4.0.0"
  }
}
Node.js version compatibility:
# Check Node.js version
node --version
# Adaptive requires Node.js 18+
# Update if necessary
Getting Help
When reporting issues, please include:
Environment Details
# System info
node --version
npm --version
# Package versions
npm list openai
npm list @langchain/openai
Request Details
// Sanitized request (remove API key)
{
  "model": "adaptive/auto",
  "messages": [ ... ],
  "provider_constraint": [ ... ],
  "cost_bias": 0.5
}
Error Information
console . log ( "Error status:" , error . status );
console . log ( "Error message:" , error . message );
console . log ( "Error stack:" , error . stack );
Network Diagnostics
# Test connectivity
curl -I https://api.llmadaptive.uk/v1/
# DNS resolution
nslookup llmadaptive.uk
Support Channels
Documentation: Check our comprehensive guides and API reference for solutions
GitHub Issues: Report bugs and request features on our GitHub repository
Discord Community: Get help from the community and Adaptive team members
Best Practices for Debugging
Start with Simple Requests
Test basic functionality first:
const simple = await openai.chat.completions.create({
  model: "adaptive/auto",
  messages: [{ role: "user", content: "Hello" }]
});
Enable Verbose Logging
Add detailed logging to understand what's happening:
console.log("Request:", JSON.stringify(requestData, null, 2));
console.log("Response:", JSON.stringify(response, null, 2));
Test with curl
Verify API access outside your application:
curl -X POST https://api.llmadaptive.uk/v1/chat/completions \
  -H "Authorization: Bearer apk_123456" \
  -H "Content-Type: application/json" \
  -d '{"model":"adaptive/auto","messages":[{"role":"user","content":"test"}]}'
Isolate the Problem
Systematically narrow down the issue:
Test different messages
Try different parameters
Test in different environments
Compare with working examples
Complete Error Handling Example
Here’s a production-ready error handling implementation:
class AdaptiveClient {
  constructor(apiKey) {
    this.openai = new OpenAI({
      apiKey: apiKey,
      baseURL: 'https://api.llmadaptive.uk/v1'
    });
  }

  async createCompletion(params, retries = 3) {
    for (let attempt = 1; attempt <= retries; attempt++) {
      try {
        const completion = await this.openai.chat.completions.create({
          model: "adaptive/auto",
          ...params
        });

        // Log success metrics
        console.log(`✅ Success: ${completion.usage.total_tokens} tokens`);
        return completion;
      } catch (error) {
        // Handle specific errors
        if (error.status === 401) {
          throw new Error('Invalid API key - check your credentials');
        }

        if (error.status === 429) {
          const delay = Math.min(1000 * Math.pow(2, attempt), 10000);
          console.log(`⚠️ Rate limited, retrying in ${delay}ms (attempt ${attempt}/${retries})`);
          if (attempt < retries) {
            await new Promise(resolve => setTimeout(resolve, delay));
            continue;
          }
          throw new Error('Rate limit exceeded - reduce request frequency');
        }

        if (error.status === 400) {
          throw new Error(`Invalid request: ${error.message}`);
        }

        if (error.status >= 500) {
          const delay = 1000 * attempt;
          console.log(`🔄 Server error, retrying in ${delay}ms (attempt ${attempt}/${retries})`);
          if (attempt < retries) {
            await new Promise(resolve => setTimeout(resolve, delay));
            continue;
          }
          throw new Error('Server error - try again later');
        }

        // Unexpected error
        throw new Error(`Unexpected error: ${error.message}`);
      }
    }
  }
}
// Usage example
const client = new AdaptiveClient(process.env.ADAPTIVE_API_KEY);

try {
  const response = await client.createCompletion({
    messages: [{ role: "user", content: "Hello!" }],
    model_router: {
      cost_bias: 0.3,
      models: ["openai/gpt-5-mini", "anthropic/claude-sonnet-4-5"]
    }
  });
  console.log("Response:", response.choices[0].message.content);
} catch (error) {
  console.error("Failed to get completion:", error.message);
}
FAQ
Why am I not getting responses from certain providers?
Check your model_router.models configuration. Ensure the providers you want are included and your cost_bias setting allows for the provider selection you expect.
How do I know which provider was selected?
Check the provider field in the response:
console.log("Selected provider:", completion.provider);
console.log("Model used:", completion.model);
Can I force a specific model?
Use the model_router.models array with specific model names:
model_router: {
  models: [
    "openai/gpt-5-mini"
  ]
}
Why are my costs higher than expected?
Check your cost_bias setting. A higher value (closer to 1) prioritizes quality over cost. Set it to 0.0-0.3 for maximum cost savings.
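For instance, using the model_router form from the complete example above:
// Leans toward cheaper models; 0 = cheapest, 1 = best quality
const completion = await openai.chat.completions.create({
  model: "adaptive/auto",
  messages: [{ role: "user", content: "Summarize this article" }],
  model_router: {
    cost_bias: 0.2
  }
});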
How do I disable semantic caching?
Set semantic cache to disabled in your request:
semantic_cache: {
  enabled: false
}