Get started with Adaptive by changing one line of code. No complex setup required.

Step 1: Get Your API Key

1. Sign Up: create a free account to get started.
2. Generate Key: generate your API key from the dashboard.
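Keep the key out of your source code. As a minimal sketch (assuming you export it as an environment variable named ADAPTIVE_API_KEY, the same name used in the error-handling example below), you can fail fast when it is missing:

// Hypothetical helper: reads ADAPTIVE_API_KEY from the environment and
// throws early so requests never go out unauthenticated.
function getAdaptiveApiKey(): string {
  const key = process.env.ADAPTIVE_API_KEY;
  if (!key) {
    throw new Error('ADAPTIVE_API_KEY is not set; generate a key in the dashboard first.');
  }
  return key;
}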

Step 2: Install SDK (Optional)

The examples on this page use JavaScript/Node.js; the same steps work from Python or plain cURL.

npm install openai

Step 3: Make Your First Request

Choose your preferred language and framework:
  • OpenAI SDK (used in the example below)
  • Anthropic SDK
  • Gemini SDK
  • Vercel AI SDK
  • LangChain
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'your-adaptive-api-key',
  baseURL: 'https://api.llmadaptive.uk/v1'
});

const response = await client.chat.completions.create({
  model: 'adaptive/auto', // enables intelligent routing (leaving model empty also works)
  messages: [{ role: 'user', content: 'Hello!' }]
});

console.log(response.choices[0].message.content);
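If you would rather not install an SDK, the same request can go through a plain fetch call against the OpenAI-compatible endpoint. This is a sketch: the request and response fields simply mirror the OpenAI chat completions format used above.

// Minimal sketch: same request without an SDK, using the OpenAI-compatible
// /v1/chat/completions endpoint. Field names follow the OpenAI format.
const res = await fetch('https://api.llmadaptive.uk/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.ADAPTIVE_API_KEY}`,
  },
  body: JSON.stringify({
    model: 'adaptive/auto',
    messages: [{ role: 'user', content: 'Hello!' }],
  }),
});

const data = await res.json();
console.log(data.choices[0].message.content);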

Error Handling

Always implement proper error handling in production. Adaptive provides detailed error information to help you build resilient applications.
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.ADAPTIVE_API_KEY,
  baseURL: 'https://api.llmadaptive.uk/v1'
});

async function chatWithRetry(message: string, maxRetries = 3) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      const response = await client.chat.completions.create({
        model: 'adaptive/auto',
        messages: [{ role: 'user', content: message }]
      });

      return response.choices[0].message.content;

    } catch (error: any) {
      console.error(`Attempt ${attempt} failed:`, error.message);

      // Check for FallbackError (unique to Adaptive)
      if (error.response?.data?.error?.type === 'fallback_failed') {
        const failures = error.response.data.error.details.failures;
        console.log('Provider failures:', failures.map(f => ({
          provider: f.provider,
          model: f.model,
          error: f.error,
          duration: f.duration_ms
        })));
      }

      if (attempt === maxRetries) throw error;

      // Exponential backoff
      await new Promise(resolve =>
        setTimeout(resolve, Math.pow(2, attempt) * 1000)
      );
    }
  }
}

// Usage
try {
  const result = await chatWithRetry('Explain quantum computing');
  console.log(result);
} catch (error) {
  console.error('All retries failed:', error);
  // Implement fallback strategy (cached response, default message, etc.)
}
Production Tip: Always log the request_id from error responses for debugging. For comprehensive error handling patterns, see the Error Handling Best Practices guide.
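As a small sketch of that tip, you might pull the request ID out of a failed response like this. Where request_id actually lives in the payload is an assumption here; inspect a real error response from Adaptive to confirm the field name and location.

// Hypothetical helper: extract a request ID from a failed response for
// debugging and support tickets. The lookup paths below are assumptions.
function logRequestId(error: any) {
  const requestId =
    error.response?.data?.request_id ??
    error.response?.headers?.['x-request-id'];
  if (requestId) {
    console.error('Failed request_id:', requestId);
  }
}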

Key Features

Intelligent Routing

Set model to 'adaptive/auto' (or leave it empty) and let our AI choose the optimal provider for your request

Cost Savings

Save 60-90% on AI costs with automatic model selection

6+ Providers

Access OpenAI, Anthropic, Google, Groq, DeepSeek, and Grok

Drop-in Replacement

Works with existing OpenAI and Anthropic SDK code
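As a sketch of the drop-in claim, pointing the Anthropic SDK at Adaptive might look like the snippet below. The base URL and the 'adaptive/auto' model name are assumptions carried over from the OpenAI example above; check the Anthropic-compatibility details in the docs for the exact values.

import Anthropic from '@anthropic-ai/sdk';

// Assumption: the same base URL accepts Anthropic-style requests.
const anthropic = new Anthropic({
  apiKey: process.env.ADAPTIVE_API_KEY,
  baseURL: 'https://api.llmadaptive.uk/v1',
});

const msg = await anthropic.messages.create({
  model: 'adaptive/auto', // assumed to trigger intelligent routing here too
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(msg.content);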

Example Response

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-5-nano",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "Hello! I'm ready to help you."
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 5,
    "completion_tokens": 10,
    "total_tokens": 15
  }
}
Adaptive returns standard OpenAI- or Anthropic-compatible responses.
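Because the response follows the OpenAI shape, token accounting comes straight off the usage block. A small sketch for logging per-request usage, with field names taken from the example response above:

// Log which model handled the request and how many tokens it used.
// Field names match the OpenAI-compatible response shown above.
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

function logUsage(response: { model: string; usage?: Usage }) {
  console.log(`model=${response.model}`, response.usage ?? 'usage not reported');
}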

Testing Your Integration

1. Send Test Request: run your code with a simple message like “Hello!” to verify the connection (a minimal smoke test is sketched below).
2. Check Response: confirm you receive a response and check the provider field to see which model was selected.
3. Monitor Dashboard: view request logs and analytics in your Adaptive dashboard.
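A minimal smoke test along those lines is sketched here. Note that the provider field is not part of the standard OpenAI response type and the example response above only shows model, so the code logs both and reads provider defensively; keep whichever your responses actually contain.

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.ADAPTIVE_API_KEY,
  baseURL: 'https://api.llmadaptive.uk/v1',
});

// Smoke test: one tiny request, then log what came back and who served it.
const response = await client.chat.completions.create({
  model: 'adaptive/auto',
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log('reply:', response.choices[0].message.content);
console.log('served by model:', response.model);
// 'provider' is an assumed field name; confirm it against your own responses.
console.log('provider:', (response as any).provider ?? 'not reported');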
