GET /v1/models

Overview

The Models API provides access to Adaptive’s comprehensive model registry, which contains detailed information about available LLM models including pricing, capabilities, context limits, and provider details. Use this API to:
  • Discover available models across all providers
  • Get detailed pricing and capability information
  • Filter models by provider, capability, cost, and modality
  • Retrieve specific model details for integration

Registry Model System

Adaptive maintains a centralized Model Registry that tracks comprehensive information about LLM models from multiple providers (OpenAI, Anthropic, Google, DeepSeek, Groq, and more).

What is a Registry Model?

A Registry Model is a comprehensive data structure containing:
  • Identity: Provider, model name, OpenRouter ID
  • Pricing: Input/output token costs, per-request costs
  • Capabilities: Context length, supported parameters, tool calling support
  • Architecture: Modality, tokenizer, instruction format
  • Provider Info: Top provider configuration, available endpoints
  • Metadata: Display name, description, timestamps

How the Registry Works

  1. Centralized Data Source: The registry service maintains up-to-date model information
  2. Automatic Lookups: When you specify a provider or model, Adaptive queries the registry
  3. Auto-Fill: Known models automatically get pricing and capability data filled in
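Because of the auto-fill step, a registry entry arrives with pricing and capability data already populated, so no extra lookups are needed. A minimal sketch of reading those fields from a single entry (trimmed from the sample response on this page; field names follow the Response Schema section):

```python
from decimal import Decimal

# A registry entry as returned by GET /v1/models (trimmed sample).
entry = {
    "author": "openai",
    "model_name": "gpt-5-mini",
    "context_length": 128000,
    "pricing": {"prompt_cost": "0.00015", "completion_cost": "0.0006"},
    "supported_parameters": [{"parameter_name": "tools"}],
}

# Auto-filled fields are ready to use directly.
prompt_cost = Decimal(entry["pricing"]["prompt_cost"])
supports_tools = any(
    p["parameter_name"] == "tools" for p in entry["supported_parameters"]
)
print(entry["model_name"], entry["context_length"], prompt_cost, supports_tools)
```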

Endpoints

List All Models

provider
string
Optional provider filter (e.g., “openai”, “anthropic”, “google”)

Advanced Filtering

The Models API supports comprehensive filtering with repeatable query parameters:
author
string[]
Filter by model author (repeatable). Example: ?author=openai&author=anthropic
model_name
string[]
Filter by model name (repeatable). Example: ?model_name=gpt-4&model_name=claude-3
input_modality
string[]
Filter by input modality (repeatable). Example: ?input_modality=text&input_modality=image
output_modality
string[]
Filter by output modality (repeatable). Example: ?output_modality=text
min_context_length
integer
Filter by minimum context length. Example: ?min_context_length=128000
max_prompt_cost
string
Filter by maximum prompt cost. Example: ?max_prompt_cost=0.00001
supported_param
string[]
Filter by required parameters (repeatable). Example: ?supported_param=tools&supported_param=vision
status
integer
Filter by endpoint status (0=active). Example: ?status=0
quantization
string[]
Filter by model quantization (repeatable). Example: ?quantization=fp16
# Find vision-capable OpenAI models with large context
curl "https://api.llmadaptive.uk/v1/models?author=openai&supported_param=vision&min_context_length=100000" \
  -H "Authorization: Bearer apk_123456"

# Find cost-effective models with multimodal input
curl "https://api.llmadaptive.uk/v1/models?input_modality=text&input_modality=image&max_prompt_cost=0.00001" \
  -H "Authorization: Bearer apk_123456"

# Find active models from multiple providers with tool support
curl "https://api.llmadaptive.uk/v1/models?provider=openai&provider=anthropic&supported_param=tools&status=0" \
  -H "Authorization: Bearer apk_123456"
Query Parameter Syntax: For multiple values, repeat the parameter name:
  • ✅ Correct: ?author=openai&author=anthropic
  • ❌ Incorrect: ?author=openai,anthropic (comma-separated not supported)
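In Python, the standard library produces this repeated-parameter form automatically when you pass a list of values with `doseq=True` (the `requests` library does the same when a params value is a list); a small sketch:

```python
from urllib.parse import urlencode

# doseq=True repeats the key once per value, which is the
# form the Models API expects for multi-value filters.
query = urlencode(
    {"author": ["openai", "anthropic"], "supported_param": ["tools"]},
    doseq=True,
)
print(query)  # author=openai&author=anthropic&supported_param=tools
```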
A basic, unfiltered request returns every model in the registry:
curl https://api.llmadaptive.uk/v1/models \
  -H "Authorization: Bearer apk_123456"
[
  {
    "id": 123,
    "author": "openai",
    "model_name": "gpt-5-mini",
    "display_name": "GPT-5 Mini",
    "description": "Affordable and intelligent small model for fast, lightweight tasks",
    "context_length": 128000,
    "pricing": {
      "prompt_cost": "0.00015",
      "completion_cost": "0.0006",
      "request_cost": "0",
      "image_cost": "0",
      "web_search_cost": "0",
      "internal_reasoning_cost": "0"
    },
    "architecture": {
      "modality": "text",
      "tokenizer": "cl100k_base",
      "instruct_type": "chat",
      "modalities": [
        {
          "modality_type": "input",
          "modality_value": "text"
        },
        {
          "modality_type": "output",
          "modality_value": "text"
        }
      ]
    },
    "top_provider": {
      "context_length": 128000,
      "max_completion_tokens": 16384,
      "is_moderated": "true"
    },
    "supported_parameters": [
      {
        "parameter_name": "temperature"
      },
      {
        "parameter_name": "top_p"
      },
      {
        "parameter_name": "max_tokens"
      },
      {
        "parameter_name": "tools"
      }
    ],
    "default_parameters": {
      "parameters": {
        "temperature": 1.0,
        "top_p": 1.0,
        "max_tokens": 4096
      }
    },
    "providers": [
      {
        "name": "openai",
        "endpoint_model_name": "gpt-5-mini",
        "context_length": 128000,
        "provider_name": "OpenAI",
        "tag": "openai",
        "quantization": "",
        "max_completion_tokens": 16384,
        "max_prompt_tokens": 128000,
        "status": 0,
        "uptime_last_30m": "99.9%",
        "supports_implicit_caching": "true",
        "is_zdr": "false",
        "pricing": {
          "prompt_cost": "0.00015",
          "completion_cost": "0.0006",
          "request_cost": "0",
          "image_cost": "0.2890",
          "input_cache_read_cost": "0",
          "input_cache_write_cost": "0"
        }
      }
    ]
  }
]

Get Model by Name

id
string
required
Model identifier (e.g., “gpt-5-mini”, “claude-sonnet-4-5”)
curl https://api.llmadaptive.uk/v1/models/gpt-5-mini \
  -H "Authorization: Bearer apk_123456"
{
  "id": 123,
  "author": "openai",
  "model_name": "gpt-5-mini",
  "display_name": "GPT-5 Mini",
  "description": "Affordable and intelligent small model for fast, lightweight tasks",
  "context_length": 128000,
  "pricing": {
    "prompt_cost": "0.00015",
    "completion_cost": "0.0006",
    "request_cost": "0",
    "image_cost": "0",
    "web_search_cost": "0",
    "internal_reasoning_cost": "0"
  },
  "architecture": {
    "modality": "text",
    "tokenizer": "cl100k_base",
    "instruct_type": "chat",
    "modalities": [
      {
        "modality_type": "input",
        "modality_value": "text"
      },
      {
        "modality_type": "output",
        "modality_value": "text"
      }
    ]
  },
  "top_provider": {
    "context_length": 128000,
    "max_completion_tokens": 16384,
    "is_moderated": "true"
  },
  "supported_parameters": [
    {
      "parameter_name": "temperature"
    },
    {
      "parameter_name": "top_p"
    },
    {
      "parameter_name": "max_tokens"
    },
    {
      "parameter_name": "tools"
    }
  ],
  "default_parameters": {
    "parameters": {
      "temperature": 1.0,
      "top_p": 1.0,
      "max_tokens": 4096
    }
  },
  "providers": [
    {
      "name": "openai",
      "endpoint_model_name": "gpt-5-mini",
      "context_length": 128000,
      "provider_name": "OpenAI",
      "tag": "openai",
      "quantization": "",
      "max_completion_tokens": 16384,
      "max_prompt_tokens": 128000,
      "status": 0,
      "uptime_last_30m": "99.9%",
      "supports_implicit_caching": "true",
      "is_zdr": "false",
      "pricing": {
        "prompt_cost": "0.00015",
        "completion_cost": "0.0006",
        "request_cost": "0",
        "image_cost": "0.2890",
        "input_cache_read_cost": "0",
        "input_cache_write_cost": "0"
      }
    }
  ]
}

Response Schema

RegistryModel Object

Field | Type | Description
id | integer | Database ID (internal use)
author | string | Model author/organization (e.g., “openai”, “anthropic”)
model_name | string | Model identifier for API calls
display_name | string | Human-readable model name
description | string | Model description and use cases
context_length | integer | Maximum context window size in tokens
pricing | object | Pricing information (see below)
architecture | object | Model architecture details (see below)
top_provider | object | Top provider configuration (see below)
supported_parameters | array | Supported API parameters (see below)
default_parameters | object | Default parameter values (see below)
providers | array | Available provider endpoints (see below)

Pricing Object

Field | Type | Description
prompt_cost | string | Cost per input token (USD, string format)
completion_cost | string | Cost per output token (USD, string format)
request_cost | string | Cost per request (optional)
image_cost | string | Cost per image (optional)
web_search_cost | string | Cost for web search (optional)
internal_reasoning_cost | string | Cost for internal reasoning tokens (optional)
Note: Pricing is in string format to preserve precision. Multiply by 1M for cost per million tokens.
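For example, converting the string costs to a per-million-token figure with Decimal, so no precision is lost (values from the sample response):

```python
from decimal import Decimal

pricing = {"prompt_cost": "0.00015", "completion_cost": "0.0006"}

# Costs are per token; multiply by 1M for cost per million tokens.
prompt_per_million = Decimal(pricing["prompt_cost"]) * 1_000_000
completion_per_million = Decimal(pricing["completion_cost"]) * 1_000_000
print(prompt_per_million, completion_per_million)
```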

Architecture Object

Field | Type | Description
modality | string | Primary modality (e.g., “text”, “multimodal”)
tokenizer | string | Tokenizer used (e.g., “cl100k_base”, “o200k_base”)
instruct_type | string | Instruction format (e.g., “chat”, null)
modalities | array | Supported input/output modalities (see below)

ArchitectureModality Object

Field | Type | Description
modality_type | string | “input” or “output”
modality_value | string | Modality value (e.g., “text”, “image”)

TopProvider Object

Field | Type | Description
context_length | integer | Provider’s context limit
max_completion_tokens | integer | Maximum output tokens
is_moderated | string | Whether content is moderated (“true” or “false”)

ModelSupportedParameter Object

Field | Type | Description
parameter_name | string | Name of supported parameter (e.g., “temperature”, “tools”)

ModelDefaultParameters Object

Field | Type | Description
parameters | object | Default parameter values (see DefaultParametersValues below)

DefaultParametersValues Object

Contains strongly typed default parameter values including sampling, penalty, token, and control parameters.
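One way a client might apply these defaults is to merge them under any explicitly supplied parameters; a sketch (the merge strategy here is an assumption, not part of the API):

```python
# default_parameters.parameters from the sample response above.
defaults = {"temperature": 1.0, "top_p": 1.0, "max_tokens": 4096}

# User-supplied values win; anything unset falls back to the default.
user_params = {"temperature": 0.2}
effective = {**defaults, **user_params}
print(effective)  # {'temperature': 0.2, 'top_p': 1.0, 'max_tokens': 4096}
```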

ModelProvider Object

Field | Type | Description
name | string | Provider name
endpoint_model_name | string | Model name at the endpoint
context_length | integer | Context length for this provider
provider_name | string | Human-readable provider name
tag | string | Provider tag/slug
quantization | string | Model quantization (optional)
max_completion_tokens | integer | Max completion tokens
max_prompt_tokens | integer | Max prompt tokens
status | integer | Status code (0 = active)
uptime_last_30m | string | Uptime percentage over the last 30 minutes
supports_implicit_caching | string | Implicit caching support (“true” or “false”)
is_zdr | string | Zero Data Retention support (“true” or “false”)
pricing | object | Provider-specific pricing (see ProviderPricing below)

ProviderPricing Object

Field | Type | Description
prompt_cost | string | Cost per input token
completion_cost | string | Cost per output token
request_cost | string | Cost per request
image_cost | string | Cost per image
image_output_cost | string | Cost per image output
audio_cost | string | Cost per audio input
input_audio_cache_cost | string | Cost for cached input audio
input_cache_read_cost | string | Cost to read from the input cache
input_cache_write_cost | string | Cost to write to the input cache
discount | string | Discount applied
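Because each provider entry carries its own pricing, a client can compare endpoints for the same model. A sketch that picks the cheapest active endpoint by prompt cost (the “azure” entry and its values are hypothetical; field names come from the tables above):

```python
from decimal import Decimal

# Two hypothetical provider entries for the same model
# (illustrative values; only the fields used here are shown).
providers = [
    {"name": "openai", "status": 0, "pricing": {"prompt_cost": "0.00015"}},
    {"name": "azure", "status": 0, "pricing": {"prompt_cost": "0.00012"}},
]

# Keep active endpoints (status == 0), then take the lowest prompt cost.
active = [p for p in providers if p["status"] == 0]
cheapest = min(active, key=lambda p: Decimal(p["pricing"]["prompt_cost"]))
print(cheapest["name"])  # azure
```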

Integration with Other APIs

Use with Chat Completions

Combine with the Chat Completions API for intelligent routing:
import requests

api_key = "apk_123456"  # your Adaptive API key

# 1. Query registry for available models
models_response = requests.get(
    "https://api.llmadaptive.uk/v1/models?provider=openai&provider=anthropic",
    headers={"Authorization": f"Bearer {api_key}"}
)
available_models = models_response.json()

# 2. Use those models in a chat completion with intelligent routing
chat_response = requests.post(
    "https://api.llmadaptive.uk/v1/chat/completions",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "model": "adaptive/auto",  # intelligent routing
        "messages": [{"role": "user", "content": "Hello"}],
        "model_router": {
            "models": [
                # Registry models carry "author", not "provider"
                f"{m['author']}:{m['model_name']}"
                for m in available_models[:3]  # use the top 3 models
            ]
        }
    }
)

Use with Select Model API

Combine with the Select Model API for explicit selection:
import requests

api_key = "apk_123456"  # your Adaptive API key

# 1. Get models from registry with filtering
models_response = requests.get(
    "https://api.llmadaptive.uk/v1/models?supported_param=tools&min_context_length=100000",
    headers={"Authorization": f"Bearer {api_key}"}
)
models = models_response.json()

# 2. Use select-model to choose the best model for the prompt
selection_response = requests.post(
    "https://api.llmadaptive.uk/v1/select-model",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "prompt": "Write a Python function to process CSV files",
        "models": [
            f"{m['author']}/{m['model_name']}"
            for m in models
            # supported_parameters is a list of objects, not strings
            if any(p["parameter_name"] == "tools"
                   for p in m.get("supported_parameters", []))
        ]
    }
)