Overview
The Models API provides access to Adaptive’s comprehensive model registry, which contains detailed information about available LLM models, including pricing, capabilities, context limits, and provider details. Use this API to:
- Discover available models across all providers
- Get detailed pricing and capability information
- Filter models by provider
- Retrieve specific model details for integration
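For example, a minimal discovery call might look like the TypeScript sketch below. The base URL, the /models path, and the Bearer-token auth header are placeholders rather than values taken from this page; substitute your deployment's actual endpoint and credentials.

```typescript
// Minimal sketch: list every model in the registry.
// NOTE: BASE_URL, the /models path, and the auth header are assumptions;
// replace them with your deployment's real values.
const BASE_URL = "https://your-adaptive-host/api"; // placeholder
const API_KEY = process.env.ADAPTIVE_API_KEY ?? "";

async function listModels(): Promise<unknown[]> {
  const res = await fetch(`${BASE_URL}/models`, {
    headers: { Authorization: `Bearer ${API_KEY}` },
  });
  if (!res.ok) throw new Error(`Models API error: ${res.status}`);
  return res.json();
}

listModels().then((models) => console.log(`Registry returned ${models.length} models`));
```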
Registry Model System
Adaptive maintains a centralized Model Registry that tracks comprehensive information about LLM models from multiple providers (OpenAI, Anthropic, Google, DeepSeek, Groq, and more).
What is a Registry Model?
A Registry Model is a comprehensive data structure containing:
- Identity: Provider, model name, OpenRouter ID
- Pricing: Input/output token costs, per-request costs
- Capabilities: Context length, supported parameters, tool calling support
- Architecture: Modality, tokenizer, instruction format
- Provider Info: Top provider configuration, available endpoints
- Metadata: Display name, description, timestamps
How the Registry Works
- Centralized Data Source: The registry service maintains up-to-date model information
- Automatic Lookups: When you specify a provider or model, Adaptive queries the registry
- Auto-Fill: Known models automatically get pricing and capability data filled in
Endpoints
List All Models
Returns all registry models. Supports an optional provider filter (e.g., “openai”, “anthropic”, “google”).
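A sketch of the provider filter is shown below. It reuses the placeholder BASE_URL and API_KEY from the earlier example, and the `provider` query-parameter name is inferred from the description above; confirm both against the full API reference.

```typescript
// Sketch: list models from a single provider.
// BASE_URL, API_KEY, and the `provider` parameter name are assumptions
// carried over from the previous sketch.
async function listModelsByProvider(provider: string): Promise<unknown[]> {
  const url = new URL(`${BASE_URL}/models`);
  url.searchParams.set("provider", provider); // e.g. "openai", "anthropic", "google"
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${API_KEY}` },
  });
  if (!res.ok) throw new Error(`Models API error: ${res.status}`);
  return res.json();
}
```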
Advanced Filtering
The Models API supports comprehensive filtering with repeatable query parameters:
- `author`: Filter by model author (repeatable). Example: `?author=openai&author=anthropic`
- `model_name`: Filter by model name (repeatable). Example: `?model_name=gpt-4&model_name=claude-3`
- `input_modality`: Filter by input modality (repeatable). Example: `?input_modality=text&input_modality=image`
- `output_modality`: Filter by output modality (repeatable). Example: `?output_modality=text`
- `min_context_length`: Filter by minimum context length. Example: `?min_context_length=128000`
- `max_prompt_cost`: Filter by maximum prompt cost. Example: `?max_prompt_cost=0.00001`
- `supported_param`: Filter by required parameters (repeatable). Example: `?supported_param=tools&supported_param=vision`
- `status`: Filter by endpoint status (0 = active). Example: `?status=0`
- `quantization`: Filter by model quantization (repeatable). Example: `?quantization=fp16`
Query Parameter Syntax: For multiple values, repeat the parameter name:
- ✅ Correct: `?author=openai&author=anthropic`
- ❌ Incorrect: `?author=openai,anthropic` (comma-separated values are not supported)
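Because comma-separated values are not supported, repeated parameters are easiest to build with URLSearchParams, which appends each value separately. The parameter names below come from the examples above; only the construction pattern is illustrative.

```typescript
// Sketch: build a repeatable-parameter query string with URLSearchParams.
const params = new URLSearchParams();
["openai", "anthropic"].forEach((author) => params.append("author", author));
params.append("input_modality", "text");
params.append("input_modality", "image");
params.set("min_context_length", "128000");
params.append("supported_param", "tools");

// -> author=openai&author=anthropic&input_modality=text&input_modality=image
//    &min_context_length=128000&supported_param=tools
console.log(params.toString());
```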
Get Model by Name
Returns a single registry model. Takes a model identifier (e.g., “gpt-5-mini”, “claude-sonnet-4-5”).
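A sketch of a by-name lookup follows, again assuming the placeholder BASE_URL, the auth header, and a /models/{name} path shape; adjust to the actual route in the API reference.

```typescript
// Sketch: fetch a single registry entry by model identifier.
// BASE_URL, API_KEY, and the /models/{name} path shape are assumptions.
async function getModel(modelName: string): Promise<unknown | null> {
  const res = await fetch(`${BASE_URL}/models/${encodeURIComponent(modelName)}`, {
    headers: { Authorization: `Bearer ${API_KEY}` },
  });
  if (res.status === 404) return null; // model not in the registry
  if (!res.ok) throw new Error(`Models API error: ${res.status}`);
  return res.json();
}

// getModel("gpt-5-mini").then(console.log);
```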
Response Schema
RegistryModel Object
| Field | Type | Description |
|---|---|---|
| id | integer | Database ID (internal use) |
| author | string | Model author/organization (e.g., “openai”, “anthropic”) |
| model_name | string | Model identifier for API calls |
| display_name | string | Human-readable model name |
| description | string | Model description and use cases |
| context_length | integer | Maximum context window size in tokens |
| pricing | object | Pricing information (see below) |
| architecture | object | Model architecture details (see below) |
| top_provider | object | Top provider configuration (see below) |
| supported_parameters | array | Supported API parameters (see below) |
| default_parameters | object | Default parameter values (see below) |
| providers | array | Available provider endpoints (see below) |
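For reference, the table above translates roughly into the following TypeScript shape. Nested objects are declared loosely here so the sketch stands alone; their full field lists appear in the sections that follow.

```typescript
// Sketch of the RegistryModel shape described above.
interface RegistryModel {
  id: number;                // database ID (internal use)
  author: string;            // e.g. "openai", "anthropic"
  model_name: string;        // identifier used in API calls
  display_name: string;
  description: string;
  context_length: number;    // max context window in tokens
  pricing: Pricing;
  architecture: Architecture;
  top_provider: TopProvider;
  supported_parameters: ModelSupportedParameter[];
  default_parameters: ModelDefaultParameters;
  providers: ModelProvider[];
}

// Loose stand-ins; see the following sections for the documented fields.
type Pricing = Record<string, string | undefined>;
type Architecture = Record<string, unknown>;
type TopProvider = Record<string, unknown>;
type ModelSupportedParameter = { parameter_name: string };
type ModelDefaultParameters = { parameters: Record<string, unknown> };
type ModelProvider = Record<string, unknown>;
```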
Pricing Object
| Field | Type | Description |
|---|---|---|
| prompt_cost | string | Cost per input token (USD, string format) |
| completion_cost | string | Cost per output token (USD, string format) |
| request_cost | string | Cost per request (optional) |
| image_cost | string | Cost per image (optional) |
| web_search_cost | string | Cost for web search (optional) |
| internal_reasoning_cost | string | Cost for internal reasoning tokens (optional) |
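Since costs are serialized as strings, a small helper can estimate per-call spend. parseFloat is adequate for a rough estimate; exact billing reconciliation may warrant a decimal library. The numbers in the usage comment are illustrative only.

```typescript
// Sketch: rough USD cost estimate from the string-typed pricing fields.
interface Pricing {
  prompt_cost: string;      // USD per input token
  completion_cost: string;  // USD per output token
  request_cost?: string;    // USD per request (optional)
}

function estimateCostUsd(pricing: Pricing, promptTokens: number, completionTokens: number): number {
  const prompt = parseFloat(pricing.prompt_cost) * promptTokens;
  const completion = parseFloat(pricing.completion_cost) * completionTokens;
  const perRequest = pricing.request_cost ? parseFloat(pricing.request_cost) : 0;
  return prompt + completion + perRequest;
}

// e.g. estimateCostUsd({ prompt_cost: "0.0000025", completion_cost: "0.00001" }, 1200, 300)
```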
Architecture Object
| Field | Type | Description |
|---|---|---|
| modality | string | Primary modality (e.g., “text”, “multimodal”) |
| tokenizer | string | Tokenizer used (e.g., “cl100k_base”, “o200k_base”) |
| instruct_type | string | Instruction format (e.g., “chat”, null) |
| modalities | array | Supported input/output modalities (see below) |
ArchitectureModality Object
| Field | Type | Description |
|---|---|---|
| modality_type | string | “input” or “output” |
| modality_value | string | Modality value (e.g., “text”, “image”) |
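A typical use of the modalities array is a capability check, for example whether a model accepts image input. A minimal sketch, typing only the fields described above:

```typescript
// Sketch: capability check against architecture.modalities.
interface ArchitectureModality {
  modality_type: "input" | "output";
  modality_value: string; // e.g. "text", "image"
}

function supportsImageInput(modalities: ArchitectureModality[]): boolean {
  return modalities.some(
    (m) => m.modality_type === "input" && m.modality_value === "image",
  );
}
```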
TopProvider Object
| Field | Type | Description |
|---|---|---|
| context_length | integer | Provider’s context limit |
| max_completion_tokens | integer | Maximum output tokens |
| is_moderated | string | Whether content is moderated (“true” or “false”) |
ModelSupportedParameter Object
| Field | Type | Description |
|---|---|---|
| parameter_name | string | Name of supported parameter (e.g., “temperature”, “tools”) |
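Tool-calling support, for instance, can be checked by scanning supported_parameters for the “tools” entry used in the filtering examples above. A minimal sketch:

```typescript
// Sketch: does the model advertise tool calling?
interface ModelSupportedParameter {
  parameter_name: string; // e.g. "temperature", "tools"
}

function supportsTools(params: ModelSupportedParameter[]): boolean {
  return params.some((p) => p.parameter_name === "tools");
}
```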
ModelDefaultParameters Object
| Field | Type | Description |
|---|---|---|
| parameters | object | Default parameter values (see DefaultParametersValues below) |
DefaultParametersValues Object
Contains strongly typed default parameter values, including sampling, penalty, token, and control parameters.
ModelProvider Object
| Field | Type | Description |
|---|---|---|
| name | string | Provider name |
| endpoint_model_name | string | Model name at the endpoint |
| context_length | integer | Context length for this provider |
| provider_name | string | Human-readable provider name |
| tag | string | Provider tag/slug |
| quantization | string | Model quantization (optional) |
| max_completion_tokens | integer | Max completion tokens |
| max_prompt_tokens | integer | Max prompt tokens |
| status | integer | Status code (0 = active) |
| uptime_last_30m | string | Uptime percentage last 30 minutes |
| supports_implicit_caching | string | Implicit caching support (“true” or “false”) |
| is_zdr | string | Zero-downtime routing support (“true” or “false”) |
| pricing | object | Provider-specific pricing (see ProviderPricing below) |
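A common pattern is to narrow a model's providers array to active endpoints (status 0) before routing. A minimal sketch, typing only the fields it touches:

```typescript
// Sketch: keep only active provider endpoints for a model.
interface ModelProvider {
  name: string;
  status: number;          // 0 = active
  uptime_last_30m: string; // uptime percentage as a string
}

function activeProviders(providers: ModelProvider[]): ModelProvider[] {
  return providers.filter((p) => p.status === 0);
}
```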
ProviderPricing Object
| Field | Type | Description |
|---|---|---|
| prompt_cost | string | Cost per input token |
| completion_cost | string | Cost per output token |
| request_cost | string | Cost per request |
| image_cost | string | Cost per image |
| image_output_cost | string | Cost per image output |
| audio_cost | string | Cost per audio |
| input_audio_cache_cost | string | Cost for cached input audio |
| input_cache_read_cost | string | Cost to read from input cache |
| input_cache_write_cost | string | Cost to write to input cache |
| discount | string | Discount applied |
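Provider-specific pricing makes it possible to compare endpoints for the same model. A sketch that picks the endpoint with the lowest prompt cost, again typing only the fields it uses:

```typescript
// Sketch: choose the provider endpoint with the lowest prompt cost.
interface ProviderEndpoint {
  name: string;
  pricing: { prompt_cost: string; completion_cost: string };
}

function cheapestByPromptCost(providers: ProviderEndpoint[]): ProviderEndpoint | undefined {
  return [...providers].sort(
    (a, b) => parseFloat(a.pricing.prompt_cost) - parseFloat(b.pricing.prompt_cost),
  )[0];
}
```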



