Overview
ChatVercel provides access to the Vercel AI Gateway, an OpenAI-compatible API that routes requests to multiple LLM providers, including OpenAI, Anthropic, Google, Meta, Mistral, Cohere, DeepSeek, xAI, and more. The gateway adds features such as rate limiting, caching, and monitoring.
Basic Usage
Configuration
Required Parameters
Model identifier in the format provider/model. Available providers and models:
- OpenAI: openai/gpt-4o, openai/gpt-4.1-mini, openai/gpt-5, openai/o3-mini
- Anthropic: anthropic/claude-sonnet-4.5, anthropic/claude-opus-4.1, anthropic/claude-haiku-4.5
- Google: google/gemini-2.5-flash, google/gemini-2.5-pro
- Meta: meta/llama-4-maverick, meta/llama-4-scout, meta/llama-3.3-70b
- Mistral: mistral/magistral-medium, mistral/mistral-large, mistral/codestral
- DeepSeek: deepseek/deepseek-v3.2-exp, deepseek/deepseek-r1
- xAI: xai/grok-4, xai/grok-3-mini-fast
Model Parameters
- temperature: Sampling temperature (0.0 to 2.0). Controls randomness in responses.
- max_tokens: Maximum number of tokens to generate.
- top_p: Nucleus sampling parameter (0.0 to 1.0).
- Reasoning-model patterns: List of reasoning model patterns that require prompt-based JSON extraction instead of native structured output.
Client Parameters
- api_key: Vercel API key for authentication. Get your API key from the Vercel Dashboard.
- base_url: Vercel AI Gateway endpoint URL.
- timeout: Request timeout in seconds or an httpx.Timeout object.
- max_retries: Maximum number of retries for failed requests.
- default_headers: Additional headers to include in all requests.
- default_query: Additional query parameters for all requests.
- http_client: Custom async HTTP client instance.
Gateway-Specific Parameters
Provider routing options for the AI Gateway. Use this to control which providers are used and in what order.
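The routing options are a plain mapping. A hypothetical sketch of the shape — the key names (`order`, `only`) follow Vercel's AI Gateway documentation, but treat the exact structure ChatVercel expects as an assumption:

```python
# Hypothetical gateway routing options; key names taken from Vercel's
# AI Gateway docs and treated as assumptions here.
provider_options = {
    "gateway": {
        "order": ["anthropic", "openai"],  # try Anthropic first, then OpenAI
        "only": ["anthropic", "openai"],   # never route to any other provider
    }
}
```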
Advanced Usage
Provider Routing
Control which providers handle your requests.
Structured Output
Automatic structured output with provider-specific optimizations. ChatVercel automatically handles different structured output methods:
- OpenAI models: Native JSON schema
- Anthropic models: Prompt-based extraction
- Google models: Gemini-optimized schema
- Reasoning models: Prompt-based extraction
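For the prompt-based cases, extraction boils down to asking the model for JSON and then parsing it out of the reply, which may arrive wrapped in a markdown code fence. A stdlib-only sketch of that parsing step (illustrative only, not ChatVercel's actual implementation):

```python
import json
import re

def extract_json(text: str) -> dict:
    """Pull a JSON object out of a raw reply or a ```json fenced block.

    Non-greedy match keeps this simple; deeply nested objects would need
    a real parser, but this covers the common single-object reply.
    """
    match = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    payload = match.group(1) if match else text
    return json.loads(payload)

reply = 'Here you go:\n```json\n{"title": "Example", "year": 2024}\n```'
print(extract_json(reply))  # {'title': 'Example', 'year': 2024}
```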
Multiple Providers
Access different providers through the same interface.
Reasoning Models
Environment Setup
.env
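A sketch of the relevant entry. The variable name below is an assumption, not a documented default — match whatever name your code actually reads:

```bash
# .env — variable name is an assumption; match the name your code loads
AI_GATEWAY_API_KEY=your-vercel-ai-gateway-key
```

Load it with your usual dotenv tooling and pass it explicitly, e.g. `ChatVercel(api_key=os.environ["AI_GATEWAY_API_KEY"], ...)`.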
Error Handling
Properties
provider
Returns the provider name: "vercel".
name
Returns the model identifier.
Methods
get_client()
Returns an AsyncOpenAI client configured for the Vercel AI Gateway.
ainvoke()
Asynchronously invoke the model with messages.
Parameters
- messages (list[BaseMessage]): List of messages
- output_format (type[T] | None): Optional Pydantic model for structured output
Returns
ChatInvokeCompletion[T] | ChatInvokeCompletion[str] with:
- completion: Response content
- usage: Token usage, including:
  - prompt_tokens: Input tokens
  - completion_tokens: Output tokens
  - total_tokens: Total tokens used
  - prompt_cached_tokens: Cached tokens (when available)
- stop_reason: Completion reason
Gateway Features
Rate Limiting
- Built-in rate limiting across providers
- Automatic request queuing
- Configurable limits per model
Caching
- Response caching for repeated requests
- Reduced latency and costs
- Automatic cache invalidation
Monitoring
- Request tracing and analytics
- Performance metrics
- Error tracking
- Usage statistics
Provider Fallback
- Automatic fallback to alternative providers
- High availability
- Load balancing
Schema Optimization
Provider-Specific Handling
ChatVercel automatically optimizes schemas for different providers:
Gemini Models:
- Removes additionalProperties
- Resolves $ref references
- Handles empty object types
- Cleans unsupported properties
Anthropic Models:
- Prompt-based JSON extraction
- Custom schema instructions
- Markdown code block parsing
Reasoning Models:
- Prompt-based extraction
- No native structured output
- JSON validation and cleanup
Supported Models
The implementation supports 150+ models across providers:
- OpenAI: GPT-4o, GPT-5, o3-mini, o4-mini
- Anthropic: Claude Sonnet 4.5, Opus 4.1, Haiku 4.5
- Google: Gemini 2.5 Flash, Gemini 2.5 Pro
- Meta: Llama 4 Maverick, Llama 4 Scout, Llama 3.3
- Mistral: Magistral, Mistral Large, Codestral
- DeepSeek: DeepSeek v3.2, DeepSeek R1
- xAI: Grok 4, Grok 3 Mini
- Cohere: Command A, Command R+
- Amazon: Nova Pro, Nova Lite
- And many more…
Benefits
Unified Interface
- Single API for multiple providers
- Consistent error handling
- Standardized token counting
Cost Optimization
- Route to cheapest available provider
- Automatic caching reduces costs
- Pay only for what you use
Reliability
- Built-in retries and fallbacks
- High availability
- Enterprise-grade infrastructure
Flexibility
- Easy provider switching
- A/B testing different models
- Multi-provider redundancy
Related
- ChatBrowserUse - Recommended provider
- ChatOpenAI
- ChatAnthropic
- ChatGoogle