Overview
ChatAzureOpenAI provides integration with Azure OpenAI Service, supporting GPT-4, GPT-5, and other OpenAI models through Microsoft Azure’s enterprise-grade infrastructure. It includes support for both the Chat Completions API and the newer Responses API.

Basic Usage
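A minimal usage sketch, assuming ChatAzureOpenAI is importable from browser_use.llm and that Agent accepts an llm argument (verify both against your installed browser-use version):

```python
# Minimal sketch: construct the model and hand it to an Agent.
# Import paths are assumptions based on the browser-use package layout.
from browser_use import Agent
from browser_use.llm import ChatAzureOpenAI

llm = ChatAzureOpenAI(
    model="gpt-4o",
    api_key="your-api-key",  # or rely on AZURE_OPENAI_API_KEY
    azure_endpoint="https://your-resource.openai.azure.com",
)

agent = Agent(task="Summarize the Azure OpenAI pricing page", llm=llm)
```

If the environment variables described below are set, both api_key and azure_endpoint can be omitted.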
Configuration
Required Parameters
Azure OpenAI model deployment name. Common options:
- gpt-4o: Latest GPT-4 optimized model
- gpt-4-turbo: High-performance GPT-4
- gpt-4.1-mini: Fast and cost-effective
- gpt-5, gpt-5-mini, gpt-5-nano: Next-generation models
- gpt-5.1-codex-mini, gpt-5.1-codex-max: Codex models (require the Responses API)
Azure-Specific Parameters
Azure OpenAI API key. Falls back to the AZURE_OPENAI_KEY or AZURE_OPENAI_API_KEY environment variable. Get your API key from the Azure Portal.
Your Azure OpenAI resource endpoint. Falls back to the AZURE_OPENAI_ENDPOINT environment variable. Example: https://your-resource.openai.azure.com
Your Azure OpenAI deployment name. Falls back to the AZURE_OPENAI_DEPLOYMENT environment variable.
Azure OpenAI API version. Use 2025-03-01-preview or later for Responses API support.
Azure Active Directory token for authentication (alternative to an API key).
Token provider function for dynamic Azure AD authentication.
Custom base URL (alternative to azure_endpoint).
Model Parameters
Sampling temperature (0.0 to 2.0). Lower values make output more deterministic.
Penalty for token frequency (-2.0 to 2.0).
Reasoning effort for reasoning models. Options: low, medium, high.
Random seed for deterministic output.
Service tier: auto, default, flex, priority, or scale.
Nucleus sampling parameter (0.0 to 1.0).
Maximum tokens in the completion.
Client Parameters
Azure OpenAI organization ID.
Request timeout in seconds.
Maximum number of retries for failed requests.
Additional headers to include in all requests.
Additional query parameters for all requests.
Custom async HTTP client.
Responses API Parameters
Whether to use the Responses API instead of Chat Completions API.
- True: Always use the Responses API
- False: Always use the Chat Completions API
- "auto": Automatically detect based on the model (default)
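A sketch of forcing the Responses API on, assuming the keyword is named use_responses_api (the parameter name is an assumption; the accepted values are the ones listed above):

```python
# Sketch: explicitly selecting the Responses API instead of "auto".
# use_responses_api and api_version keyword names are assumptions.
from browser_use.llm import ChatAzureOpenAI

llm = ChatAzureOpenAI(
    model="gpt-5.1-codex-mini",
    azure_endpoint="https://your-resource.openai.azure.com",
    api_version="2025-03-01-preview",  # Responses API needs this or later
    use_responses_api=True,            # True / False / "auto"
)
```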
The Responses API is required for models like gpt-5.1-codex-mini and computer-use-preview.

Advanced Parameters
Add JSON schema to system prompt for better structured output.
Disable forced structured output even when output_format is provided.
Remove minItems from the JSON schema for provider compatibility.
Remove default values from the JSON schema.
Advanced Usage
With Azure AD Authentication
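A hedged sketch of token-based authentication, assuming ChatAzureOpenAI forwards an azure_ad_token_provider to the underlying AsyncAzureOpenAI client (the keyword name mirrors the openai package; verify it against your installed version):

```python
# Sketch: authenticate with Azure AD instead of an API key.
# azure-identity's get_bearer_token_provider yields short-lived tokens.
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from browser_use.llm import ChatAzureOpenAI

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(),
    "https://cognitiveservices.azure.com/.default",
)

llm = ChatAzureOpenAI(
    model="gpt-4o",
    azure_endpoint="https://your-resource.openai.azure.com",
    azure_ad_token_provider=token_provider,  # no API key needed
)
```

DefaultAzureCredential also picks up managed identities, which pairs well with the managed identity integration noted under Enterprise Security below.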
Using Responses API
The Responses API is used automatically for these models: gpt-5.1-codex-mini, gpt-5.1-codex-max, gpt-5-codex, and computer-use-preview.

Structured Output
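A sketch of structured output via a Pydantic model passed as output_format to ainvoke() (the UserMessage import path is an assumption; the output_format parameter is described in the Methods section):

```python
# Sketch: constrain the completion to a Pydantic schema.
import asyncio

from pydantic import BaseModel
from browser_use.llm import ChatAzureOpenAI
from browser_use.llm.messages import UserMessage  # assumed path

class CityInfo(BaseModel):
    city: str
    population: int

async def main():
    llm = ChatAzureOpenAI(model="gpt-4o")
    result = await llm.ainvoke(
        [UserMessage(content="What is the largest city in France?")],
        output_format=CityInfo,
    )
    print(result.completion)  # a CityInfo instance, not a raw string

asyncio.run(main())
```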
With Custom Headers
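A sketch of attaching extra headers to every request, assuming the keyword is default_headers as in the underlying openai client (the name is an assumption; the capability is the "additional headers" client parameter above):

```python
# Sketch: custom headers on all requests, e.g. for request tracing.
# default_headers keyword name is assumed from the openai client.
from browser_use.llm import ChatAzureOpenAI

llm = ChatAzureOpenAI(
    model="gpt-4o",
    azure_endpoint="https://your-resource.openai.azure.com",
    default_headers={"x-correlation-id": "trace-123"},
)
```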
Environment Setup
.env
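A .env file covering the fallback variables named in the parameter descriptions above:

```bash
# .env — picked up automatically when the corresponding
# constructor arguments are omitted
AZURE_OPENAI_API_KEY=your-api-key
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com
AZURE_OPENAI_DEPLOYMENT=your-deployment-name
```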
Error Handling
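A hedged error-handling sketch: since get_client() returns an AsyncAzureOpenAI instance, the openai package's exception types are a reasonable thing to catch, though the exact types surfaced by browser-use may differ:

```python
# Sketch: catch the openai package's exception hierarchy around ainvoke().
import asyncio

from openai import APIConnectionError, RateLimitError
from browser_use.llm import ChatAzureOpenAI
from browser_use.llm.messages import UserMessage  # assumed path

async def main():
    llm = ChatAzureOpenAI(model="gpt-4o")
    try:
        result = await llm.ainvoke([UserMessage(content="Hello")])
        print(result.completion)
    except RateLimitError:
        print("Rate limited — back off and retry")
    except APIConnectionError as exc:
        print(f"Could not reach the endpoint: {exc}")

asyncio.run(main())
```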
Properties
provider
Returns the provider name: "azure".
name
Returns the model name.

Methods
get_client()
Returns an AsyncAzureOpenAI client instance.
ainvoke()
Asynchronously invoke the model with messages. Automatically routes between the Chat Completions API and the Responses API based on the model.

Parameters
- messages (list[BaseMessage]): List of messages
- output_format (type[T] | None): Optional Pydantic model for structured output

Returns
ChatInvokeCompletion[T] | ChatInvokeCompletion[str] with:
- completion: Response content
- usage: Token usage (includes prompt_cached_tokens when available)
- stop_reason: Finish reason
API Differences
Chat Completions API vs Responses API
| Feature | Chat Completions | Responses API |
|---|---|---|
| Models | Most GPT models | Codex, computer-use |
| API Version | Any | 2025-03-01-preview+ |
| Token Field Names | Standard | input_tokens/output_tokens |
| Response Format | choices[0].message | output_text |
| Auto-Detection | Default | For specific models |
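The "auto" routing in the table can be illustrated with a small sketch (not the library's actual implementation), using the model list from the Using Responses API section:

```python
# Illustrative sketch of "auto" detection between the two APIs.
# The model set comes from this page; the function is hypothetical.
RESPONSES_API_MODELS = {
    "gpt-5.1-codex-mini",
    "gpt-5.1-codex-max",
    "gpt-5-codex",
    "computer-use-preview",
}

def uses_responses_api(model: str, use_responses_api="auto") -> bool:
    """Resolve a use_responses_api setting for a given model."""
    if use_responses_api == "auto":
        return model in RESPONSES_API_MODELS
    return bool(use_responses_api)

print(uses_responses_api("gpt-5-codex"))  # True
print(uses_responses_api("gpt-4o"))       # False
```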
Azure-Specific Features
Enterprise Security
- Azure AD authentication support
- Private network access with VNet
- Managed identity integration
- Compliance certifications
Deployment Options
- Dedicated model deployments
- Custom model fine-tuning
- Multi-region availability
- Provisioned throughput units (PTU)
Monitoring and Logging
- Azure Monitor integration
- Request tracing
- Usage analytics
- Cost management
Related
- ChatBrowserUse - Recommended provider
- ChatOpenAI - Standard OpenAI
- ChatAnthropic