Overview
ChatOpenAI provides integration with OpenAI’s language models, including GPT-4, GPT-4 Turbo, and reasoning models such as o1 and o3.

Basic Usage
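A minimal sketch of wiring ChatOpenAI into an Agent; the browser_use.llm import path and the task string are assumptions for illustration:

```python
import asyncio

from browser_use import Agent
from browser_use.llm import ChatOpenAI  # import path is an assumption

async def main() -> None:
    # Reads OPENAI_API_KEY from the environment by default
    llm = ChatOpenAI(model="gpt-4.1-mini")
    agent = Agent(task="Find the top post on Hacker News", llm=llm)
    await agent.run()

asyncio.run(main())
```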
Configuration
Required Parameters
model: OpenAI model to use. Common options:
- gpt-4.1-mini: Fast and cost-effective
- gpt-4o: Latest GPT-4 optimized model
- gpt-4-turbo: High-performance GPT-4
- o1, o1-pro, o3, o3-mini: Reasoning models
- gpt-5, gpt-5-mini, gpt-5-nano: Next-generation models
Model Parameters
- temperature: Sampling temperature (0.0 to 2.0). Lower values make output more deterministic.
- frequency_penalty: Penalty for token frequency (-2.0 to 2.0). Helps avoid infinite generation loops.
- reasoning_effort: Reasoning effort for reasoning models (o1, o3, etc.). Options: low, medium, high.
- seed: Random seed for deterministic output.
- service_tier: Service tier: auto, default, flex, priority, or scale.
- top_p: Nucleus sampling parameter (0.0 to 1.0).
- max_completion_tokens: Maximum number of tokens in the completion.
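A sketch combining several of these knobs; the keyword names mirror the list above, so verify them against the ChatOpenAI signature:

```python
from browser_use.llm import ChatOpenAI  # import path is an assumption

llm = ChatOpenAI(
    model="gpt-4o",
    temperature=0.2,             # lower = more deterministic
    frequency_penalty=0.3,       # discourage repetition loops
    top_p=0.9,                   # nucleus sampling
    seed=42,                     # reproducible output where supported
    max_completion_tokens=4096,  # cap the completion length
)
```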
Client Parameters
- api_key: OpenAI API key. Defaults to the OPENAI_API_KEY environment variable. Get your API key at platform.openai.com/api-keys.
- organization: OpenAI organization ID.
- project: OpenAI project ID.
- base_url: Custom base URL for OpenAI-compatible APIs.
- timeout: Request timeout in seconds.
- max_retries: Maximum number of retries for failed requests.
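For example, overriding the client defaults explicitly (placeholder credentials):

```python
from browser_use.llm import ChatOpenAI  # import path is an assumption

llm = ChatOpenAI(
    model="gpt-4.1-mini",
    api_key="sk-...",        # falls back to OPENAI_API_KEY if omitted
    organization="org-...",  # optional organization ID
    project="proj-...",      # optional project ID
    timeout=60,              # seconds
    max_retries=3,
)
```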
Advanced Parameters
- Add the JSON schema to the system prompt instead of using response_format.
- Disable forced structured output even when output_format is provided.
- Remove minItems from the JSON schema for provider compatibility.
- Remove default values from the JSON schema for provider compatibility.
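The exact keyword names for these switches are not shown above; the sketch below uses a hypothetical name purely to illustrate the intent, so check the ChatOpenAI signature for the real ones:

```python
from browser_use.llm import ChatOpenAI  # import path is an assumption

# Hypothetical flag name -- illustrative only
llm = ChatOpenAI(
    model="gpt-4.1-mini",
    add_schema_to_system_prompt=True,  # embed the JSON schema in the system prompt
)
```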
Advanced Usage
With Reasoning Models
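A sketch using a reasoning model; temperature and frequency_penalty would be stripped automatically (see Reasoning Models below):

```python
from browser_use.llm import ChatOpenAI  # import path is an assumption

llm = ChatOpenAI(
    model="o3",
    reasoning_effort="high",  # low | medium | high
)
```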
Using OpenAI-Compatible APIs
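Any endpoint that speaks the OpenAI API can be targeted via base_url; the endpoint URL and model name below are placeholders:

```python
from browser_use.llm import ChatOpenAI  # import path is an assumption

llm = ChatOpenAI(
    model="llama-3.1-70b",                  # whatever name the provider expects
    base_url="https://api.example.com/v1",  # placeholder endpoint
    api_key="provider-key",
)
```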
Structured Output
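A sketch of requesting a parsed Pydantic object via output_format; the UserMessage import path is an assumption:

```python
import asyncio

from pydantic import BaseModel

from browser_use.llm import ChatOpenAI, UserMessage  # import paths are assumptions

class Weather(BaseModel):
    city: str
    temperature_celsius: float

async def main() -> None:
    llm = ChatOpenAI(model="gpt-4o")
    result = await llm.ainvoke(
        [UserMessage(content="What is the weather in Zurich?")],
        output_format=Weather,
    )
    print(result.completion)  # a parsed Weather instance

asyncio.run(main())
```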
With Custom Headers and Query Parameters
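The openai SDK’s AsyncOpenAI client accepts default_headers and default_query; whether ChatOpenAI forwards these constructor arguments is an assumption to verify:

```python
from browser_use.llm import ChatOpenAI  # import path is an assumption

llm = ChatOpenAI(
    model="gpt-4.1-mini",
    default_headers={"X-Request-Source": "browser-use"},  # assumption: passed through
    default_query={"api-version": "2024-10-01"},          # assumption: passed through
)
```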
Environment Setup
.env
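A minimal entry (placeholder value):

```
OPENAI_API_KEY=sk-...
```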
Error Handling
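The specific exception types raised by the provider layer are not documented here, so this sketch catches broadly; narrow the except clause once you know the real types:

```python
import asyncio

from browser_use.llm import ChatOpenAI, UserMessage  # import paths are assumptions

async def main() -> None:
    llm = ChatOpenAI(model="gpt-4.1-mini", timeout=30, max_retries=3)
    try:
        result = await llm.ainvoke([UserMessage(content="Hello")])
        print(result.completion)
    except Exception as exc:  # assumption: exact provider error types unknown
        print(f"Model call failed: {exc}")

asyncio.run(main())
```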
Properties
provider
Returns the provider name: "openai".
name
Returns the model name.

Methods
get_client()
Returns an AsyncOpenAI client instance.
ainvoke()
Asynchronously invoke the model with messages.

Parameters
- messages (list[BaseMessage]): List of messages
- output_format (type[T] | None): Optional Pydantic model for structured output
Returns
ChatInvokeCompletion[T] | ChatInvokeCompletion[str] with:
- completion: Response content
- usage: Token usage (includes cached tokens for reasoning models)
- stop_reason: Finish reason (stop, length, content_filter, etc.)
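Putting the signature together, a sketch that reads each field of the returned completion (import paths are assumptions):

```python
import asyncio

from browser_use.llm import ChatOpenAI, UserMessage  # import paths are assumptions

async def main() -> None:
    llm = ChatOpenAI(model="gpt-4.1-mini")
    result = await llm.ainvoke([UserMessage(content="Say hello")])
    print(result.completion)   # response content (str here; parsed model with output_format)
    print(result.usage)        # token usage, incl. cached tokens on reasoning models
    print(result.stop_reason)  # "stop", "length", "content_filter", ...

asyncio.run(main())
```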
Reasoning Models
Reasoning models (o1, o3, etc.) have special behavior:
- No temperature/frequency_penalty: These parameters are automatically removed
- reasoning_effort: Controls computational effort (low, medium, high)
- Token usage: Reasoning tokens are included in completion_tokens
Related
- ChatBrowserUse - Recommended provider
- ChatAnthropic
- ChatGoogle