Overview
ChatAnthropic provides integration with Anthropic’s Claude models, including Claude 3.5 Sonnet, Claude 3 Opus, and other Claude family models.
Basic Usage
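The code example here was stripped during extraction; a minimal sketch of basic usage, assuming the `browser_use.llm` import path and `Agent` wiring shown elsewhere in the Browser Use docs:

```python
import asyncio

from browser_use import Agent
from browser_use.llm import ChatAnthropic


async def main() -> None:
    # Configure the Claude model (requires ANTHROPIC_API_KEY in the environment)
    llm = ChatAnthropic(model="claude-sonnet-4-0", temperature=0.5)

    # Drive a browser agent with the model (illustrative task)
    agent = Agent(task="Find the number of stars of the browser-use repo", llm=llm)
    await agent.run()


asyncio.run(main())
```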
Configuration
Required Parameters
model: Claude model to use. Common options:
- claude-sonnet-4-0: Latest Claude Sonnet 4
- claude-3-5-sonnet-20241022: Specific Claude 3.5 Sonnet version
- claude-3-opus-20240229: Most powerful Claude 3 model
- claude-3-sonnet-20240229: Balanced performance
- claude-3-haiku-20240307: Fast and cost-effective
Model Parameters
max_tokens: Maximum tokens to generate in the response.
temperature: Sampling temperature (0.0 to 1.0). Controls randomness in responses.
top_p: Nucleus sampling parameter (0.0 to 1.0).
seed: Random seed for deterministic output (experimental).
Client Parameters
api_key: Anthropic API key. Defaults to the ANTHROPIC_API_KEY environment variable. Get your API key at console.anthropic.com.
auth_token: Alternative authentication token.
base_url: Custom base URL for the Anthropic API.
timeout: Request timeout in seconds or an httpx.Timeout object.
max_retries: Maximum number of retries for failed requests.
default_headers: Additional headers to include in all requests.
default_query: Additional query parameters for all requests.
http_client: Custom async HTTP client.
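Putting the model and client parameters together, a hedged configuration sketch (the keyword names mirror the Anthropic SDK; their exact spelling in ChatAnthropic is an assumption):

```python
import httpx

from browser_use.llm import ChatAnthropic

llm = ChatAnthropic(
    model="claude-3-5-sonnet-20241022",
    # Model parameters
    max_tokens=4096,
    temperature=0.7,
    top_p=0.95,
    # Client parameters
    api_key="your-api-key",        # or rely on ANTHROPIC_API_KEY
    timeout=httpx.Timeout(60.0),   # seconds, or an httpx.Timeout object
    max_retries=3,
    default_headers={"X-Custom-Header": "value"},
)
```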
Advanced Usage
Structured Output with Tool Calling
ChatAnthropic uses Claude’s tool calling feature to produce structured outputs.
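The structured-output flow can be sketched as follows (the `PersonInfo` model is hypothetical, and the `browser_use.llm.messages` import path is an assumption; `output_format` matches the ainvoke() signature documented under Methods below):

```python
import asyncio

from pydantic import BaseModel

from browser_use.llm import ChatAnthropic
from browser_use.llm.messages import UserMessage  # assumed import path


class PersonInfo(BaseModel):
    # Hypothetical output schema for illustration
    name: str
    age: int


async def main() -> None:
    llm = ChatAnthropic(model="claude-3-5-sonnet-20241022")
    # Passing output_format triggers Claude tool calling under the hood
    response = await llm.ainvoke(
        [UserMessage(content="Extract: John Doe is 25 years old")],
        output_format=PersonInfo,
    )
    print(response.completion)  # a PersonInfo instance


asyncio.run(main())
```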
Prompt Caching
Claude supports prompt caching for frequently used context. Prompt caching can significantly reduce costs for repeated contexts; cache hits are reflected in usage.prompt_cached_tokens.
Using with Custom System Prompts
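A sketch of supplying a custom system prompt, assuming SystemMessage and UserMessage types alongside the other message classes in `browser_use.llm.messages`:

```python
import asyncio

from browser_use.llm import ChatAnthropic
from browser_use.llm.messages import SystemMessage, UserMessage  # assumed paths


async def main() -> None:
    llm = ChatAnthropic(model="claude-3-5-sonnet-20241022")
    response = await llm.ainvoke([
        SystemMessage(content="You are a terse assistant. Answer in one sentence."),
        UserMessage(content="What does prompt caching do?"),
    ])
    print(response.completion)


asyncio.run(main())
```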
Environment Setup
.env
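The .env file contents were stripped during extraction; it would hold the API key (placeholder value shown):

```shell
ANTHROPIC_API_KEY=your-api-key-here
```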
Error Handling
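The original example here was stripped; a generic sketch of wrapping ainvoke() in error handling (the specific exception types are assumptions — Browser Use wraps provider failures, and the underlying Anthropic SDK raises its own error classes, so a broad except is shown):

```python
import asyncio

from browser_use.llm import ChatAnthropic
from browser_use.llm.messages import UserMessage  # assumed import path


async def main() -> None:
    llm = ChatAnthropic(model="claude-3-5-sonnet-20241022", max_retries=2)
    try:
        response = await llm.ainvoke([UserMessage(content="Hello")])
        print(response.completion)
    except Exception as exc:  # e.g. auth, rate-limit, or timeout errors
        print(f"Model call failed: {exc}")


asyncio.run(main())
```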
Properties
provider
Returns the provider name: "anthropic".
name
Returns the model name.
Methods
get_client()
Returns an AsyncAnthropic client instance.
ainvoke()
Asynchronously invoke the model with messages.
Parameters
- messages (list[BaseMessage]): List of messages
- output_format (type[T] | None): Optional Pydantic model for structured output
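A usage sketch for ainvoke() with a plain string completion (message import path assumed; the returned object is described under Returns below):

```python
import asyncio

from browser_use.llm import ChatAnthropic
from browser_use.llm.messages import UserMessage  # assumed import path


async def main() -> None:
    llm = ChatAnthropic(model="claude-3-5-sonnet-20241022")
    result = await llm.ainvoke([UserMessage(content="Say hello")])
    print(result.completion)  # str when no output_format is given
    print(result.usage)       # token usage details


asyncio.run(main())
```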
Returns
ChatInvokeCompletion[T] | ChatInvokeCompletion[str] with:
- completion: Response content (string or structured output)
- usage: Token usage, including:
  - prompt_tokens: Total input tokens (including cached)
  - completion_tokens: Output tokens
  - prompt_cached_tokens: Tokens retrieved from cache
  - prompt_cache_creation_tokens: Tokens written to cache
- stop_reason: Completion reason (end_turn, max_tokens, stop_sequence, tool_use)
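To illustrate how these usage fields relate, a self-contained sketch of the accounting (the dataclass mirrors the documented fields for illustration; it is not the library's actual type):

```python
from dataclasses import dataclass


@dataclass
class Usage:
    # Illustrative mirror of the documented usage fields
    prompt_tokens: int                 # total input tokens, cached included
    completion_tokens: int
    prompt_cached_tokens: int          # tokens retrieved from cache
    prompt_cache_creation_tokens: int  # tokens written to cache


def uncached_prompt_tokens(u: Usage) -> int:
    # Anthropic counts cached tokens inside prompt_tokens,
    # so the freshly processed share is the difference.
    return u.prompt_tokens - u.prompt_cached_tokens


usage = Usage(prompt_tokens=5000, completion_tokens=120,
              prompt_cached_tokens=4200, prompt_cache_creation_tokens=0)
print(uncached_prompt_tokens(usage))  # 800
```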
Token Usage
Claude’s token counting is unique: Anthropic includes cached tokens in the total prompt token count. Check prompt_cached_tokens for actual cache hits.
Model Capabilities
Claude 3.5 Sonnet
- Best balance of intelligence and speed
- 200K token context window
- Strong coding and analysis capabilities
- Native vision support
Claude 3 Opus
- Most powerful Claude model
- Best for complex tasks
- 200K token context window
- Highest accuracy
Claude 3 Haiku
- Fastest and most cost-effective
- Great for simple tasks
- 200K token context window
- Instant responses
Related
- ChatBrowserUse - Recommended provider
- ChatOpenAI
- ChatGoogle