Overview
ChatGoogle provides integration with Google's Gemini models, including Gemini 2.0 Flash, Gemini 2.5 Pro, and the Gemini 3 series with advanced thinking capabilities.
Basic Usage
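A minimal sketch of wiring ChatGoogle into an agent run. The import paths below match recent browser-use releases but may differ in yours, and the task string is purely illustrative:

```python
import asyncio

# Assumed import paths -- verify against your installed browser-use version.
from browser_use import Agent
from browser_use.llm import ChatGoogle

# Reads GOOGLE_API_KEY from the environment by default.
llm = ChatGoogle(model="gemini-flash-latest")

async def main():
    agent = Agent(
        task="Find the current Gemini API pricing page",  # illustrative task
        llm=llm,
    )
    await agent.run()

if __name__ == "__main__":
    asyncio.run(main())
```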
Configuration
Required Parameters
Gemini model to use. Available options:

Latest Models:
- gemini-flash-latest: Latest Flash model (recommended)
- gemini-flash-lite-latest: Lightweight Flash model

Gemini 3:
- gemini-3-pro-preview: Most powerful, with advanced reasoning
- gemini-3-flash-preview: Fast, with thinking capabilities

Gemini 2.5:
- gemini-2.5-pro: High-performance Pro model
- gemini-2.5-flash: Fast and efficient
- gemini-2.5-flash-lite: Lightweight option

Gemini 2.0:
- gemini-2.0-flash: Standard Flash
- gemini-2.0-flash-exp: Experimental Flash
- gemini-2.0-flash-lite-preview-02-05: Lite preview

Gemma:
- gemma-3-27b-it, gemma-3-4b, gemma-3-12b: Open models
- gemma-3n-e2b, gemma-3n-e4b: Nano models
Model Parameters
Sampling temperature (0.0 to 2.0). Controls randomness in responses.
Nucleus sampling parameter (0.0 to 1.0).
Random seed for deterministic output.
Maximum tokens in the response.
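Taken together, the sampling knobs above might be configured like this. The keyword names are inferred from the descriptions (max_output_tokens in particular is an assumption); confirm them against your browser-use version:

```python
# Pass these to the constructor, e.g. ChatGoogle(model=..., **sampling).
sampling = dict(
    temperature=0.5,         # 0.0-2.0; lower = more deterministic
    top_p=0.95,              # nucleus sampling, 0.0-1.0
    seed=42,                 # fixed seed for reproducible runs
    max_output_tokens=4096,  # response-token cap (name assumed)
)
```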
Thinking Configuration
For Gemini 2.5 models, thinking_budget controls thinking tokens:
- -1: Dynamic/auto (default)
- 0: Disable thinking
- > 0: Specific token count for thinking

For Gemini 3 models, thinking_level controls reasoning depth:
- Gemini 3 Pro: low, high
- Gemini 3 Flash: minimal, low, medium, high
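The selection rules read procedurally; here is a plain-Python mirror of them. This is illustrative only — browser-use applies equivalent checks internally:

```python
def validate_thinking(model, thinking_budget=None, thinking_level=None):
    """Illustrative mirror of the thinking-configuration rules above."""
    if model.startswith("gemini-3-pro"):
        levels = {"low", "high"}
    elif model.startswith("gemini-3-flash"):
        if thinking_level is None:
            return {"thinking_budget": -1}  # default when no level is set
        levels = {"minimal", "low", "medium", "high"}
    elif model.startswith("gemini-2.5"):
        # -1 = dynamic (default), 0 = off, >0 = explicit token budget
        return {"thinking_budget": -1 if thinking_budget is None else thinking_budget}
    else:
        return {}  # no thinking controls documented for this model
    if thinking_level not in levels:
        raise ValueError(f"{model} does not support thinking_level={thinking_level!r}")
    return {"thinking_level": thinking_level}
```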
Client Parameters
Google API key. Defaults to the GOOGLE_API_KEY environment variable. Get your free API key at aistudio.google.com/app/apikey.
Whether to use Vertex AI instead of AI Studio.
Google Cloud credentials object for Vertex AI.
Google Cloud project ID for Vertex AI.
Google Cloud region for Vertex AI (e.g., us-central1).
Advanced Parameters
Include system messages in the first user message (for models without system instruction support).
Use native JSON mode. Set to False for prompt-based fallback.
Number of retries for retryable errors.
HTTP status codes to retry on.
Base delay in seconds for exponential backoff.
Maximum delay between retries.
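The retry parameters above combine into a standard exponential-backoff schedule. A minimal sketch — the helper names, default values, and status-code set are illustrative, not browser-use's actual internals:

```python
def backoff_delay(attempt, base_delay=1.0, max_delay=60.0):
    """Delay before retry `attempt` (0-based): base_delay * 2**attempt, capped."""
    return min(base_delay * (2 ** attempt), max_delay)

RETRYABLE_STATUS_CODES = {429, 500, 502, 503, 504}  # illustrative defaults

def should_retry(status_code, attempt, max_retries=3):
    """Retry only retryable status codes, up to max_retries attempts."""
    return status_code in RETRYABLE_STATUS_CODES and attempt < max_retries
```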
Advanced Usage
Gemini 3 with Thinking
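A hedged sketch using the thinking_level parameter documented under Thinking Configuration (keyword names per that section; verify against your browser-use version):

```python
# Pass straight to the constructor, e.g. ChatGoogle(**gemini3_kwargs)
# -- the browser_use.llm import path is an assumption.
gemini3_kwargs = dict(
    model="gemini-3-flash-preview",
    thinking_level="medium",  # Flash accepts minimal/low/medium/high
)
```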
Gemini 2.5 with Dynamic Thinking
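For Gemini 2.5, the equivalent knob is thinking_budget. A sketch with names taken from the Thinking Configuration section (confirm against your installed version):

```python
# Usage: ChatGoogle(**gemini25_kwargs) -- import path assumed.
gemini25_kwargs = dict(
    model="gemini-2.5-flash",
    thinking_budget=-1,  # -1 = dynamic/auto thinking (the default)
)
```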
Structured Output
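A hedged sketch of structured output via output_format (see ainvoke() under Methods). The schema class name and fields are invented for illustration; any Pydantic model can be passed:

```python
from pydantic import BaseModel

class ProductInfo(BaseModel):  # hypothetical schema
    name: str
    price: float

# With an instantiated llm:
#   result = await llm.ainvoke(messages, output_format=ProductInfo)
#   result.completion is then a ProductInfo instance.

# Stand-alone check that the schema parses Gemini-style JSON output:
parsed = ProductInfo.model_validate_json('{"name": "Pixel 9", "price": 799.0}')
```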
Using Vertex AI
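A sketch of routing through Vertex AI, with parameter names taken from the Client Parameters section above; the project ID and region values are illustrative:

```python
# Usage: ChatGoogle(**vertex) -- import path assumed; verify against
# your installed browser-use version.
vertex = dict(
    model="gemini-2.5-pro",
    vertexai=True,             # use Vertex AI instead of AI Studio
    project="my-gcp-project",  # your Google Cloud project ID
    location="us-central1",    # Vertex AI region
)
```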
With Code Execution Tool
Environment Setup
.env
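The API key typically lives in a .env file in the project root (loaded with python-dotenv or exported into the shell):

```
GOOGLE_API_KEY=your-api-key-here
```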
Error Handling
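A hedged sketch of catching failures around an invoke call once the built-in retries are exhausted. The wrapper name is invented, and the broad except is generic Python rather than a browser-use-specific exception type:

```python
import asyncio

async def safe_invoke(llm, messages):
    """Return the completion, or None if the call ultimately fails."""
    try:
        return await llm.ainvoke(messages)
    except Exception as exc:  # raised once built-in retries are exhausted
        print(f"LLM call failed: {exc}")
        return None

# Usage: result = asyncio.run(safe_invoke(llm, messages))
```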
Properties
provider
Returns the provider name: "google".
name
Returns the model name.
Methods
get_client()
Returns a genai.Client instance.
ainvoke()
Asynchronously invoke the model with messages.
Parameters
- messages (list[BaseMessage]): List of messages
- output_format (type[T] | None): Optional Pydantic model for structured output
Returns
ChatInvokeCompletion[T] | ChatInvokeCompletion[str] with:
- completion: Response content
- usage: Token usage, including:
  - prompt_tokens: Input tokens
  - completion_tokens: Output tokens (includes thinking tokens for Gemini 2.5/3)
  - prompt_cached_tokens: Cached tokens
  - prompt_image_tokens: Tokens from images
- stop_reason: Completion reason
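To make the usage shape concrete, here is a plain-Python stand-in with the field names listed above. The real classes ship with browser-use; this mirror is only illustrative:

```python
from dataclasses import dataclass

@dataclass
class UsageSketch:
    prompt_tokens: int
    completion_tokens: int        # includes thinking tokens on Gemini 2.5/3
    prompt_cached_tokens: int = 0
    prompt_image_tokens: int = 0

    @property
    def total_tokens(self):
        return self.prompt_tokens + self.completion_tokens

usage = UsageSketch(prompt_tokens=1200, completion_tokens=350)
```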
Token Usage
Gemini includes thinking tokens in completion token counts.
Thinking Models
Gemini 3 Pro
- Uses thinking_level: low or high
- Best for complex reasoning
- Validates unsupported thinking configurations
Gemini 3 Flash
- Supports all thinking_level values: minimal, low, medium, high
- Defaults to thinking_budget=-1 if no thinking_level is set
- Balances speed and reasoning
Gemini 2.5
- Uses thinking_budget only
- -1 for dynamic thinking (default)
- 0 to disable
- Set a specific token count for controlled thinking
Model Comparison
| Model | Speed | Cost | Thinking | Context |
|---|---|---|---|---|
| Gemini 3 Pro | Medium | High | Advanced | 2M |
| Gemini 3 Flash | Fast | Low | Advanced | 1M |
| Gemini 2.5 Pro | Medium | Medium | Good | 2M |
| Gemini 2.5 Flash | Fast | Low | Good | 1M |
| Gemini 2.0 Flash | Fast | Low | Basic | 1M |
Related
- ChatBrowserUse - Recommended provider
- ChatOpenAI
- ChatAnthropic