

Overview

ChatMistral provides integration with Mistral AI's language models, including Mistral Large, Mistral Medium, and Mistral Small, with optimized schema sanitization for reliable structured outputs.

Basic Usage

from browser_use import Agent, ChatMistral
import asyncio

async def main():
    llm = ChatMistral(
        model='mistral-medium-latest',
        api_key='your_mistral_api_key'
    )
    agent = Agent(
        task="Find the number 1 post on Show HN",
        llm=llm,
    )
    await agent.run()

if __name__ == "__main__":
    asyncio.run(main())

Configuration

Required Parameters

model
str
default:"mistral-medium-latest"
Mistral model to use. Common options:
  • mistral-medium-latest: Balanced performance (default)
  • mistral-large: Most powerful Mistral model
  • mistral-small: Fast and cost-effective
  • Or any other Mistral model identifier

Model Parameters

temperature
float
default:"0.2"
Sampling temperature (0.0 to 1.0). Controls randomness in responses.
top_p
float
default:"None"
Nucleus sampling parameter (0.0 to 1.0).
max_tokens
int
default:"4096"
Maximum tokens to generate. Mistral uses max_tokens (not max_completion_tokens).
seed
int
default:"None"
Random seed for deterministic output.
safe_prompt
bool
default:"False"
Enable Mistral’s safe prompt mode for content filtering.
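As background on the top_p parameter above: nucleus sampling keeps only the smallest set of candidate tokens whose cumulative probability reaches the threshold, then samples from that set. A minimal, library-independent sketch of the selection rule (the tokens and probabilities are made up for illustration):

```python
def nucleus_filter(probs: dict, top_p: float) -> dict:
    """Keep the smallest set of tokens whose cumulative probability >= top_p."""
    kept = {}
    cumulative = 0.0
    # Consider tokens from most to least probable.
    for token, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[token] = p
        cumulative += p
        if cumulative >= top_p:
            break
    # Renormalize so the kept probabilities sum to 1.
    total = sum(kept.values())
    return {t: p / total for t, p in kept.items()}

probs = {'the': 0.5, 'a': 0.3, 'an': 0.15, 'xyz': 0.05}
print(nucleus_filter(probs, top_p=0.8))  # keeps only 'the' and 'a'
```

Lower top_p values restrict sampling to the most likely tokens; top_p=1.0 (or None, the default here) disables the filter.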

Client Parameters

api_key
str
default:"None"
Mistral API key. Falls back to MISTRAL_API_KEY environment variable.
Get your API key at console.mistral.ai
base_url
str
default:"https://api.mistral.ai/v1"
Base URL for Mistral API. Can be overridden with MISTRAL_BASE_URL environment variable.
timeout
float
default:"None"
Request timeout in seconds or httpx.Timeout object.
max_retries
int
default:"5"
Maximum number of retries for failed requests.
default_headers
dict
default:"None"
Additional headers to include in all requests.
default_query
dict
default:"None"
Additional query parameters for all requests.
http_client
httpx.AsyncClient
default:"None"
Custom async HTTP client instance.

Advanced Usage

Structured Output with JSON Schema

ChatMistral uses Mistral’s native JSON schema support with automatic schema optimization:
from browser_use import Agent, ChatMistral
from pydantic import BaseModel

class Product(BaseModel):
    name: str
    price: float
    description: str
    in_stock: bool

llm = ChatMistral(
    model='mistral-large',
    api_key='your_mistral_api_key',
)

agent = Agent(
    task="Extract product information",
    llm=llm,
    output_model_schema=Product,
)

result = await agent.run()
print(result.structured_output)  # Product instance
ChatMistral includes automatic schema sanitization to ensure compatibility with Mistral’s JSON schema requirements.

Custom Base URL

from browser_use import Agent, ChatMistral

llm = ChatMistral(
    model='mistral-medium-latest',
    base_url='https://custom-mistral-endpoint.com/v1',
    api_key='your_api_key',
)

agent = Agent(task="Your task", llm=llm)

Safe Prompt Mode

from browser_use import Agent, ChatMistral

llm = ChatMistral(
    model='mistral-medium-latest',
    api_key='your_mistral_api_key',
    safe_prompt=True,  # Enable content filtering
)

agent = Agent(task="Your task", llm=llm)

Custom Headers and Query Parameters

from browser_use import Agent, ChatMistral

llm = ChatMistral(
    model='mistral-medium-latest',
    api_key='your_mistral_api_key',
    default_headers={'X-Custom-Header': 'value'},
    default_query={'custom_param': 'value'},
)

agent = Agent(task="Your task", llm=llm)

Environment Setup

.env
MISTRAL_API_KEY=your_api_key_here
# Optional: Custom base URL
MISTRAL_BASE_URL=https://api.mistral.ai/v1
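The api_key fallback described above (explicit argument first, then the MISTRAL_API_KEY environment variable) can be reproduced in plain Python. resolve_api_key is an illustrative helper, not part of browser-use:

```python
import os
from typing import Optional

def resolve_api_key(explicit: Optional[str] = None) -> str:
    """Return an explicit key if given, else fall back to MISTRAL_API_KEY."""
    key = explicit or os.environ.get('MISTRAL_API_KEY')
    if not key:
        raise ValueError('No API key: pass api_key= or set MISTRAL_API_KEY')
    return key
```

Relying on the environment variable keeps credentials out of source code, which is why the .env pattern above is generally preferable to hardcoding api_key.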

Error Handling

from browser_use import Agent, ChatMistral
from browser_use.llm.exceptions import (
    ModelProviderError,
    ModelRateLimitError
)

try:
    llm = ChatMistral(
        model='mistral-medium-latest',
        api_key='your_mistral_api_key',
    )
    agent = Agent(task="Your task", llm=llm)
    result = await agent.run()
except ModelRateLimitError as e:
    print(f"Rate limit exceeded: {e.message}")
    print(f"Status code: {e.status_code}")
except ModelProviderError as e:
    print(f"API error: {e.message}")
    print(f"Status code: {e.status_code}")

Properties

provider

Returns the provider name: "mistral"
llm = ChatMistral(
    model='mistral-medium-latest',
    api_key='your_mistral_api_key',
)
print(llm.provider)  # "mistral"

name

Returns the model name.
llm = ChatMistral(
    model='mistral-medium-latest',
    api_key='your_mistral_api_key',
)
print(llm.name)  # "mistral-medium-latest"

Methods

ainvoke()

Asynchronously invoke the model with messages.
from browser_use import ChatMistral
from browser_use.llm.messages import SystemMessage, UserMessage

llm = ChatMistral(
    model='mistral-medium-latest',
    api_key='your_mistral_api_key',
)

messages = [
    SystemMessage(content="You are a helpful assistant"),
    UserMessage(content="What is Browser Use?")
]

response = await llm.ainvoke(messages)
print(response.completion)     # String response
print(response.usage)          # Token usage

Parameters

  • messages (list[BaseMessage]): List of messages
  • output_format (type[T] | None): Optional Pydantic model for structured output

Returns

ChatInvokeCompletion[T] | ChatInvokeCompletion[str] with:
  • completion: Response content (string or structured output)
  • usage: Token usage including:
    • prompt_tokens: Input tokens
    • completion_tokens: Output tokens
    • total_tokens: Total tokens used
  • stop_reason: Not exposed by the Mistral implementation
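The return shape described above can be mirrored with a small dataclass sketch. This is an illustration of the documented fields only, not browser-use's actual ChatInvokeCompletion class:

```python
from dataclasses import dataclass

@dataclass
class Usage:
    prompt_tokens: int       # input tokens
    completion_tokens: int   # output tokens

    @property
    def total_tokens(self) -> int:
        return self.prompt_tokens + self.completion_tokens

@dataclass
class InvokeResult:
    completion: str  # or a parsed Pydantic instance for structured output
    usage: Usage

result = InvokeResult(completion='Browser Use is ...', usage=Usage(120, 45))
print(result.usage.total_tokens)  # 165
```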

Implementation Details

Direct HTTP API

ChatMistral uses direct HTTP requests instead of the official SDK for better control:
  • Custom retry logic with httpx transport
  • Automatic schema sanitization for Mistral compatibility
  • Flexible message content handling (string and list formats)
  • Custom error parsing for better error messages
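The retry behaviour described above can be approximated with a small asyncio helper. This is a generic retry-with-exponential-backoff sketch, not the actual httpx transport browser-use configures:

```python
import asyncio

async def with_retries(fn, max_retries: int = 5, base_delay: float = 0.01):
    """Call an async function, retrying failures with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return await fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the last error
            # Back off: base_delay, 2*base_delay, 4*base_delay, ...
            await asyncio.sleep(base_delay * (2 ** attempt))

calls = {'n': 0}

async def flaky():
    """Fails twice, then succeeds, to simulate transient network errors."""
    calls['n'] += 1
    if calls['n'] < 3:
        raise ConnectionError('transient failure')
    return 'ok'

print(asyncio.run(with_retries(flaky)))  # 'ok' after two retries
```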

Schema Optimization

The implementation includes MistralSchemaOptimizer that:
  • Ensures strict JSON schema compatibility
  • Removes unsupported schema features
  • Optimizes nested object structures
  • Validates schema before sending to API
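The exact rules MistralSchemaOptimizer applies are internal to browser-use, but a typical sanitization pass looks something like the sketch below. The keyword list and the additionalProperties rule are assumptions for illustration, not the real optimizer's behaviour:

```python
# Assumed set of keywords the target API rejects; check the real optimizer.
UNSUPPORTED_KEYWORDS = {'default', 'examples'}

def sanitize_schema(schema: dict) -> dict:
    """Recursively strip unsupported keywords and forbid extra properties."""
    cleaned = {}
    for key, value in schema.items():
        if key in UNSUPPORTED_KEYWORDS:
            continue
        if isinstance(value, dict):
            value = sanitize_schema(value)
        elif isinstance(value, list):
            value = [sanitize_schema(v) if isinstance(v, dict) else v for v in value]
        cleaned[key] = value
    if cleaned.get('type') == 'object':
        # Strict mode: objects may not carry undeclared properties.
        cleaned['additionalProperties'] = False
    return cleaned
```

Running the pass over a Pydantic-generated schema (e.g. Product.model_json_schema() from the structured-output example) would strip defaults and mark every object as closed before the schema is sent to the API.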

Model Capabilities

Mistral Large

  • Most powerful Mistral model
  • Best for complex reasoning tasks
  • Strong multilingual support
  • Excellent code generation

Mistral Medium

  • Balanced performance and cost
  • Good for general tasks
  • Fast inference speed
  • Recommended default

Mistral Small

  • Fastest and most cost-effective
  • Great for simple tasks
  • High throughput
  • Low latency