Documentation Index
Fetch the complete documentation index at: https://mintlify.com/browser-use/browser-use/llms.txt
Use this file to discover all available pages before exploring further.
Overview
ChatMistral provides integration with Mistral AI’s language models, including Mistral Large, Mistral Medium, and Mistral Small, with optimized schema sanitization for reliable structured outputs.
Basic Usage
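A minimal sketch of driving an agent with ChatMistral; it assumes the import path and Agent interface match browser-use’s other chat providers:

```python
import asyncio

from browser_use import Agent
from browser_use.llm import ChatMistral  # assumption: same import path as the other providers

# Reads MISTRAL_API_KEY from the environment when no api_key is passed.
llm = ChatMistral(model="mistral-medium-latest")

async def main():
    agent = Agent(
        task="Find the current weather in Paris",
        llm=llm,
    )
    await agent.run()

asyncio.run(main())
```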
Configuration
Required Parameters
Mistral model to use. Common options:
- mistral-medium-latest: Balanced performance (default)
- mistral-large: Most powerful Mistral model
- mistral-small: Fast and cost-effective
- Or any other Mistral model identifier
Model Parameters
Sampling temperature (0.0 to 1.0). Controls randomness in responses.
Nucleus sampling parameter (0.0 to 1.0).
Maximum tokens to generate. Mistral uses max_tokens (not max_completion_tokens).
Random seed for deterministic output.
Enable Mistral’s safe prompt mode for content filtering.
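A configuration sketch exercising the parameters above; the keyword names (temperature, top_p, max_tokens, seed, safe_prompt) are assumed to mirror these descriptions, so check them against your browser-use version:

```python
from browser_use.llm import ChatMistral

llm = ChatMistral(
    model="mistral-large",
    temperature=0.2,    # low randomness for repeatable behavior
    top_p=0.9,          # nucleus sampling cutoff
    max_tokens=2048,    # Mistral's max_tokens, not max_completion_tokens
    seed=42,            # deterministic output across runs
    safe_prompt=False,  # Mistral's content-filtering mode (assumed keyword name)
)
```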
Client Parameters
Mistral API key. Falls back to the MISTRAL_API_KEY environment variable.
Get your API key at console.mistral.ai
Base URL for the Mistral API. Can be overridden with the MISTRAL_BASE_URL environment variable.
Request timeout in seconds or an httpx.Timeout object.
Maximum number of retries for failed requests.
Additional headers to include in all requests.
Additional query parameters for all requests.
Custom async HTTP client instance.
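A sketch combining the client parameters; the keyword names are assumptions inferred from the descriptions above:

```python
import httpx

from browser_use.llm import ChatMistral

llm = ChatMistral(
    model="mistral-medium-latest",
    api_key="your-mistral-api-key",             # falls back to MISTRAL_API_KEY
    base_url="https://api.mistral.ai",          # falls back to MISTRAL_BASE_URL
    timeout=httpx.Timeout(60.0, connect=10.0),  # seconds or an httpx.Timeout
    max_retries=3,                              # retry failed requests
    http_client=httpx.AsyncClient(),            # custom async HTTP client
)
```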
Advanced Usage
Structured Output with JSON Schema
ChatMistral uses Mistral’s native JSON schema support with automatic schema optimization. ChatMistral includes automatic schema sanitization to ensure compatibility with Mistral’s JSON schema requirements.
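A sketch of requesting structured output via ainvoke’s output_format parameter (documented under Methods below); the UserMessage import is assumed to match browser-use’s message types:

```python
import asyncio

from pydantic import BaseModel

from browser_use.llm import ChatMistral, UserMessage  # assumption: message type export

class CityInfo(BaseModel):
    name: str
    country: str
    population: int

llm = ChatMistral(model="mistral-medium-latest")

async def main():
    response = await llm.ainvoke(
        [UserMessage(content="Give me basic facts about Paris.")],
        output_format=CityInfo,
    )
    print(response.completion)  # a validated CityInfo instance

asyncio.run(main())
```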
Custom Base URL
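A sketch pointing the client at a proxy or self-hosted gateway; the base_url keyword is assumed from the client parameter description above, and the URL is hypothetical:

```python
from browser_use.llm import ChatMistral

# Equivalent to setting the MISTRAL_BASE_URL environment variable.
llm = ChatMistral(
    model="mistral-medium-latest",
    base_url="https://my-mistral-proxy.example.com",  # hypothetical gateway URL
)
```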
Safe Prompt Mode
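A sketch enabling Mistral’s safe prompt mode; the safe_prompt keyword mirrors the Mistral API field of the same name and is assumed here:

```python
from browser_use.llm import ChatMistral

# safe_prompt=True asks Mistral to apply its content-filtering guardrail.
llm = ChatMistral(
    model="mistral-medium-latest",
    safe_prompt=True,
)
```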
Custom Headers and Query Parameters
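A sketch attaching extra headers and query parameters to every request; the keyword names are assumed from the client parameters above, and the values are purely illustrative:

```python
from browser_use.llm import ChatMistral

llm = ChatMistral(
    model="mistral-medium-latest",
    default_headers={"X-Team": "research"},  # hypothetical tracking header
    default_query={"trace": "enabled"},      # hypothetical query parameter
)
```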
Environment Setup
.env
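A typical .env file; only MISTRAL_API_KEY is required, and MISTRAL_BASE_URL is optional (both variable names come from the client parameters above):

```bash
MISTRAL_API_KEY=your-mistral-api-key
# Optional: override the API endpoint
MISTRAL_BASE_URL=https://api.mistral.ai
```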
Error Handling
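A sketch of wrapping a call in error handling; it catches a broad Exception because the provider-specific exception types are not named on this page, so check browser-use’s LLM exceptions module for narrower types:

```python
import asyncio

from browser_use.llm import ChatMistral, UserMessage  # assumed exports

llm = ChatMistral(model="mistral-medium-latest")

async def main():
    try:
        response = await llm.ainvoke([UserMessage(content="Hello")])
        print(response.completion)
    except Exception as e:  # narrow this to browser-use's provider errors if available
        # The implementation parses Mistral's error payloads into readable messages.
        print(f"Mistral request failed: {e}")

asyncio.run(main())
```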
Properties
provider
Returns the provider name: "mistral"
name
Returns the model name.
Methods
ainvoke()
Asynchronously invoke the model with messages.
Parameters
- messages (list[BaseMessage]): List of messages
- output_format (type[T] | None): Optional Pydantic model for structured output
Returns
ChatInvokeCompletion[T] | ChatInvokeCompletion[str] with:
- completion: Response content (string or structured output)
- usage: Token usage, including:
  - prompt_tokens: Input tokens
  - completion_tokens: Output tokens
  - total_tokens: Total tokens used
- stop_reason: Not exposed by the Mistral implementation
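A sketch reading the completion and token usage off the returned object, assuming the field names listed above:

```python
import asyncio

from browser_use.llm import ChatMistral, UserMessage  # assumed exports

async def main():
    llm = ChatMistral(model="mistral-small")
    response = await llm.ainvoke([UserMessage(content="Summarize httpx in one line.")])
    print(response.completion)                # str, since no output_format was given
    print(response.usage.prompt_tokens)       # input tokens
    print(response.usage.completion_tokens)   # output tokens
    print(response.usage.total_tokens)        # total tokens used

asyncio.run(main())
```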
Implementation Details
Direct HTTP API
ChatMistral uses direct HTTP requests instead of the official SDK for better control:
- Custom retry logic with httpx transport
- Automatic schema sanitization for Mistral compatibility
- Flexible message content handling (string and list formats)
- Custom error parsing for better error messages
Schema Optimization
The implementation includes MistralSchemaOptimizer, which:
- Ensures strict JSON schema compatibility
- Removes unsupported schema features
- Optimizes nested object structures
- Validates schema before sending to API
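An illustrative sketch of what such sanitization can look like; this is not MistralSchemaOptimizer’s actual code, and the stripped keywords are assumptions chosen to show the pattern:

```python
def sanitize_schema(schema: dict) -> dict:
    """Recursively strip keywords a strict validator may reject (illustrative only)."""
    unsupported = {"default", "examples", "title"}  # assumed example keywords
    cleaned = {k: v for k, v in schema.items() if k not in unsupported}
    if cleaned.get("type") == "object":
        # Strict structured-output modes typically require closed objects.
        cleaned["additionalProperties"] = False
        cleaned["properties"] = {
            name: sanitize_schema(sub)
            for name, sub in cleaned.get("properties", {}).items()
        }
    elif cleaned.get("type") == "array" and isinstance(cleaned.get("items"), dict):
        cleaned["items"] = sanitize_schema(cleaned["items"])
    return cleaned
```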
Model Capabilities
Mistral Large
- Most powerful Mistral model
- Best for complex reasoning tasks
- Strong multilingual support
- Excellent code generation
Mistral Medium
- Balanced performance and cost
- Good for general tasks
- Fast inference speed
- Recommended default
Mistral Small
- Fastest and most cost-effective
- Great for simple tasks
- High throughput
- Low latency
Related
- ChatBrowserUse - Recommended provider
- ChatOpenAI
- ChatAnthropic