Anthropic Provider
An Anthropic API provider implementing AiProvider with streaming, tool use, extended thinking, and vision support. Works with Claude 4, Claude 3.5 Sonnet, Claude 3 Haiku, and other Claude models.
Installation
```shell
flai add anthropic_provider
```
Import
```dart
import 'package:my_app/flai/providers/anthropic_provider.dart';
```
Setup
```dart
final provider = AnthropicProvider(
  apiKey: 'sk-ant-...',
  model: 'claude-sonnet-4-20250514',
);

// Use with ChatScreenController
final controller = ChatScreenController(
  provider: provider,
);
```
Never hardcode API keys in your source code. Use environment variables, a secrets manager, or a backend proxy.
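One way to keep the key out of source control is a compile-time define. A minimal sketch using Dart's built-in String.fromEnvironment (the provider setup mirrors the example above):

```dart
// Supply the key at build time instead of committing it, e.g.:
//   flutter run --dart-define=ANTHROPIC_API_KEY=sk-ant-...
const apiKey = String.fromEnvironment('ANTHROPIC_API_KEY');

final provider = AnthropicProvider(
  apiKey: apiKey,
  model: 'claude-sonnet-4-20250514',
);
```

For production apps, a backend proxy is still safer: point baseUrl at your own server and keep the Anthropic key off the device entirely.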
Configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
| apiKey | String | required | Your Anthropic API key. |
| model | String | 'claude-sonnet-4-20250514' | Model ID to use for messages. |
| baseUrl | String | 'https://api.anthropic.com' | API base URL. Change for proxies or custom deployments. |
| temperature | double? | null | Sampling temperature (0.0 to 1.0). |
| maxTokens | int | 4096 | Maximum tokens in the response. |
| systemPrompt | String? | null | System message for all conversations. |
| enableThinking | bool | false | Enable extended thinking for compatible models. |
| thinkingBudget | int? | null | Max tokens for thinking. Required if enableThinking is true. |
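For illustration, here is a provider with every parameter from the table set explicitly (the values are examples, not recommendations):

```dart
final provider = AnthropicProvider(
  apiKey: apiKey,
  model: 'claude-sonnet-4-20250514',
  baseUrl: 'https://api.anthropic.com',
  temperature: 0.7,                              // 0.0 to 1.0
  maxTokens: 4096,
  systemPrompt: 'You are a concise assistant.',
  enableThinking: true,
  thinkingBudget: 10000,                         // required when enableThinking is true
);
```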
Capabilities
| Capability | Supported | Notes |
|---|---|---|
| supportsStreaming | Yes | Server-sent events via Messages API |
| supportsToolUse | Yes | Tool use with JSON schema input definitions |
| supportsVision | Yes | Image inputs via base64 (all Claude 3+ models) |
| supportsThinking | Yes | Extended thinking with Claude 3.5 Sonnet and Claude 4 |
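Assuming the capability names in the table are exposed as boolean getters on AiProvider (check your AiProvider interface; this is an assumption, not confirmed by this page), a UI can gate features on them:

```dart
// Assumes supportsThinking / supportsVision are boolean getters on AiProvider.
if (provider.supportsThinking) {
  // Safe to enable the ThinkingIndicator in the UI.
}
if (provider.supportsVision) {
  // Allow image attachments in the composer.
}
```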
Extended Thinking
Claude's extended thinking exposes the model's reasoning process. Enable it to show thinking indicators in the UI:
```dart
final provider = AnthropicProvider(
  apiKey: apiKey,
  model: 'claude-sonnet-4-20250514',
  enableThinking: true,
  thinkingBudget: 10000, // max thinking tokens
);

// The ChatEvent stream will include ThinkingDelta events
await for (final event in provider.streamChat(request)) {
  switch (event) {
    case ThinkingDelta(:final text):
      // Show in ThinkingIndicator
      thinkingContent += text;
    case TextDelta(:final text):
      // Regular response text
      responseContent += text;
    // ...
  }
}
```
When enableThinking is true, the Message.thinkingContent field is populated, and the MessageBubble automatically renders the collapsible thinking indicator.
Streaming
The Anthropic provider uses the Messages API with server-sent events:
```dart
final stream = provider.streamChat(ChatRequest(
  messages: messages,
));

await for (final event in stream) {
  switch (event) {
    case TextDelta(:final text):
      print(text);
    case ThinkingDelta(:final text):
      print('Thinking: $text');
    case ToolCallDelta(:final toolCall):
      print('Tool: ${toolCall.name}');
    case UsageEvent(:final usage):
      print('Input: ${usage.inputTokens}, Output: ${usage.outputTokens}');
      if (usage.cacheReadTokens != null) {
        print('Cache read: ${usage.cacheReadTokens}');
      }
    case DoneEvent():
      print('Done');
    case ErrorEvent(:final error):
      print('Error: $error');
  }
}
```
Tool Use
Define tools on the ChatRequest using JSON Schema for the input parameters:

```dart
final request = ChatRequest(
  messages: messages,
  tools: [
    Tool(
      name: 'get_weather',
      description: 'Get current weather for a location',
      parameters: {
        'type': 'object',
        'properties': {
          'location': {
            'type': 'string',
            'description': 'City name (e.g., San Francisco)',
          },
        },
        'required': ['location'],
      },
    ),
  ],
);
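When the model decides to call a tool, the stream emits ToolCallDelta events (see Streaming above). A sketch of executing the tool and returning the result; the Message.toolResult constructor and toolCall.arguments/id fields here are assumptions, so check your flai version for the actual shape of tool results:

```dart
// Sketch only: executing a tool call and feeding the result back.
// fetchWeather is your own implementation; Message.toolResult is hypothetical.
await for (final event in provider.streamChat(request)) {
  if (event case ToolCallDelta(:final toolCall)) {
    if (toolCall.name == 'get_weather') {
      final location = toolCall.arguments['location'] as String;
      final weather = await fetchWeather(location);
      // Append the result and continue the conversation with a follow-up request.
      messages.add(Message.toolResult(
        toolCallId: toolCall.id,
        content: weather,
      ));
    }
  }
}
```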
Prompt Caching
The Anthropic provider automatically reports cache statistics in UsageInfo:
```dart
// Cache info is available in UsageInfo
if (usage.cacheReadTokens != null) {
  print('Cache read tokens: ${usage.cacheReadTokens}');
  print('Cache creation tokens: ${usage.cacheCreationTokens}');
}
```
Prompt caching reduces costs by up to 90% for repeated system prompts and context. The TokenUsage component displays cache stats when available.
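As a rough way to surface the savings, a hypothetical helper that computes the share of prompt tokens served from cache. It assumes inputTokens excludes cached tokens, as in Anthropic's API, where input_tokens and cache_read_input_tokens are reported separately:

```dart
/// Fraction of prompt tokens read from cache (0.0 to 1.0).
/// Hypothetical helper; field names follow the UsageInfo examples above.
double cacheHitRate(UsageInfo usage) {
  final cached = usage.cacheReadTokens ?? 0;
  final total = usage.inputTokens + cached;
  return total == 0 ? 0.0 : cached / total;
}
```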