fix: forward max_completion_tokens to Anthropic native provider#5014

Open
khan5v wants to merge 1 commit intocrewAIInc:mainfrom
khan5v:fix/anthropic-max-completion-tokens

Conversation


@khan5v khan5v commented Mar 22, 2026

Summary

  • max_completion_tokens (OpenAI-style parameter) is silently ignored by the native Anthropic provider
  • The parameter falls into **kwargs -> additional_params and never reaches self.max_tokens
  • All Anthropic API calls use the default 4096 regardless of what the user sets
  • The LiteLLM fallback path handles this correctly (llm.py:717), but the native path does not
  • The OpenAI native provider already handles both parameters (completion.py:203)

Fix

Accept max_completion_tokens in AnthropicCompletion.__init__ with the same fallback chain: max_tokens -> max_completion_tokens -> 4096

Test plan

  • max_completion_tokens=150 flows through to _prepare_completion_params as max_tokens: 150
  • max_tokens takes precedence when both are set
  • Default 4096 preserved when neither is set
  • Existing tests unaffected

  The AnthropicCompletion class did not accept max_completion_tokens,
  causing it to silently fall into **kwargs and get ignored. Users
  setting max_completion_tokens (the OpenAI-style parameter name) on
  an Anthropic model always got the default 4096 max_tokens instead.

  Add max_completion_tokens as an explicit parameter with the same
  fallback chain used by the LiteLLM path: max_tokens wins, then
  max_completion_tokens, then the 4096 default.
