Summary
The Anthropic instrumentation extracts only input_tokens and output_tokens from the usage object, silently dropping the prompt caching fields cache_creation_input_tokens and cache_read_input_tokens. These fields are present in every Anthropic API response (including this repo's own test cassettes) and are critical for tracking prompt caching cost savings.
What is missing
In InstrumentationSemConv.tagAnthropicResponse() (lines 216-226), only two usage fields are extracted:
if (usage.has("input_tokens")) metrics.put("prompt_tokens", usage.get("input_tokens"));
if (usage.has("output_tokens")) metrics.put("completion_tokens", usage.get("output_tokens"));
The following fields from the Anthropic usage object are never extracted:
cache_creation_input_tokens — tokens written to the cache (billed at a 25% premium)
cache_read_input_tokens — tokens read from the cache (billed at a 90% discount)
cache_creation.ephemeral_5m_input_tokens / cache_creation.ephemeral_1h_input_tokens — breakdown by TTL
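A minimal sketch of the missing extraction, following the same guarded pattern as the existing two lines. The parsed usage object is modeled here as a plain Map rather than the SDK's actual JSON node type, and the emitted metric keys (prompt_cached_tokens, prompt_cache_creation_tokens) are assumed names, not confirmed Braintrust conventions:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch only: models the parsed Anthropic "usage" JSON as a Map.
// Metric key names are placeholders, not confirmed SDK conventions.
public class CacheUsageExtraction {
    static Map<String, Object> extractMetrics(Map<String, Object> usage) {
        Map<String, Object> metrics = new LinkedHashMap<>();
        // Existing behavior: the two fields the instrumentation already captures.
        if (usage.containsKey("input_tokens"))
            metrics.put("prompt_tokens", usage.get("input_tokens"));
        if (usage.containsKey("output_tokens"))
            metrics.put("completion_tokens", usage.get("output_tokens"));
        // Proposed addition: surface the prompt-caching counters as well.
        if (usage.containsKey("cache_creation_input_tokens"))
            metrics.put("prompt_cache_creation_tokens", usage.get("cache_creation_input_tokens"));
        if (usage.containsKey("cache_read_input_tokens"))
            metrics.put("prompt_cached_tokens", usage.get("cache_read_input_tokens"));
        return metrics;
    }
}
```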
For comparison, the OpenAI handler in the same file extracts output_tokens_details.reasoning_tokens (lines 145-150), showing that nested usage detail extraction is an established pattern here.
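The same nested-detail pattern could cover the cache_creation TTL breakdown. A sketch under the same assumptions as above (nested JSON modeled as Maps; emitted metric names are placeholders):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch only: nested extraction mirroring the reasoning_tokens pattern,
// applied to Anthropic's cache_creation object. Metric names are assumed.
public class CacheCreationDetail {
    static Map<String, Object> extractDetail(Map<String, Object> usage) {
        Map<String, Object> metrics = new LinkedHashMap<>();
        Object detail = usage.get("cache_creation");
        if (detail instanceof Map<?, ?> d) {
            if (d.containsKey("ephemeral_5m_input_tokens"))
                metrics.put("cache_creation_5m_input_tokens", d.get("ephemeral_5m_input_tokens"));
            if (d.containsKey("ephemeral_1h_input_tokens"))
                metrics.put("cache_creation_1h_input_tokens", d.get("ephemeral_1h_input_tokens"));
        }
        return metrics;
    }
}
```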
The repo's own Anthropic test cassettes contain these fields — e.g., every response in test-harness/src/testFixtures/resources/cassettes/anthropic/__files/ includes "cache_creation_input_tokens":0,"cache_read_input_tokens":0 — but no test asserts they are captured in metrics.
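The missing coverage could be closed with a check along these lines. This is a sketch only: a real test would obtain metrics from the replayed cassette rather than a literal map, and the metric key names below are assumptions, not the SDK's confirmed names:

```java
import java.util.Map;

// Hypothetical shape of the missing assertion: after replaying an Anthropic
// cassette, fail if the cache counters never reached the captured metrics.
// Key names are placeholders, not confirmed Braintrust metric names.
public class CacheMetricsCheck {
    static void assertCacheMetricsCaptured(Map<String, Object> metrics) {
        for (String key : new String[] {"prompt_cached_tokens", "prompt_cache_creation_tokens"}) {
            if (!metrics.containsKey(key)) {
                throw new AssertionError("cache token metric dropped: " + key);
            }
        }
    }
}
```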
Braintrust docs status
Upstream sources
Anthropic API: the usage object includes cache_creation_input_tokens and cache_read_input_tokens
Anthropic Java SDK: the Usage class includes cacheCreationInputTokens() and cacheReadInputTokens() fields
Local files inspected
braintrust-sdk/src/main/java/dev/braintrust/instrumentation/InstrumentationSemConv.java — lines 203-231 (tagAnthropicResponse)
braintrust-sdk/instrumentation/anthropic_2_2_0/src/test/java/dev/braintrust/instrumentation/anthropic/v2_2_0/BraintrustAnthropicTest.java
test-harness/src/testFixtures/resources/cassettes/anthropic/__files/v1_messages-759ba9d9-fbff-4177-8666-aac06350c678.json — example response with cache_creation_input_tokens and cache_read_input_tokens present but ignored