Cost-Reduction
1 post
I Tested LLM Prompt Caching With Anthropic and OpenAI
Experiments testing prompt caching with actual API calls, measuring cache hits, token counts, and costs across Anthropic Claude and OpenAI GPT.