Personal blog. Opinions are my own. Always refer to official documentation.


LLM Token Economics

How LLMs process tokens, why prompt caching cuts input costs by 90%, and why output tokens are always the biggest line item.

Eric Lam
Mar 12, 2026 · 10 min read
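Napkin math behind the two claims in the blurb — that cached prompt reads are billed at roughly 10% of the normal input rate, and that output tokens are priced well above input tokens. The dollar figures below are hypothetical placeholders for illustration, not any provider's actual rate card:

```python
# Hypothetical per-million-token prices (NOT real provider rates).
INPUT_PER_MTOK = 3.00    # $ per 1M fresh input tokens (assumed)
CACHED_PER_MTOK = 0.30   # $ per 1M cached input tokens (the 90% discount)
OUTPUT_PER_MTOK = 15.00  # $ per 1M output tokens (assumed 5x input price)

def request_cost(input_toks: int, output_toks: int, cached_toks: int = 0) -> float:
    """Dollar cost of one request, splitting input into cached vs fresh tokens."""
    fresh = input_toks - cached_toks
    return (fresh * INPUT_PER_MTOK
            + cached_toks * CACHED_PER_MTOK
            + output_toks * OUTPUT_PER_MTOK) / 1_000_000

# A 100k-token prompt with a 1k-token answer, cold cache vs. 95k cached tokens:
no_cache = request_cost(100_000, 1_000)                     # → $0.3150
warm_cache = request_cost(100_000, 1_000, cached_toks=95_000)  # → $0.0585
print(f"cold: ${no_cache:.4f}  warm: ${warm_cache:.4f}")
```

Even with most of the prompt cached, the 1k output tokens here cost $0.015 — more than the entire discounted input — which is the intuition behind output tokens dominating the bill.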