If you run the numbers against the cheapest available models (GPT-4.1-nano, Gemini 1.5 Flash 8B, and Amazon Nova Micro, for example - I maintain a price table at https://www.llm-prices.com/ ) it is shockingly inexpensive to process even really large volumes of text.
$20 could cover half a billion tokens with those models! That's a lot of firehose.
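The arithmetic is simple enough to sketch. The per-million-token prices below are assumptions, roughly matching the cheapest input-token tiers at the time of writing (check llm-prices.com for current figures):

```python
# Back-of-envelope: how many tokens does $20 buy at budget-model rates?
# Prices are assumed input-token rates in USD per million tokens -
# approximate figures, not authoritative.
PRICES_PER_MILLION = {
    "gemini-1.5-flash-8b": 0.0375,
    "amazon-nova-micro": 0.035,
    "gpt-4.1-nano": 0.10,
}

def tokens_for_budget(budget_usd: float, price_per_million: float) -> int:
    """Number of tokens a budget covers at a given per-million-token price."""
    return int(budget_usd / price_per_million * 1_000_000)

for model, price in PRICES_PER_MILLION.items():
    print(f"{model}: {tokens_for_budget(20, price):,} tokens")
```

At $0.0375 or $0.035 per million input tokens, $20 lands in the 500-570 million token range, which is where the "half a billion tokens" figure comes from.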