No, that's not how that works either. Large, powerful models need a lot of compute during pre-training and fine-tuning, but they also need smaller (yet still large) compute instances just to produce output from prompts. GenAI is expensive on both ends, training and inference, vs other lesser...
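
To put rough numbers on "expensive on both ends," here's a back-of-envelope sketch in Python. The 70B parameter count and 2T-token corpus are made-up assumptions for illustration, and the ~6N FLOPs per training token / ~2N FLOPs per generated token figures are the usual rules of thumb for dense transformers, not measurements of any particular model:

```python
# Rough back-of-envelope sketch (not a benchmark). Rules of thumb:
#   training  ~ 6 * N FLOPs per token seen
#   inference ~ 2 * N FLOPs per token generated (one forward pass)
# The model size and corpus size below are illustrative assumptions.

params = 70e9                  # assumed model size: 70B parameters
train_tokens = 2e12            # assumed pre-training corpus: 2T tokens

train_flops = 6 * params * train_tokens   # one-time training cost
flops_per_gen_token = 2 * params          # recurring cost for every token served

weight_bytes = params * 2                 # fp16 weights, 2 bytes per parameter

print(f"training:  ~{train_flops:.1e} FLOPs (one-time)")
print(f"inference: ~{flops_per_gen_token:.1e} FLOPs per generated token (ongoing)")
print(f"weights:   ~{weight_bytes / 1e9:.0f} GB just to hold the model in memory")
```

Under those assumptions the training bill is a one-time ~8e23 FLOPs, but serving still burns ~1.4e11 FLOPs for every single token and needs on the order of 140 GB of accelerator memory just to load the weights, which is why inference hardware stays expensive even after training is done.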