This is why I asked if you were serious. I literally, in the first sentence, explained why using them offline invalidates their main function, which is to keep learning.
You think AI already knows everything it needs to know. That's not how humans work; we don't just "go offline" and stop learning. AI is "Artificial Intelligence". The I is pretty important.
But I guess I understand why you think this way. You asked if writing software and code is different now than it was 10 years ago, and the answer is yes, quite a bit. LLMs are designed to keep learning, to keep up with human ingenuity, and to try to behave the way humans do. You have the same misunderstanding of AI most people do, and it's why so many people think AI is already capable of taking high-skill jobs.
AI does not write code for you, and it doesn't understand how to write code, not the way humans do. If you throw an AI agent into a custom codebase, it fails miserably until it gets trained on that codebase. It reads through its own data "memory" and outputs what it thinks is an appropriate answer to your coding question, based on information it has absorbed and learned from humans writing it down. And it's often not correct. So we correct it collectively, and it "learns".
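To make that concrete, here's a deliberately toy sketch (a bigram frequency model, nothing like a real LLM's architecture) of the idea that a model can only reproduce patterns present in its training data, fails on anything it hasn't seen, and only "learns" once it's trained on new examples. All names here are made up for illustration:

```python
from collections import Counter, defaultdict

def train(corpus_tokens):
    # Count, for each token, which tokens followed it in the training data.
    follows = defaultdict(Counter)
    for a, b in zip(corpus_tokens, corpus_tokens[1:]):
        follows[a][b] += 1
    return follows

def predict(model, token):
    # Return the most common continuation seen in training, if any.
    if token not in model:
        return None  # never seen this token: no basis for an answer
    return model[token].most_common(1)[0][0]

# "Train" on one snippet of code.
model = train("def add ( a , b ) : return a + b".split())
print(predict(model, "return"))  # a pattern it has seen
print(predict(model, "yield"))   # None: unseen token, the model just fails

# "Correcting" it means feeding it new training data.
model = train("def gen ( ) : yield 1".split())
print(predict(model, "yield"))   # now it has an answer
```

The point of the sketch is only the failure mode: before retraining, the model has literally nothing to say about `yield`, which mirrors an agent dropped into an unfamiliar codebase.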
If you still don't understand why running an LLM offline invalidates its main function, I'm not sure what else to say.