It's not anymore. If the model is publicly accessible, its skills can be distilled by making API calls and recording the input-output pairs. This scheme works so well that it has become a standard way to prepare training data for small models. Model skills leak.
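For what it's worth, the mechanics are simple enough to sketch. Something like the following (a minimal sketch assuming the OpenAI Python client; the prompts, teacher model name, and output file are placeholders) records the input-output pairs that would then be used to fine-tune a smaller model:

```python
# Sketch of distillation-by-API: query a public model with a set of
# prompts and record (input, output) pairs as fine-tuning data for a
# smaller model. Prompt list and output path are hypothetical.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "Explain the bias-variance tradeoff in two sentences.",
    "Write a Python function that reverses a linked list.",
]  # in practice, thousands of prompts covering the target skill

with open("distillation_pairs.jsonl", "w") as f:
    for prompt in prompts:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # any publicly accessible teacher model
            messages=[{"role": "user", "content": prompt}],
        )
        pair = {
            "input": prompt,
            "output": response.choices[0].message.content,
        }
        f.write(json.dumps(pair) + "\n")  # one pair per line
```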
I agree: publicly deployed models seem easy to train from. I did say "internally deployed LLM", though. agentcoops said "...where the models in question increase the productivity of workers in their non-ai-related profit centers" above; that's the part I was thinking about. I think private models, either trained from scratch or fine-tuned, are going to be a big deal, though they won't make the PR splash that public models do.
The usual finding there seems to be that this just yields a model with the surface look and feel of GPT-3 or GPT-4 but without the depth, so the experience quickly becomes unsatisfactory once you step outside the fine-tuning dataset.