Small models keep getting smarter and local hardware keeps getting better.
At some point those trends will converge, and local LLMs will hit an inflection point. Local LLMs will never be as smart or as fast as cloud LLMs, but they will be very useful for lower-value tasks.
The Apple I was a pretty poor predictor of what mainstream mass-market computing was going to end up looking like. I don't think anybody has yet come up with the Apple II of local LLMs, let alone the VisiCalc or Windows 95.