> But for other things that are real-time or near real-time where network latency would be unacceptable, we're already there.
Agreed. Something else I wonder is if local AI in mobile devices might be better able to learn from its real-time interactions with the physical world than datacenter-based AI.
It's walking around in the world with a human with all its various sensors recording in real-time (unless disabled) - mic, camera, GPS/location, LiDAR, barometer, gyro, accelerometer, proximity, ambient light, etc. Then the human uses it to interact with the world too in various ways.
All that data can of course be quickly sent to a datacenter too, and integrated into the core system there, so maybe not. But I'm curious about this difference and wonder what advantages local AI might eventually confer.
This is a fascinating thought! It could send all the data to the cloud, but all those sensors running constantly would generate a lot of data to send, and would use a lot of mobile data - which would be unacceptable to many people (and probably to the mobile networks too). If it's running locally though, the data could be quickly analyzed and probably deleted, avoiding long-term storage issues. There must be a lot of interesting things you could do with that kind of data.
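The analyze-locally-then-delete idea can be made concrete with a rough sketch: summarize each window of sensor readings on-device into a tiny feature vector, then discard the raw samples, so only aggregates would ever need to leave the phone. Everything here is an illustrative assumption - the sampling rate, the simulated accelerometer stream standing in for a real sensor API, and the choice of features.

```python
# Sketch: on-device summarization of a sensor stream, with assumed numbers.
# A real device would read from a sensor API; we simulate samples instead.
import math
import random

random.seed(0)

SAMPLE_RATE_HZ = 100   # assumed accelerometer rate, for illustration
WINDOW_SECONDS = 60

def read_accel_window():
    """Simulate one minute of 3-axis accelerometer samples (stand-in for sensor I/O)."""
    n = SAMPLE_RATE_HZ * WINDOW_SECONDS
    return [(random.gauss(0, 0.1), random.gauss(0, 0.1), random.gauss(9.8, 0.1))
            for _ in range(n)]

def summarize(samples):
    """Reduce raw samples to a few summary statistics; the raw data can then be deleted."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return {"mean_g": mean, "var_g": var, "n": len(mags)}

window = read_accel_window()
summary = summarize(window)

raw_bytes = len(window) * 3 * 4   # 3 floats per sample at 4 bytes each
summary_bytes = 3 * 8             # a handful of floats
print(f"raw: {raw_bytes} B/min, summary: {summary_bytes} B/min, "
      f"ratio ~{raw_bytes // summary_bytes}x")

del window  # raw samples never leave the device
```

Even for this one low-rate sensor the reduction is thousands-fold; for camera and mic streams the gap would be far larger, which is the bandwidth argument for doing the analysis locally.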