
Probably. Apple published a paper back in 2017 on improving synthetic data for training models (though not transformers).

The examples they give are for eye and hand tracking -- which, not coincidentally, are used for navigating the Apple Vision Pro user interface.

https://machinelearning.apple.com/research/gan



It'd be cool to run some tests where you train a model on real data and then supplement that training data with synthetic, model-generated samples, to see whether accuracy improves.
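A minimal sketch of that experiment, assuming a toy 1-D two-class dataset and a jitter-based generator (both hypothetical stand-ins: the real setup would use something like the learned GAN refiner from Apple's paper, and a real classifier rather than nearest-centroid):

```python
import random

random.seed(0)

# Hypothetical toy data: 1-D features with two class labels,
# standing in for a real dataset (none is named in the comment).
real = [(x, 0) for x in [0.1, 0.2, 0.3]] + [(x, 1) for x in [0.8, 0.9, 1.0]]

def synthesize(samples, n, noise=0.05):
    """Generate synthetic samples by jittering real ones -- a crude
    stand-in for a learned generator producing extra training data."""
    out = []
    for _ in range(n):
        x, y = random.choice(samples)
        out.append((x + random.uniform(-noise, noise), y))
    return out

def train_centroids(samples):
    """Fit a nearest-centroid classifier: mean feature value per class."""
    sums, counts = {}, {}
    for x, y in samples:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    """Predict the class whose centroid is closest to x."""
    return min(centroids, key=lambda y: abs(x - centroids[y]))

# Baseline: train on real data only.
baseline = train_centroids(real)

# Supplemented: train on real data plus 50 synthetic samples.
augmented = train_centroids(real + synthesize(real, 50))

print(predict(augmented, 0.15), predict(augmented, 0.85))
```

Comparing held-out accuracy of `baseline` against `augmented` across noise levels and synthetic-sample counts would be the actual test the comment proposes.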



