I don’t know if they will release the models, but are you sure you can train a 170-billion-parameter model? Last I heard it’s around 500GB just to store, which would require serious infrastructure.
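For a rough sanity check on that figure, here's a back-of-envelope sketch of the memory footprint at common precisions. The 170B parameter count comes from the comment above; the bytes-per-parameter and optimizer-state assumptions are mine, not figures from any specific release:

```python
# Back-of-envelope memory estimate for a 170B-parameter model.
# Illustrative assumptions only.

PARAMS = 170e9  # 170 billion parameters

BYTES_PER_PARAM = {
    "fp32": 4,
    "fp16/bf16": 2,
    "int8": 1,
}

for dtype, nbytes in BYTES_PER_PARAM.items():
    gb = PARAMS * nbytes / 1e9
    print(f"{dtype:>9}: {gb:,.0f} GB just for the weights")

# Training needs far more than the weights: gradients plus Adam
# optimizer state (two fp32 moments) roughly quadruple the cost.
adam_training_gb = PARAMS * (4 + 4 + 8) / 1e9  # weights + grads + 2 moments
print(f"fp32 + Adam state: ~{adam_training_gb:,.0f} GB")
```

So ~340GB in fp16 or ~680GB in fp32 for the weights alone, and training with a standard Adam setup pushes well into the terabytes, which is why it takes a cluster rather than a workstation.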
What's interesting about machine learning is that within a few years' time, algorithms usually get efficient enough to train models of the same quality on commodity hardware. At the same time, the big organizations stay a few years ahead :(