"Normally in a quantum computer, if we apply measurement function F to input x so that y=F(x), it's impossible to reverse F and turn y back into x. However, if we constrain y, we can make a special function G_Y such that x=G_Y(y)."
Whilst I don't have time to really dive through all the sections, this seems weirdly click-baity and IBM-promoting for an academic article.
I didn't think "Attention > Convolution" was a prevalent myth, given how integral convolutions are to SOTA image classifiers and GANs (if anything, I believe attention is under-utilised here and due to grow in usage a lot).
A couple of months ago I quit the tech startup I founded to focus full time on research. So far, I'm hugely enjoying it and really glad I made the (scary) jump.
We had a little MySQL db, and both the data and the number of systems consuming it grew quite rapidly, faster than we could get ahead of given company priorities. We now have a read-replica for the BI dashboarding system, and it keeps our world relatively stable and reliable.
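A minimal sketch of the idea, with hypothetical endpoint names (the comment doesn't specify how routing is done): keep writes and transactional reads on the primary, and send analytical/dashboard reads to the read-replica so BI queries never compete with application traffic.

```python
# Hypothetical connection strings -- not from the original comment.
PRIMARY_DSN = "mysql://app@primary:3306/app"
REPLICA_DSN = "mysql://bi@replica:3306/app"

def pick_dsn(workload: str) -> str:
    """Return the connection string for a given workload.

    Writes and transactional reads stay on the primary; analytical or
    dashboard ("bi") reads go to the read-replica, which can tolerate a
    little replication lag in exchange for isolating the primary.
    """
    return REPLICA_DSN if workload == "bi" else PRIMARY_DSN

print(pick_dsn("bi"))    # replica endpoint
print(pick_dsn("oltp"))  # primary endpoint
```

The trade-off is that replica reads may be slightly stale (asynchronous replication lag), which is usually fine for dashboards but not for read-after-write application paths.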