That is addressing the incomprehensibility with PCA, i.e. applying a single transformation to the entire latent space. I've never found PCA to be meaningful for deep learning. As far as I can tell, the polysemanticity issue with neurons cannot be addressed with a single linear transformation. There is no sparse analysis (via linear probes or SAEs), so that issue remains unaddressed.
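To make the distinction concrete, here is a rough sketch of what I mean (the tensor sizes, names, and toy training loop are all made up; the activations would come from a real model):

  # Contrast a single linear map (PCA) with a sparse autoencoder (SAE)
  # over the same latent activations. Purely illustrative.
  import torch
  import torch.nn as nn

  d_model, d_dict = 256, 1024          # latent width, overcomplete dictionary size
  acts = torch.randn(10_000, d_model)  # stand-in for cached model activations

  # PCA: one linear transformation of the whole latent space.
  # It re-axes the space, but each component is still a dense mix of neurons.
  acts_centered = acts - acts.mean(0)
  _, _, V = torch.pca_lowrank(acts_centered, q=d_model)
  pca_codes = acts_centered @ V        # same dimensionality, no sparsity

  # SAE: overcomplete dictionary with an L1 penalty, so each activation is
  # explained by a few features rather than by one global rotation.
  class SparseAutoencoder(nn.Module):
      def __init__(self, d_model, d_dict):
          super().__init__()
          self.enc = nn.Linear(d_model, d_dict)
          self.dec = nn.Linear(d_dict, d_model)

      def forward(self, x):
          codes = torch.relu(self.enc(x))
          return self.dec(codes), codes

  sae = SparseAutoencoder(d_model, d_dict)
  opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
  l1_coeff = 1e-3

  for step in range(100):              # toy training loop
      recon, codes = sae(acts)
      loss = (recon - acts).pow(2).mean() + l1_coeff * codes.abs().mean()
      opt.zero_grad()
      loss.backward()
      opt.step()

The point is that PCA only gives you one global rotation of the same dense space, while the SAE's overcomplete, sparsity-penalized dictionary is what lets individual features untangle from polysemantic neurons.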
I believe they just mean that you should edit the comment where you added the links to mention that you are the author, to add that additional context.
I just meant 'it's good for people to know one of the authors is in the thread because it makes for more interesting conversation'. I clearly did not figure out how to do that without starting a bunch of meta!
I believe this could (or should) have been a Show HN, which would have allowed you to include explanatory text. See the top of this page for the rules.
I am interested in doing research like this. Is there any way I can be a part of it or a similar group? I have been fighting for funding from the DoD for many years, but to no avail, so I largely have to do this research on my own time, or solve my current grant's problems first so that I can work on this. In my mind, this kind of research is the most interesting and important in the deep learning field right now. I am a hard worker and a high-throughput thinker... how can I get connected with others who have a similar mindset?