Mmmhmmm, I think about long-term effects a lot. Any particular problems with these AI models you're thinking of? Or is it just the garden-variety lesswrongian AGI hard-takeoff type beat?
I assume you’re some sort of seriously high-level research scientist, given the high-level writing and the use of the word "lesswrongian". Otherwise, you'd have no way of understanding what this “research assistant” is generating. Furthermore, if high-level scientists such as yourself are no longer hiring research assistants, what humans will replace you? There will be few low-level scientists left to move up to your level of expertise and validate what the AIs generate. Is that not a potential problem?
I think that, just like with robots working in factories, AIs doing info work isn't a problem with the AI. It's the economic system, which is bad at capturing externalities and which we're hell-bent on keeping. On a base level I think you're right that it will cause problems, but it's not the AI itself that's the issue.
AI relies on training data that, until recent years, was generated by humans. If, going forward, humans are no longer part of the training data and are no longer capable of verifying the results and output of AI, the quality and value of AI output will degrade and won't be sustainable, especially for recent research.
The end result, given that scenario, is that humans become less capable and the AI becomes less capable. So where does the capability come from then?
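To make that degradation concrete, here's a minimal toy sketch (my own illustration, not anyone's actual training pipeline): the "model" is just a Gaussian fit to its data, and each generation is retrained only on a finite sample of the previous generation's output. Since the expected sample standard deviation is below the true one for finite samples, the fitted spread shrinks in expectation every generation; this flavor of failure is what researchers usually call "model collapse". Names like `real_data` and `synthetic` are mine, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data the first model is trained on.
real_data = rng.normal(loc=0.0, scale=1.0, size=1_000)
mu, sigma = real_data.mean(), real_data.std()

for generation in range(1, 21):
    # Each later model sees only a small finite sample of the
    # previous model's output -- no fresh human data in the loop.
    synthetic = rng.normal(mu, sigma, size=20)
    mu, sigma = synthetic.mean(), synthetic.std()
    print(f"gen {generation:2d}: mu={mu:+.3f} sigma={sigma:.3f}")

# E[sample std] < sigma for finite samples, so the spread shrinks in
# expectation each generation: the chain forgets the tails of the
# original human distribution and collapses toward a point.
```

In the toy, the obvious fix is to keep mixing fresh human data into every generation's training set, which is exactly the thing the scenario above assumes stops happening.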