And now the convergence is going to be massively accelerated by LLMs and generative art/video and code tools.
Because these work best (actually only work at all) in the middle lane of the masses of text/images/code that they ingest, and from which they generate their output.
They generate the most likely output for a given input, which necessarily homogenizes away any surprise or highly valuable information. We get the most average output
(which, to be fair to their creators, is an average of above-average human inputs, since they train on the output of skilled humans in each field; the grammar of GPT-4, for example, is noticeably better than that of almost all current journalists, even when it is hallucinating an answer).
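The homogenizing effect can be made concrete with a toy sketch. Assume a model exposes a next-token probability distribution (the distribution and token strings below are invented for illustration): always picking the single most probable token, as greedy/temperature-zero decoding does, collapses all the information in that distribution down to its mode.

```python
import math

def greedy_pick(dist):
    """Pick the single most probable token (temperature -> 0 decoding)."""
    return max(dist, key=dist.get)

def entropy(dist):
    """Shannon entropy in bits: higher = more surprise/information."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Hypothetical next-token distribution for the prompt "The sky is ..."
next_token = {"blue": 0.55, "overcast": 0.25, "falling": 0.15, "a lie": 0.05}

print(greedy_pick(next_token))        # always the safest, most average continuation
print(round(entropy(next_token), 2))  # ~1.6 bits of surprise available in the distribution

# Greedy decoding emits the mode every time, so across many runs the
# output distribution has entropy 0: the surprise is averaged away.
```

Real deployments sample with a nonzero temperature, which restores some variance, but the outputs still cluster around the high-probability center of the training distribution.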