I'd be fine with SO turning into the place where genuinely useful AI-based answers get posted, where humans have worked together with AI to obtain a valid result, proven to be correct. In that case I wouldn't care if someone posts an AI answer, as long as it is guaranteed to be valid.
That's the place which SO can take in a world with AI assistants.
SO should not use AI to generate content, but to organize it and make it searchable.
It could shift towards the analysis of existing content and the generation of new content once AI is really ready for it, but I don't see that happening within the next couple of years. And at that point an LLM would probably be smart enough that its users wouldn't need SO anymore.
Maybe limit the posting of commented AI results to the top 5% of users or so, because judging from the content I've seen new users posting (especially the questions), the quality has degraded strongly over the last 10 years. Many of them put close to zero effort into crafting good questions.
I wouldn't use it if it means AI will be trained on my code, unless they pay me per answer or vote. I already removed everything from GitHub etc. AI on SO smells like Google's business strategy: advertising first, search second. SO will be in the same position soon.
Frankly, I can foresee reduced online participation on places like Stack Overflow. Why would anyone help train AI that employers are eager to replace them with, without getting anything in return?
The "work together with AI" framing is just a transitional phase. The endgame is to completely replace the employee.
I don't understand this attitude. If your code is used to train AI, then it will be used to help more people in a wider variety of contexts. So assuming that your goal is to help people (which I assume it is, since SO doesn't pay for answers), wouldn't you be excited about AI learning from you?