I know everyone has different opinions on LLM safety/ethics and I respect that. However, for me this “AI Welfare” indicates that Anthropic is led by lunatics, and that’s what's scarier. You have to be pretty crazy to worry that code running on GPUs has “feelings”.
I assume it's another tactic to make it seem like even Anthropic is kind of shocked at how powerful its own tech has become: "wow, our AI is SO ADVANCED we decided we'd better start considering its well-being", along the lines of when Altman pretended some model (which they're currently selling) was just too dangerous to release. Marketing.