Personal experience at a FAANG here: there has been a considerable increase in:
1. Teams exploring how to leverage LLMs for coding.
2. Teams/orgs that have already standardized some of their processes for working with LLMs (MCP servers, a standard way of creating agents.md files, etc.).
3. Teams actively using LLMs for coding new features, documenting code, increasing test coverage, doing code reviews, etc.
Again, personal experience, but in my team ~40-50% of the PRs are generated by Codex.
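For readers unfamiliar with the agents.md convention mentioned above: it's a repo-level instruction file that coding agents read before making changes. The exact contents vary by team; this is a purely illustrative sketch (the section names and commands are assumptions, not a standard):

```markdown
# AGENTS.md — instructions for coding agents (illustrative sketch)

## Build & test
- Run `make test` before opening a PR.  <!-- assumed command, varies by repo -->

## Conventions
- Follow the existing formatter config; do not reformat unrelated files.
- Keep PRs small and single-purpose.

## Boundaries
- Never commit secrets or credentials.
- Do not edit generated files or anything under `vendor/`.
```

Standardizing a file like this across an org is part of what point 2 above refers to.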
“Teams exploring how to leverage [AI]s for [anything]” has been true for about a decade now in every large multinational company, at every level. It’s not new at all. AI has been the driving buzzword for a while, well before ChatGPT. I’ve encountered many people who just wanted the stamp of “using AI,” no matter how, because my team was one of the main entry points for achieving that at that particular company. But before ChatGPT and co, you had to work hard for it, so most of them failed miserably, or backtracked immediately once they realized this.
There are places that offer Copilot to any team that wants it, and then behind the scenes inform the managers that if the team (1+ people) adopts it, it will have to shed 10%+ of its human capacity (lose a person, move a person, fire a person) in the upcoming quarters of next year.
Sorry, I meant that my comment was sarcastic. The original comment was sincere, I'm quite certain. And they are right: there are some companies that really are getting a lot of value out of LLMs already. I'd guess that the more folks a company has who actually understand how LLMs work, the more it can do with them. There just isn't a neat abstraction layer to be had, so folks without a detailed mental model get caught up applying them poorly or to the wrong things.
2. A wild claim that the companies selling LLMs are actually downplaying their capabilities instead of hyping them