With a caveat that typically "it was seen as AI-worthy" only because people expected the full solution, which would require an AI, but learned to accept a partial but much simpler one.
For example, free-form document search used to be considered an AI problem, because people assumed the program would need to understand the documents and the query the way a librarian or an archivist does. It turns out that some fuzzy matching and a silly graph algorithm can get you 90% of the way there - so search is now called "non-AI". But it's not people's understanding of AI that changed. It's that the AI part of search is the unsolved remaining 10% (the part that would also make it not suck).
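To make the "no understanding required" point concrete, here's a minimal sketch of that kind of non-AI search - just term overlap between query and documents, standing in for the fuzzy-matching half (the graph half, link-based ranking, is a separate piece). The function and sample documents are mine, purely illustrative:

```python
# Score documents by how many query terms they share, then
# rank by that score. No "understanding" of either side.
def search(query, docs):
    q_terms = set(query.lower().split())
    scored = []
    for doc_id, text in docs.items():
        overlap = len(q_terms & set(text.lower().split()))
        if overlap:
            scored.append((overlap, doc_id))
    return [doc_id for _, doc_id in sorted(scored, reverse=True)]

docs = {
    "a": "the cat sat on the mat",
    "b": "dogs chase cats in the park",
    "c": "quarterly revenue report",
}
print(search("cat mat", docs))  # -> ['a']
```

Note it already fails in the AI-shaped ways: "cats" doesn't match "cat", and a query phrased as a question matches nothing - exactly the 10% that's still unsolved.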
Same story with machine translation. It was considered AI; now it's 90% solved with clever algorithms and a big corpus, so it's definitely not AI - but again, only because the AI part is the remaining 10% that would make machine translation not suck. Note how business and legal documents are still translated by humans - the 90% that was solved is good enough only for casual use, where people are willing to tolerate mistakes and to meet the machine halfway.
The point is, I'd argue people still have the same vague definition of AI they always had: a program that is smart in a general sense, that can navigate its environment and figure stuff out on its own. A program that, if you squint hard enough, could be considered a person[0]. The problems that transitioned from "AI-complete" to "doesn't need AI after all" are the ones for which we found good-enough solutions that didn't need the AI.
--
[0] - At least to sensibilities trained on science fiction, which very much broadens what you'd consider a sapient life form, a person.