If carrying around a little bit of x86 compatibility baggage had enough die size cost to matter for Apple, then Intel and AMD would be pushing much harder to reduce their comparative mountain of x86 compatibility baggage.
In reality, the costs of redesign, revalidation, and updating software so it no longer relies on a newly deprecated hardware feature can easily outweigh the potential per-chip savings of dropping an instruction or two from a CPU core.
Intel and AMD have to maintain compatibility in a very different way. They don’t control the OS, and they don’t control what is done with the physical product once it’s sold. They have gigantic customers like Dell, HP, and Microsoft that make specific technical demands of their architecture.
Apple controls the whole stack. They decide exactly which features live in software and which live in hardware. There are zero Apple machines in data centers running BigCorp’s legacy CashCow software, and zero laptop or desktop OEMs besides Apple using Apple’s chips. Apple won’t piss off their core consumer and creative-professional customers by changing some behind-the-scenes technical feature.
I think Apple would gladly cut a small, specific architectural feature from their chips if they felt it was obsolete, better handled in software, or not worth handling at all. They’ve done it before: once iOS dropped 32-bit apps, Apple removed 32-bit ARM execution support from their cores entirely.
And this isn’t just about cost savings; it could be for performance or battery life, or to make room for something else in the package.