This is an extremely relevant video. EDIT: The title is hyperbolic, but the video itself is genuinely interesting. You have to actually watch the whole thing.
There are a few misconceptions in that video: ARM is not an "open architecture"; it's basically a license for their core designs. You can build your own SoC with everything else around the cores yourself, license other parts from ARM, or even have ARM put a whole design together for you. But ARM still controls the cores.
One example: for the A10 in the 2017 iPad, Apple paired its ARM CPU cores with a PowerVR GPU. Yes, that's the same PowerVR lineage as the GPU in the Sega Dreamcast. Since then, Apple has invested in its own GPU cores.
Also, I wouldn't gush over ARM that much: yes, it's more modern, but it's still an architecture from 1985. When people talk about "wasted silicon no one needs anymore, like on x86" and "keeping the architecture alive by stacking extension after extension on top", ARM carries that baggage too. For example, just like MIPS, ARM is bi-endian.
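To make the bi-endian point concrete, here's a minimal C sketch (my own illustration, not from the video) that detects at runtime which byte order the binary was built for. On x86 the answer is always little-endian; on a bi-endian architecture like ARM or MIPS it depends on how the platform was configured:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    uint32_t value = 0x01020304;
    uint8_t first_byte;

    /* Copy the lowest-addressed byte of the 32-bit value. */
    memcpy(&first_byte, &value, 1);

    /* Little-endian stores the least significant byte first. */
    printf("%s-endian\n", first_byte == 0x04 ? "little" : "big");
    return 0;
}
```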
What most "tech journalists" fail to realize: Modern CPU architectures are now completely decoupled from the actual instruction set: The executed code is interpreted by a software layer on the CPU and translated into internal micro-operations. Ever heard of the Spectre and Meltdown exploits, and how that triggered a plethora of microcode updates? By then you should've realized that your processor is just another computer. Yes, ARM is a load-store RISC architecture, while x86 is an old school CISC register-memory architecture, but internally the x86 probably works the same as load-store RISC.
You could say the x86 instruction set is the "lingua franca" of the computing world, as most code is compiled to it. So why would people choose ARM, another closed instruction set, as the next target to compile their software for? Good question! This is also why ARM is starting to sweat right now: just like with Linux, some people came up with an open-source ISA for a modern RISC architecture, RISC-V, and it is starting to gain traction. With x86 walled off between Intel and AMD, and ARM not being much better, a lot of companies are looking for alternatives, from the Europeans to the Chinese; even Nvidia is testing RISC-V as a replacement for the Falcon controller on its GeForce cards.
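And switching compile targets is less dramatic than it sounds; the target is a property of the toolchain, not the source. A minimal sketch (assuming the GNU cross toolchains are installed, e.g. the gcc-aarch64-linux-gnu and gcc-riscv64-linux-gnu packages on Debian/Ubuntu):

```c
/* The same file builds unchanged for three instruction sets:
 *   gcc hello.c -o hello-x86                  (native x86-64)
 *   aarch64-linux-gnu-gcc hello.c -o hello-arm
 *   riscv64-linux-gnu-gcc hello.c -o hello-riscv
 */
#include <stdio.h>

int main(void) {
    puts("same source, three instruction sets");
    return 0;
}
```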
On another note, you could probably get performance and power usage comparable to Apple's M1/M2 chips out of an x86 architecture, if you did the same things Apple did:
- Use the latest TSMC node (5 nm, which Apple has booked exclusively for a while)
- Put all I/O and the GPU on one big die, leave out parts you don't need like PCI-Express
- Put all the RAM directly next to the CPU, but make it non-expandable in the process
All of this comes with drawbacks: big dies on cutting-edge processes have low yields, making the result expensive (Apple doesn't care, it has always been a premium brand), and once packaged, the RAM can never be changed (Apple doesn't care, they want users to buy new hardware instead of upgrading).