And it’s bound to happen. The issue here is that Newegg isn’t careful, and doesn’t try to help the customer once it has sold them something broken. Amazon makes up for it by basically trusting the customer blindly.
If you do legitimately have a workaround for this, I can put you in touch with those grabbing metadata for the return dislike extension. They’d love to see it. (If it’s the averageRating endpoint, then they’re aware, and it’s likely going to be removed anyway.)
This is changing. YT has become more aware of archivists saving removed content, and it locks things down much more quickly nowadays. Pre-2017 unlisted videos were gone within days, and the community tab was cut off within hours. Gone are the days of “removed” features sticking around in APIs for months. :(
But that’s system memory, not GPU memory. The M1 shares that memory, so it’s addressable by both directly, but with Ryzen (and almost every other consumer platform) the CPU and GPU memory are separate.
It's unlikely the 5700G would push 200-400 GB/s for GPU tasks; that assumes one gets PyTorch/TensorFlow to use the shared memory and that the BIOS allows setting such a large shared window, all of which is unlikely, unfortunately.
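For what it's worth, a rough way to see what the iGPU actually gets is a copy-bandwidth probe. Minimal sketch, assuming a ROCm build of PyTorch and that the 5700G's iGPU is visible at all (APU support is unofficial, so an HSA_OVERRIDE_GFX_VERSION override may be needed):

    # Rough device-side copy bandwidth probe; reports effective, not peak, numbers.
    import time
    import torch

    def copy_bandwidth_gbs(size_mb=1024, iters=20):
        # ROCm builds of PyTorch expose the HIP device through the cuda API.
        assert torch.cuda.is_available(), "no GPU visible to PyTorch"
        n = size_mb * 1024 * 1024 // 4              # float32 elements
        src = torch.empty(n, dtype=torch.float32, device="cuda")
        dst = torch.empty_like(src)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            dst.copy_(src)                          # one read + one write per element
        torch.cuda.synchronize()
        elapsed = time.perf_counter() - start
        return 2 * src.numel() * 4 * iters / elapsed / 1e9

    print(f"~{copy_bandwidth_gbs():.1f} GB/s effective copy bandwidth")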
Now that's a completely different argument, and still mostly incorrect. The 5700G's GPU will pull as much memory throughput as you can feed it. The limitation is not the GPU, it's how fast you can clock your RAM.
The BIOS doesn't set the shared memory; it sets the dedicated memory. The shared memory is allocated by the OS and driver as needed, and the only limit is how much memory you have and how much is used by other processes.
You can force any program to use shared memory by keeping the dedicated memory low. As I said, these programs don't really choose to use it; it's a driver/OS responsibility.
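On Linux with the amdgpu driver you can see that split directly. Minimal sketch, assuming the APU shows up as card0:

    # mem_info_vram_total = the BIOS carve-out ("dedicated" VRAM);
    # mem_info_gtt_total  = the GTT pool, i.e. system memory the driver can
    #                       map for the GPU on demand ("shared").
    from pathlib import Path

    def read_mib(name, card="card0"):
        return int(Path(f"/sys/class/drm/{card}/device/{name}").read_text()) / 1024**2

    print(f"dedicated (carve-out): {read_mib('mem_info_vram_total'):.0f} MiB")
    print(f"shareable (GTT):       {read_mib('mem_info_gtt_total'):.0f} MiB")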
The 5700G's memory controller indeed can't go above 100 GB/s. However, 200-400 GB/s is not what the M1 Max GPU alone can do; it's combined performance, so you'd have to subtract the CPU's share. The M1 Max GPU would still be faster, of course. But the premise is that GPU performance doesn't really matter.
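Back-of-the-envelope peak numbers, in case it helps (theoretical maximums; real-world throughput is lower):

    # Peak bandwidth = transfer rate (MT/s) x bus width in bytes.
    def peak_gbs(mts, bus_bits):
        return mts * (bus_bits // 8) / 1000

    print(peak_gbs(3200, 128))   # 5700G, dual-channel DDR4-3200: ~51.2 GB/s
    print(peak_gbs(6400, 512))   # M1 Max, LPDDR5-6400 on a 512-bit bus: ~409.6 GB/s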