It's not compression in the normal sense of the word. Most parsing is directly to data: you know, e.g., that the square of some piece is the next 5 bits. In languages that allow it you can cast directly from the next bit offset to, say, a byte. This is going to be dramatically faster than parsing much more loosely structured JSON. As database sizes increase you also get worse performance there, so it's a double hit. With these sorts of representations you get orders of magnitude faster and smaller. Sometimes there really is a free lunch!
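To make the "parsing directly to data" idea concrete, here's a minimal sketch of reading fixed-width fields out of a bit-packed buffer. The field layout (a 6-bit square index followed by a 3-bit piece code) and the helper name are hypothetical, just to illustrate the technique:

```python
def read_bits(data: bytes, bit_offset: int, n_bits: int) -> int:
    """Read n_bits starting at bit_offset (big-endian bit order)."""
    value = 0
    for i in range(n_bits):
        pos = bit_offset + i
        bit = (data[pos // 8] >> (7 - pos % 8)) & 1
        value = (value << 1) | bit
    return value

# Hypothetical record: 6-bit square (0-63), then a 3-bit piece code.
record = bytes([0b11010101, 0b10000000])

square = read_bits(record, 0, 6)  # 0b110101 == 53
piece = read_bits(record, 6, 3)   # 0b011 == 3
```

There's no tokenizing, no key lookup, no string allocation: the offset of every field is known in advance, which is where the speed comes from. (A C implementation would just shift and mask, or cast the buffer pointer directly, as the comment above notes.)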
Also I'd add the sizes involved here are kind of insane. I wrote a database system that was using a substantially better compression scheme that averaged out to ~19 bytes per position IIRC. And I was still getting on the order of 15 gigabytes of data per million games. Ideally you want to support at least 10 million games for a modern chess database, and 150 gigabytes is already getting kind of insane - especially considering you probably want it on an SSD. But if that was JSON, you'd be looking at terabytes of data, which is just completely unacceptable.