Agreed, JSON is just a wire protocol. What TFA complains about is the lack of a contract on the part of log producers. (A schema would be a crucial part of such a contract.)
In .Net we're using Serilog, which supports CLEF[1], and I'm in the process of changing our non-.Net code to do structured logging to CLEF as well (I made an internal library to support structured logging).
Since Serilog supports consuming CLEF as well, this makes it trivial to upload the non-.Net logs to Azure Application Insights for example.
There might be other options as well; I didn't look much further since this fit our needs.
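For the curious, CLEF is just newline-delimited JSON with a handful of reserved @-prefixed fields (@t for timestamp, @mt for the message template, @l for level). A minimal sketch of what a non-.Net structured-logging helper might emit (this is illustrative, not the internal library mentioned above; `clef_event` and its fields other than the reserved ones are made up for the example):

```python
import json
from datetime import datetime, timezone

def clef_event(level, template, **props):
    """Render one structured log event as a CLEF JSON line.

    CLEF reserves @-prefixed fields: @t (timestamp), @mt (message
    template), @l (level). Ordinary properties sit alongside them
    and fill the template's {placeholders} on the consuming side.
    """
    event = {
        "@t": datetime.now(timezone.utc).isoformat(),
        "@mt": template,
        "@l": level,
    }
    event.update(props)  # hypothetical example properties go here
    return json.dumps(event)

# One event per line -- ready to hand to a CLEF consumer like Serilog.
print(clef_event("Warning", "Disk usage at {Percent}% on {Host}",
                 Percent=91, Host="web-01"))
```

The key point is that the message template travels with the event, so properties stay queryable instead of being baked into a rendered string.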
edit: This doesn't completely solve the main point of the article, of course, as the variables, error codes, etc. in a structured log message can still change willy-nilly in a general system setting.
There are a few more fields that are fairly commonly logged somewhat separately: severity, host, and component/service. But even so, the structure lacks universality (i.e. lots of logging systems support these, but each in its own way). So, yeah, it's not much.
In my limited experience, the number of lines logged, correlated with the expected or actual activity of the system(s), is perhaps useful for monitoring. Looking into and analyzing the actual log events and their text is then the next step if too much or too little seems to be logged.