Explaining reasoning, and the paths reasoning takes, is a product of manipulating language.
Most ideas, even in reasoning-heavy fields, are generated by non-linguistic processes.
Of course, some problems are solved by step-by-step linguistic (or mathematical) reasoning: A, then B, then C, and so on. But even for those kinds of problems, once they get complex, the process looks more like this: follow a bunch of paths to dead ends, think some more, walk away, and then "Aha!", the idea of a solution pops into our head. Only afterwards do we back it up and make it explicit as a linguistic/logical 'chain of reasoning' so we can explain it to others. That solution did not come from manipulating language; it came from other cognitive processes we do not understand. Only the explanation of it used language.
LLMs aren't even close to that type of processing.
It's nice that you say partially; that's a bit different from every other HN comment that has wondered this. Yeah, probably partially, as in: you have reason, you add in language, you get more reason.
Extraordinary claims demand extraordinary evidence. We have machines that talk, which, by itself, is a corollary of nothing.