As for making the proofs understandable to humans, I'd say that's still a challenge. Not a fundamental challenge, but a real practical one.
Lean makes proofs verifiable, and to a degree understandable, but ordinary Lean proofs written by humans are already hard to follow without additional documentation, and, as with all code generation, AI-generated proofs are probably even harder. There are many paths to a proof, and an AI may take very messy routes that don't align with how humans create abstractions and assign meaning to break the argument down into understandable steps.
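As a toy illustration of the gap (my own sketch, not from any AI system): both proofs below are valid Lean 4 and verify the same statement, but the first closes the goal with a single decision procedure that leaves no human-readable trace, while the second exposes the inductive structure a mathematician would actually learn something from.

```lean
-- Machine-style: one opaque tactic call. Verifiable, but it
-- communicates nothing about *why* the statement holds.
theorem add_comm' (a b : Nat) : a + b = b + a := by
  omega

-- Human-style: the same fact, broken into named steps
-- (induction on b, then rewriting with the successor lemmas).
theorem add_comm'' (a b : Nat) : a + b = b + a := by
  induction b with
  | zero => simp
  | succ n ih =>
    rw [Nat.add_succ, ih, Nat.succ_add]
```

Real AI-generated proofs tend to look like the first kind, only far longer, which is exactly why extracting insight from them takes extra work.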
In a sense all AI is understandable: nothing is hidden, and you can see every number inside a Transformer. But merely being able to see the code may not help mathematicians much in extracting deeper meaning and lessons from AI-generated proofs, at least not without significant additional work.