Bad typing is definitely not enough to determine whether your subject is really human. As a teenager, I wrote a chatbot for an online text-based game. I gave it knowledge of the QWERTY keyboard layout, and when "typing" it had a small random chance of pressing a key adjacent to the one it meant to press. It would also sometimes transpose characters. Sloppy typing can be simulated.
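A minimal sketch of that neighbour-key idea in Python (not my original code; the adjacency map is abridged and the error rates are made up for illustration):

```python
import random

# Abridged QWERTY adjacency map: each letter maps to its neighbouring keys.
ADJACENT = {
    'q': 'wa', 'w': 'qes', 'e': 'wrd', 'r': 'etf', 't': 'ryg',
    'y': 'tuh', 'u': 'yij', 'i': 'uok', 'o': 'ipl', 'p': 'o',
    'a': 'qsz', 's': 'awdx', 'd': 'sefc', 'f': 'drgv', 'g': 'fthb',
    'h': 'gyjn', 'j': 'hukm', 'k': 'jil', 'l': 'ko',
    'z': 'asx', 'x': 'zsdc', 'c': 'xdfv', 'v': 'cfgb',
    'b': 'vghn', 'n': 'bhjm', 'm': 'njk',
}

def sloppy_type(text, slip_rate=0.02, swap_rate=0.01):
    """Return text with occasional neighbour-key slips and transpositions."""
    chars = list(text)
    # Neighbour-key slips: replace a letter with one adjacent to it.
    for i, ch in enumerate(chars):
        if ch.lower() in ADJACENT and random.random() < slip_rate:
            chars[i] = random.choice(ADJACENT[ch.lower()])
    # Transpositions: occasionally swap an adjacent pair of characters.
    for i in range(len(chars) - 1):
        if random.random() < swap_rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return ''.join(chars)

print(sloppy_type("the quick brown fox jumps over the lazy dog"))
```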
An interesting test might be a statistical analysis of your subject's mistakes against a corpus of real human mistakes: humans make certain kinds of mistakes far more often than others, and a naive AI might make inhuman ones. This would of course not be conclusive.
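One way to do that comparison, sketched below, is a chi-squared goodness-of-fit test over error categories. The categories and all the counts here are made up purely for illustration; you would need a real typo corpus to make this meaningful:

```python
from scipy.stats import chisquare

# Hypothetical counts of each error class in a human typo corpus
# versus the subject's observed mistakes.
categories = ["neighbour-key slip", "transposition", "omission", "doubling"]
human_corpus = [520, 310, 140, 30]   # reference counts from the corpus
subject = [48, 31, 12, 9]            # observed counts for the subject

# Scale the corpus distribution to the subject's total, so the expected
# frequencies sum to the observed total as chisquare requires.
total = sum(subject)
expected = [total * c / sum(human_corpus) for c in human_corpus]

stat, p = chisquare(subject, f_exp=expected)
print(f"chi2={stat:.2f}, p={p:.3f}")  # a low p hints at an inhuman mistake profile
```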
Simulating bad typing is only necessary when faking a human, though. Here, exhibiting bad typing while faking an AI is the stranger case.
That said, AIs trained on the transcripts of a large number of chat conversations may reproduce such mistakes. I remember reading a paper that got good results that way, with the side effect that the model produced typing mistakes. I cannot find that paper again, unfortunately.