That's assuming the malicious syscalls actually fire while Apple is reviewing the app. An iOS malware author would likely do everything possible to keep the payload from firing during the review process; it could be as simple as checking the system time and only launching the obfuscated malicious code a month or two after submission.
Disabling the payload under circumstances that put the perpetrator at risk of being discovered is a very common malware tactic (see Conficker, which disabled itself if it detected it was running in Ukraine, for example).
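Both tricks fit in a few lines. A minimal Swift sketch, where runPayload(), the submission date, and the sixty-day delay are all hypothetical placeholders:

    import Foundation

    func runPayload() { /* hypothetical payload */ }

    // Hypothetical submission date; the payload stays dormant for 60 days after it.
    let submissionDate = DateComponents(calendar: .current,
                                        year: 2013, month: 1, day: 1).date!
    let sixtyDays: TimeInterval = 60 * 24 * 60 * 60

    // Conficker-style region check: stay dormant in the author's home region.
    let inHomeRegion = Locale.current.regionCode == "UA"

    if Date() > submissionDate.addingTimeInterval(sixtyDays) && !inHomeRegion {
        runPayload()   // fires only long after review, and never at home
    }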
That's assuming Apple doesn't think to serve the app different GPS coordinates, different IPs, different device IDs, and a different system time.
And this could very well be completely automated: someone designs a map of gestures and time intervals covering the app's simple workflows, and the system then runs the app and interacts with it under every possible configuration.
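In XCUITest terms, such a harness might look like the sketch below. The launch-environment keys, the idea that a spoofing shim intercepts them, and the app's "Login" button are all assumptions for illustration, not Apple's actual process:

    import XCTest

    final class ReviewSweepTests: XCTestCase {
        // Each configuration spoofs what the app can observe about its environment.
        let configurations: [[String: String]] = [
            ["FAKE_GPS": "37.33,-122.03", "FAKE_DATE": "2013-01-01"],
            ["FAKE_GPS": "50.45,30.52",   "FAKE_DATE": "2013-06-01"], // months "later"
        ]

        func testScriptedWorkflowUnderAllConfigurations() {
            for config in configurations {
                let app = XCUIApplication()
                app.launchEnvironment = config   // read by a hypothetical spoofing shim
                app.launch()

                // Replay the pre-designed gesture map for one simple workflow.
                app.buttons["Login"].tap()
                app.swipeUp()
                app.terminate()
            }
        }
    }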
Of course, the malware author could then try different tactics to detect whether the app is running inside a virtual machine rather than on a real device. But that's about as hard as detecting a well-behaved, modern rootkit, and the probing itself implies certain sequences of system calls that can in turn be detected.
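For example, two common probes on iOS are the simulator's injected environment variables and the hardware model reported by sysctl; each probe is itself observable behavior for whoever runs the sandbox:

    import Foundation

    // Probes a malware author might use to detect a simulated device.
    func looksLikeSimulator() -> Bool {
        // The iOS simulator injects environment variables such as this one.
        if ProcessInfo.processInfo.environment["SIMULATOR_DEVICE_NAME"] != nil {
            return true
        }
        // Real devices report models like "iPhone5,2" for sysctl "hw.machine";
        // the simulator reports the host Mac's architecture instead.
        var size = 0
        sysctlbyname("hw.machine", nil, &size, nil, 0)
        var machine = [CChar](repeating: 0, count: size)
        sysctlbyname("hw.machine", &machine, &size, nil, 0)
        let model = String(cString: machine)
        return !model.hasPrefix("iPhone") && !model.hasPrefix("iPad")
    }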
Virtual machines do this all the time, e.g. trapping and detecting illegal operations.
Not only that, but the detection doesn't have to be accurate: it just has to raise a red flag in some intranet bug tracker saying that certain apps need closer inspection. It also makes things much harder for malware authors, because instead of just hunting for an iOS / Android exploit, they now also have to game the approval process.
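The triage itself can be a dumb weighted score; a sketch, with made-up signal names and an arbitrary threshold:

    // The signals don't need to be accurate; they only need to queue
    // an app for manual inspection.
    struct Signal { let name: String; let weight: Int }

    let observed = [
        Signal(name: "reads system time before first network call", weight: 2),
        Signal(name: "branches on an undocumented server reply",    weight: 3),
        Signal(name: "probes for a simulator environment",          weight: 5),
    ]

    let score = observed.reduce(0) { $0 + $1.weight }
    if score >= 5 {
        print("flag app for closer inspection")  // e.g. file a bug-tracker ticket
    }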
This is practically impossible. You could randomize the input for years and people would still get around it: just make a web request to your own website and, depending on the reply, do something nasty. You could never catch that with this sort of automated testing.
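A sketch of that kill switch, with a hypothetical endpoint, token, and payload:

    import Foundation

    func runPayload() { /* hypothetical payload */ }

    // The app looks harmless under any automated input: nothing happens
    // until the author's server starts answering "activate".
    func phoneHome() {
        let url = URL(string: "https://example.com/flag")!   // hypothetical endpoint
        URLSession.shared.dataTask(with: url) { data, _, _ in
            guard let data = data,
                  let reply = String(data: data, encoding: .utf8) else { return }
            if reply.trimmingCharacters(in: .whitespacesAndNewlines) == "activate" {
                runPayload()
            }
        }.resume()
    }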
That said, I don't think Apple does any kind of audit that targets exploits; it relies only on reviews.