They do static analysis of your executable and check what you call, which is how they know if you're using a private API, and why they reject you for it. They've also caught bugs in my app and sent it back.
I believe the advantage of Objective-C for them is that pretty much all messages go through one point in the runtime (objc_msgSend, as far as I know). That signature is probably very obvious, and they can probably do a lot of looking at your code with very little effort. If you have malware in your code, there's a pretty good chance they'll find it based on what it does. I imagine they can see a lot more about our apps than you'd think, since they probably run them on a debug build of iOS.
I can think of literally dozens of things they can look at, and I don't even have access to their systems to know what data is available. They have hundreds of engineers who have probably figured all of that out, and automated it for the approvers, too.
And how would we know? Is a malware author going to blog and say "I tried to slip malware into the App Store, and they denied it"? Since an attempt costs $99 (I can't imagine you'd keep your account after a failed attempt), that raises the bar for trying. I seriously doubt we're the first to think of exploiting iOS, and you haven't heard a word about it...
I think security theater is a bit of a stretch, frankly.
> They do static analysis of your executable and check what you call
Doesn't Objective-C have dynamic binding? I'm sure you can determine what function to call at runtime, which means you can always get past a static analysis.
> I can think of literally dozens of things they can look at
I can think of literally dozens of ways any analysis can be subverted. Even dynamic analysis wouldn't work in case the thing uses a timebomb, or even something simple like downloading data from the internet (apps do that, right?) and sending in a special payload that instructs the application to do something evil once you reach, say, 1,000 users. Apple wouldn't test your application by installing it on a thousand phones, would it?
> Doesn't Objective-C have dynamic binding? I'm sure you can determine what function to call at runtime,
You need a message name (@selector), and the names are strongly typed -- meaning, they're pretty obvious with reflection tools. If there's a way to send a dynamic message without putting a @selector in your code, I don't know it, and I'm willing to learn.
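For example (the method name here is made up), even the "dynamic" dispatch path still bakes the name into the binary:

    // The selector name "launchPayload" is stored as a literal in
    // the binary's __objc_methname section, so strings(1) or any
    // ObjC reflection tool will surface it.
    [target performSelector:@selector(launchPayload)];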
When you start talking about the POSIX layer and stuff near the bottom (C), traditional wisdom applies there: what does your app link against? If you're linking against the dynamic libraries near the bottom of the stack and walking their contents (to avoid putting a string in your binary of what you're looking for, perhaps?), Apple's probably going to check that disassembly pretty heavily.
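Roughly the kind of thing I'd expect them to flag (a sketch; the assembled symbol name is just an example):

    #include <dlfcn.h>
    #include <string.h>

    void sneaky_call(void) {
        // Assemble the symbol name at runtime so the full string
        // never appears verbatim in the binary, then look it up
        // among the symbols already linked in.
        char name[8] = {0};
        strcat(name, "fo");
        strcat(name, "rk");
        int (*fn)(void) = (int (*)(void))dlsym(RTLD_DEFAULT, name);
        if (fn) fn();
    }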
> Even dynamic analysis wouldn't work in case the thing uses a timebomb
You keep writing "timebomb" as if it's some magical device that circumvents all security. A time bomb needs a callback in the binary, and Apple's going to wonder why your app registers a timer for a specific date. This is what you don't seem to get: Apple has a disassembly of your entire binary, and they can see when they run it that you register a callback for December 2012. Where does that callback go? Code in the binary.
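To make that concrete (a sketch; the payload selector is hypothetical):

    // Both the fire date and the callback are in plain sight: the
    // disassembly shows the timestamp constant and the selector
    // name the timer will invoke.
    NSDate *fireDate = [NSDate dateWithTimeIntervalSince1970:1354320000]; // Dec 1, 2012
    NSTimer *bomb = [[NSTimer alloc] initWithFireDate:fireDate
                                             interval:0
                                               target:self
                                             selector:@selector(detonate)
                                             userInfo:nil
                                              repeats:NO];
    [[NSRunLoop mainRunLoop] addTimer:bomb forMode:NSDefaultRunLoopMode];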
Same thing with your 1,000 phones case: clearly something needs to count, and the obvious candidate is a Web server of some kind, and then the app needs to actually do something if the response comes back as 1,000. Which means that code needs to be in the binary.
Or you need to download code from the Web server to execute. Which is easily detectable by Apple, and you'd never get approved.
> something simple like downloading data from the internet (apps do that right)
You will be rejected if it's used as any part of execution, and they can (and do) check that. If you even touch APIs that try to sneak data into executable pages, I bet they'd terminate the app in record time.
Trust me, they've thought this through. The reason I asked if you're speaking from experience is that you're making a lot of FUD claims which are easy to test. Seriously, buy a Mac and try doing something malware-like with the iOS API. Then try submitting it to Apple. Otherwise, you and I are both bags of hot air, theorizing about hypotheticals.
"You need a message name (@selector), and the names are strongly typed -- meaning, they're pretty obvious with reflection tools. If there's a way to send a dynamic message without putting a @selector in your code, I don't know it, and I'm willing to learn."
sel_registerName() translates a C string to a SEL at runtime [1].
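So something like this keeps the method name out of the binary entirely (a sketch; the name and target are hypothetical):

    #include <objc/runtime.h>
    #include <objc/message.h>
    #include <string.h>

    void poke(id target) {
        // Build the selector name at runtime; no "launchPayload"
        // literal ever appears whole in the binary.
        char name[16] = {0};
        strcat(name, "laun");
        strcat(name, "chPayload");
        SEL sel = sel_registerName(name);
        ((void (*)(id, SEL))objc_msgSend)(target, sel);
    }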
I find it pretty hard to believe the iOS review monkeys have the technical ability to reason about the disassembly of a binary (if it's sufficiently obfuscated, there are maybe 1,000 people tops on Earth with the ability to do this).
As long as the operating system has a dynamic linker, all bets are off with respect to static analysis, and the halting problem definitely applies to automated analysis.
If the OS allows writable pages to ever be executable, then you can pretty easily hide your nasty code in the low-order bits of an innocent-looking data file (say, an image), then pull them out and execute them at runtime.
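The extraction half is trivial (a sketch; one hidden byte recovered per eight cover bytes):

    #include <stdint.h>
    #include <stddef.h>

    // Reassemble hidden bytes from the low-order bit of each byte
    // of an innocent-looking file (e.g. image pixel data).
    static void extract_lsb(const uint8_t *cover, size_t cover_len,
                            uint8_t *out, size_t out_len) {
        for (size_t i = 0; i < out_len && (i + 1) * 8 <= cover_len; i++) {
            uint8_t b = 0;
            for (int j = 0; j < 8; j++)
                b = (uint8_t)((b << 1) | (cover[i * 8 + j] & 1));
            out[i] = b;
        }
    }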
iOS doesn't allow writable pages to be executed. As I recall, that was one reason why Android 2.2's V8 JavaScript JIT was so much faster than iOS's.
Also, use of dlopen() is not allowed. A long time ago I heard of two AR apps that used it (they dynamically linked against CoreSurface, to copy the bits from the camera preview into an opengl texture) but I haven't heard of anyone sneaking an app using dlopen() into the store in over a year.
hence "to ever be executable", they don't need to be writable and executable, as long as at one point they are writable (for you to dump your code in), then at some point they're executable (writable or otherwise).
As for dlopen(), you could just compile it directly into your app rather than using it from libc/libdl, bypassing that limitation entirely.
On non-jailbroken iOS you can't mark a section of memory as executable once it has been writable (and vice-versa, apparently). Executable pages are also checked against a signature by the kernel before they're used.
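If you want to see it for yourself, this is roughly the experiment (a sketch; on stock iOS the mprotect() call is expected to fail):

    #include <sys/mman.h>
    #include <stdio.h>
    #include <string.h>

    void try_write_then_exec(void) {
        void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                          MAP_ANON | MAP_PRIVATE, -1, 0);
        if (page == MAP_FAILED) return;
        memset(page, 0, 4096);  // pretend we copied shellcode in here
        // The kernel refuses to flip a once-writable anonymous page
        // to executable, and unsigned code wouldn't pass the
        // signature check even if it did.
        if (mprotect(page, 4096, PROT_READ | PROT_EXEC) != 0)
            perror("mprotect");  // expected failure on stock iOS
    }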
There are thousands of applications in the App Store that implement a 'time bomb' without reading the clock. If you hide your payload in the code that runs level X of your game, or that runs when a user gets the 'completed 100,000 lines in Tetris' achievement, Apple's manual testing will not find it.
For the same reason, I doubt that 'insufficient code coverage' can be grounds for rejecting a binary from the store.
> You need a message name (@selector), and the names are strongly typed
What do you mean by strong typing here? AFAIK ObjC has dynamic binding, which means you can send the same message to a different object based on a condition. So from what I see, you can pretend you're sending a message to an internal object, but then switch the object out for an external one later.
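Something like this (a sketch; all the names are made up):

    // Both receivers implement the same method; which one the
    // message lands on is decided at runtime, so the call site
    // alone tells the reviewer nothing.
    id target = looksSafe ? internalObject : externalObject;
    [target handleRequest];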
> A time bomb needs a callback in the binary
Nope. You read the current time in at startup to, I don't know, display to the user; then at some later point, after enough obfuscation and misdirection, you innocuously check whether the number you got back was past 1335830400.
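Sketch (the payload method is hypothetical):

    // No timer, no callback registration: one number read at
    // launch, compared much later behind some misdirection.
    NSTimeInterval now = [[NSDate date] timeIntervalSince1970];
    // ... show the date to the user, stash `now` somewhere ...
    if (now > 1335830400)      // 2012-05-01 00:00 UTC
        [self misbehave];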
> and then the app needs to actually do something if the response comes back as 1,000. Which means that code needs to be in the binary.
But the response won't be 1,000. The response will have lots of data you'd send otherwise, then an innocuous-sounding string like, I don't know, "true" or something, tacked on at the end, and you'll have a check for the end of the string being "true" buried somewhere deep within your code, which is where you'll switch the object out.
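Concretely (a sketch; the names are made up):

    // The trigger rides along on otherwise-normal response data;
    // nothing in the binary mentions "1,000" or a payload.
    NSString *body = [[NSString alloc] initWithData:responseData
                                           encoding:NSUTF8StringEncoding];
    id handler = [body hasSuffix:@"true"] ? externalObject : internalObject;
    [handler process:body];   // same receiver swap as above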
> Seriously, buy a Mac
Sorry, I don't care enough about the issue to devote the large sum of money it would take to buy a Mac, or my time. I don't even own a smartphone, and I know I'm never going to buy an iPhone. I'm not even a security researcher, just someone who knows a little about programming languages.