
"I demand a walled garden, and I will gladly pay 30% to ensure all apps are reviewed, approved, and subject to the whims of the closed-shop store providing them."

This is obviously a serious issue - as the OP notes, it's the double-edged sword of openness. Still, Google's speedy response makes me feel warmer than the (now less common) decidedly un-speedy application review process Apple put many developers through.

Edit: As jevans points out in response, and others have noted in this discussion, Google's 5 minute response time might be better characterised as '1 week of sitting on their hands when the developers complained, and 1 rapid response when it went public in a loud way'.

I'm feeling less warm now, and looking fondly at the non-smartphone Nokia I own which is so clearly targeted at the 11-year-old-girl's-first-phone market that it came inside a pink cardboard handbag. But has no malware.



I'm not sure I would qualify Google's response as "speedy." On the Reddit thread, the developer of the real Guitar Solo Lite claims to have been attempting to contact Google about this issue through various channels for a week with no response. http://www.reddit.com/r/Android/comments/fvepu/someone_just_...


Would be nice to find some middle ground here, for sure. It seems to me that this is only starting to become a serious issue. This also seems to be a much easier route than finding 0-day flaws on a PC, and while most folks would think twice about installing software from the back alleys of the internet, they may not feel that way about Android (yet).


Most HN readers might think twice on a PC, but anti-virus creators have got rich off the majority who don't think twice.


This is probably where third-party 'app stores' come in. E.g., Amazon's app store, or another curated store that manually picks which apps get in, may become the 'trusted' source for apps.

Given Android's structure, it is pretty difficult to keep the automated shipping of apps while also retaining security. Yes, you can keep adding more warnings about what apps do, but in general I don't think people read them, and an app can ask for an awful lot of privileges.

The great thing about Android is just how much power you have to do stuff as an app maker, but that is of course one of the main problems as well.


> Google's 5 minute response time might be better characterised as '1 week of sitting on their hands when the developers complained, and 1 rapid response when it went public in a loud way'.

To be fair, the first complaint was of copyright infringement, the second was of malware. You can hardly expect the same response time to both kinds of issues.


The Apple walled garden is way more dangerous, because it's almost as easy to slip malware into your code (remember, Apple doesn't do full source code audits), and the false sense of security makes users even more complacent. It really is a perfect example of security theatre.


They do static analysis of your executable and check what you call, which is why they know if you're using a private API and reject you because of it. They've also caught bugs in my app and sent it back.

I believe the advantage of Objective-C for them is that all messages pretty much go through one point in the runtime (as far as I know). That signature is probably very obvious, and they can probably do a lot of looking at your code with very little effort. If you have malware in your code, there's a pretty good chance they'll find it based on what it does. I imagine they can see a lot more about our apps than you'd think, since they probably run them in a debug build of iOS.
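
To make the "one point in the runtime" bit concrete (just a sketch using the public Objective-C runtime C API; I obviously don't know what Apple's tooling actually looks for): a plain message send boils down to a call through objc_msgSend with a selector, and that selector name sits right there in the binary for a static analyzer to find.

    #include <objc/runtime.h>
    #include <objc/message.h>

    /* Roughly what [NSDate date] lowers to: one call through objc_msgSend,
       with the class and selector names visible in the binary. */
    static id call_nsdate_date(void) {
        Class dateClass = objc_getClass("NSDate");
        SEL dateSel = sel_registerName("date");   /* same as @selector(date) */
        return ((id (*)(id, SEL))objc_msgSend)((id)dateClass, dateSel);
    }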

I can think of literally dozens of things they can look at, and I don't even have access to their systems to know what data is available. They have hundreds of engineers who have probably figured all of that out, and automated it for the approvers, too.

And how would we know? Is a malware author going to blog and say, "I tried to slip malware into the App Store, and they denied it"? Since an attempt costs $99 (I can't imagine you'd keep your account after a failed attempt), that raises the bar for trying. I seriously doubt we're the first to think of exploiting iOS, and you haven't heard a word about it...

I think security theater is a bit of a stretch, frankly.


They do static analysis of your executable and check what you call

Doesn't Objective-C have dynamic binding? I'm sure you can determine what function to call at runtime, which means you can always get past a static analysis.

I can think of literally dozens of things they can look at

I can think of literally dozens of ways any analysis can be subverted. Even dynamic analysis wouldn't work if the thing uses a time bomb, or something as simple as downloading data from the internet (apps do that, right?) and then sending a special payload that instructs the application to do something evil once it reaches, say, 1,000 users. Apple wouldn't test your application by installing it on a thousand phones, would it?


> Doesn't Objective-C have dynamic binding? I'm sure you can determine what function to call at runtime,

You need a message name (@selector), and the names are strongly typed -- meaning, they're pretty obvious with reflection tools. If there's a way to send a dynamic message without putting a @selector in your code, I don't know it, and I'm willing to learn.

When you start talking about the POSIX layer and stuff near the bottom (C), traditional wisdom applies there: what does your app link against? If you're linking against the dynamic libraries near the bottom of the stack and walking their contents (to avoid putting a string in your binary of what you're looking for, perhaps?), Apple's probably going to check that disassembly pretty heavily.

> Even dynamic analysis wouldn't work in case the thing uses a timebomb

You keep on writing time bomb as if it's some magical device that circumvents all security. A time bomb needs a callback in the binary, and Apple's going to wonder why your app registers a timer for a specific date. This is what you don't seem to get: Apple has a disassembly of your entire binary, and they can see when they run it that you register a callback for December 2012. Where does that callback go? Code in the binary.

Same thing with your 1,000 phones case: clearly something needs to count, and the obvious candidate is a Web server of some kind, and then the app needs to actually do something if the response comes back as 1,000. Which means that code needs to be in the binary.

Or you need to download code from the Web server to execute. Which is easily detectable by Apple, and you'd never get approved.

> something simple like downloading data from the internet (apps do that right)

You will be rejected if it's used as any part of execution, and they can (and do) check that. If you even touch APIs that try to sneak data into executable pages, I bet they'd terminate the app in record time.

Trust me, they've thought this through. The reason that I asked if you're speaking from experience is because you're making a lot of FUD claims which are easily fixable. Seriously, buy a Mac and try doing something malware-like with the iOS API. Then try submitting it to Apple. Otherwise, you and I are both bags of hot air, theorizing about hypotheticals.


"You need a message name (@selector), and the names are strongly typed -- meaning, they're pretty obvious with reflection tools. If there's a way to send a dynamic message without putting a @selector in your code, I don't know it, and I'm willing to learn."

sel_registerName() translates a C string to a SEL at runtime [1].

[1]: http://developer.apple.com/library/mac/#documentation/Cocoa/...
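
So, as a sketch (the method name and target here are made up, and whether this would actually slip past Apple's checks is exactly what's in dispute), you can assemble the selector name at runtime so that no @selector literal and no complete method-name string ever appears in the binary:

    #include <objc/runtime.h>
    #include <objc/message.h>
    #include <string.h>

    /* Build the selector name from fragments at runtime; "performHiddenWork"
       is a placeholder, and a real attempt would obfuscate the fragments too. */
    static void call_hidden_selector(id target) {
        char name[32] = {0};
        const char *parts[] = { "perform", "Hidden", "Work" };
        for (int i = 0; i < 3; i++)
            strcat(name, parts[i]);

        SEL sel = sel_registerName(name);
        if (class_respondsToSelector(object_getClass(target), sel))
            ((void (*)(id, SEL))objc_msgSend)(target, sel);
    }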


I find it pretty hard to believe the iOS review monkeys have the technical ability to reason about the disassembly of a binary (if it's sufficiently obfuscated, there are maybe 1,000 people on Earth, tops, with the ability to do this).

As long as the operating system has a dynamic linker all bets are off wrt. static analysis, and the halting problem definitely applies to automated analysis.

If the OS allows writable pages to ever be executable, then you can pretty easily hide your nasty code in the low-order bits of an innocent-looking data file (say, an image), then pull it out and execute it at runtime.
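
For illustration, getting the bytes back out is trivial (a sketch only; as the replies below note, iOS's no-writable-then-executable-pages rule means you still couldn't run what you extract):

    #include <stddef.h>
    #include <stdint.h>

    /* Reassemble hidden bytes from the least-significant bit of each byte
       of an innocent-looking image buffer. Extraction is the easy part;
       executing the result is what the platform has to prevent. */
    static void extract_lsb(const uint8_t *pixels, size_t pixel_count,
                            uint8_t *out, size_t out_len) {
        for (size_t i = 0; i < out_len; i++) {
            uint8_t byte = 0;
            for (int bit = 0; bit < 8; bit++) {
                size_t idx = i * 8 + bit;
                if (idx >= pixel_count) return;
                byte |= (uint8_t)((pixels[idx] & 1u) << bit);
            }
            out[i] = byte;
        }
    }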


iOS doesn't allow writable pages to be executed. As I recall, that was one reason why Android 2.2's V8 JavaScript JIT was so much faster than iOS's.

Also, use of dlopen() is not allowed. A long time ago I heard of two AR apps that used it (they dynamically linked against CoreSurface, to copy the bits from the camera preview into an opengl texture) but I haven't heard of anyone sneaking an app using dlopen() into the store in over a year.


hence "to ever be executable", they don't need to be writable and executable, as long as at one point they are writable (for you to dump your code in), then at some point they're executable (writable or otherwise).

as for dlopen(), you could just compile that directly into your app rather than using it from libc/libdl, bypassing that limitation entirely.


On non-jailbroken iOS you can't mark a section of memory as executable once it has been writable (and vice-versa, apparently). Executable pages are also checked against a signature by the kernel before they're used.


That's pretty incredible; I'm genuinely surprised.

I suppose it's possible when you start from scratch; there's no way they could do that on the Mac.


There are thousands of applications in the App Store that implement a 'time bomb' without reading the clock. If you hide your payload in the code that runs level X of your executable, or that runs when a user gets the 'completed 100,000 lines in Tetris' achievement, Apple's manual testing will not find it.

For the same reason, I doubt that 'insufficient code coverage' can be grounds for rejecting a binary from the store.


You need a message name (@selector), and the names are strongly typed

What do you mean by strong typing here? AFAIK ObjC has dynamic binding, which means you can send the same message to a different object based on a condition. So from what I see, you can pretend you're sending a message to an internal object, but then switch the object out for an external one later.

A time bomb needs a callback in the binary

Nope. You read the current time at startup to, I don't know, display to the user, and then at some later point, after enough obfuscation and misdirection, innocuously check whether the number you got back was past 1335830400.

and then the app needs to actually do something if the response comes back as 1,000. Which means that code needs to be in the binary.

But the response won't be 1,000. The response will have lots of data you'd send otherwise, then an innocuous-sounding string like, I don't know, "true" or something, tacked on at the end, and you'll have a check for the end of the string being "true" buried somewhere deep within your code, which is where you'll switch the object out.
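
Concretely, something like this (a sketch; 1335830400 and the trailing "true" are just the examples above, and the two work functions are placeholders for whatever the app normally does):

    #include <stdbool.h>
    #include <string.h>
    #include <time.h>

    void do_innocent_work(const char *response);   /* placeholder */
    void do_evil_work(const char *response);       /* placeholder */

    /* No timer callback, no magic number in the response: just a date check
       and an innocuous-looking marker at the end of a normal reply. */
    static void handle_response(const char *response) {
        bool armed = time(NULL) > 1335830400;            /* past May 1, 2012 */

        size_t len = strlen(response);
        bool triggered = len >= 4 &&
                         strcmp(response + len - 4, "true") == 0;

        if (armed && triggered)
            do_evil_work(response);                      /* swap the object out */
        else
            do_innocent_work(response);
    }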

Seriously, buy a Mac

Sorry, I don't care enough about the issue to devote the large sum of money it would take to buy a Mac, or my time. I don't even own a smartphone, and I know I'm never going to buy an iPhone. I'm not even a security researcher, just someone who knows a little about programming languages.


I would like to see you back that comment up with facts.

Apple does not examine full source code, but they do watch network traffic and examine API calls. At least they do something.


You can slip the malware in as a timebomb. It's not exactly as easy as it is with Android or an open platform, but it is almost as easy.


Why aren't we seeing it happen, then?


It's much easier to spread FUD on the internet than to actually do it.


How do you know it isn't already happening?

Do you monitor the outgoing traffic from your cellphone?


> How do you know it isn't already happening?

There is no evidence that it is happening, with plenty of security researchers and interested amateurs keeping their eyes open for it. There's nothing special about iOS that prevents you from discovering this sort of app behaviour that isn't present on Android.


The threat is not theoretical. Several iPhone apps have been pulled from the App Store after being found to be harvesting user data, intentionally or unintentionally. A game called Aurora Feint was uploading all the user contacts to the developer's server, and salespeople from Swiss road traffic information app MogoRoad were calling customers who downloaded the app. Game app Storm8 was sued last fall for allegedly harvesting customer phone numbers without permission, but it later stopped that practice. And users also complained that Pinch Media, an analytics framework used by developers, was collecting data about customer phones.

http://news.cnet.com/8301-27080_3-10446402-245.html


There is no evidence that it is happening

You mean other than seeing it happen in the biggest similar ecosystem?

There's nothing special about iOS [...] that isn't present on Android

Exactly. So why should iOS be different with regard to malware then?


> You mean other than seeing it happen in the biggest similar ecosystem?

It's being noticed in the biggest similar ecosystem, too, so by that logic it should be noticed in both if it is present in both.

> Exactly. So why should iOS be different with regard to malware then?

The Apple review process is present in iOS. The process to market is markedly different.


It's being noticed in the biggest similar ecosystem, too, so by that logic it should be noticed in both if it is present in both

Sorry, but how does discovering one instance of malware in the Android Market imply that any instance in the iOS App Store will be discovered at the same time? Is there some sort of quantum link that I'm missing?

The Apple review process is present in iOS.

I was told the Apple review process does not involve a full code analysis. And even if it did, malware authors are known to be quite creative in hiding their payloads.

Apps you have installed might or might not already contain shell-code embedded into seemingly innocent images or assets, with very little chance of detection.

I'm not a security researcher or blackhat. But under the premise that you can (afaik) not root a phone without the user noticing, my strategy for pulling off an attack would be a sleeper-strategy. I'd first seed my payload silently, and then pull the trigger all at once, at some point in the future.

Moreover, considering there has been a one-click safari jailbreak[1], you may not even need to embed actual malware in an app. It may be enough to be able to remotely instruct the app to load a specific URL at your command - now how's that for an attack vector.

So, technically, there is no difference between doing either on Android or doing it on iOS.

If you still want to claim otherwise then you should come up with a better argument than "but apple has a review process!".

[1] http://lifehacker.com/#!316287/jailbreak-your-iphone-or-ipod...


> Sorry, but how does discovering one instance of malware in the android market imply that all instances in the iOS Store will be discovered at the same time?

Twofold: this is not the only instance of malicious software on Android, and I never made the claim that all instances should necessarily be immediately found - just that, if it's as easy to slip in as the OP claimed, SOMETHING should've been found by now.


SOMETHING should've been found by now

Well, I'm working about as hard as PG. No, actually I work much harder. I SHOULD have found the one startup-idea by now that takes off and makes me as wealthy as him!

Notice the flaw in your reasoning? There is no correlation.


Finding a great startup idea and detecting malicious software are vastly different things.

If inserting malware into iOS were simple, it would be done, and done widely. And if it were done widely, the chances are very good that someone would've detected it in at least one such application.


Finding a great startup idea and detecting malicious software are vastly different things.

Oh, you think so? Both are a function of skill, heuristics, sweat - and a great deal of luck.

If inserting malware into iOS is simple...

I'm not sure how I could make it any clearer; perhaps look at some of the other threads on this article?

So I'll just repeat:

   iOS is not different to Android with regard to malware.
Long version: The difference is so small as to be negligible.

I'm not sure I understand why that is such a bitter pill to swallow for some people.


> iOS is not different to Android with regard to malware.

Then why is malware being identified on Android but not iOS?


Erm, actually, malware is being identified on iOS as well:

http://news.cnet.com/8301-27080_3-10446402-245.html


None of those appear to fit the malware definition.


This is one thing I've been wondering about: how is it that they don't know every single API call the executable is linked to?

I know that Objective-C messaging is different from function linking in some fashion, but certainly there must be a way of determining whether disallowed APIs are ever called at all, without just using the app and hoping you trap them.

I think at the very least they should be able to examine the executable for object types used, and function signatures used, as well as determining what signatures are passed to which objects.


> This is one thing I've been wondering about, how is it that they don't know every single API call the executable is linked to?

They do. I used an old example from the Internet, and that API was now private; Apple rejected the app and included the name of the API that I wasn't supposed to use.


But you were being honest in your use of the API.


Are you speaking from experience?


No, from logic. I don't have a Mac to develop on, sorry.


I think you're both right in different ways. The positions "if it can happen it has or will" versus "yes but there is no evidence so it probably hasn't or won't" both have merit and are not explicitly in conflict. But I'm reminded of that quote:

"In theory there is no difference between theory and practice. In practice there almost always is!"


So how is that more dangerous?


It gives users a false sense of security. The average user is far more willing to trust an arbitrary iPhone application than an arbitrary Windows application, do you not agree?


No, I don't really. I don't think those with a high level of technical intelligence will be affected by walled or wall-less, while those with less than moderate technical intelligence probably don't even realize that one marketplace has an approval process and the other doesn't. So it's equally dangerous, but I don't think more so.


You have people talking in this very thread about a walled garden being better:

http://news.ycombinator.com/item?id=2279823


A walled garden is unquestionably better. It isn't foolproof, but it's the same situation as security in general: you can't make your system completely secure, but you can make it more secure than the next guy. The Android store is the next guy.


Not to mention that the Underhanded C Programming Contest shows how you can have malicious code hidden in plain sight. Even doing a source code audit wouldn't be guaranteed to reveal exploits.


It's not so hard to detect exploits - usually all you have to do is watch the system calls, which is more efficient than looking at source code.

That said, I don't think Apple does any kind of audit that targets exploits; it relies only on reviews.


That's assuming the malicious syscalls actually fire while Apple is reviewing the app. An iOS malware author would likely do everything possible to prevent the payload going off during the review process; it might be as simple as checking the system time and only launching the obfuscated malicious code a month or two after submission.

Disabling the payload under circumstances that put the perpetrator at risk of being discovered is a very common malware tactic (see Conficker disabling itself if it detects it's being run in the Ukraine, for example).


That's assuming that Apple doesn't think about serving different GPS-coordinates, different IPs, different device IDs, different system time.

And this could very well be completely automated - you just need someone to design a map of gestures / time intervals for simple workflows of the app, and then let the system run the app and interact with it for every possible configuration.

Of course, the malware author could then try different tactics to detect if the app is running inside a virtual machine, and not on a real device. But that's as hard as detecting a well-behaved / modern root-kit, and that also implies certain sequences of system-calls that can be detected.

Virtual machines do this all the time, i.e. detecting illegal operations.

Not only that, but it doesn't have to be accurate - it just has to raise a red flag in some intranet bug tracker saying that such-and-such apps need closer inspection. It also makes things much harder for malware authors, because instead of searching just for an iOS / Android exploit, they now also have to game this approval process.


interact with it for every possible configuration

This is practically impossible. You could randomize the input for years and people could still get around it - e.g., just make a web request to your website and, depending on the reply, do something nasty. You could never catch this with that sort of automated testing.


Hyperbolic nonsense as explained here: http://news.ycombinator.com/item?id=2283338



