itripn's comments (Hacker News)

Considering only compile time is a shallow approach to the idea of using Rust more widely. I would encourage you to think about the aggregate amount of time our industry spends finding, fixing, and then repairing the damage done by classes of bugs which idiomatic Rust completely prevents. It does all this without impacting runtime (as GC languages often do).

Fast compute at this point is quite literally the least expensive part of the equation. Machines will get faster. Compilers will get optimized.

We've spent decades optimizing the developer experience (compile times) at the expense of the rigor, robustness, and quality of our resulting product. I've been doing this for 30 years, and I can categorically say that I've spent FAR more time chasing null-pointer (NPE), off-by-one (OBO), and race-condition bugs than I would ever have added to my build time with a slightly slower compiler.



Also, please do not confuse abstraction with reuse. "Abstracting things" has nothing to do with "making them from scratch". You're definitely conflating two things inappropriately there. I can write reusable code that has absolutely no abstraction in it, and I can write code based on an abstraction that is completely non-reusable.


Arguably, the conflating of those ideas would speak to his readiness for a senior role.


That is empirically not true, at least under Go 1.2.1. The poster's own code produces identical iteration results even when iterating the same unchanged Map multiple times in a single run.


People need to stop confusing "random" with "non-deterministic". It is a stated attribute of a Hash Map (leaving aside special classes, such as a Tree Map) that iteration of keys will occur in a non-deterministic order.

That is different from random. Random implies that iterating the same Map twice (without intervening updates) will produce different results. I've never met a Hash Map for which that was true.

The irony is that the poster's own code violates the point he was trying to make -- when run under Go 1.2.1, it produces insertion-order results every time (at least for me). I've also never met a Hash Map (until this example in Go) which predictably produced insertion-order key iteration. I am sure even here, it is mere coincidence.
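
For anyone who wants to check for themselves, here is a minimal sketch of the kind of test I mean (my own toy map, not the poster's code):

    package main

    import "fmt"

    func main() {
        m := map[string]int{"a": 1, "b": 2, "c": 3, "d": 4}
        // Iterate the same, unchanged map several times in one run
        // and print the key order each time.
        for i := 0; i < 3; i++ {
            for k := range m {
                fmt.Print(k, " ")
            }
            fmt.Println()
        }
    }

Under Go 1.2.x, a small map like this (which fits in a single hash bucket) came back in the same order on every pass for me; only larger maps showed any shuffling.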

I do believe the poster lacks a formal understanding of data structures. My apologies if that is incorrect, and I am missing his point.


Yes, you are missing the point.

In Go, iteration order of keys was always non-deterministic.

At some point it was changed to be also random, i.e. iterating the same hash a second time would produce a different sequence of key/values than the first time.

This is exactly what the article says, except using more words.


Listen. Order of key iteration is a well-understood attribute of a Hash Map, and it's always non-deterministic. That's what I have said, and I am not missing the point. The poster tries to imply an insertion-order iteration in hash maps that has never existed.

My second, arguably more interesting point, is that the poster's own code violates the premise of his post.


Order of key iteration in a Hash Map is not non-deterministic unless the hash function uses a random salt. Without a random salt for each Hash Map, hashes can be precomputed and will always be the same for the same set of keys; the order is therefore deterministic, even though it may look non-deterministic in nature.
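
To make the salt point concrete, here is a sketch using FNV (chosen arbitrarily; a real map implementation differs):

    package main

    import (
        "crypto/rand"
        "fmt"
        "hash/fnv"
    )

    // hashKey hashes a key with an optional salt prepended.
    func hashKey(salt []byte, key string) uint64 {
        h := fnv.New64a()
        h.Write(salt)
        h.Write([]byte(key))
        return h.Sum64()
    }

    func main() {
        // With no salt, "foo" hashes to the same value in every run:
        // bucket placement, and hence iteration order, is deterministic.
        fmt.Println(hashKey(nil, "foo"))

        // With a random per-map salt, the hash (and hence the bucket,
        // and hence the observed iteration order) changes between runs.
        salt := make([]byte, 8)
        rand.Read(salt)
        fmt.Println(hashKey(salt, "foo"))
    }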

On the other hand, I agree that the author of the article does not seem to understand the underlying data structures. Either that, or the way he has written the article portrays a lack of understanding.

It is quite possible that pre-1.0 they may not have been using a hash map at all, and instead were using an ordered map, which would have given insertion-order iteration.

Note, though, that this is pure speculation, as I have not tested this on pre-1.0 Go, and in fact you may be absolutely correct that he is implying an iteration order that never existed.

What should be noted, though, is this: https://code.google.com/p/go/issues/detail?id=6719. It looks like, as of Go 1.3, there will be a sort of semi-random iteration, where each bucket in the hash map will be iterated over in increasing or decreasing order, chosen at random. Which is good, as it is not too much of a performance hit, with the benefit that the iteration order will be non-deterministic even for small maps, which is currently not the case.

EDIT:

Here is an explanation of iteration over Maps in Go <1.3 and >1.3:

Map iteration previously started from a random bucket, but walked each bucket from the beginning. Now, iteration always starts from the first bucket and walks each bucket starting at a random offset. For performance, the random offset is selected at the start of iteration and reused for each bucket.
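
A toy illustration of the offset idea (not the runtime's actual code, just the walk pattern):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func main() {
        rand.Seed(time.Now().UnixNano())

        // One "bucket" of keys; real Go buckets hold up to 8 entries.
        bucket := []string{"a", "b", "c", "d"}

        // Pick one random offset at the start of iteration and reuse
        // it for the whole bucket, so the cost is a single random draw.
        off := rand.Intn(len(bucket))
        for i := range bucket {
            fmt.Print(bucket[(off+i)%len(bucket)], " ")
        }
        fmt.Println()
    }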


Most European/Asian showers have already solved this, without anything that runs on electricity or needs firmware. Purely mechanical.


I had a pretty detailed discussion with iOS engineers at a WWDC a couple of years back. It was a somewhat frustrating conversation, mostly because of how badly I wanted true user-generated Framework support, but also because the engineers had decent reasons for the existing state of things. Primarily that Frameworks (in their fullest expression) are dynamically loaded.

Apple has made a decision that allowing 3rd parties to dynamically load code (outside of Apple certified frameworks) is a security issue on a mobile platform in particular. I don't have a solid counter argument, although there are certainly some technical constraints they could put in place to help mitigate the risk.

Anyway, agree with your essay in general. But I also understand how we got here.

Cheers


It's the app developer's responsibility to update and QA their program.

If Apple allowed dynamically loaded libraries across the OS, then subtle issues in an app update could cause that one update to break seemingly unrelated apps. Windows developers call this DLL hell, and even with manifests and SxS, Microsoft still doesn't have an attractive solution to the problem.

Meanwhile, from a security standpoint, the sandbox should prevent apps from interfering with the files of each other and the OS.

And from a performance perspective, the few kilobytes (even entire megabytes!) of duplicated code segments is inconsequential on a phone with 1GB of RAM and very few context switches across apps.


No, you are conflating two separate concepts. Just because a framework is dynamic doesn't mean it has to be shared. Mac OS X provides all the benefits of dynamic frameworks that Landon outlines, but third-party frameworks are almost always bundled within each app (and on iOS they would certainly be required to be).


I am not actually conflating them at all. However, Frameworks as implemented on iOS are at present dynamically loaded. As I said, there are technical ways to address that particular issue, some of which bring iOS Frameworks more in parity with OS X.


We're already dynamically loading one chunk of third-party code: the app. Why are third-party frameworks any different? Presumably they would be subject to the exact same code signing and approval processes as the main app.


Then you could probably replace a dylib inside one app with a dylib from another. If Apple codesigns all dylibs in apps, you could just submit a silly little app with a malicious dylib, grab the signed dylib from the App Store later, and play games with third-party apps.


The code signature for an app extends to the frameworks it contains. You can't just replace them and still have a valid signature.


The code signature is on the multi-architecture binary, thereby including any statically linked object files, right?

If Apple were to add dynamic libs, they would presumably be separate binary files, with their own signatures. This could raise the concern noted by 0x0.


> If Apple were to add dynamic libs, they would presumably be separate binary files, with their own signatures.

No. That's not how it works on Mac OS X today, where bundled shared libraries are supported.


That's right, thanks.

Separate binary files have individual hashes, which are included in the package manifest file. The manifest is then signed, so a single signature covers all hashed files in the manifest.

Curiously though, in all of the MAS apps I've checked, bundled dylibs are explicitly not hashed in the manifest. This is the developer's choice, but perhaps a default?
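
For illustration, a toy sketch of the general scheme (hash each file, sign the manifest once). This is not Apple's actual CodeResources format, and the file names are made up:

    package main

    import (
        "crypto/ed25519"
        "crypto/sha256"
        "fmt"
        "os"
    )

    func main() {
        // Hypothetical bundle contents.
        files := []string{"AppBinary", "Bundled.dylib"}

        // Build a manifest of per-file hashes.
        manifest := ""
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil {
                panic(err)
            }
            manifest += fmt.Sprintf("%s %x\n", f, sha256.Sum256(data))
        }

        // A single signature over the manifest covers every hashed file;
        // a file omitted from the manifest falls outside the signature.
        _, priv, _ := ed25519.GenerateKey(nil)
        sig := ed25519.Sign(priv, []byte(manifest))
        fmt.Printf("%ssignature: %x\n", manifest, sig)
    }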


If anything, in my mind, not using shared libraries is a security issue.

For example, if every application links to a static version of some image loading library, then all of the applications must be patched if there is a vulnerability in that library.

Whereas if they all share the same copy, you patch the library, and they all get fixed.

I'm aware that model works better when the same vendor is providing all of the binaries, but there are cases where it's also appropriate for general ISVs.


It's a two-sided coin: if an update to a dynamically loaded library subtly breaks backwards compatibility, you end up with apps that mysteriously stop working because of some other update in the system.

Really, it's up to the app maintainer to update their program, and if it has a vulnerability, in theory the sandbox will prevent it from doing damage to others.


If someone updates a library incompatibly, they deserve what they get. That's why shared libraries have versioning.

In the mobile space, it would be even more beneficial if platform holders and ISVs actually followed this; the memory and space usage savings could be substantial.


I'm uncertain why someone would downvote my comment above, but shared library versioning is a real thing, and it is a best practice:

http://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html

http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries...

https://www.usenix.org/legacy/publications/library/proceedin...

Linux distributions heavily depend on this for the GCC runtime libraries (such as libgcc_s); it's how they provide backwards compatibility.

Many operating system distributors also rely on symbol versioning for their shared libraries as well so they can compatibly evolve interfaces for consumers.

So my original point stands, if someone incompatibly updates a shared library without accounting for versioning, they're doing it wrong.


That is the case anyway on OS X, since each .app bundle contains all of the libraries for that app; the only ones that you don't include are the ones that Apple ships with the platform itself.

So even if an issue was found in a shared framework, it has to be fixed for every app that includes it.


I'm well aware of that. Which is why I specifically said they all share the same copy.

As for the shared framework, that's not necessarily true. Not all system frameworks are included in the app bundle.


Gentoo has a whole wiki page[1] detailing the pitfalls of bundling libraries.

The biggest downside is that updates to shared libraries, done incorrectly, can break applications. That said, modern package managers allow an application to list which versions of a library it is or isn't compatible with.

It'll be interesting to see if anyone ever comes up with a good solution that mixes the strengths of mobile platforms' security model and modern desktop package managers together. It seems quite nontrivial. (Does the library inherit the permissions of the app? Does it have its own? What if I push malicious code in an update? Bundling prevents having to think about these problems.)

[1]: http://wiki.gentoo.org/wiki/Why_not_bundle_dependencies


Assuming that there really are still technical constraints, these aren't insurmountable problems. They're not even difficult problems. Apple has more than enough cash on hand to spend the engineering time necessary to solve them and still have money left over for a campus Beer Bash.

Note that on Mac OS X when using code signing, including when distributing via the Mac App Store, dynamic libraries are supported.


I don't get this. Based on my experience, even without framework support, it should be pretty easy to write some code in iOS that loads some code in from a remote location, just by using NSBundle functionality. After all, NSBundle files may include runnable code, in addition to resources. I'm not sure if such apps would get past Apple's tooling / testers though, when submitted to the store.

Is it harder for Apple to check if remote code loading functionality (and other potential security issues) is included in frameworks compared to bundles?


Apple blocks loading any new executable code after your process starts. Fundamentally, the OS prohibits normal processes from marking any pages as executable. You can load the data fine, but you can't execute it. NSBundle won't help you.
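
To make that concrete, here is a sketch (in Go, purely for illustration) of the kind of request the kernel refuses:

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        // Ask for an anonymous page that is writable AND executable.
        // On iOS the kernel denies PROT_EXEC on new pages for normal
        // processes, which is what blocks loading code at runtime;
        // on a desktop OS this generally succeeds.
        _, err := syscall.Mmap(-1, 0, 4096,
            syscall.PROT_READ|syscall.PROT_WRITE|syscall.PROT_EXEC,
            syscall.MAP_ANON|syscall.MAP_PRIVATE)
        fmt.Println("mmap rwx:", err)
    }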

The problem being proposed here is that the ability to have embedded frameworks would somehow weaken this strong protection against loading new code at runtime, although I don't really see how personally.


Besides security issues, I wonder if another motivation is preventing DRM circumvention, working around App Store restrictions, etc.


Try using a neti pot regularly. It took me a while to get over the weirdness of it, but it's changed my sinuses forever (for the better).


The squeeze bottles you can buy from Costco are way more effective than neti pots. I've used both and found the neti pot just not really doing that much.


The squeeze bottles (which seem better than neti pots) help, but they don't actually cure infections, in my case. On the plus side, my kids think it's extremely entertaining to watch their dad shoot water through his nose!


Indeed, the DNA database building potential of this project is a little outside my comfort zone.


That you found so many web pages outlining the problem tells me you could have easily researched this before you purchased the product. You didn't. Your fault.

