Hacker News

It appears that most of these attacks relied on exploiting the unfortunate design of C, which makes manual memory management the default and safe, managed memory the special case. It should be the reverse. Speed will always matter, but you don't have to use risky, manual memory management everywhere to get speed; you just need it in the few spots where it makes a difference.

In the majority of places in your code, manual memory management gives you no benefit but does expose you to a possible vulnerability if you make a mistake. If the default, lazy option were to let the well-tested runtime do the job for you, yet you could do a little extra work and get manual override wherever you wanted, and manual override everywhere brought you essentially back to C, I think we would have much safer code without a noticeable loss of performance.

Edit: I just realized in the shower that I was saying "memory management" when I meant "direct memory manipulation" more generally: arrays accessed by memory address rather than by bounds-checked index, pointer arithmetic, and so on, not just malloc and free.
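The safe-by-default, manual-override-where-it-matters design the parent describes is roughly what Rust ended up shipping. A minimal sketch (the function names here are illustrative, not from any real codebase): the default path is bounds-checked, and the opt-out is an explicit, grep-able `unsafe` block.

```rust
// Safe default: iteration/indexing is checked, no way to read out of bounds.
fn sum(data: &[u64]) -> u64 {
    data.iter().sum()
}

// Manual override for a profiled hot spot: the programmer asserts the
// bounds invariant explicitly, and the opt-out is visible in review.
fn sum_unchecked(data: &[u64]) -> u64 {
    let mut total = 0;
    for i in 0..data.len() {
        // SAFETY: i < data.len() by the loop bound.
        total += unsafe { *data.get_unchecked(i) };
    }
    total
}

fn main() {
    let v = vec![1u64, 2, 3];
    assert_eq!(sum(&v), 6);
    assert_eq!(sum_unchecked(&v), 6);
}
```

Everywhere-`unsafe` code is essentially back to C, as the parent suggests; the point is that the risky form costs extra keystrokes instead of being the default.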



> It appears that most of these attacks relied on exploiting the unfortunate design of C, which makes manual memory management the default and safe, managed memory the special case. It should be the reverse. Speed will always matter, but you don't have to use risky, manual memory management everywhere to get speed; you just need it in the few spots where it makes a difference.

That's true, but I would claim something even stronger: getting safety doesn't mean giving up manual memory management, as Rust shows (disclaimer: I work on Rust). You just need a language or system that enforces that you use safe manually-managed idioms. The idea that safety requires giving up performance (e.g. opting into a garbage collector, or even a runtime) is not true in most cases. In a properly designed system, safety doesn't even require opting into a runtime.


Would Firefox be better (more secure and with no performance handicap) if written in Rust? I realize that there is an enormous amount of existing code that shouldn't be thrown away, but if Mozilla wanted to create a browser from scratch today (or in a couple of years, when Rust has been debugged and polished), would they write it in Rust?



pcwalton, do you guys at Mozilla think Servo could be foolproof against sandbox escapes, or is that a bit unrealistic?


There's no such thing as "foolproof against sandbox escape" without proving the sandbox, as well as everything it depends on, correct. But I believe that memory safety is a security advance.


Your edit almost made me eat my comment, but I will go further: there's nothing risky about manual memory management, as long as the compiler and/or runtime prevent you from accessing memory you didn't allocate, inserting null-pointer and bounds checks (which may need runtime support to get the size of an allocated block) wherever the compiler cannot prove that you only access memory you allocated.
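To make the bounds-check idea concrete, here is a sketch (in Rust, for consistency with the thread) of the two behaviors a checked system can offer in place of C's silent out-of-bounds read: a recoverable `None`, or an abort with a panic.

```rust
fn main() {
    let heap = vec![10, 20, 30]; // a manually allocated, manually sized buffer
    let i = 7;
    // Checked access: returns None instead of reading past the allocation.
    match heap.get(i) {
        Some(v) => println!("value: {}", v),
        None => println!("index {} out of bounds for len {}", i, heap.len()),
    }
    // Plain indexing `heap[i]` would instead abort with a bounds-check
    // panic, rather than silently reading adjacent memory as C would.
}
```

Either way, the mistake is contained by the language rather than becoming an exploitable read of whatever happens to sit next to the buffer.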

The reverse situation, garbage-collecting systems that do nothing to prevent you from dereferencing null pointers or going out of bounds, is just as dangerous as C.


Mozilla's Rust & Servo go exactly in this direction, thankfully.



