"White diaspora" is blowing my mind a bit. I think I would have been slapped by my social studies teachers if I ever used that phrase to describe colonialism. (I'm white, US)
Nice of you to assume that all white people had a chance to colonise (rather than be colonised and conquered by neighbouring nations on a regular basis). I'm not carrying the sins of the English-speaking (or otherwise formerly colonising) nations, so the whites == colonialism assumption can stay wherever other such weird assumptions belong.
European diaspora is an accepted term[1]. Yes there's a redirect, but the word "diaspora" is used in the very first paragraph. It's ok, Europeans are people too. There's nothing offensive about the term. Plenty of "colonials" were sentenced to transportation, they didn't choose it.
You can take solace in the fact that "whites" will never again command such societies in the future. The exaggerated "colonialism" narrative comes off as pining for a long-lost era of dominance, but in a socially acceptable way.
The era of Northern Europeans ("whites") dominating the globe is over forever, so no need to beat yourself up about it. Just letting you know that this extreme narrative is quite bizarre to non-Americans.
Thanks! I managed to get the two lines confused, not sure how, and it is too late to edit; just posted a correction:
The place where RC consistently excels is the "no random pauses" - most GCs will occasionally need to stop the world, even when they can mostly do incremental collections. Note that this does not mean they are slower - it is just that the overhead tends to be concentrated in bursts instead of uniformly spread out as in RC.
The place where GC consistently excels is reference loops; it also depends less on implementation robustness.
Totally agree, but I just have to pick one nit on your final point re GHC and unpacking: If you do this you forfeit all generic programming because a 'lifted' polymorphic type has to be represented by pointer to a heap object. This is where C++'s unique take on generic programming still wins big.
It generalizes the choice of 0 or 1 to an arbitrary starting index. So when you create an array you specify not just where it ends but also where it begins. This lets you do neat things (consider a filter kernel with range [-s,+s]^n instead of [1,2s+1]^n) and the extra complexity it adds can be hidden when not needed using for-statements or higher-order functions.
Nobody uses it because the implementation is not very efficient and Haskellers have a chip on their shoulder about performance. It subtracts the origin and computes strides on every index, but you could easily avoid this by storing the pre-subtracted base pointer and strides with the array. Of course when you go to implement it you'll see the light on 0-based indexing :)
Why is there not a service that tracks news stories and algorithmically relates them into the narrative of history, driven by the singular question of "why?"? Something in between google news and quora/stack exchange (anonymized to avoid embarrassment of dumb questions like "what are Israel and Palestine fighting over anyway?" but with a points/voting system to propagate quality answers.) Whoever makes this, please Tell HN so I can sign up for your beta.
Hi Will. Sure, you can use the standard invite form on the home page now, and we'll post a special HN sign-up form when we make the official announcement in the upcoming months.
C++ templates aren't just unusual, they're unique among mainstream languages. They're C++'s secret weapon. These days, if you're not making heavy use of templates in your C++ code then you're using the wrong language. Like you said, many people use C++ to solve problems they could be solving more quickly and more safely with another language, but it's not C++'s fault, it's their fault.
In particular, the key feature of C++ templates (as opposed to other similar systems of polymorphism) is that they abstract over not just code but data representation. This is what CJefferson was getting at and it's orthogonal to macros. Ocaml and Scala don't offer it. It's about data, not code, so you'll never be able to gloss over it with extra parallelism.
In C++ a std::pair<double,double> really is just two doubles, 16 bytes. It can be passed into and returned from functions in registers. If I have a singly-linked list of them, each list cell is 24 bytes (16 for the pair, 8 for the next pointer.) In any other (mainstream) language, the pair is really a pair of pointers to boxed doubles which reside in the heap, and the list cell holds a pointer to that. The overhead of this is unacceptable in many situations, and these situations are where C++ still thrives. (cf. the Eigen linear algebra library for an example of "doing it right.")
Most languages have given up on this feature because of its drawbacks (code bloat), and because a unified runtime representation of data (basically everything is a void*) makes many things easier (garbage collection, serialization, etc.). (I'm not super familiar with C#, but I understand its struct types address this somewhat, though not without their own drawbacks.) But thanks to LLVM (written in C++, btw) people are pushing this area of language development forward (Rust, Deca, and others) in an attempt to fix the (many) problems of C++ while preserving its strengths.
Oh and BTW: a tank with wings is called an A-10 Warthog :)