jeremiep's comments | Hacker News

Read "The Structure of Scientific Revolutions" recently, science ignoring what it does not understand is far from a new phenomenon.

Science is fantastic at digging into areas it can already see, and terrible at seeing new areas in the greater unknown.


We studied "The History of Science in Society" by Andrew Ede and Lesley Cormack, which left a big impression on me.

ISBN-13: 978-1442634992, ISBN-10: 1442634995


I've been using React transparently through ClojureScript's Reagent for years and have never been hit by any of React's API changes.

The problem usually isn't React, but that most projects using it directly use it wrong.


That's not a step further; you lose the ability to use the latest version without code changes.


Which you'd have to do in other languages anyway. What Elm provides is a way for library users to decide whether or not to upgrade a library, and more importantly, a guarantee that they won't be automatically upgraded to a version that needs code changes.


I find D to be a good tradeoff between C++ and Rust. I can't stand not having compile-time evaluation, code generation, and reflection in Rust, and C++ is just too slow to iterate with.


Working with Python lately, I've very much come to like the interactive REPL and its immediate feedback.

I actually think I've forgotten how to program in C. It lacks almost every data structure I'd deem useful for getting complex, mixed problems solved quickly. C++ provides many of those things and is useful if you are in a tight spot or you want or need the speed.

If Rust keeps evolving at a quick pace I think I'll look into it a lot in the future.

D seems to me like a language that is better in many ways but has some critical drawbacks for me, like not working on many microcontrollers (an area where C++ really shines). On the other hand, it isn't radical enough to be worth really diving into.


Why wouldn't D work on microcontrollers? Runtime is too big? GC too difficult to avoid? Even in -betterC territory?


Compiling a hello world with -betterC yields an 8.2K binary. That seems small, but when your microcontroller has 8K of program space, it's just untenable. Of course, those small micros are getting rarer now that large AVR and ARM micros are cheap, but there is still an entire range of minimal microcontrollers that are extremely limited in resources.


> because they make our work easier

This is my biggest pet peeve with the JS ecosystem as a whole: people chase things that are easy to learn rather than simple to reason about once learned, and the result is almost always a convoluted mess of a codebase, regardless of the project.

I used jQuery, then Underscore, and later lodash; now I'm using ClojureScript and wondering how I ever enjoyed any of those crippled FP alternatives.

"Someone who doesn't know about X will create its own worse version" is true, but it also applies to you and everyone else :) We don't know what we don't know (thats meta-ignorance) and we tend to assume what we do know is the state of the art, it quite usually is very far from it!


I'm perfectly happy skipping TypeScript and most of the JS ecosystem in favor of ClojureScript :)

Not everything on the web has to use the normal stacks. I used TypeScript in the past but didn't enjoy the complexity of most JS libs, even with typings. Clojure was a breath of fresh air, and still is.


It's not like the poor are going down; they're also going up, just not nearly as fast. "Almost all of our poor are rich by global standards" is an argument I hear often. "Most poor people end up middle class as they grow up, take responsibility, and contribute to society" is another.

I lived in poverty for years after dropping out of college, sometimes with roommates who were way beyond toxic, and worked multiple jobs 70 hours a week just to pay for rent and food, yet I still ended up middle class with a job I love; I've been through hell to find heaven. What I learned on the way I now use every day; it's made me a stronger and better person, and I can now help others do the same.

If someone had given me what I have today, just for the sake of equality, I would not have learned responsibility or discipline, I would hardly have developed most of the skills I now have, and I probably would've lost all of it by now. I would basically still be an angsty teen in an adult's body, which is what kept me poor in the first place.


Notice I didn't argue for basic income or any other specific redistribution scheme (I like cost-effective schools and low-cost health care, though); rather, I argued for things that end rich dynasties that perpetuate inequality across generations.

I think each generation should earn its riches rather than rich dynasties persisting across generations. I favor capitalism and inequality of outcome, but an earned capitalism from an at least relatively equal start.


I see your point, thanks for the clarification. I can agree that perpetuated inequality does give kids a huge head start.

Taxation is already high for the rich, but the top bracket usually starts quite low. I don't think the solution is more taxes, but more tax brackets; they should keep scaling to accommodate the extremely rich.


The very rich also tend to hide their money offshore or play games with multiple residencies to find the best tax rates.


Even if the poor were not going down (they are in the West: https://www.youtube.com/watch?v=v1oHJezqBYU), that wouldn't matter much, as we all compete for any goods that are pseudo-finite. If inequality grows faster than the "growing pie", certain people will be worse off.


You can also take "modern design patterns" out of that sentence and still be accurate. It's usually just a form of cargo-cult programming that hasn't been found out to be terrible yet.


When you realize the compiler's optimizations only account for about 10% of the total program's performance, you find that the other 90% is entirely up to the programmer.

Architecture, data structures, batching of operations, memory locality, and a bunch of other concerns are all things the compiler can't really help you with, and they have a much larger impact on performance than the 10% the compiler is actually able to optimize.
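
To make the locality point concrete, here's a quick C++ sketch (made-up particle-update code, purely illustrative): the compiler optimizes both loops just fine, but only the programmer decides the memory layout, and that's where the big wins are.

    // Hypothetical particle update, two data layouts. The compiler
    // optimizes both loops equally well; only the programmer
    // controls the layout.
    #include <cstddef>
    #include <vector>

    // Array-of-structs: each particle's fields sit together in
    // memory, so a position-only update drags velocities and mass
    // through the cache as well.
    struct ParticleAoS {
        float x, y, z;
        float vx, vy, vz;
        float mass;
    };

    void update_aos(std::vector<ParticleAoS>& ps, float dt) {
        for (auto& p : ps) {
            p.x += p.vx * dt;
            p.y += p.vy * dt;
            p.z += p.vz * dt;
        }
    }

    // Struct-of-arrays: each field is contiguous, so the same update
    // streams through memory sequentially and vectorizes trivially;
    // cold fields like mass are never touched.
    struct ParticlesSoA {
        std::vector<float> x, y, z, vx, vy, vz, mass;
    };

    void update_soa(ParticlesSoA& ps, float dt) {
        for (std::size_t i = 0; i < ps.x.size(); ++i) {
            ps.x[i] += ps.vx[i] * dt;
            ps.y[i] += ps.vy[i] * dt;
            ps.z[i] += ps.vz[i] * dt;
        }
    }

    int main() {
        const std::size_t n = 1000000;
        std::vector<ParticleAoS> aos(n, ParticleAoS{0, 0, 0, 1, 1, 1, 1});

        ParticlesSoA soa;
        soa.x.assign(n, 0.f);  soa.y.assign(n, 0.f);  soa.z.assign(n, 0.f);
        soa.vx.assign(n, 1.f); soa.vy.assign(n, 1.f); soa.vz.assign(n, 1.f);

        update_aos(aos, 0.016f);
        update_soa(soa, 0.016f);
        return 0;
    }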

The problem is that either programmers don't care, or they can't make the distinction between premature optimizations and architecture planning.


You're right to emphasise good data structures and algorithms (also concurrency, parallelism, etc.), but compiler optimisation is nothing to sneeze at. '10%' is laughably off-base.

From a quick google: compiler optimisation can accelerate CPU-bound code to over 5x the unoptimised performance. https://www.phoronix.com/scan.php?page=article&item=clang-gc...


They seem to be benchmarking very specific things and not actual applications. These numbers do not hold in the real world.


In a sense you're right, but hand-tuning assembly is kind of an orthogonal problem to determining whether you're using the right algorithms and data structures.


Where do you get that 90/10 split from? Just curious.


Talks from Mike Acton and Scott Meyers, specifically "Data-Driven Development" and "CPU Caches and why you should care" respectively.

I forget exactly where I got that number, but it's been a pretty good rule of thumb so far.

In a nutshell: the compiler is great at micro-optimizations and absolutely terrible at macro-optimizations. The former will get you a few percent of perf boost, while the latter usually results in orders of magnitude of performance gains.
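
A quick sketch of that split (made-up code of my own, not from those talks): the compiler will happily transform the first function, but it will never rewrite the second one's data structure for you.

    #include <list>
    #include <vector>

    // Micro: the optimizer will vectorize this loop and
    // strength-reduce the multiply on its own. Typical payoff:
    // a few percent.
    long sum_doubled(const std::vector<int>& v) {
        long total = 0;
        for (int x : v) total += 2L * x;
        return total;
    }

    // Macro: no optimizer will ever rewrite this pointer-chasing
    // std::list into the contiguous std::vector above. Every node is
    // a potential cache miss; fixing it means changing the data
    // structure, i.e. the architecture.
    long sum_doubled_list(const std::list<int>& l) {
        long total = 0;
        for (int x : l) total += 2L * x;
        return total;
    }

    int main() {
        std::vector<int> v(1000000, 1);
        std::list<int> l(v.begin(), v.end());
        return sum_doubled(v) == sum_doubled_list(l) ? 0 : 1;
    }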

It's near impossible to apply macro-optimizations at the end of a project without massive refactors.


Cool, I'll check those out, thank you!


I believe the 10/90 number may be from this talk, though I don't have time to rewatch it to confirm: https://www.youtube.com/watch?v=rX0ItVEVjHc


> Wow, who writes that crap?

Have you read your own comment?

