Hacker News | throwayedidqo's comments

I can't figure out the circle jerk around Node. Debugging sucks A LOT, JS syntax sucks, single thread sucks, NPM is filled with 95% junk.

Python, Java, golang, C#, and maybe even PHP have more mature and reasonable stacks.


I completely agree. The fact that there are so many abstractions that compile/transpile into javascript is also a big red flag. What other languages do you see that in? And then of course there is the horrible standard library, where the most obvious functions don't exist and require a package and maintainer.

https://mobile.twitter.com/mitsuhiko/status/7126249140717281...

https://www.reddit.com/r/programming/comments/4bjss2/an_11_l...


I find it extremely telling that both of the issues you linked are now covered by the core language, and that after left-pad npm changed its unpublish policy to make sure that it indeed never happens again.
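For example, the functionality of left-pad itself landed in the language as `String.prototype.padStart` (ES2017), so that particular dependency is simply obsolete now:

```javascript
// left-pad's functionality is now built in as String.prototype.padStart (ES2017)
const id = '42';
const padded = id.padStart(5, '0');
console.log(padded); // → "00042"

// padStart never truncates: a string already at or past the target length
// is returned unchanged
console.log('123456'.padStart(5, '0')); // → "123456"
```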

If you want to criticize JavaScript, you can do better with something more substantial. If anything, the speed at which the problems you linked were remedied is a plus for the language. Even more so if you take into account the unique position JavaScript is in, where you can simply never change an API after it goes public.

> The fact that there are so many abstractions that compile/transpile into javascript is also a big red flag

What did you expect? JavaScript is the only language browsers natively understand. If you want to write code in your favorite programming language you will have to transpile/compile it to JS.


Javascript is the reason WebAssembly came into existence.


That statement is highly uninformed.

I would suggest starting reading from here https://github.com/WebAssembly/design/blob/master/FAQ.md#is-...


I never said it would replace it, I was saying that it was for those who didn't want to use javascript.


No, it really isn't... it's for code that absolutely must perform well, without the need for browser plugin/extension installation. Gaming logic, transcoding streams, etc.


Aren't there a lot of languages that compile to the Java platform? Also, there are several that build for .Net as well.


Ultimately, it's because it's easily approachable and it mostly works. You can learn JS and do stuff on both node and the browser. That's incredibly powerful.

Debugging node has gotten much better over the last year. [0] You can use Chrome DevTools to debug node! I'll admit I've had limited experience in this area, but so far Chrome DevTools has provided one of the best debugging experiences I've had. The killer feature is that it's easy to get started and gradually pick more stuff up as you go along.
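For anyone who hasn't tried it, the workflow is roughly: start node with `--inspect` (or `--inspect-brk` to pause before the first line), then open `chrome://inspect` in Chrome and attach. A minimal script to try it on (the file name `app.js` is just an example):

```javascript
// app.js — run with: node --inspect-brk app.js
// then open chrome://inspect in Chrome and click "inspect"
function fib(n) {
  // set a breakpoint on this line in DevTools to step through the recursion
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

debugger; // DevTools pauses here when an inspector is attached; a no-op otherwise
console.log(fib(10)); // → 55
```

From there you get the full DevTools experience: breakpoints, watch expressions, the console, and CPU/heap profiling against a live node process.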

JS syntax sucks? That's subjective. After learning functional languages like Elixir and Haskell, I agree that JavaScript is lacking many nice features. I don't hate or dislike JavaScript, though. To help solve this, have you tried out macros? [1] I'll admit I've only looked lightly into the feature and I haven't given it a very serious try, but it looks quite promising.

NPM is filled with a lot of junk. That's pretty accurate. You need to be incredibly diligent when pulling in dependencies. Make sure your dependencies are well tested and not bloated and overly complicated. Have you had a better experience with other languages? I can't say I've had any more luck with other languages. From an outsider's perspective, I think Go and Rust seem to be maturing quickly, but I haven't used either seriously enough to answer with confidence. If I'm hacking something together, I think using node is an amazing choice; if I'm looking for incredibly long-term support, I'd probably use Java (as much as I dislike it).

[0] https://nodejs.org/api/debugger.html#debugger_v8_inspector_i...

[1] http://sweetjs.org/


What you stated IS the circle jerk. I can't come to a single thread mentioning node without 1/4 of the comments being your complaints.


I'm with you on NPM and the JavaScript language, but suggesting that Node debugging "sucks" is a sure sign that you've chosen to ignore the available tools - it is one of the only dynamic-language runtimes where post-mortem debugging is possible _at all_.


I don't have a lot of love for node.js or depending on npm, but debugging? I haven't had a problem using WebStorm.


Turns out knowing more than one language isn't a rare or sought after skill. In Europe maybe 40% of the population knows multiple languages fluently. Perhaps a third of these write well enough to translate.

You're not going to get paid well for a job that close to 15% of the population can do with no training.


First, this is the sort of shallow dismissal we need less of on HN.

Second, please don't routinely create throwaway accounts to post with here. It's fine if there's some specific purpose, e.g. something personally sensitive, but we ban users who do it routinely. Hacker News is a community. Anonymity is fine, but users should have some consistent identity that other users can relate to. Otherwise we may as well have no usernames and no community, and that would be an entirely different forum.


To be fair, community can and does develop without usernames at all; it's just that people aren't on a personal basis with one another. To be honest, I rarely recognise or even look at usernames, and I doubt many others do. It's similar to reddit, where someone's username will only be noticed if they are extremely popular (very rare) or have a novelty username.

The effect is lessened here because the prominence of one's posts is not tied to how many points they have (like on old forums) because a person's points are not immediately visible. But anonymous messaging with optional usernames works just fine as seen on 2channel, 4chan, 8chan and the 'old chans' (now defunct).

I don't think community is at all dependent on usernames being used or not.


No, I have to insist on this. Some people want the kind of forum where there are no usernames, and that's just fine. But HN is not that kind of forum. Users who want that are welcome to find (or create) such a place, just not to turn HN into it.

It's important to draw clear lines around what HN is and isn't, and this is one of those lines.


I agree totally that HN is not that kind of forum, I was more picking up on the point that community does not require names to exist. Sorry if you weren't trying to make that point!


Ah I get it now. Yes, that's a valid distinction.


Translating literature is not a trivial task. You could Google translate most books and be able to understand what's going on, but it won't make for good reading.

It actually requires a degree of creativity and literary skill on the part of the translator to make a good version in another language. Arguably it's a different work of art from the version in the original language.


>Turns out knowing more than one language isn't a rare or sought after skill. In Europe maybe 40% of the population knows multiple languages fluently. Perhaps a third of these write well enough to translate.

Depending on the combination of languages - it is an extremely rare skill! Just being multilingual is not enough, you need to be multilingual in the correct combination of two languages! Although a majority of translations are English->Target language, that is not always the case.

How many people know both Japanese and Arabic well enough to translate? How about Mandarin Chinese and German? Portuguese and Danish? French and Tagalog? Hokkien and Russian?

Your second point "sought after" is the important part. How much demand for Russian books are there for Hokkien speakers? No demand - no money.


Along with that, you also have to know how to translate into a form that people actually want to read. You can translate a work into a different language, but you may lose the spirit of the work in the process by writing words that are technically a "correct" translation but end up being very dry or boring to read compared to the original.


Translation isn't "translation". It's rewriting a work or source in another language. Very few people can write well even in their native language, so finding a good translator is difficult.


This does a bad job of simulating a slow network. First, as others mentioned, it only throttles outbound. It also doesn't simulate buffer bloat.

The Linux network stack isn't designed for this. The best easy thing to use is BSD's Dummynet pipes.


The Linux network stack is perfectly capable of simulating bufferbloat and other network degradation, in either direction. This article simply fails to mention the relevant modules.


Which are....?
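Presumably `tc` with the `netem` and `tbf` queueing disciplines. A sketch of simulating latency, loss, bufferbloat, and inbound shaping, assuming an `eth0` interface and root privileges (device names are illustrative):

```shell
# Outbound: add 100ms +/- 20ms of latency and 1% packet loss with netem
tc qdisc add dev eth0 root handle 1: netem delay 100ms 20ms loss 1%

# Simulate bufferbloat: chain a rate limiter with an oversized queue behind netem
tc qdisc add dev eth0 parent 1: handle 10: tbf rate 1mbit burst 32kbit latency 400ms

# Inbound: redirect ingress traffic through an ifb device and shape it there
modprobe ifb
ip link set dev ifb0 up
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: u32 match u32 0 0 \
  action mirred egress redirect dev ifb0
tc qdisc add dev ifb0 root netem delay 50ms
```

This shapes both directions, unlike the outbound-only approach the article describes.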


I disagree. There's always a possibility that someone else already knows about it and isn't disclosing it. Waiting to disclose will naturally lead a company to take longer to fix the issue.

Immediately disclosing allows customers to take action to protect themselves in case someone else is already exploiting the bug. Waiting to disclose is being peddled by the corporate agenda as "the ethical thing to do" because it makes vendors look bad.

Here's typically what happens. You disclose a bug, the company fixes it for the next release and puts a footnote in the release notes. Nobody ever looks to see if it was exploited because the instinct is to bury it. Customers aren't widely notified and the seriousness is downplayed because "the bug is already fixed". In the meantime, the software was vulnerable for up to three months when it didn't have to be.

If you disclose immediately, there's a temporary panic as everyone takes mitigating measures (which is how it should always be done!!!). The company is under tremendous pressure to put out a patch in a matter of days, which they usually do. Then you get yelled at by the company for making them look bad and "putting their customers at risk," even though the customers are provably safer because they were only vulnerable for a few hours.


You're forgetting the biggest factor: immediate disclosure also informs malicious parties.

What's really more dangerous, an extra week with a vulnerability that might be known, or two hours with a vulnerability everybody knows about?

Who's really more likely to see that disclosure on your personal Twitter account, every single (potentially non-technical) user of software you aren't even related to, or a few black hats who know you like to hack and brag?

Yes, it also makes companies look better, but in this case my anti-corporate agenda needs to take a back seat.


Knowing a vulnerability exists is useless without a clue what direction to search. Read the tweets again.


You're taking an unknown known and making it a known known.

I'd rather an exploit stay secret so there's a chance that someone doesn't use it against me, rather than telling everyone the exploit and hoping someone fixes it fast enough.

Disclose it to the company, and give them a hard time limit.


The opinions of people who work in the industry, whose reputations are on the line, are strongly aligned toward immediate disclosure for fairly persuasive reasons (see elsewhere in the thread). It makes us all safer to do so, for example, because you have the option to stop using the affected software.


Disclosing with a hard time limit is what Project Zero does, no?


>Waiting to disclose will naturally lead a company to take longer to fix the issue.

Source?


20 years of vuln disclosure history.


Wow. These things just scream military use. My guess is phased array radar.


I worked on a space based imaging radar which used an FPGA. It cost mid 6 figures per chip...

This was admittedly a space certified radiation hardened chip. Still alarming when you had to pick it up and carry it somewhere


The average American commits ~three felonies a day by accident. The combination of district, city, county, state, country, and international law that applies to you would probably take a few hundred lifetimes to read.

Let's say one day you write an article critical of your city's parking-ticket policies. The police chief, who implemented the program, doesn't like it.

Do you have anything to hide now?


Playing this was deeply satisfying


Article is a bit short but I like the premise. Balsamiq is excellent, I guess this explains why.

The happiest people I've known have been lifestyle business owners. Investors act like it's a sin to create a business that consistently makes money and stays small.


Vector seems to be some kind of hybrid entity, presumably designed to soak up ML talent at the University that revived AI research while also soaking up taxpayer dollars for Google.

Added bonus of making Trump look bad by doing cool stuff in USA's backyard instead of the valley.


I have a feeling this is one of those places where ML will not be useful until we have strong AI.

Certain grammatical errors are impossible to fix unless you understand the overall meaning of the text. Sometimes this meaning is embedded over many paragraphs. Errors involving incorrect word usage are unsolvable when words have more than one meaning and you don't comprehend the subject at hand.


You don't think we can "fake it" in the vast statistical majority of cases simply by relying on a corpus containing nearly the same cases?

We can already "understand the meaning" in a latent space well enough to do machine translation between language pairs the model wasn't trained on, or do additions and subtractions in the latent space of word2vec; both suggest these models have picked up some semantic meaning from the text.

I don't think this is a problem that requires Strong AI in the vast majority of cases, just very large, well-groomed corpora and clever engineers.


>Certain grammatical errors are impossible to fix unless you understand the overall meaning of the text. Sometimes this meaning is embedded over many paragraphs.

A non-strong AI can get clues to that meaning (without really understanding anything) based on the words in those previous and subsequent paragraphs, and a huge text corpus.


Once we have strong AI, whatever that buzz word means, what then would be the usefulness of understanding slang?

Personally, I think the usefulness is already in being able to interpret a concept encoded in slang as the same concept derived from a message encoded in a different dialect (or language).

I would never assume a machine spoke this language, only that it understood it. Machines should evolve toward speaking succinctly, so as not to include unnecessary complexity in their messages, as they would strive to be well understood like everyone else. I fail to see why we would want to produce slang-encoded messages, unless we want to mask the fact that we are a machine.


Ambiguous messages do not imply slang. Plenty of words have multiple meanings in normal and formal English. It's a much worse problem in tonal languages like Chinese. Tell me how you could grammatically correct this without understanding meaning https://en.m.wikipedia.org/wiki/Lion-Eating_Poet_in_the_Ston...

Strong AI isn't a buzzword either; it's been in use for as long as I can remember. Maybe you would have understood my grammar better if I'd said "superhuman general intelligence" and wasted a bunch of space in the process.

I don't think you read my comment? You seem to imply that the corrections would be unambiguous while my point was that some errors are uncorrectable without understanding meaning.


> Plenty of words have multiple meanings in normal and formal English.

There are some stats from WordNet on polysemy in English. Obviously this depends on the granularity of a dictionary's sense inventory, but regardless, English has many polysemous words (26,000+ according to WordNet). More importantly, polysemous words also tend to be the most common words, hence words like "set" having around 120 definitions in the Oxford English Dictionary.

https://wordnet.princeton.edu/wordnet/man/wnstats.7WN.html#s...


If comprehension at that level doesn't exist, someone has an incentive to correct the text down to a lower level. I certainly do. We are not talking about poetry, are we?

