
If the damage was large, it would be hard to cover up. And if it was very large the US would seek to minimize it. “A few people killed” might be interpreted as “probably a ton of people killed” by the enemy and they keep doing it. Zero information means you can’t argue the case one way or the other, and in those cases the project gets scrapped.

One of the most interesting facts about this disaster is that if the submarine were standing on its tail straight up, its nose would be sticking 150ft OUT of the water it sank in.


It was 155m long and the ocean was 108m deep, in case anyone else was wondering.


I didn't realize how big the submarine actually was

- Ohio class - US' largest: 18,750 tonnes displaced submerged, 170m long, 13m beam

- Typhoon-class - USSR's biggest: 48,000 tonnes displaced, 175m long, 23m beam

- Oscar II-class (Kursk) - 19,400 tonnes submerged, 154m long, 18.2m beam


I think I read something similar about the Edmund Fitzgerald, i.e. it sank in water shallower than the length of the ship.


And yet even in that shallow of water the pressure would have been around 10 atm. It's amazing how dangerous something as mundane as water can be.
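For the curious, the back-of-the-envelope version of that pressure figure can be sketched like this (a rough estimate, assuming seawater density of about 1025 kg/m^3 and g = 9.81 m/s^2; the function name is made up for illustration):

```rust
// Rough hydrostatic pressure estimate at a given depth.
// Assumptions: seawater density ~1025 kg/m^3, g = 9.81 m/s^2,
// and 1 atm = 101,325 Pa of air sitting on top of the water column.
fn pressure_atm(depth_m: f64) -> f64 {
    let rho = 1025.0; // seawater density, kg/m^3
    let g = 9.81; // gravitational acceleration, m/s^2
    let hydrostatic_pa = rho * g * depth_m;
    1.0 + hydrostatic_pa / 101_325.0 // +1 atm for the atmosphere above
}

fn main() {
    // At the ~108 m depth where the Kursk sank:
    println!("{:.1} atm", pressure_atm(108.0)); // roughly 11.7 atm
}
```

So "around 10 atm" of water pressure, plus the atmosphere on top, at a depth shorter than the boat itself.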


This is the first time I've seen someone refer to 100m deep as shallow.


It only takes a little over a minute to walk 100m. And if I stand at point A and look at point B, 100m away, it doesn’t feel far away either.

That’s why I think that even though I’m only able to swim maybe 4 meters down, possibly less, 100m under the water sounds really shallow for a submarine. Probably also because I have no experience with submarines, so I imagined that for the most part they’d be many hundreds of meters below sea level.


it's all relative!


nothing but respect for water


Definitely a strong contender for favorite 3-atom molecule


Sangamon's Principle


Which sounds good, but isn't Nitrous Oxide actually pretty fucking bad for you if you use it continuously?


Similarly, a human can drown in only a few inches of water, not even enough to fully submerge you while lying face first in it, let alone while standing.

Water is not to be trifled with.


It’s basically GCP AppEngine circa 2010


It’s also pretty damn obvious when LLMs write code. Nobody out here commenting every method in perfect punctuation and grammar.


I have been doing this for years, especially for libraries (internal or otherwise), anything that's `pub`/`export`, or gnarly logic that makes the intent not obvious. Not _everything_ is documented, but most things are.

I'm doing it because I know how much I appreciate well-written documentation. Also this is a bit niche, but if you're using Rust and add examples to doc-comments, they get run as tests too.
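For anyone who hasn't seen the Rust feature mentioned above: a minimal sketch of a doc-comment whose fenced example is compiled and run by `cargo test` (the function name here is made up for illustration):

```rust
/// Converts metres to feet.
///
/// # Examples
///
/// ```
/// // Under `cargo test`, rustdoc compiles and runs this example as a test,
/// // so the documentation can never silently drift out of date.
/// let ft = metres_to_feet(108.0);
/// assert!((ft - 354.33).abs() < 0.01);
/// ```
pub fn metres_to_feet(m: f64) -> f64 {
    m * 3.28084
}

fn main() {
    println!("{:.2} ft", metres_to_feet(108.0)); // prints 354.33
}
```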

Also given we both managed to produce more than one sentence, and include capital letters in our comments, it's entirely possible both of us will be accused of being an AI. Because, you know... People don't write like this, right?


Strict grammar school teachers, who enforced a "Form And Style" level of prose, have become a liability in the AI world!


>Also given we both managed to produce more than one sentence, and include capital letters in our comments, it's entirely possible both of us will be accused of being an AI.

Could anyone explain the esoteric meaning of why people started doing that shit? I got a hypothesis, what's going on is something like this:

1. Prove you are human: write Like A Fucking Adult You Weirdo (internal designator for a specific language register, you know the one)

2. Prove you are human: _DON'T_ write Like A Fucking Adult You Weirdo (because that's how LLMs were trained to write, silly!)

3. ???? (cognitive dissonance ensues)

4. PROFIT (you were just subject to some more attrition while the AI just learned how to pass a lil bit better)

I never thought computer programmers of all people would get trapped in such a simple loop of self-contradiction.

But I guess the human materiel really has degraded since whenever. I blame remote work preventing us from even hypothetically punching bosses, but anyway weird fucking times eh?

Maybe the posts trying to figure "this post is AI, that post is not AI" are themselves predominantly AI-generated?

Or is it just people made uncomfortable by what's going on, but not able to articulate further, jumping on the first bandwagon they see?

Or maybe this "AI-doubting of probably human posters" was started by humans, yes - then became "a thing", and as such was picked up by the LLM?

Like who the fuck knows, but with all honesty that's how I felt about so many things, dating from way before LLMs became so powerful that the above became a "sensible" question to ask...

Predominantly those things which people do by sheer mimesis - such as pop culture.

"Are you a goddam robot already - don't you see how your liking the stupid-making song is turning you into stupid-you, at a greater rate than it is bringing non-stupid-you aesthetic satisfaction?" type of thing -- but then I assume in more civilized places than where I come from people are much more convincingly taught that personal taste "doesn't matter" (and simultaneously is the only thing that matters; see points 1-4... I guess that's what makes some people believe curating AI, i.e. "prompt engineering" can be a real job and not just boil down to you being the stochastic parrot's accountability sink?)

I'm not even sure English even has the notions to point out the concrete issue - I sure don't know 'em.

Ever hear of the strain of thought that says "all metaphysical questions are linguistic paradoxes (and it's self-evidently pointless to seek answers to nonsensical questions)"?

Feels kinda like the same thing, but artificially constructed within the headspace of American anti-intellectualism.

Maybe a correct adversarial reading of the main branding acronym would be Anti-Intelligence.

You know, like bug spray, or stain remover.

But for the main bug in the system; the main stain on the white shirt: the uncomfortable observation that, in the end, some degree of independent thinking is always required to get real things done which produce some real value. (That's antithetical to standard pro-social aversive conditioning, which says: do not, under any circumstance, just put 2 and 2 together; lest you turn from "a vehicle for the progress of civilization" back into a pumpkin)


What?


What?


skill issue


Commenting every method, how pedestrian! I comment every letter of my code!


Single letter variables all the way. Then it's easy to tell what code is human written. /s


the skill is the issue!


Please elaborate, poopmonster! What sort of skills are required to expertly use an LLM?


    // [umbrella] Describe skill 1. (Prompt engineering)
    1. Skillful use of **prompt engineering**
    // [rocketship] Describe skill 2. (Agentic loops)
    2. Know how to use **agentic loops** skillfully
    ....


I thought they were saying they comment every method with perfect punctuation and grammar.


1. Be unable to read...


It's a pretty sad state of affairs when someone can say with a straight face "Nobody out here" (sic) taking their job seriously and giving it the care and attention it rightly deserves.


I feel seen...


Engineers who ship get promoted. Sometimes they also write simple code. Often they do not.


Interesting. I set up a bunch of slack webhooks for server events that's been working decently well but maybe I'll look at telegram.


Slack (and Discord) webhooks are good for just shooting one-sided data into channels, but for interactive bots Telegram is so far ahead of anyone else it's crazy.

Signal specifically is missing any kind of official bot support, cutting off massive audiences from even considering it as an option.


I also get the impression this is way more complicated than it needs to be. Or maybe it's simple and they keep inventing new terminology for stuff that basically already exists. The crypto bros did the same shit. Like, bidirectional communication has been a thing for decades. We're just changing what we call the client and the server? And the protocol is just strings the bot on the other end is a little better at reading?


Meta is just paying engineers not to work at any other FAANG company.


I've struggled to understand how Loop earplugs cost $25+ when you can get actual music-playing earphones for less than $10.


Almost no one actually knows how to set up their monitoring. Like, they know the words but not the full picture or how the pieces should actually fit together. Then they do shit like this to try and make up for that fact.


the ones that know do not check anything every morning

