eslaught's comments | Hacker News

Not the same industry but at least one literary agent does this: if you physically print and mail your book proposal, they will respond with a short but polite, physical rejection letter if they reject you.

But I think it's a generational thing. The younger agents I know of just shut down all their submissions when they get overwhelmed, or they start requiring everyone to physically meet them at a conference first.


But this is what I don't get. Writing code is not that hard. If the act of physically typing my code out is a bottleneck to my process, I am doing something wrong. Either I've under-abstracted, or over-abstracted, or flat out have the wrong abstractions. It's time to sit back and figure out why there's a mismatch with the problem domain and come back at it from another direction.

To me this reads like people have learned to put up with poor abstractions for so long that having the LLM take care of it feels like an improvement? It's the classic C++ vs Lisp discussion all over again, but people forgot the old lessons.


> Writing code is not that hard.

It's not that hard, but it's not that easy. If it was easy, everyone would be doing it. I'm a journalist who learned to code because it helped me do some stories that I wouldn't have done otherwise.

But I don't like to type out the code. It's just no fun to me to deal with what seem to me arbitrary syntax choices made by someone decades ago, or to learn new jargon for each language/tool (even though other languages/tools already have jargon for the exact same thing), or to wade through someone's undocumented code to understand how to use an imported function. If I had a choice, I'd rather learn a new human language than a programming one.

I think people like me, who (used to) code out of necessity but don't get much gratification out of it, are one of the primary targets of vibe coding.


I'm pretty damn sure the parent, by saying "writing code", meant the physical act of pushing down buttons to produce text, not the problem-solving process that precedes writing said code.

This. Most people defer the solving of hard problems to when they write the code. This is wrong, and too late to be effective. In one way, using agents to write code forces the thinking to occur closer to the right level - not at the code level - but in another way, if the thinking isn’t done or done correctly, the agent can’t help.

Disagree. No plan survives first contact.

I can spend all the time I want inside my ivory tower, hatching out plans and architecture, but the moment I start hammering letters in the IDE my watertight plan suddenly looks like Swiss cheese: constraints and edge cases that weren't accounted for during planning, flows that turn out to be unfeasible without a clunky implementation, etc...

That's why writing code has become my favorite method of planning. The code IS the spec, and English is woefully insufficient when it comes to precision.

This makes agentic workflows even worse, because you'll only discover your architectural flaws much, much later in the process.


I also think this is why AI works okay-ish on tiny new greenfield webapps and absolutely doesn't on large legacy software.

You can't accurately plan every little detail in an existing codebase, because you'll only find out about all the edge cases and side effects when trying to work in it.

So, sure, you can plan what your feature is supposed to do, but your plan of how to do that will change the minute you start working in the codebase.


Yeah, I think this is the fundamental thing I'm trying to get at.

If you think through a problem as you're writing the code for it, you're going to end up heading up the wrong creek, because you'll have been rowing furiously, head down, the entire time, paying attention to whatever local problem you were solving, or whatever piece of syntax or library trivia or compiler-satisfaction game you were playing, instead of the bigger picture.

Obviously, before starting writing, you could sit down and write a software design document that worked out the architecture, the algorithms, the domain model, the concurrency, the data flow, the goals, the steps to achieve it and so on; but the problem with doing that without an agent is that it then becomes boring. You've basically laid out a plan ahead of time and now you've just got to execute on the plan, which means (even though you might fairly often revise the plan as you learn unknown unknowns or iterate on the design) that you've kind of sucked all the fun and discovery out of the code-writing process. And it sort of means that you've essentially implemented the whole thing twice.

Meanwhile, with a coding agent, you can spend all the time you like building up that initial software design document, or specification, and then you can have it implement that. Basically, you can spend all the time in your hammock thinking through things and looking ahead, but then have that immediately directly translated into pull requests you can accept or iterate on instead of then having to do an intermediate step that repeats the effort of the hammock time.

Crucially, this specification or design document doesn't have to remain static. As you discover problems or limitations or unknown unknowns, you can revise it and then keep executing on it, meaning it's living documentation of your overall architecture and goals as they change. This means that you can really stay thinking about the high level instead of getting sucked into the low level. Coding agents also make it much easier to send something off to vibe out a prototype, or to explore the code base of a library or existing project in detail to figure out the feasibility of some idea, meaning that the steps that traditionally would have been a lot of effort, like verifying that your planning makes sense, have a much lower activation energy, so you're more likely to actually try things out in the process of building a spec.


I believe programming languages are the better language for planning architecture, the algorithms, the domain model, etc... compared to English.

The way I develop mirrors the process of creating said design document. I start with a high-level overview, define what entities the program should represent, define their attributes, etc... only now I'm using a more specific language than English. By creating a class or a TS interface with some code documentation, I can use my IDE's capabilities to discover connections between entities.

I can then give the code to an LLM to produce a technical document for managers or something. It'll be a throwaway document because such documents are rarely used for actual decision making.

> Obviously, before starting writing, you could sit down and write a software design document that worked out the architecture, the algorithms, the domain model, the concurrency, the data flow, the goals, the steps to achieve it and so on;

I do this with code, and the IDE is much better than MS Word or whatevah at detecting my logical inconsistencies.
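
To make the idea concrete, here's a minimal sketch of what code-as-design-document can look like (my illustration, not the commenter's; Python dataclasses standing in for the TS interfaces mentioned above):

    from dataclasses import dataclass, field
    from datetime import date
    from typing import Optional

    @dataclass
    class Author:
        name: str
        email: str

    @dataclass
    class Post:
        title: str
        author: Author                 # the relationship is explicit and navigable in the IDE
        published: Optional[date] = None
        tags: list[str] = field(default_factory=list)

        def is_draft(self) -> bool:
            return self.published is None

    # A type checker (mypy/pyright) rejects Post(title="Hi", author="alice"),
    # whereas a prose design doc happily leaves "author: a string? an Author?" ambiguous.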


The problem is that you can't really model or describe a lot of the things that I do in my specifications using code without just ending up fully writing the low-level code. Most languages don't have a type system that lets you describe the logic and desired behavior of the various parts of the system, which functions should call which other functions, what your concurrency model is, and so on, without just writing the specific code that does it; in fact, I think the only languages that would allow something like that are dependently typed languages or languages adjacent to formal methods. That is literally what pseudocode and architecture graphs and so on are for.

Ah, perhaps. I understood it a little more broadly to include everything beyond pseudocode, rather than purely being able to use your fingers. You can solve a problem with pseudocode, and seasoned devs won't have much of an issue converting it to actual code, but it's not a fun process for everyone.

Yeah, I basically write pseudocode and let the AI take it from there.

But this is exactly my point: if your "code" is different than your "pseudocode", something is wrong. There's a reason why people call Lisp "executable pseudocode", and it's because it shrinks the gap between the human-level description of what needs to happen and the text that is required to actually get there. (There will always be a gap, because no one understands the requirements perfectly. But at least it won't be exacerbated by irrelevant details.)

To me, reading the prompt example half a dozen levels up brings to mind Greenspun's tenth rule:

> Any sufficiently complicated C++ program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp. [1]

But now the "program" doesn't even have formal semantics and isn't a permanent artifact. It's like running a compiler and then throwing away the source program and only hand-editing the machine code when you don't like what it does. To me that seems crazy and misses many of the most important lessons from the last half-century.

[1]: https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule (paraphrased to use C++, but applies equally to most similar languages)


Replying to sibling comment:

the problem is that you actually have to implement that high-level DSL to get Lisp to look like that, and most DSLs are not going to be as concise and abstract as a natural-language description of what you want plus a check that the result is right. That is exactly what I'd want to use AI for: writing that initial boilerplate from a high-level description of what the DSL should do.

And a Lisp macro DSL is not going to help with automating refactors, automatically iterating to take care of small compiler issues or minor bugs without your involvement so you can focus on the overall goal, remembering or discovering specific library APIs or syntax, etc.


I think of it more like moving from sole developer to a small team lead. Which I have experienced in my career a few times.

I still write my code in all the places I care about, but I don’t get stuck on “looking up how to enable websockets when creating the listener before I even pass anything to hyper.”

I do not care to spend hours or days to know that API detail from personal pain, because it is hyper-specific, in both senses of hyper-specific.

(For posterity, it’s `with_upgrades`… thanks chatgpt circa 12 months ago!)


It's not hard, but it's BORING.

I get my dopamine from solving problems, not trying to figure out why that damn API is returning the wrong type of field for three hours. Claude will find it out in minutes - while I do something else. Or from writing 40 slightly different unit tests to cover all the edge cases for said feature.


> it's time to sit back and figure out why there's a mismatch with the problem domain and come back at it from another direction

But this is exactly what LLMs help me with! If I decide I want to shift the abstractions I'm using in a codebase in a big way, I'd usually be discouraged by all the error, lint, and warning chasing I'd need to do to update everything else; with agents I can write the new code (or describe it and have it write it) and then have it set off and update everything else to align: a task that is just varied and context specific enough that refactoring tools wouldn't work, but is repetitive and time consuming enough that it makes sense to pass off to a machine.

The thing is that it's not necessarily a bottleneck in terms of absolute speed (I know my editor well and I'm a fast typist, and LLMs are in their dialup era) but it is a bottleneck in terms of motivation, when some refactor or change in algorithm I want to make requires a lot of changes all over a codebase, that are boring to make but not quite rote enough to handle with sed or IDE refactoring. It really isn't, for me, even mostly about the inconvenience of typing out the initial code.

It's about the inconvenience of trying to munge text from one state to another, or handle big refactors that require a lot of little mostly rote changes in a lot of places; but it's also about dealing with APIs or libraries where I don't want to have to constantly remind myself what functions to use, what to pass as arguments, what config data I need to construct to pass in, etc, or spend hours trawling through docs to figure out how to do something with a library when I can just feed its source code directly to an LLM and have it figure it out. There's a lot of friction and snags to writing code beyond typing that has nothing to do with having come up with a wrong abstraction, that very often lead to me missing the forest for the trees when I'm in the weeds.

Also, there is ALWAYS boilerplate scaffolding to do, even with the most macrotastic Lisp; and let's be real: Lisp macros have their own severe downsides in return for eliminating boilerplate, and Lisp itself is not really the best language (in terms of ecosystem, toolchain, runtime, performance) for many or most tasks someone like me might want to do, and languages adapted to the runtime and performance constraints of their domain may be more verbose.

Which means that, yes, we're using languages that have more boilerplate and scaffolding to do than strictly ideally necessary, which is part of why we like LLMs, but that's just the thing: LLMs give you the boilerplate eliminating benefits of Lisp without having to give up the massive benefits in other areas of whatever other language you wanted to use, and without having to write and debug macro soup and deal with private languages.

There's also how staying out of the code-writing oar wells changes how you think about code, for the reasons I laid out above: thinking through a problem while you're writing the code keeps you head down on local problems, syntax, and library trivia instead of the bigger picture, whereas hammock time spent on a living design document that the agent executes keeps you at the level of the architecture and the goals.


Here's a paper from September 2025 that compares programs for (a) semantic equivalence (do they do the same thing) and (b) syntactic similarity (are the parse trees similar).

LLMs are more likely to judge programs (correctly or incorrectly) as being semantically equivalent when they are syntactically similar, even though syntactically similar programs can actually do drastically different things. In fact LLMs are generally pretty bad at program equivalence, suggesting they don't really "understand" what programs are doing, even for a fairly mechanical definition of "understand".

https://arxiv.org/pdf/2502.12466
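
As a toy illustration of that failure mode (my example, not one from the paper), here are two functions whose parse trees differ by a single token but whose behavior diverges completely:

    def in_range_and(items, lo, hi):
        # Keeps only the items strictly between lo and hi.
        return [x for x in items if lo < x and x < hi]

    def in_range_or(items, lo, hi):
        # One token changed (and -> or): whenever lo < hi this keeps *every* item,
        # because at least one of the two comparisons is always true.
        return [x for x in items if lo < x or x < hi]

    print(in_range_and([1, 5, 10], 2, 8))  # [5]
    print(in_range_or([1, 5, 10], 2, 8))   # [1, 5, 10]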

While this is a point-in-time study and I'm sure all these tools will evolve, this matches my intuition for how LLMs behave and the kinds of mistakes they make.

By comparison the approach in this article seems narrow and doesn't explain a whole lot, and more importantly doesn't give us any hypotheses we can actually test against these systems.


If you drive in the FasTrak lanes without an account you pay the fee + $10 surcharge (for a first time violation), and it goes up on the second violation:

https://www.bayareafastrak.org/en/help/invoices-and-penaltie...

I'm having a hard time finding a citation but according to Google's AI summary if the second violation is unpaid they put a hold on your DMV registration, and the fine itself can be sent to a collection agency.

I agree empirically I see people driving through the lane without a tag (i.e., no number shows up in the overhead display), but maybe these are people with FasTrak accounts being lazy?


> but according to Google's AI summary

Rarely a good citation. No pun intended.


Or people who drive over the cones right before the RFID reader

Or lie and set the transponder to 3 people

Or don't have license plates so can't be identified


One annoying thing is I've tried to pay, but can't.

I spend about four to five months per year in the Bay Area, but have Canadian license plates. The website doesn't even let you enter a Canadian plate, or a foreign plate.

So I bought one of the transponders at Walgreens, and just leave it in the glove box, because it comes with 20 bucks or something on it when you buy it.

But I can't check its status, don't know how much is left on it, have no idea what I'm paying, really sucks.


Go to https://www.bayareafastrak.org/en/home/index.shtml, make an account, and link your tag using its serial number. Hopefully you'll get the information you're looking for there.


As I said, making an account requires a license plate. Even for "Create a FasTrak account". It's on the second page, demanding a license plate.


Some people just set it to 3+...


The other answers are great, but let me just add that C++ cannot be parsed with conventional LL/LALR/LR parsers, because the syntax is ambiguous and requires disambiguation via type checking (i.e., there may be multiple parse trees but at most one will type check). A classic example is the statement "a * b;", which is either a multiplication expression or a declaration of "b" as a pointer, depending on whether "a" names a type.

There was some research on parsing C++ with GLR but I don't think it ever made it into production compilers.

Other, more sane languages with unambiguous grammars may still choose to hand-write their parsers for all the reasons mentioned in the sibling comments. However, I would note that, even when using a parsing library, almost every compiler in existence will use its own AST, and not reuse the parse tree generated by the parser library. That's something you would only ever do in a compiler class.

Also, I wouldn't say that frontend/backend is an evolution of previous terminology; it's just that parsing is not considered an "interesting" problem by most of the community, so the focus has moved elsewhere (to everything from AST design through optimization and code generation).
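
As a tiny illustration of the hand-written approach (a sketch, not any particular compiler's code): a recursive-descent parser that builds its own AST types instead of reusing a generated parse tree.

    from dataclasses import dataclass

    # The compiler's own AST: just the structure later passes care about,
    # not a node-per-grammar-rule parse tree.
    @dataclass
    class Num:
        value: int

    @dataclass
    class BinOp:
        op: str
        left: "Num | BinOp"
        right: "Num | BinOp"

    def parse(src: str):
        # Grammar: expr -> NUMBER (('+' | '-') NUMBER)*
        tokens = src.replace("+", " + ").replace("-", " - ").split()
        pos = 0

        def number():
            nonlocal pos
            tok = tokens[pos]
            pos += 1
            return Num(int(tok))

        node = number()
        while pos < len(tokens) and tokens[pos] in ("+", "-"):
            op = tokens[pos]
            pos += 1
            node = BinOp(op, node, number())  # left-associative
        return node

    print(parse("1 + 2 - 3"))
    # BinOp(op='-', left=BinOp(op='+', left=Num(value=1), right=Num(value=2)), right=Num(value=3))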


Note that depending on what parsing lib you use, it may produce nodes of your own custom AST type.

Personally I love the (Rust) combo of logos for lexing, chumsky for parsing, and ariadne for error reporting. Chumsky has options for error recovery and good performance, ariadne is gorgeous (there is another alternative for Rust, miette, both are good).

The only thing chumsky is lacking is incremental parsing. There is a chumsky-inspired library for incremental parsing called incpa, though.


If you want something more conservative for error reporting, annotate-snippets is finally at parity with rustc's current custom renderer and will soon become the default for both rustc and cargo.


Will migrating to annotate-snippets change rustc/cargo formatting of errors in any way?

Also, in what sense is it more conservative?


The migration will cause no user-visible change in the output.

It uses ASCII for all output, and it replaces ZWJs to get consistent terminal output in the face of multi-codepoint emoji, to name two things off the top of my head.


GLR C++ parsers were for a short time in use on production code at Mozilla, in refactoring tools: Oink (and its fork, pork). Not quite sure what ended that, but I don't think it was any issue with parsing.


I disagree. It is interesting; that is why there are many languages out there without an LSP.


Not just C++. Even C parsing is context-dependent because of typedef. It requires a bit of hackery to parse in a conventional LL/LALR/LR parser.


The solution I've found is to make using the API a hard error with an explicitly temporary and obnoxiously-named workaround variable.

    WORKAROUND_URLLIB3_HEADER_DEPRECATION_THIS_IS_A_TEMPORARY_FIX_CHANGE_YOUR_CODE=1 python3 ...
It's loud, there's an out if you need your code working right now, and when you finally act on the deprecation, if anyone complains, they don't really have a leg to stand on.

Of course you can layer it with warnings as a first stage, but ultimately it's either this or remove the code outright (or never remove it and put up with whatever burden that imposes).
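
For what it's worth, here's a rough sketch in Python of what the gate can look like on the library side (the env var name and function are made up for illustration; this is not urllib3's actual machinery):

    import os
    import warnings

    _OVERRIDE = "WORKAROUND_MYLIB_LEGACY_HEADERS_THIS_IS_A_TEMPORARY_FIX_CHANGE_YOUR_CODE"

    def legacy_headers(*args, **kwargs):
        if os.environ.get(_OVERRIDE) == "1":
            # Escape hatch: loud, greppable, and obviously temporary.
            warnings.warn(
                f"legacy_headers() only works because {_OVERRIDE}=1 is set; "
                "migrate before the next release.",
                DeprecationWarning,
                stacklevel=2,
            )
            return _old_implementation(*args, **kwargs)
        raise RuntimeError(
            f"legacy_headers() has been removed. Set {_OVERRIDE}=1 to restore it "
            "temporarily while you migrate."
        )

    def _old_implementation(*args, **kwargs):
        ...  # the code slated for removal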


I heard a second-hand story about some team at Google who did this, and named the variable something like:

  I_ACKNOWLEDGE_THAT_THIS_CODE_WILL_PERMANENTLY_BREAK_ON_2022_09_20_WITHOUT_SUPPORT_FROM_TEAM_X=1
a year before the deadline. I would be mildly amused by adding

  _AND_MY_USER_ID_IS="<user_id>"


Love this idea; it is already implemented in a $FAANG company tool (used company-wide by 80k+ SDEs). I got used to seeing these in the logs and terminal, so much so that my brain now automatically filters them out of view, the way it does my nose.


It's because if you explain what's going on, you stop the action. And viewers/readers don't like that.

In fiction it's called an info dump. As an aspiring science fiction author, virtually every beta reader I've had has told me they don't like them. I want my fiction to make sense, but you have to be subtle about it. To avoid readers complaining, you have to figure out how to explain things to the reader without it being obvious that you're explaining things to the reader, or stopping the action to do it.

Movies are such a streamlined medium that usually this gets cut entirely. At least in books you can have appendices and such for readers who care.


> In fiction it's called an info dump. As an aspiring science fiction author, virtually every beta reader I've had has told me they don't like them. I want my fiction to make sense, but you have to be subtle about it. To avoid readers complaining, you have to figure out how to explain things to the reader without it being obvious that you're explaining things to the reader, or stopping the action to do it.

The whole "The audience wants to know, but they don't want to hear it" problem.

Usually solved by having characters do something that shows their character. If it's from the past, have a flashback, don't have a narration.

Like real life, people hate sermons.


I would argue that it is the opposite: people expect an info dump and everything explained to them. I remember watching Captain America: The Winter Soldier (I think it was the last movie I watched in a theatre) and pretty much everything was explained to the audience. Guy Ritchie has character intro screens like Street Fighter in his movies.

Even in movies where everything is explained, people forget the explanations. In Blade, for example, there is a James Bond / Q style conversation where a character explains that "this weapon does X against vampires", setting up the weapon for later in the movie, and on a recent viewing I noticed that people had forgotten about it by the time it paid off.

I watched "The Mothman Prophecies" and quite a lot of the movie was up to interpretation, and there were many small things in the film that you might overlook, e.g. there is a scene where the reflection in a mirror is out of sync with his movements, suggesting something supernatural is occurring and he hasn't realised it yet. While I love the movie, there are very few movies like that.

If you watch movies from before the 90s, a huge number of them have characters communicate efficiently and often realistically.


Current movies have Reed-Solomon error correction (repetition of concepts, names and explanations) built in so the stream receiver (human watching movie while still holding smartphone in hand) can recover from missed data (scenes).


It's interesting, because old comic books have this as well. For decades (I'm not sure if they still do it) every issue of Wolverine would have some silly bit where Wolverine is talking to himself to remind the reader that he has an adamantium skeleton, razor-sharp claws, enhanced animal senses and an advanced healing factor which can heal from almost any wound. Every single issue, nearly without fail.

It's silly to the reader (and especially to an adult reader) but it's also obvious why this was present: the comic was meant for kids, and also Marvel never knew when they might be getting a brand new reader who is totally unfamiliar with the character.


> It's silly to the reader (and especially to an adult reader) but it's also obvious why this was present: the comic was meant for kids, and also Marvel never know when they might be getting a brand new reader who is totally unfamiliar with the character.

The same was present in serials such as Conan.

There is a description of Conan and where he comes from, how black his hair is, how manly he is, how he is the "noble savage", etc. in every story.

Conan is definitely not for children. It verges on erotica in many of the stories, e.g. in one story there is an older woman whipping a younger teenage girl who is tied up, and it is made known to the reader that the girl is "young", with the implication that she is probably 14 or 15.

Also, every Conan story typically ends with him using sheer overwhelming aggression to defeat supernatural entities and then escaping with the girl.

I wish there was more "King Conan" stuff, but it is a property that Hollywood doesn't really understand.


There is something about super healing that writers feel obligated to re-iterate to the audience. In Heroes, the Cheerleader was taking ludicrous amounts of damage to give everyone a reminder that she could regenerate quickly.


It drives me insane. I don't mind if there is a reminder of what happened like a season ago, but often it is literally the episode before.


TV series really annoy me on this with the "Previously on..." 3-minute time killer at the start, recapping the major points of the plot.



Most/all streaming providers allow you to skip the recap.


> People expect an info dump and everything explained to them. I remember watching Captain America

People don't have an expectation of that. The number one rule of movie making used to be "Show, don't tell".

With the rise of streaming this changed. People "watch" movies while chatting on their phones, doing home chores etc. A lot of movies in the streaming era spell everything out because people no longer watch the screens.


> People don't have an expectation of that. The number one rule of movie making used to be "Show, don't tell"

I am aware that it is supposed to be like that; however, around the 90s/2000s this changed.

> With the rise of streaming this changed. People "watch" movies while chatting on their phones, doing home chores etc. A lot of movies in the streaming era spell everything out because people no longer watch the screens.

This was in a movie theatre, and this was still in the era where it was considered rude to be speaking or chatting on the phone in the cinema.


This is my wife starting up a 20 minute conversation the moment the first actor shows up on the screen xD

Don't worry, I love her anyway. But yes, we're restarting the movie because no, I don't have any idea what happened either, you were talking. ahahaha


> Even in movies where everything is explained e.g. in Blade where they will have a scene where someone explains how a weapon works, I've noticed in a recent viewing of the movie that people forgot the explanations of the gadgets he has. In Blade they have a James Bond / Q like conversation between the characters to say "this weapons does X against vampires" and sets the weapon for later on in the movie and people forgot about it.

That's because you're seeing the rule of cool in action. The explanation itself makes the item interesting enough that the (2-second) setup gets the audience excited to watch a grenade blow a vampire's head off.


The gadgets were often used several scenes later, or much later and integrated with the other action with Blade.


I mean... yeah, that's exactly what happened and that's how filmmaking works?


If you go back and watch the first two seasons of HBO's Westworld, you will see Anthony Hopkins' character repeatedly doing exposition dumps out of his mouth. The difference is in how he does it, that he is in such complete command of his craft that he can work out exactly what the screenwriters intended without drawing any attention to it.

And Trekkies will remember the time Larry Niven wrote a screenplay for TAS and gave all the exposition dumps to Leonard Nimoy. See how nicely he handles it?

https://youtu.be/B65HEhBR-1s


That's very interesting, would you happen to have any example videos of Hopkins in the show?


https://youtu.be/fs9Wyuub3jY

Once you develop an awareness of how SF screenplay writers do this, you can't unsee it.

Babylon 5 was particularly egregious. I was never a fan, but I was puzzled that JMS had to rely on it so heavily. It was like he created the character of Delenn just to be an exposition dumper, and Mira Furlan faithfully did what was asked of her. Screenwriters also call this diegesis, when the writer goes all the way and uses dialog to explicitly feed the narrative to the audience.

https://youtu.be/VhD0hbGEDSU


My favorite is Con Air (1997). As they're marching the prisoners onto the plane, a warden explains to a colleague who everyone is so we know just what a dangerous crowd the protag is in with/up against.

"That's So-and-so. Drug and weapons charges. Took out a squad of cops before he was finally arrested."

"That's Such-and-such. They call him The Butcher. He eats his victims after he murders them."

"That's the ringleader. Runs the whole drug trade along the entire west coast. Anybody crossing him has a death wish."

Then Nicolas Cage's character, the hero, comes out. He gives a toss of his luxurious hair (must've been smuggling Pantene in his "prison pocket"), everything goes slo-mo, and I swear to you, a beam of holy light falls on him like he's Simba from The Lion King.

"Who's that?"

"Oh, him? He's nobody."


> Then Nicolas Cage's character, the hero, comes out. He gives a toss of his luxurious hair (must've been smuggling Pantene in his "prison pocket"), everything goes slo-mo, and I swear to you, a beam of holy light falls on him like he's Simba from The Lion King.

Don't forget the scene near the end where he says to Bubba (I think at least that is his name), "I will show you that God exists", and in almost every other movie it is left up to interpretation whether God is really protecting/guiding the hero.

However, in Con Air, Cyrus shoots at him at point-blank range and I think every bullet misses and/or grazes him. As he is walking through the plane to finally confront Cyrus there are a number of events that should kill him, e.g. a propeller flies through the fuselage, narrowly misses him, and kills Johnny 23. There is really no other way to interpret it than that Nicolas Cage is very literally demonstrating that God exists.

The movie is not subtle about anything. It was the last "All American" action movie, where the hero beats everyone by just punching them harder and believing in Jesus. I quite like it.


That's like when Ernest undergoes his own version of the Trial of the Blade, the Stone, and the Arrow in Ernest Goes to Camp!


you weren't kidding one bit: https://www.youtube.com/watch?v=sqKCkk8qWxs


You should see the rest of the movie. Nic Cage essentially proves the existence of God by punching guys in the face.

https://www.youtube.com/watch?v=Zm9eKCPGHb0


Maybe some people like that. I have no idea how common this is, but if everything makes sense, I find that kind of boring. I like to have at least a little bit of ambiguity or mystery to chew on.


I really enjoyed The Mothman Prophecies (only watched it recently) because you were never really sure whether the characters involved were suffering from some sort of mental illness, or if things were just an unfortunate series of events. It also has a bunch of trippy visual effects that don't appear to be CGI.

My friend and I had a completely different interpretations of what happened in the final act. Well worth watching the movie.


Yep, I totally get it, and my initial observation was made when I was maybe 17 or so. Sometimes these topics do get put into movies, such as the sequence in Shazam where they test his newly-found powers -- but even that was played more for laughs and was really just an entertaining way to acknowledge that much of the audience probably never heard of Shazam.


If we succumbed to everyone's complaints we'd have a much more dumbed down version of everything. Consider if you had a concussion on the right temporal lobe and had hypergraphia as a symptom of the resultant temporal lobe epilepsy. I'd write everything I'd want to write regardless of who complains. Philip K. Dick was one such person.


It depends on what you care about. If you're writing purely for yourself, then by all means, go ahead and do so.

I've found there's a balance between listening to others and listening to yourself. Usually, if multiple people give you the same feedback, there is some underlying symptom they are correctly picking up on. But they may not have the correct diagnosis, or even be able to articulate the symptoms clearly. The real skill of an author/editor is in figuring out the true diagnosis and what to do about it.

In the communication example, this means rooting conflicts in the true personalities of the characters and/or their context, so that even if they sat down to have a deep chat, they still wouldn't agree. E.g., character A has an ulterior motive to see character B fail. Now you hint at that motive in a subtle way that telegraphs to readers that something is going on, without stopping the action for what would turn into a pedantic conversation. At least, that's what I'd do.


No, you need to be able to portray humans well enough to convey their motivations, goals, emotions, etc. without explaining it. Anybody can explain a character, but that's not interesting to read.


The Matrix already has quite an info dump when he joins the real world that halts most of the momentum (on a re-watch, at least). I would not want even more of that.


That doesn't answer why we don't do it in real life, for people like the parent commenter who actually are interested in it.


To me it's interesting that (a) most people die of old age, and (b) the leading cause of death is essentially preventable (heart disease being highly lifestyle related) or else plausibly curable in the future (I certainly hope we'll see progress on cancer in my lifetime).

That was very much not the case historically; you can Google numbers yourself but the percentage of childhood deaths prior to modern medicine was truly shocking.

It also seems to indicate that, with some thought and care, a meaningful impact (both at individual and societal levels) is possible by altering our lifestyles to be healthier.


>> I certainly hope we'll see progress on cancer in my lifetime.

Good news. You already have :).

Firstly, it's worth pointing out that "cancer" is not really 1 thing. There are lots of different conditions that are cancer, but they are different in many ways. For example lung cancer is pretty bad because your body needs lungs to function. Whereas say a melanoma on your foot is easier for your body to cope with (because your organs are all working.)

Some cancers are easily removed via surgery, some are not.

Likewise chemotherapy is a term covering a lot of different drugs and drug combinations. Advances in this space, matching doses, and drugs, to cancers have progressed enormously over the last couple decades. Some (although very much not all) cancers are now curable.

The most critical part of cancer survival is how early you catch it. But cancers are mostly asymptomatic so unless you "go looking" it's likely they'll be advanced before detection.

The biggest progress with cancer is thus regular screening, especially for the most common ones. Screening for prostate cancer, for example, is a simple blood test. How many of us are doing that every 6 months?

Cancer will always be with us. The causes are diverse, and often unexplainable. But we have made huge strides in early detection, as well as treatments. No doubt there will be more strides to come.

So let me be the first to turn your hope into reality :)


I don't think there's a silver bullet coming within our lifetimes.

There's no single point of failure: as you get older, everything just starts wearing out and failing.

If you cure heart disease and cancer, then others will just take their place: strokes, respiratory disease, Alzheimer's disease, falls.

And even if you do extend your lifespan, the reality is quality of life at 90+ is a lot worse than in your 20s or 30s.


> the reality is quality of life at 90+ is a lot worse than in your 20s or 30s.

All my grandparents lived well into their 90s (mediterranean lifestyle + modern medicine), and all of them would’ve chosen euthanasia had it been an option (they phrased that in various ways - essentially something along the lines of “if God could bring me home now it’d be good”).

It’s been a sobering thing to experience and it leaves me hoping that if I’m ever in their position, that option will be available to me somehow.


While it's true that preventing cancer means you're likely to die in a few years of heart disease, and preventing heart disease means you're likely to die in a few years of cancer, solving both will add dramatically more than both effects combined to both life and healthspan.

Those really are the big two - as the graphs in the article show, the next biggest things are much smaller and much less likely to get you, which means you live a lot longer and healthier.


Standard engineering. You fix the thing that breaks the system first. Fix that, the next bug appears. Rinse, repeat.

You don’t think we have been doing this already? Car safety improved, general violence, death by food poisoning, etc. Now we have contacts, knee replacement surgery, meniscus surgery, widespread information on fitness for the elderly, etc.

You have many specialized fields slowly improving. The top focus changes as the previous top problems get solutions.


In general the problem is that when humans enter well into senescence, at some point your body just stops working altogether and it's at that point that basically anything that happens to you next will kill you. Or sometimes it will be nothing at all, and your heart will simply stop in your sleep one night.

This is why when somebody dies 'of old age' it's often not like you can just see them slowly drifting away day by day. Rather, they seem to be in perfectly good health, for their age at least, and then 2 weeks later, they're dead.


Conda doesn't do lock files. If you look into it, the best you can do is freeze your entire environment. Aside from this being an entirely manual process, and thus having all the issues that manual processes bring, this comes with a few issues:

1. If you edit any dependency, you resolve the environment from scratch. There is no way to update just one dependency.

2. Conda "lock" files are just the hashes of all the packages you happened to get, and that means they're non-portable. If you move from x86 to ARM, or Mac to Linux, or CPU to GPU, you have to throw everything out and resolve again.

Point (2) has an additional hidden cost: unless you go massively out of your way, all your platforms can end up on different versions. That's because solving every environment is a manual process and it's unlikely you're taking the time to run through 6+ different options all at once. So if different users solve the environments on different days from the same human-readable environment file, there's no reason to expect them to be in sync. They'll slowly diverge over time and you'll start to see breakage because the versions diverge.

P.S. if you do want a "uv for Conda packages", see Pixi [1], which has a lot of the benefits of uv (e.g., lock files) but works out of the box with Conda's package ecosystem.

[1]: https://pixi.sh/latest/


If you're going to do this, why not generate Pandoc ASTs directly? You can do so from a number of languages and they support (by definition) a superset of any given markup's features, with blocks to call out directly for things you can only do in Latex.

I assume the original question is asking about programmatic document generation, in which case working with a real AST is probably also a productivity and reliability win as well.

