
Git was published with built-in compatibility for a federated system that supports almost all of that out of the box: email.

Sure, the world has pretty much decided it hates the protocol. However, people _were_ doing all of that.


People were doing that by using additional tools on top of git, not via git alone. I intentionally only listed things that git doesn't do.

There's not much point in observing "but you could have done those things with email!". We could have done them with tarballs before git existed, too, if we built sufficient additional tooling atop them. That doesn't mean we have the functionality of current forges in a federated model, yet.


`git send-email` and `git am` are built into Git, not additional tools.
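
The whole round-trip is there out of the box. Roughly (the list address and paths here are placeholders):

    # contributor: turn the last 3 commits into mailable patches, then send them
    git format-patch -3 --cover-letter -o outgoing/
    git send-email --to=dev@project.example outgoing/*.patch

    # maintainer: apply the series straight from a saved mailbox
    git am --3way series.mbox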

That doesn't cover tracking pull requests, discussing them, closing them, making suggestions on them...

Those exist (badly and not integrated) as part of additional tools such as email, or as tasks done manually, or as part of forge software.

I don't think there's much point in splitting this hair further. I stand by the original statement that I'd love to see federated pull requests between forges, with all the capabilities people expect of a modern forge.


I think people (especially those who joined the internet after the .com bubble) underestimate the level of decentralization and federation that came with the old-school protocols, from before the web-centric, mainframe-like client mentality: email, Usenet, and maybe even IRC.

Give me an “email” PR process anytime. Can review on a flight. Offline. Distraction-free. On my federated email server, and have it work with your federated email server.

And the clients were pretty decent at running locally. And it still works great for established projects like the Linux kernel.

It’s just a pain to set up for a new project, compared to pushing to some forge. But not impossible. Return to the intentionality of email, with powerful clients doing the threading, sorting, syncing etc. locally.


I'm older than the web. I worked on projects using CVS, SVN, mercurial, git-and-email, git-with-shared-repository, and git-with-forges. I'll take forges every time, and it isn't even close. It's not a matter of not having done it the old way, it's a matter of not wanting to do it again.

I guess we might have opposite experiences. Part of which I understand - society moved on, the modern ways are more mature and developed… but I wonder how much of that can be backported without handing things over to the centralized systems again.

The advantage of old-school was partially that the user agents were, in fact, user agents. Greasemonkey tried to bridge the gap a bit, but the Web does not lend itself to much user-side customization; the protocol is too low-level, too generic, offering a lot of creative space to website creators but making it harder to customize those creations to the user’s wants.


I'm older than the trees, but younger than the mountains! Email all day, all the way. Young people are very fascinated and impressed by how much more I can achieve, faster, with email, compared with their chats, web 3.0 web interfaces, and other crap.

Yes, it takes time to learn, but that is true for anything worthwhile.


What I like about git-and-email-patches is the barrier to entry.

I think it's dwm that explicitly advertises a small and elitist userbase as a feature/design goal. I feel like mailing lists as a workflow serve a similar purpose, even if unintentionally.

With the advent of AI slop as pull requests, I think I'm gravitating to platforms with a higher barrier to entry, not lower.


What is a forge? What is a modern forge? What is a pull request?

There is code, or a repository; there is a diff, or a patch. Everything else you're labeling as a pull request is unknown, not part of the original design, debatable.


Sorry to hear that you don't see the value in it. Many others do.

It's not what I meant.

A GitHub-style pull request is not part of the original design. Which aspects and features do you want to keep, and what exactly do you say many others are interested in?

We don't even know what a forge is. Let alone a modern one.


But the outlandish requests in business logic are countless.

Like... In most accounting things, once end-dated and confirmed, a record should cascade that end-date to children and should not be able to repeat the process... Unless you have some data-cleaning validation bypass. Then you can repeat the process as much as you like. And maybe not cascade to children.
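
A minimal sketch of that cascade rule, with every name hypothetical:

    # Hypothetical sketch only - real systems hang far more state off this.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class Record:
        end_date: date | None = None
        confirmed: bool = False
        children: list["Record"] = field(default_factory=list)

    def end_date_record(rec: Record, when: date, cleaning_bypass: bool = False) -> None:
        # Once end-dated and confirmed, the operation must not repeat...
        if rec.end_date is not None and rec.confirmed and not cleaning_bypass:
            raise ValueError("record already end-dated and confirmed")
        rec.end_date = when
        # ...and the end-date cascades to children - unless we're in a
        # data-cleaning bypass, where it deliberately may not.
        if not cleaning_bypass:
            for child in rec.children:
                end_date_record(child, when)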

There are more exceptions than there are rules, the moment you get any international pipeline involved.


So, in human interaction: when the business logic goes wrong because it was described with a lack of specificity, who gets blamed for it?

I wasn't specific, because I'd rather not piss off my employer. But anyone who works in a similar space will recognise the pattern.

It's not underspecified. More... Overspecified. Because it needs to be. But AI will assume that "impossible" things never happen, and choose a happy path guaranteed to result in failure.

You have to build for bad data. Comes with any business of age. Comes with international transactions. Comes with human mistakes that just build up over the decades.

The apparent current state of a thing is not representative of its history, or of what it may or may not contain. And so you have nonsensical rules aimed at catching the bad data, so that you have a chance to transform it into good data when it gets used, without needing to mine, in advance, the petabytes of historical data you have sitting around.


In my job the task of fully or appropriately specifying something is shared between PMs and the engineers. The engineers' job is to look carefully at what they received and highlight any areas that are ambiguous or under-specified.

LLMs AFAIK cannot do this for novel areas of interest. (i.e. if it's some domain where there's a ton of "10 things people usually miss about X" blog posts, they'll be able to regurgitate that info, but they're not likely to synthesize novel areas of ambiguity.)


They can, though. They just aren't always very good at it.

As an experiment, recently I've been using Codex CLI to configure some consumer networking gear in unusual ways to solve my unusual set of problems. Stuff that pros don't bother with (they don't have the same problems I face), and that consumers tend to shy away from futzing with. The hardware includes a cheap managed switch, an OpenWRT router, and a Mikrotik access point. It's definitely a rather niche area of interest.

And by "using," I mean: In this experiment, the bot gets right in there, plugging away with SSH directly.

It was awful with this at first, mostly consisting of a long-winded way to yet-again brick a device that lacks any OOB console port. It'd concoct these elaborate strings of shit and feed them in, and then I'd wander over and reset whatever box was borked again. Footgun city.

But after I tired of that, I had it define some rules for engaging with hardware, validation, constraints, and for order of execution, and commit those rules to AGENTS.md. It got pretty decent at following high-level instructions to get things done in the manner that I specified, and the footguns ceased.
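
The rules were roughly of this shape (paraphrased here, not the literal file):

    # AGENTS.md (excerpt, paraphrased)
    - Dump the device's current config to a local file before changing anything.
    - Validate each command against that device's syntax before sending it.
    - Change one device at a time; verify it is still reachable before moving on.
    - Anything touching the management interface: propose first, wait for approval.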

I didn't save any time by doing this. But I also didn't have to think about it much: I never got bogged down in the wildly-differing CLI syntax of the weirdo switch, the router (whose documentation is locked behind a bot firewall), and the access point's bespoke userland. I didn't touch those bits myself at all.

My time was instead spent observing the fuckups and creating a rather generic framework that manages the bot, and just telling it what to do -- sometimes, with some questions. I did that using plain English.

Now that this is done, I get to re-use this framework for as many projects as I dare, revising it where that seems useful.

(That cheap switch, by the way? It's broken. It has bizarro-world hardware failure modes that are unrelated to software configuration or firmware rev. Today, a very different cheap switch showed up to replace it. When I get around to it, I'll have the bot sort that transition out. I expect that to involve a bit of Q&A, and I also expect it to go fine.)


Depends on what was missing.

If we used MacOS throughout the org, and we asked a SW dev team to build inventory tracking software without specifying the OS, I'd squarely put the blame on SW team for building it for Linux or Windows.

(Yes, it should be a blameless culture, but if an obvious assumption like this is broken, someone is most likely intentionally messing with you)

There exists an expected level of context knowledge that is frequently underspecified.


They switched the backend to Common Lisp in 2019, and at the time had two separate Arc-to-JS compilers in development. [0]

The site may feel less changeable than many, but I would be very surprised if it is not "in-development".

[0] https://news.ycombinator.com/item?id=21550123


Cambridge Analytica was an experiment run by a marketing team. I wouldn't say marketing will always side with ethics.

Propaganda is, and always has been, a subset of marketing aimed at shifting public perception. It would be wild to assume it never happens.


> Cambridge Analytica was an experiment run by a marketing team. I wouldn't say marketing will always side with ethics

The argument isn't against ethics. It's about self-interest. Amazon bought the Super Bowl ad to sell Ring units.

"Unwitting" is correct. There are no lizard people coordinating our march towards dystopia. Just individual people who will–like me–read this article, think we should do more, and then probably do nothing.

(If you want a realistic conspiracy, Amazon may have greenlit the spot with an eye towards an audience of one or two in D.C.)


I do not see the difference between influencing policy by targeting "one or two" and influencing a greater mass of people.

Both serve the same goals, in a different manner. Both require the same choices by marketing - active and with conscience aforethought.


> There are no lizard people coordinating our march towards dystopia. Just individual people who will–like me–read this article, think we should do more, and then probably do nothing.

There doesn't have to be an explicit conspiracy for a conspiracy to emerge. Conspiracies can be spontaneous, organic, emergent behavior. For example, the killing of Ken McElroy: an entire community spontaneously decided to kill someone, and then decided, collectively (and also spontaneously), to cover up the crime - https://en.wikipedia.org/wiki/Ken_McElroy

It's very much possible for people to brand the surveillance state as cute; and for consent for a surveillance state to spontaneously emerge / be generated from the attempts of marketers trying to make the Ring dystopia cute.


After the last screwup, by the same company, why would you trust the data to stay on your device?

> Of the accounts impacted globally, we have identified approximately 70,000 users that may have had government-ID photos exposed, which our vendor used to review age-related appeals.

And by the same company, I don't mean Discord. I mean Persona.

https://discord.com/press-releases/update-on-security-incide...


> Making 50 SOTA AI requests per day ≈ running a 10W LED bulb for about 2.5 hours per day

This seems remarkably far from what we know. I mean, just running the data centre aircon will be an order of magnitude greater than that.
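
For reference, the claim works out to:

    10 W × 2.5 h = 25 Wh per day
    25 Wh ÷ 50 requests = 0.5 Wh per request

And that half a watt-hour per request has to be an all-in figure for it to mean anything - the chips, the cooling, the rest of the facility.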


Air conditioning for a whole data center services the whole data center, not one machine running a task for one minute.

Yes... But the machines in those data centres don't get there without the companies who put them there. You get no tasks for no minutes without the infrastructure, and so the infrastructure does actually have to be part of the environmental impact survey.

Yeah, I remember being forced to write a cryptocoin, and the database it would power, to ensure that global shipping receipts would be better trusted. Years and millions down the toilet, as the world moved on from the hype. And we moved back to SAP.

What the majority does in the field is always full of the current trend. Whether that trend survives into the future? Pieces always do. Everything, never.


No, it's not a "required"... It means someone may have reasons not to use something, and so spec implementors need to allow for circumstances where it is not present.

Those reasons can be anything: legal, practical, technological, ideological. You don't know. All you know is not using it is explicitly permitted.


> You don't know. All you know is not using it is explicitly permitted.

In theory, if they are truly following the specification, you know they thought hard about all the consequences.

I think the pushback in the comments comes from the commonsense feeling that this... didn't happen here.


"permitted" is a pretty empty word in the given context. Because dropping such emails is equally "permitted". Sure, there will be no arrests made, but there will be consequences. And those are what this article is about.

If that's your line, then I am equally permitted to send random binary blobs along the way. Not a crime, so totally permitted. They'll just drop the connection.

Buuut I don't think that is at all relevant to the discussion at hand.


Grist. Let them sink happily into spreadsheets powering everything.

https://github.com/gristlabs/grist-core


Kessler bad.
