I don't think it's legally required for vets to check chips whenever new "owners" take them in for a visit. I've been holding out hope for reuniting with my missing cat Salt, but wherever he is, he's happily in someone else's living room. And I doubt the microchip will bring him back anytime soon.
Sadly, cat snatching is a real thing that's happened to me possibly twice. The first time was confirmed beyond a doubt: I had to bust my cat out from the snatcher's back porch at around 2am while roaming the neighborhood looking for him. The only reason I was even in the vicinity was that it was the last spot his GPS tracker reported before he went missing.
"Keep your pets indoors, then!" Yeah, yeah. The risks come with the territory. But my boy Pepper is still with me after a couple years, and I'm hoping a tag with "I have a happy home" followed by my number will keep would-be "do-gooders" away. (A lot of these crazy folks that snatch pets think they're doing the pet a favor by taking them.)
Miss you Salt.
Anyway, the point is, if vets were legally required to actually check the chips when pets are brought in for appointments, they'd quickly notice the discrepancy. They're the only entity in the world in a position to do something about it. But what vet is gonna try to take "your" pet away when you bring them in, just because of mismatched chips? Nobody, because pets are property, and that would be theft according to the law.
This is a widely-cited myth, and almost impossible to measure in practice. What does "all the local wildlife" even mean? Is the threat here that birds are going to go extinct because of cats? Not likely, and the burden of proof is on the people repeating this mistaken belief.
Using "think of the birds" as a justification for imprisoning your cat for their entire lives is also pretty crummy. It's called wildlife because they exist in the wilderness. Even if cats kill a large number of birds, so what? Those birds don't have a happy, loving home with emotional bonds to an actual human.
If you think this logic is flawed, explain why you're fine with flies dying but not birds. I bet you've swatted a few in your time.
> Free-ranging cats on islands have caused or contributed to 33 (14%) of the modern bird, mammal and reptile extinctions recorded by the International Union for Conservation of Nature (IUCN) Red List [1]
Cats are probably a leading cause of mortality in birds. [2] Domestic cats are not native to North America. The birds here would not have evolved to avoid them (and beyond that, domestic cat numbers are not limited by prey availability because they're pets bred and fed by humans).
You'll find plenty of studies with evidence that domestic cats are probably bad for bird populations. [3][4]
But to be fair, buildings/glass windows kill a lot of birds too. [5]
Suppose it's true that cats are bad for bird populations. The implication is that just because birds are dying, it's okay to snatch a cat; more than that, that cats should be imprisoned for their entire lives, when they naturally want to roam.
Someone can take one side of this ethical debate or the other, and both sides probably won't agree. I personally find it sad that people would place the well-being of birds above that of a wonderful, furry companion that clearly belongs to someone.
The logic also doesn't quite line up: I was hoping someone would try to justify why it's okay to kill flies but not birds, since that's the real counterargument to this one. Especially when they kill flies with their own hands.
So much of life boils down to "we're the apex species and we do what we want." But such is life. I find it difficult not to call out the absurdities when they appear, though.
To the topic at hand, how exactly is this quantified? I suspect that word "contributed" is doing a lot of work here. [2] seems to admit as much:
> True estimates of mortality are difficult to determine. However, recent studies have synthesized the best available data to estimated ranges of mortality to bird populations in North America from some of the most common, human-caused sources of bird mortality.
The numbers in [2] are admittedly pretty startling. But it looks like they come from one report labeled "2013a". Any info on where to find it, or what it even is? Otherwise it's easy to call [2] a citation when in fact no evidence whatsoever is being presented.
[4] is much better. https://wildlife.onlinelibrary.wiley.com/doi/10.1002/wsb.737 But cats are still only a contributory factor, not the main cause; the report says they're the second leading cause of admissions, not the first. So, high, and worth thinking about.
But again, the cost here is "removing, by force, someone's beloved pet." I'm not above saying that we should probably care about cats more than birds, because of the emotional bonds they form with humans. After all, that's why we're fine with flies being killed, right? No emotional bonds.
> The implication is that just because birds are dying, it's okay to snatch a cat.
I don't think anyone's implying that? It just seems foolish to let your cat roam about. Not only are they at risk of getting stolen, but the risks of getting injured/killed or sick (or poisoned) are so much higher than if you keep them at home.
Whenever I hear about someone who's distraught about an outdoor cat of theirs that died while outside, I feel super bad for the cat, and not quite so much for the owner. That death could have been prevented, trivially.
Once again: You kill flies. Sometimes dozens of them. Your conscience is clear. That's wildly selfish of you, yet you don't seem to care about the flies. Why not? They're just as much a part of the ecosystem as the birds.
Also, this entire discussion is off-topic. The point was for vets to verify microchips, something directly related to the article.
> I'm looking after for my cat's wellbeing, not some bird's
What a selfish way to look at things. So you think it's fine to bring invasive species into a new environment and let them damage the local ecosystem? Cool cool cool.
If you were truly looking after your cat's well-being, you'd keep them inside in the first place. Their attachment to roaming about is not as strong or essential as you seem to think it is.
Suppose someone were arguing that you should imprison your own child for their entire life, because every time they go outside, they kill ants. Would you still consider it selfish to disagree?
In my neighborhood some people let their cats run around loose. Then the local wildlife (coyotes) eats the cats, and the idiot cat owners whine that the city needs to "do something" about the coyotes.
Yes. Believe it or not, that's fine for cats. "Everybody else" is by far the biggest risk. Not cars, not animals.
It's always so frustrating when you've been doing something for 15 years, speak from experience, and then someone comes along and says "Well, that's bad!" Sure. Meanwhile, my cat comes home happy and healthy each night, unless "everybody else" decides to steal him under the guise of doing him a favor.
Verifying microchips during vet appointments would close off this exploit.
It’s not fine for the cat. Or for the outdoors. There’s the whole parallel thread about that. But also keep your cat inside so they’re not roaming into my yard. It’s wild that outdoor cat “owners” are so willing to co-opt everybody else’s property as part of the cat’s habitat.
Vets already have enough to deal with; you'd be more likely to end up with undesirable outcomes than the ones you want. People would stop taking their animals to the vet. People would try to destroy the chip by whatever method they happened to read on Facebook. People would try to maliciously make changes to the database. Etc., etc.
I'm not so sure. The people who snatch cats off the street think that they're doing the cat a favor. They assume the original owners won't even notice, let alone care if the cat goes missing. And they justify it with "Well, they shouldn't have let them out anyway."
The brutal reality is that pounds are overflowing with lost animals. Statistics are on your side that if you snatch any given cat that you see, you'll likely be doing it a favor. But cats with collars are a different story. If people see that they're owned, they should keep their hands off. Unfortunately that doesn't stop some fanatics.
It might be useful. The Lion optimizer uses 1-bit values to represent forward or backward, and NNs can pick up on patterns like that in very strange ways. Of course, those are 1's, not 0's, so maybe the benefit disappears when multiplying by zero. But assumptions like "well, let's just get rid of the negative half of 0" are worth challenging experimentally before you conclude they don't matter. NNs are nothing if not shockingly weird when you try to build them.
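For the curious, here's a rough numpy sketch of the Lion update as I understand it from the paper (function and variable names are mine, not the authors'): the per-weight step direction is literally just a sign, i.e. one bit of information.

```python
import numpy as np

def lion_step(w, g, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion step: the update direction is sign(interpolated momentum),
    so each weight moves a fixed distance either forward or backward."""
    direction = np.sign(beta1 * m + (1 - beta1) * g)  # +1, -1 (or 0 exactly at 0)
    w_new = w - lr * (direction + wd * w)             # decoupled weight decay
    m_new = beta2 * m + (1 - beta2) * g               # momentum EMA
    return w_new, m_new
```

Note that sign() can also emit an exact 0, so it's not quite 1-bit in edge cases, but the point stands: the gradient's magnitude is thrown away entirely and training still works.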
Is it? Stopping a law is a matter of groundswell support: contacting representatives and saying "please don't". If enough people do that with enough receptive reps, they'll vote no.
Passing new ones that "you like" requires lawyers to write the laws, getting those laws in front of reps, getting them to agree to try to pass them, having them stake some of their reputation on pushing them, and building the groundswell to support it all. That last part might be difficult when the current law is "don't scan messages": you can easily say "hey, don't scan anything! Support that!", whereas "hey, scan some things sometimes" will strike many people as a slippery slope. I don't see how they are at all the same process.
Stopping legislation means organizing a sufficient number of no votes.
Passing it means organizing a sufficient number of yes votes.
They are the same process and they require exactly the same work. They take place at the exact same moment in time and space, although they are mutually exclusive.
You're free to describe things however you want, but your descriptions won't change the underlying reality.
> Passing it means organizing a sufficient number of yes votes.
EU Parliament can't propose legislation; it can only vote on proposals from the Commission. We'd have to convince the Commission to propose a law preventing itself from trying to pass this bullshit over and over.
> and actually good and original stuff gets rejected
This seems to be the key part. Are you sure that's true?
In other news, (a) apparently you can now submit URLs with anchors to HN, previously a perennial problem; (b) this submission anchors to a comment that just says "I will try this. Suggestions welcome" with no further context.
Ironically, (b) was exactly why (a) was disallowed for the longest time. Anchors are usually a mistake by the submitter, since whatever's being anchored to usually has a permalink. Except Github. Hello, Github comments.
In the academic circles I frequent, it's not true. Any one journal might reject the good stuff, but it doesn't take more than a few submissions to find a journal that recognizes it, and the cost of producing the research is so high that, with the current career incentives, it'd be ridiculous not to keep submitting. That does mean that journal "quality" matters less than you might think, but I don't think anyone's surprised by that notion either.
Errors in the other direction are more common. I'll state that as an easily verified fact, but people like fun stories, so here's an example:
One professor I worked with had me write up a bunch of case studies of some math technique, tried to convince me that it was worth a paper, paid somebody else to typeset my work, and told me to compensate him if I wanted my name on the "paper." I didn't really want it; the work was beneath any real mathematician. But there now exists some journal with a bastardized, plagiarized version of my work, with some other unrelated author tacked on, available for the world to see [0], and it's worth calling out that nothing about the "paper" is journal-worthy. It's far too easy to find a home for academic slop, and I saw that in every field I spent any serious amount of time in.
No it's fine, it thoroughly amused a HN nerd like me. I've been keeping track of how HN works for well over a decade, and noticing small changes like this is something that's genuinely gratifying. The mods will no doubt be by to clean up the url shortly.
I'm just relieved you can submit anchored URLs now. I once stayed up for a few hours trying to submit some work I made as a github comment only to be disappointed that it would always redirect to the toplevel issue.
It seems they don't test for that, since they use the second-best human solution as a baseline.
And that's the right way to go. When computers were about to become superhuman at chess, few people cared that it could beat random people for many years prior to that. They cared when Kasparov was dethroned.
Remember, the point here is marketing as well as science. And the results speak for themselves. After all, you remember Deep Blue, and not the many runners-up that tried. The only reason you remember is because it beat Kasparov.
> The only reason you remember is because it beat Kasparov
There is an additional fascinating aspect to these matches, in that Kasparov obviously knew he was facing a computer, and decided to play a number of sub-optimal openings because he hoped they might confound the computer's opening book.
It's not at all clear Deep Blue would have eked out the rematch victory had Kasparov respected it as an opponent, in the way he did various human grandmasters at the time.
This is supposed to test for AGI, not ASI. ARC-AGI (later labelled "1") was supposed to detect AGI with a test that is easy for humans, not top humans.
Thanks! The dictionary should be more or less finished in a few months. If you or anyone else might find it helpful for studying Japanese, feel free to use it, copy it, and adapt it however you like.
Sparked a controversial subthread elsewhere here. I don’t think this counts as doxxing, but some people apparently see it that way. It was an entertaining read though.
I’m not sure it’s possible to have different priorities without being stupid or ignorant of history. Once you concede a certain right, such as a right to privacy, you rarely if ever get it back. Most people seem not to care about this, despite ample evidence that it’s something worth caring about. Stupid is the obvious term for it, though obtuse could work as well.
Of course, I don’t blame them. They haven’t lived in a context where they need to care. All of the reasons they’ve heard to care have come from stories of people who lived before them. But ignoring warnings for no good reason is still dumb.
A better thing to engage with is whether we can meaningfully change the situation. It might still be possible, but it requires an effective immune response from everybody on this particular topic. I’m not sure we can, but it’s worth trying to.
> They haven’t lived in a context where they need to care.
You might believe you don't need opsec, and then new laws are passed, or your national supreme court overturns the case that gave you your rights, or someone invades; and suddenly you're wanted for anything from overstaying a visa to outright murder to simply existing.
In the USA, right now, people's lives are being destroyed because the wrong people got their data. Lethal consequences exist in Russia, Ukraine, Israel, Palestine, Lebanon, and Iran.
Certain professions, by definition: journalists, lawyers, intelligence, military.
Certain ethnicities (Jewish, Somali); certain faiths...
It doesn't need to be quite this dramatic, though. You might have accidentally broken some laws and not even know it yet. Caught a fish? Released a fish? Gave the wrong child a bowl of soup [1]. Opened the door; refused to open the door. Signed a register; didn't sign a register. The list of actual examples is endless. The less people know about you, the less they can prosecute you for.
[1] A flaw in the Dutch Asylum Emergency Measures Act (2025) that would have criminalized offering even a bowl of soup to an undocumented person. The Council of State confirmed this reading. A follow-up bill was needed to fix it.
There is no world where a totalitarian government’s law enforcement ambitions on some object-level question are thwarted by the same government’s enforcement of privacy law. Countries with GDPR that are thinking of rounding up and kicking out the refugees know perfectly well who and where the refugees are.
The law is irrelevant in that case, but the actual situation is not. If people have never put their personal information online, a bad government can't get it from online. A new phone that comes out under the bad government and requires you to enter your name and address will not be received as well as one that comes out during good-government times.
I'm making the point that people tend to engage in short-term thinking. The reception of the same law, product, or practice will be colored by the current government as opposed to potential future ones.
You're not entirely wrong; ultimately, if they put enough resources toward it, they can probably catch quite a number of people. But governments have limited resources and really don't track everyone all the time; not even in 2026 are they able to do that. It helps if you maintain some level of opsec. If they really want to get you, they can get close, but see e.g. Ed Snowden, who managed to stay ahead of the US government just long enough to reach relative safety (FSVO).
I have the right to my own senses, my own observations, my own memories. I have the right to photograph what I can see with my eyes, and to write down what I can remember. Unless enjoined by a specific duty of care (doctor/patient, attorney/client, security clearance, etc) I have the right to discuss my memories with others. This obtains even when using electronic tools and even when working in association with others.
I don’t intend to give up or accept limitations on these rights because you consider yourself to have “privacy rights” or ownership interests in my records, my memories, my perceptions, or the reality in front of me. I find the notion of the government or another person interfering in this process, the perception and recollection of reality, to be creepy and totalitarian by itself.
In 1984, it is not only that the government is aware of Winston, but that it routinely tampers with or destroys evidence of the past & demands to control the perception of the present. I do not think we should let a government do that, even for a good reason like “protect your privacy” any more than we should let it destroy general purpose computing “for the children.”
I'm actually fine with that; so long as that is restricted to your own senses, observations, and memories; and doesn't somehow spill over and somehow pertain to mine. Basically the typical freedom to swing your fists ends at the tip of my nose argument. This is probably a solvable problem between reasonable people; give or take.
It can remain legal to operate a security camera while being illegal to upload unencrypted footage to any third party. I'm not worried about individuals, only about big business and the government.
> This obtains even when using electronic tools and even when working in association with others.
I think it is reasonable to place limits on public "speech" (e.g. uploading videos of people) without interfering with private (in the case of electronics, E2EE) communications.
There are many rights people don't have, and they're okay with that; they even support not having the right to stab people, not having the right to steal from a store, not having the right to take nude pictures of children... What if this one is like that?
But all work isn't done by LLMs at the moment, and we can't be sure it ever will be, so the question is ridiculous.
Maybe one day it will be. And then people can reevaluate their stance. Until that time, it's entirely reasonable to hold the position that you just don't.
This is especially true with how LLM generated code may affect licensing and other things. There's a lot of unknowns there and it's entirely reasonable to not want to risk your projects license over some contributions.
I use them all the time at work because, rightly or wrongly, my company has decided that's the direction they want to go.
For open source, I'm not going to make that choice for them. If they explicitly allow for LLM generated code, then I'll use it, but if not I'm not going to assume that the project maintainers are willing to deal with the potential issues it creates.
For my own open source projects, I'm not interested in using LLM generated code. I mostly work on open source projects that I enjoy or in a specific area that I want to learn more about. The fact that it's functional software is great, but is only one of many goals of the project. AI generated code runs counter to all the other goals I have.
Basically all of my actual programming work has been done by LLMs since January. My team actually demoed a PoC last week to hook Codex up to our Slack channel as our first-level on-call: in the case of a defect (e.g. a PagerDuty alert, or a question that suggests something is broken), it goes and debugs, pushes a fix for review, and suggests any mitigations. Prior to that, I had basically pushed for my team to do the same with copy/paste into a prompt so we could iterate on building its debugging skills.
People might still code by hand as a hobby, but I'd be surprised if nearly all professional coding isn't being done by LLMs within the next year or two. It's clear that doing it by hand would mostly be because you enjoy the process. I expect people that are more focused on the output will adopt LLMs for hobby work as well.
I suspect this is more true than most people think. Today's bad code will be cleaned up by tomorrow's agents.
The other factor that gets glossed over is that llms create a financial incentive to create cleaner code, with tests, because the agent that you pay for will be more efficient when the code is easier to understand, and has clear patterns for extensibility. When I code with LLMs, a big part of it is demonstration, i.e. pseudocoding a pattern/structure, asking the model if it understands, and then having it complete the pattern. I've had a lot of success with this approach.
> llms create a financial incentive to create cleaner code, with tests, because the agent that you pay for will be more efficient when the code is easier to understand, and has clear patterns for extensibility
Right, this is the kind of discussion we're having on my team: suddenly all of the already good engineering practices like good observability, clear tests with high coverage, clean design, etc. act as a massive force multiplier and are that much more important. They're also easier to do if you prioritize it. We should be seeing quality go up. It's trivial to explore the solution space with throwaway PoCs, collect real data to drive your design, do all of those "nice to have" cleanups, etc. The people who assume LLM = slop are participating in a bizarre form of cope. Garbage in, garbage out; quality in, quality out. Just accept that coding per se is not going to be a profession for long. Leverage new tools to learn more, do more, etc. This should be an exciting time for programmers.
> It's clear that doing it by hand would mostly be because you enjoy the process.
This will not happen until companies decide to care about quality again. They don't want employees spending time on anything "extra" unless it also makes them significantly more money.
> It's clear that doing it by hand would mostly be because you enjoy the process.
This is gaslighting. We're only a few years into coding agents being a thing. Look at the history of human innovation and tell me that I'm unreasonable for suspecting that there is an iceberg worth of unmitigated externalities lurking beneath the surface that haven't yet been brought to light. In time they might. Like PFAS, ozone holes, global warming.
Ultimately you always have to trust people to be judicious, but that's why it doesn't make any changes itself. Only suggests mitigations (and my team knows what actions are safe, has context for recent changes, etc). It's not entirely a black box though. e.g. I've prompted it to collect and provide a concrete evidence chain (relevant commands+output, code paths) along with competing hypotheses as it works. Same as humans should be doing as they debug (e.g. don't just say "it's this"; paste your evidence as you go and be precise about what you know vs what you believe).
That sounds like the perfect recipe for turning a small problem into a much larger one. On-call is where you want your quality people, not your silicon slop generator.
I say let people hold this stance. We agentic coders can easily fork their project, add whatever features or refinements we want, and use that fork ourselves, while also making it available in case other people want the extra features and polish as well. With AI, it's very easy to form a good architectural understanding of a large codebase and figure out how to modify it in a sane, solid way that matches the existing patterns. It's also very easy to resolve conflicts when you rebase your changes on top of whatever is new upstream. So maintaining a fork is really not that serious of an endeavor anymore. I'm actually maintaining a fork of Zed with several additional features: Claude Code style skills and slash commands; a global agents.md file instead of the annoying rules-library system, which I removed; and the ability to choose models for sub-agents instead of always inheriting the model from the parent thread (and yes, master-branch Zed has subagents!). Plus another tool, jjdag.
That seems like a win-win in a sense: let the agentic coders do their thing, and the artisanal coders do their thing, and we'll see who wins in the long run.
> We, agentic coders, can easily enough fork their project
And this is why eventually you are likely to run the artisanal coders who tend to do most of the true innovation out of the room.
Because, by and large, agentic coders don't contribute; they make their own fork, which nobody else is interested in because it is personalized to them and the code quality is questionable at best.
Eventually, I'm sure LLM code quality will catch up, but the ease with which an existing codebase can be forked and slightly tuned, instead of contributing to the original, is a double edged sword.
"make their own fork which nobody else is interested in because it is personalized to them"
Isn't that literally how open-source works, and why there's so many Linux distros?
Code quality is a subjective term as well. I feel like everyone dunking on AI coding is having a defensive reaction; over time this will become an entirely acceptable concept.
For a human to be able to do any customization, they have to dive into the code and work with it, understand it, gain intuition for it. Engage with the maintainers and community. In the process, there's a good chance that they'll be encouraged to contribute improvements upstream even if they have their own fork.
Vibe coders don't have to do any of this. They don't have to understand anything, they can just have their LLMs do some modifications that are completely opaque to the vibe coder.
Perhaps the long term steady state will be a goldilocks renaissance of open source where lots of new ideas and contributors spring up, made capable with AI assistance. But so far what I've seen is the opposite. These people just feed existing work into their LLMs, produce derivative works and never bother to engage with the original authors or community.
> Vibe coders don't have to do any of this. They don't have to understand anything, they can just have their LLMs do some modifications that are completely opaque to the vibe coder.
I spend time using my agent to better understand existing codebases and their best practices than I'd ever have the time/energy to do before, giving me a broader and more holistic view on whatever I'm changing, before I make a change.
Well, I would argue that if I didn't spend that time, then even a personal fork that I vibe coded would be worse, even for me personally. It would be incompatible with upstream changes, more likely to crash or have bugs, more difficult to modify in the future (and cause drift in the model's own output) etc.
I always find it odd that people claim both that vibe coding has obvious and immediate negative consequences for quality and, at the same time, that nobody could learn, or be incentivized, to produce better architecture and code quality from vibe coding, when they would obviously face those consequences.
I think that in the long run, AI assisted coding will turn out to be better than handcrafted code. When you pay for every token, and code generation is quick, a clean, low entropy codebase with good test coverage gets you a lot more for your dollar than a dog's breakfast. It's also much easier to fix bad decisions made early on in a project's life, because the machine is doing all of the heavy lifting.
This also lines up with the history of automation in many other industries. Modern manufacturing is capable of producing parts that a medieval blacksmith couldn't dream of, for example. Sure, maybe an artisan can produce better code than an llm now, but AI assisted humans will beat them in the near future if they aren't already producing similar quality output at greater speed, and tomorrow's models will fix the bad code written today. The fact that there's even a discussion on automated vs hand written today means that the writing is almost certainly on the wall.
Most "artisanal" coders that are complaining are working on the n-1000th text editor, todo list manager, toy programming language or web framework that nobody needs, not doing "true innovation".
I mean, I do open PRs for most of my changes upstream if they allow AI, once I've been using the feature for a few weeks and have fixed the bugs and gone over the code a few times to make sure it's good quality. Also, I'm going to be using the damn thing, I don't want it to be constantly broken either, and I don't want the code to get hacky and thus incompatible with upstream or cause the LLMs to drift, so I usually spend a good amount of time making sure the code is high quality — integrates with the existing architecture and model of the world in the code, follows best practices, covers edge cases, has tests, is easy to read so that I can review it easily.
But if a project bans AI then yeah, they'll be run out of town because I won't bother trying to contribute.
>> but also make it available for others in case other people want to use it for the extra features and polish as well.
This feels like the place where your approach breaks down. I've had very poor results trying to build a foundation that CAN be polished, or where features don't quickly start to feel like a Jenga tower. I'm wondering whether the success we've seen is because AI is building on top of existing foundations, or whether we're still in the early days of AI doing "foundational" work. Is anyone aware of studies comparing longer-term structural aspects? Or is it too early?
I've been able to make very clear, modular, well put together architectural foundations for my greenfield projects with AI. We don't have studies, of course, so it is only your anecdote versus mine.
> We, agentic coders, can easily enough fork their project and add whatever the features
Bold of you to assume that people won’t move (and their code along with it) to spaces where parasitic behaviour like this doesn’t occur, locking you out.
In addition to being a straight-up rude, disrespectful, and parasitic position to take, you're effectively poisoning your own well.
Since when is maintaining a personal patch set / fork parasitic? And in what way does it harm them, such that they should move to spaces where it doesn't happen? Also, isn't the entire point of open source precisely to enable people to make and use modifications of the code if they want, even if they don't want to hand the code back? Moving to closed spaces would essentially make the code closed source. Do you think OSS is just going to die completely? Or would people make alternative projects? Additionally, this assumes coders who are fine with AI can't make anything new themselves, when if anything we've seen the opposite (see the phenomenon of reimplementing other projects that's been going around).
Additionally, if they accept AI contributions, I try, when I have the time and energy, to make sure my PRs are high quality, and I provide them. If they don't, then I'll go off and do my own thing, because that's literally what they asked me to do, and I wasn't going to contribute otherwise. I fail to see how that's rude, parasitic, or disrespectful in any way, except for my assumption that the more featureful and polished forks might eventually win out.
It's only parasitic if you are tricking users into thinking you are the original or providing something better. You could be providing something different (which would be valuable), but if you are not, you are just scamming users for your own benefit.
I have no intention of tricking anyone into thinking I'm the original! I do think I offer improvements in some cases, so when the project is something I intend for other people to ever see or use, I explain why I think mine is better, but I'll also always put the original prominently to make sure people can find their way back to it if they want. For example, the only time I've done this so far:
> just like almost all transportation is done today via cars instead of horses.
That sounds very Usanian. Meanwhile, transportation around me happens on foot and by bicycle, bus, tram, metro, train, and car. There are good use cases for each method, including the car. If you really want to use an automotive analogy, then sure, LLMs can be like cars. I've seen cities made for cars instead of humans, and they are a horrible place to live.
Signed, a person who totally gets good results from coding with LLMs. Sometimes, maybe even often.
As someone who enjoys working with AI tools, I honestly think the best approach here might be bifurcation.
Start new projects using LLM tools, or fork projects where that is acceptable. Don't force the volunteer maintainers of existing projects, with their existing workflows and cultures, to review AI-generated code. Instead, create your own projects with workflows and cultures that support this from the ground up.
I'm not suggesting this will come without downsides, but it seems better to me than expecting maintainers to take on a new burden that they really didn't sign up for.
That would only work in a world where the copyright and other IP uncertainties around the output (and training!) of LLMs were a solved and known question. That's not the world we currently live in.
The ruling capital class has decided that it is in their best interest for copyright to not be an obstacle, so it will not be. It is delusional to pretend that there is even a legal question here, because America is no longer a country of laws, to the extent that it ever was. I would bet you at odds of 10,000 to 1 that there will never be any significant intellectual property obstacles to the progress of generative AI. They might need to pay some fines here and there, but never anything that actually threatens their businesses in the slightest.
There clearly should be, but that is not the world we live in.
Even if this were true, or someday will be (big IF), is it worth looking for valid counter-workflows? For example: in many parts of the US and Canada, the Mennonites are incredibly productive farmers and massive adopters of technology, while also keeping very strict limits on where, how, and when it is used. If we had the same motivations and discipline in software, could we walk a line that both benefited from and controlled AI? I don't know the answer.