Hacker News | past | comments | ask | show | jobs | submit | IgorPartola's comments

The problem is that the difference between a low tech and a high tech diesel tractor is mostly emissions and some loss of efficiency. The difference between a low tech and a high tech electric car is a 25 mile range versus a 250 mile range, a top speed of 35 mph versus 100 mph, carrying capacity, and so on.

I recently did a lawn tractor conversion from gas to electric and what I got was in my opinion significantly better and more reliable than a commercial option at 20% of the price, but it is limited to 4 mph. Scaling it to 5 mph would require a lot of custom fabrication and a much more expensive drive motor. Once this tech is significantly better and cheaper, to the point of being a commodity, it will be a different story. For now it just isn’t.


What a way to ruin goodwill with the very community they are trying to court. I am on a Pro subscription to use with Claude Code, but it sounds like the days of using it are numbered. I guess I will be trying the latest offerings from OpenAI and Google tomorrow, and if they are satisfactory I might just switch. Moreover, I have been recommending Anthropic's API solutions up to now to friends and clients. Based on this dumb move I will now be starting with this anecdote and then giving a very hedged recommendation.

Realistically the future of all this is that open models become good enough that LLM as a service becomes a commodity with a race to the bottom in terms of cost. Given where we are today I can easily see open weight models in 2-3 years making Anthropic and OpenAI irrelevant for everyday development work (I justify this like so: if my coding agent is 10x smarter than I am, how would I understand if it did all the right things? I want someone of roughly my intelligence for coding. I can see use cases for like independent pharma work or some such where supergenius level intelligence is justified, but for coding ability for mere mortals to reason about the code is probably more important).


> the very community they are trying to court

After all, we may be just a data source and not their intended demographic all along.


The valuation is obviously based on the premise of their capturing the white collar economy. OpenAI's charter says so openly. And Chinese robots will come for blue collar workers next.

The economy, not the workers :) It feels like pretty soon white collar workers will be in a “You have nothing to lose but your chains” situation. Except we are not as fit as the proletariat of the past.

In my experience, Codex is better than Claude Code in every way and GPT-5.4 is on par or better than Opus 4.6 at every coding task I ask of it.

You're really not going to miss CC. And OpenAI actually had some foresight to invest massively in compute so they don't run into usage and rate limits like Anthropic does constantly. I couldn't even use CC for more than a couple complex tasks before I was out of extra usage or session usage. It was a maddening productivity killer and I just switched to Codex full time.


> I guess I will be trying the latest offering from OpenAI and Google tomorrow and if they are satisfactory I might just switch.

If Anthropic’s move is confirmed, my guess is other coding agent providers might end up making similar moves.


This is the definition of a cartel.

GPT xhigh isn't that bad.

Kimi K2.6 is supposedly good: https://www.kimi.com/blog/kimi-k2-6

I am on Google's $20/month plan, and I usually get about three half-hour coding sessions a week with AntiGravity using the Claude models. The limit using Gemini Pro models is much higher. I am retired so Google's $20 plan is sufficient for me, but I understand that people who are still working would need higher limits.

I am also on a $10/month plan with Nous Research for supplying open models for their open source Hermes Agent. I run Hermes inside a container, on a dedicated VPS as a coding agent for complex tasks and so far I find the $10/month plan is enough for about five to ten major tasks a month. I think it is also a good deal.


If I could get the equivalent of GPT-4 running locally, that would cover like 95% of what I need an LLM for. Tweak this dockerfile, gimme a bash script. I guess the context probably isn’t sufficient for the agent stuff, but I’m sure more context-efficient harnesses will be coming down the line.

I have an old Mac Mini with 32G of integrated RAM, and the following works for me for small local code changes:

  ollama launch claude --model qwen3.6:35b-a3b-nvfp4

In addition to not having an integrated web search tool, one drawback is that it runs more slowly than using cloud servers. I find myself asking for a code or documentation change, and then spending two minutes on my deck getting fresh air waiting for a slower response. When using a fast cloud service I can be a coding slave, glued to my computer. Still, I like running local when I can!


GPT-5.4 has been performing great in my harness.

I have Codex and Gemini for spillover; they work well.

A self driving car should have no steering wheel. If it has a steering wheel it is a vote of no confidence from the manufacturer.

I don't really buy that. There are a lot of situations (e.g. being directed to park in a space at a fairgrounds, ski area, or whatever) that, as far as I know, you can't reasonably expect to be programmed into a car's computer. Even if a car can legitimately handle roads under most circumstances, it's not going to be able to handle everything.

"Because the Origin does not have manual controls, the NHTSA must issue an exception to the Federal Motor Vehicle Safety Standards to permit operation on public roads"

Too bad that project failed.

https://en.wikipedia.org/wiki/Cruise_(autonomous_vehicle)


I think their point was "it's not ready yet."

Throttle and yoke aren't a vote of no confidence from aircraft manufacturers. Some modes of operation are suitable for autopilot and some are not.

There is a reason that pilots basically get taught the ins and outs of a specific plane. Imagine the outrage if people needed to do month-long training for a specific car just to be able to drive it (and not just a general "here is how cars roughly work, plus the laws of the road").

Would it be a vote of no confidence in Full Self Flying?

No, it would be an acknowledgement of the lack of perfection in human systems so far.

I mean, they kinda are.

Airline pilots aren't supposed to take a nap, and there are occasionally articles about the various things that have gone wrong because the pilots weren't paying attention.


That presents an interesting failure mode challenge.

Well we don't have any self driving cars outside of San Francisco. Only cars with advanced driver assistance.


Also in Vegas (Zoox), and China has their own competitive market of self-driving taxis.

How do you reverse such a car into your own driveway that's positioned in a funny way at an angle and an incline? What if you're parking off road for any reason? Like, you have to be able to manoeuvre your own vehicle sometimes.

The simpler explanation is that an industry insider who can publish a piece saying “helium shortage will mean the end of chip making as we know it” gets a lot more views and clicks than one who publishes “chip making will get mildly more expensive because one of the key ingredients will need to be sourced from farther away or from more expensive suppliers”. There is always an angle, whether it is clout, pumping the market, selling you something, etc., and when you are not an industry insider there is little you can do to work out where else the particular ingredient could be bought from, so it sounds plausible.

Yeah I am worried about skill atrophy too. Everyone uses a compiler these days instead of writing assembly. Like who the heck is going to do all the work when people forget how to use the low level tools and a compiler has a bug or something?

And don’t get me started on memory management. Nobody even knows how to use malloc(), let alone brk()/mmap(). Everything is relying on automatic memory management.

I mean when was the last time you actually used your magnetized needle? I know I am pretty rusty with mine.


> an LLM is exactly like a compiler if a compiler was a black box hosted in a proprietary cloud and metered per symbol

Yeah, exactly.


Snark aside, this is an actual problem for a lot of developers to varying degrees: not understanding anything about the layers below makes for terrible layers above in very many situations.

It's called Operation Epic Fuckup for a reason.


Operation Epic Fail.


I used to use rebase much more than merge but have grown to be more nuanced over the years:

Merge commits from main into a feature branch are totally fine and easier to do than rebasing. After your feature branch is complete you can do one final main-to-feature-branch merge and then merge the feature branch into main with a squash commit.

When updating any branch from remote, I always do a pull rebase to avoid merge commits from a simple pull. This works well 99.99% of the time since what I have changed vs what the remote has changed is obvious to me.
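As a toy sketch of what that pull rebase buys you (repo layout, names, and commit messages are all invented for illustration; assumes git >= 2.28 for `init -b`):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q --bare -b main origin.git

# my clone, with the initial commit pushed
git clone -q origin.git me 2>/dev/null
cd me
git checkout -qB main
git config user.email me@example.com; git config user.name me
echo base > base.txt; git add .; git commit -qm "initial"
git push -q origin main

# a teammate pushes their own commit in the meantime
cd "$tmp"; git clone -q origin.git them
cd them
git config user.email them@example.com; git config user.name them
echo theirs > theirs.txt; git add .; git commit -qm "remote work"
git push -q

# I commit locally, then pull with rebase: linear history, no merge commit
cd "$tmp/me"
echo mine > mine.txt; git add .; git commit -qm "local work"
git pull -q --rebase origin main
git log --format=%s    # local work, remote work, initial
```

Setting `git config pull.rebase true` in a repo makes this the default behavior for plain `git pull`.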

When I work on a project with a dev branch I treat feature branches as coming off dev instead of main. In this case I merge dev into feature branches, then merge feature branches into dev via a squash commit, and then merge main into dev and dev into main as the final step. This way I have a few merge commits on dev and main but only when there is something like an emergency fix that happens on main.
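The whole main-based dance above can be sketched end to end in a throwaway repo (branch names and messages are invented for illustration; assumes git >= 2.28 for `init -b`):

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q -b main
git config user.email demo@example.com; git config user.name demo
echo base > base.txt; git add .; git commit -qm "initial"

git checkout -qb feature                  # branch off main
echo feature > feature.txt; git add .; git commit -qm "WIP: feature work"

git checkout -q main                      # meanwhile, main moves on
echo hotfix > hotfix.txt; git add .; git commit -qm "hotfix on main"

# one final main-to-feature merge; any conflicts get resolved on the branch
git checkout -q feature
git merge -q --no-edit main

# then land the branch on main as a single squash commit
git checkout -q main
git merge -q --squash feature
git commit -qm "feature: one clean commit"
git log --format=%s    # feature: one clean commit, hotfix on main, initial
```

The merge commit lives only on the feature branch; main itself stays free of merge commits.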

The problem with always using a rebase is that you have to reconcile conflicts at every commit along the way instead of just the final result. That can be a lot more work for commits that will never actually be used to run the code and can in fact mess up your history. Think of it like this:

1. You create branch foo off main.

2. You make an emergency commit to main called X.

3. You create commits A, B, and C on foo to do your feature work. The feature is now complete.

4. You rebase foo off main and have to resolve the conflict introduced by X happening before A. Let’s say it conflicts with all three of your commits (A, B, and C).

5. You can now merge foo into main as a fast-forward.

Notice that at no point will you want to run the codebase such that it has commits XA or XAB. You only want to run it as XABC. In fact you won’t even test if your code works in the state XA or XAB so there is little point in having those checkpoints. You care about three states: main before any of this happened since it was deployed like that, main + X since it was deployed like that, and main with XABC since you added a feature. git blame is really the only time you will ever possibly look at commits A and B individually and even then the utility of it is so limited it isn’t worth it.
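The X/A/B/C scenario can be reproduced in a toy repo; here each commit touches its own file so the rebase replays cleanly without conflicts (assumes git >= 2.28 for `init -b`):

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q -b main
git config user.email demo@example.com; git config user.name demo
echo base > base.txt; git add .; git commit -qm "initial"

git branch foo                                   # 1. branch foo off main
echo X > X.txt; git add .; git commit -qm "X"    # 2. emergency commit on main

git checkout -q foo                              # 3. feature commits A, B, C
for c in A B C; do echo "$c" > "$c.txt"; git add .; git commit -qm "$c"; done

git rebase -q main                               # 4. replay A, B, C on top of X

git checkout -q main                             # 5. fast-forward only
git merge -q --ff-only foo
git log --format=%s    # C, B, A, X, initial
```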

The reality is that if you only want fast forward commits, chances are you are doing very little to go back and extract code out of old versions of the codebase. You can tell this by asking yourself: “if I deleted all my git history from main and had just the current state + feature branches off it, would anything bad happen to my production system?” If not, you are not really using most of what git can do (which is a good thing).


I am now wholly bought into the idea that having a feature branch with (A->B->C) commits is an anti-pattern.

Instead, if the feature doesn't work without the full chain of A+B+C, then either the code introduced in A and B is reachable only by tests until C ties it in; or (preferably, for a feature of any significance) A introduces a feature flag that disables it, and a subsequent commit D removes the feature flag after it is turned on, at a time separate from merge and deploy.


I treat each feature branch as my own personal playground. There should be zero reason for anyone to ever look at it. Sometimes they aren’t even pushed upstream. Otherwise, just work on main with linear history and feature flags and avoid all this complexity that way.

Just like you don’t expect someone else’s local codebase to always be in a fully working state since they are actively working on it, why do you expect their working branch to be in a working state?


I think you're somewhat missing the point - if the code from A and B only works if joined with C, then you should squash them all into one commit so that they can't be separated. If you do that then the problem you're describing goes away since you'll only be rebasing a single commit anyway.

Whether this is valuable is up to you, but IMO I'd say it's better practice than not. People do dumb things with the history and it's harder to do dumb things if the commits are self-contained. Additionally if a feature branch includes multiple commits + merges I'd much rather they squash that into a single commit (or a couple logical commits) instead of keeping what's likely a mess of a history anyway.


That is literally what I advocate you do for the main branch. A feature branch is allowed to have WIP commits that make sense for the developer working on the branch just like uncommitted code might not be self contained because it is WIP. Once the feature is complete, squash it into one commit and merge it into main. There is very little value to those WIP commits (rare case being when you implement algorithm X but then change to Y and later want to experiment with X again).


One downside of squash merging is that when you need to split your work across branches so that they're different PRs, but one depends on the other, you have to rebase each dependent branch every time one of its dependencies is merged.


When that happens I essentially pick one of the branches as the trunk for that feature and squash merge into that, test it, then merge a clean history into main.


Let’s see if I get this wrong after 25 years of git:

ours means what is in my local codebase.

theirs means what is being merged into my local codebase.

I find it best to avoid merge conflicts rather than to try to resolve them. Strategies that keep branches short-lived, combined with frequently merging main into them, help a lot.


That's kind of the simplest case, though, where "theirs" and "ours" makes obvious sense.

What if I'm rebasing a branch onto another? Is "ours" the branch being rebased, or the other one? Or if I'm applying a stash?


"Ours" and "theirs" make sense in most cases (since "ours" refers to the HEAD you're merging into).

Rebases are the sole exception (in typical use) because ours/theirs is reversed, since you're merging HEAD into the other branch. Personally, I prefer merge commits over rebases when possible; rebases make PRs harder for others to review by breaking the "see changes since last review" feature. Git generally works better without rebases and squash commits.
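A tiny toy repro of that reversal (names invented; assumes git >= 2.28 for `init -b`). During the rebase, `--theirs` is, counterintuitively, your own branch's side of the conflict:

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q -b main
git config user.email demo@example.com; git config user.name demo
echo base > f; git add f; git commit -qm "initial"

git checkout -qb topic
echo topic > f; git commit -qam "topic change"
git checkout -q main
echo main > f; git commit -qam "main change"

# rebase topic onto main: both sides edited f, so this stops on a conflict
git checkout -q topic
if ! git rebase main >/dev/null 2>&1; then
  # ours/theirs are reversed: --theirs picks the topic branch's version
  git checkout --theirs f
  git add f
  GIT_EDITOR=true git rebase --continue >/dev/null 2>&1
fi
cat f    # topic
```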


Wow, interesting to see such a diametrically opposed view. We’ve banned merge commits internally and our entire workflow is rebase driven. Generally, I find that rebases are far better at keeping Git history clean and clearly allowing you to see the diff between the base you’re merging into and the changes you’ve made.


"Clean" is not the same as "useful". You have to be really, really disciplined to not make a superficially looking "clean" history which may appear linear but which is actually total nonsense.

For example, if one is frequently doing "fix after rebase" commits, then they are doing it wrong and are making a history which is much less useful than a seemingly more complicated merge based history. Rebased histories are only clean if they also tell a true story after the rebase, but if you push "rebase fixes" onto the end of your history, then it means that prior rebased commits no longer make any sense because they e.g. use APIs that aren't actually there. Giving up and squashing everything to one commit is almost better in this case because it at least won't throw off someone who is trying to make sense of the history in the future.

I think that rebasing has won over merges mostly because the tools for navigating git histories suck SO HARD. I have used Perforce at a previous job, and their graphical tools for navigating a merge based history are excellent and were really useful for doing code archeology.


Generally our pattern is that every PR gets rebased into sensible commits. So in a way we are doing "squash commits" but the method is an interactive rebase. This keeps our history very pretty and clean, and simultaneously easy to grok and navigate.

My favorite git GUI is Sublime Merge.


Yes, I prefer that approach as well because it allows the person who authored the change to do all the work of deciding how to resolve conflicts up front (and allows reviewers to review that conflict resolution) instead of forcing whoever eventually does the merge to figure everything out after the fact. It also removes conflicts from the history so you never have to think about them later after the rebase/merge process is finished.


> Git generally works better without rebases and squash commits.

If squash commits make Git harder for you, that's a tell that your branches are trying to do too many things before merging back into main.


I don't know. Even when I'm working on my own private repositories across several machines, I really, really dislike regular merges. You get an ugly commit message and I can never get git log to show me the information I actually want to see.

For me, rebasing is the simplest and easiest to understand, and it allows you to squash some of your commits so that it's one commit per feature / bug-fix / logical unit of work. I'll also frequently rebase and squash commits in my work branch: where I've temporarily committed something and then fixed a bug before it's been pushed into main, I'll just reorder and squash the relevant commits into one.


I completely agree. Since adopting rebase our history looks fantastic, and it makes finding things, cherry-picking, and generating changelogs really simple. Why not be neat? It has cost us nothing, and you can have Claude make you a tutorial if you don't understand rebasing.


Don't do squash commits, just rebase -i your branch before merging so you only have one commit. It's pretty trivial to do.
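The interactive step can even be scripted; this toy repo (invented names, GNU sed assumed for `sed -i`, git >= 2.28 for `init -b`) folds the second WIP commit into the first non-interactively:

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q -b main
git config user.email demo@example.com; git config user.name demo
echo base > f; git add f; git commit -qm "initial"

echo one >> f; git commit -qam "WIP 1"
echo two >> f; git commit -qam "WIP 2"

# same effect as hand-editing the rebase todo list: turn the second
# "pick" into "squash" so WIP 2 folds into WIP 1
GIT_SEQUENCE_EDITOR='sed -i "2s/^pick/squash/"' GIT_EDITOR=true \
  git rebase -i HEAD~2 >/dev/null 2>&1
git log --format=%s    # WIP 1, initial
```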


> What if I'm rebasing a branch onto another?

Just checkout the branch you are merging/rebasing into before doing it.

> Or if I'm applying a stash?

The stash is in that case effectively a remote branch you are merging into your local codebase. ours is your local, theirs is the stash.
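A quick toy check of that claim (invented names; assumes git >= 2.28 for `init -b`): when a popped stash conflicts, `--theirs` resolves to the stashed version:

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q -b main
git config user.email demo@example.com; git config user.name demo
echo base > f; git add f; git commit -qm "initial"

echo stashed > f; git stash -q            # park one version of f
echo local > f; git commit -qam "local change"

# popping now conflicts; theirs = the stash, ours = the local codebase
if ! git stash pop >/dev/null 2>&1; then
  git checkout --theirs f
fi
cat f    # stashed
```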


The thing is, you'll typically switch to master to merge your own branch. This makes your own branch 'theirs', which is where the confusion comes from.


Not me. I typically merge main onto a feature branch where all the conflicts are resolved in a sane way. Then I checkout main and merge the feature branch into it with no conflicts.

As a bonus I can then also merge the feature branch into main as a squash commit, ditching the history of a feature branch for one large commit that implements the feature. There is no point in having half implemented and/or buggy commits from the feature branch clogging up my main history. Nobody should ever need to revert main to that state and if I really really need to look at that particular code commit I can still find it in the feature branch history.


Yep. This is the only model that has worked well for me for more than a decade.


This is what I do, and I was taught by an experienced Git user over a decade ago. I've been doing it ever since. All my merges into main are fast forwards.


> ours means what is in my local codebase

Since it's always one person doing a merge, why isn't it "mine" instead of "ours"? There aren't five of us at my computer collaboratively merging in a PR. There is one person doing it.

"Ours" makes it sound like some branch everyone who's working on the repo already has access to, not the active branch on my machine.


That's between you and git.


a better (more confusing) example:

i have a branch and i want to merge that branch into main.

is ours the branch and main theirs? or is ours main, and the branch theirs?


I always checkout the branch I am merging something into. I was vaguely aware I could have main checked out but merge foo into bar but have never once done that.


  git checkout mybranch
  git rebase main
A conflict happens. Now "ours" is main and "theirs" is mybranch, even though from your perspective you're still on mybranch. Git isn't, however.


Ah that’s fair. This is why I would do a `git merge main` instead of a rebase here.


I have met more than one person who would doggedly tolerate rebase, not even using rerere, instead of doing a simple ‘git merge --no-ff’ to one-shot it, not understanding that rebase touches every commit in the diff between main and HEAD, not simply the latest change on HEAD.

Not a problem if you are a purist on linear history.


> not understanding that rebase touches every commit in the diff

it sounds like that's a problem for you. why would that be? i prefer rebase and fast forward, but i am fully aware that rebase rewrites all commits.


> Let’s see if I get this wrong after 25 years of git

You used it 5 years before Linus? Impressive!


Haha yes. You caught me :)

I was wondering when someone was going to point it out. I actually have only been using it since about 2009 after a brief flirtation with SVN and a horrible breakup with CVS.


Exactly this. But my question here is also: is there not a competitive advantage to a big enterprise that applies standards in a more intelligent way? You have a SaaS; I have a Fortune 500 company that could use your product, but I cannot use it because my procurement process is as long and winding as the Road to Hana. In the meantime my competitor has a smarter procurement process that takes into account the impact and risk involved in renting your software. Don’t they get a competitive advantage over me by having a better process and, as a result, getting better vendors?


Unfortunately in most cases the buyers have way more liability/risk using a small vendor than opportunity. Often this is coming from regulators in certain industries.

In scenarios where the company REALLY REALLY wants to buy the SaaS, they often will invest in the company, one of the reasons for which being to ensure they have the resources to go through all the red tape.


Or that API prices are inflated. We don’t get to see what their internal financials look like. My guess is that your guess is more correct but it is unclear what is actually happening.

