You can find the WARN notices here. [0]
From a quick script I wrote up, it seems that ~60% of those laid off on 1/26/2026 are individual contributors.
One transaction per minute is nothing at all when the transaction can be as simple as "did the person put that back on the shelf" with a 5-second clip.
Hmm, yes it does? Just because it's not a complex action doesn't mean it's necessarily supported by the models.
It's not hard to imagine edge scenarios the models aren't trained for, like a customer dropping an item, or putting an item back on a random shelf instead of the one it belongs on, or someone picking up that previously misplaced item, etc.
That's just a big assumption on your part, when the more reasonable conclusion was that it wasn't working and it wasn't a 5-second thing (hence why receipts were taking so long, etc.).
$60k/yr still seems like a good deal for the productivity multiplier you get on an experienced engineer costing several times that. Actually, I'm fairly certain that some optimizations I had Codex do this week would already pay for that by letting us scale down pod resource requirements, and that's just from me telling it to profile our code and find high-ROI things to fix, taking only part of my focus away from planned work.
Another data point: I gave Codex a two-sentence description (intentionally vague and actually slightly misleading) of a problem that another engineer spent ~1 week root-causing a couple months ago, and it found the bug in 3.5 minutes.
These things were hot garbage right up until the second they weren't. Suddenly, they are immensely useful. That said, I doubt my usage costs OpenAI anywhere near that much.
Wildly different experience of frontier models than I have; what's your problem domain? I had both Opus and Gemini Pro outright fail at implementing a dead-simple floating-point image transformation the other day, because neither could keep track of when things were floats and when they were uint8.
Low-level networking in some cloud applications. Using gpt-5.2-codex medium.

I've cloned like 25 of our repos on my computer for my team + nearby teams, and worked with it for a day or so coming up with an architecture diagram annotated with what services/components live in what repos and how things interact from our team's perspective (so our services + services that directly interact with us). It's great because we ended up with a mermaid diagram that's legible to me, but it's also a great format for it to use. Then I've found it does quite well at being able to look across repos to solve issues.

It also made reference docs for all available debug endpoints, metrics, etc. I told it where our Prometheus server is, and it knows how to do PromQL queries on its own. When given a problem, it knows how to run debug commands on different servers via ssh or inspect our Kubernetes cluster on its own. I also had it make a shell script to go figure out which servers/pods are involved for a particular client and go check all of their debug endpoints for information (which it can then interpret). Huge time saver for debugging.
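The PromQL-over-HTTP part of that workflow is just Prometheus' instant-query API. A minimal sketch, with a stub handler standing in for a real Prometheus server (the address, metric, and pod names here are all made up):

```python
import json
import threading
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class FakePrometheus(BaseHTTPRequestHandler):
    """Stub that mimics Prometheus' /api/v1/query response shape."""
    def do_GET(self):
        if urllib.parse.urlparse(self.path).path == "/api/v1/query":
            body = json.dumps({
                "status": "success",
                "data": {"resultType": "vector", "result": [
                    {"metric": {"pod": "svc-a-0"}, "value": [1700000000, "42"]},
                ]},
            }).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

def instant_query(base_url: str, promql: str) -> list:
    """Run a PromQL instant query via the HTTP API; return the result vector."""
    url = f"{base_url}/api/v1/query?" + urllib.parse.urlencode({"query": promql})
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    if payload["status"] != "success":
        raise RuntimeError(f"query failed: {payload}")
    return payload["data"]["result"]

server = HTTPServer(("127.0.0.1", 0), FakePrometheus)
threading.Thread(target=server.serve_forever, daemon=True).start()

result = instant_query(f"http://127.0.0.1:{server.server_port}",
                       'sum(rate(http_requests_total[5m])) by (pod)')
for sample in result:
    print(sample["metric"]["pod"], "=>", sample["value"][1])
server.shutdown()
```

Against a real server you'd swap the base URL for the actual Prometheus address; the response parsing is the same.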
I'm surprised it can't keep track of float vs uint8. Mine knew to look at things like struct alignment or places where we had slices (Go) on structures that could be arrays (so unnecessary boxing), in addition to things like timer reuse, object pooling/reuse, places where local variables were escaping to heap (and I never even gave it the compiler escape analysis!), etc. After letting it have a go with the profiler for a couple rounds, it eventually concluded that we were dominated by syscalls and crypto related operations, so not much more could be microoptimized.
I've only been using this thing since right before Christmas, and I feel like I'm still at a fraction of what it can do once you start teaching it about the specifics of your workplace's setup. Even that I've started to kind of automate by just cloning all of our infra teams' repos too. Stuff I have no idea about, it can understand just fine. Any time there's something that requires more than a super pedestrian application programmer's knowledge of k8s, I just say "I don't really understand k8s. Go look at our deployment and go look at these guys' terraform repo to see all of what we're doing" and it tells me what I'm trying to figure out.
Yeah, wild. I don't really know how to bridge the gap here, because I've recently been continuously disappointed by AI. Gemini Pro wasn't even able to solve a compiler error the other day, and the solutions it was suggesting were insane (manually migrating the entire codebase) when the actual fix was like a 0.0.xx compiler version bump. I still like AI a lot for function-scale autocomplete, but I've almost stopped using agents entirely because they almost universally produce more work for me and make the job less fun: I have to do so much handholding for them to make good architectural decisions, and I still feel like I end up on shaky foundations most of the time.

I'm mostly working on physics simulation and image processing right now. My suspicion is that there's just so many orders of magnitude more cloud-app plumbing code out there that the capability is really unevenly distributed. Similarly, with my image processing stuff, my suspicion is that almost all the code it's trained on works in 8-bit and it's just not able to get past its biases and stop itself from randomly dividing things that are already floats by 255.
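The failure mode described (re-dividing already-float data by 255) is easy to reproduce in NumPy; `to_float01` here is a hypothetical dtype-aware guard, not anyone's actual code:

```python
import numpy as np

def to_float01(img: np.ndarray) -> np.ndarray:
    """Hypothetical guard: scale to [0, 1] only when the input is actually 8-bit."""
    if img.dtype == np.uint8:
        return img.astype(np.float32) / 255.0
    return img.astype(np.float32)

u8 = np.array([0, 128, 255], dtype=np.uint8)
f32 = to_float01(u8)
print(float(f32.max()))              # 1.0: scaled exactly once

# The bug class: dividing data that is *already* float by 255 again.
twice = f32 / 255.0
print(float(twice.max()))            # ~0.0039, image silently crushed toward black

# The dtype-aware helper is idempotent, so it can't make this mistake.
print(float(to_float01(f32).max()))  # still 1.0
```

Because the double division produces no error, just a nearly-black image, it's exactly the kind of bug that slips through when a model loses track of dtypes.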
They're playing stupid semantic games in order to claim there's no selling fees while still having selling fees. The fees were ostensibly shifted onto the buyer, except they're bundled into the sale price and cut from what the seller receives, so in effect nothing actually changed.
Before: Buyer pays £100, seller receives £100, seller later charged £5 fee, ends up with £95.
After: Buyer pays £100, eBay pockets £5 "buyer protection fee", seller receives £95 with "no fees".
Except you can price higher to include that cut, and the buyer protection fee is a lower percentage than the sales fee was (between 2% and 7%, vs. I think 11% before).
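To make the "price higher to include that cut" point concrete, a quick sketch; the 11% is the comment's rough old-fee figure, and 4% is an assumed point inside the quoted 2-7% range:

```python
def seller_net(list_price: float, fee_rate: float) -> float:
    """What the seller actually receives once the fee is cut from the sale price."""
    return list_price * (1 - fee_rate)

def list_price_for_net(target_net: float, fee_rate: float) -> float:
    """Price to list at so the seller still nets `target_net` after the fee."""
    return target_net / (1 - fee_rate)

OLD_FEE = 0.11  # rough old final value fee ("I think 11%")
NEW_FEE = 0.04  # assumed buyer protection fee within the quoted 2%-7% range

print(round(seller_net(100, OLD_FEE), 2))           # 89.0 under the old fee
print(round(seller_net(100, NEW_FEE), 2))           # 96.0 under the new one
print(round(list_price_for_net(100, NEW_FEE), 2))   # list at ~104.17 to net 100
```

So even though the fee still comes out of the sale, the lower rate plus the ability to pad the list price means the seller can come out ahead of the old regime.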
Only in the UK, and only for "private sellers". eBay is losing a lot of market share in the UK, so they've taken drastic measures to try to get people listing again.
Makes sense. In the UK, their fees plus the encouraged (formerly mandatory as an option) PayPal payment method steal a very significant chunk of the purchase price from sellers.
In the last 5 years I've won multiple auctions for not-really-worth-shipping things like bikes, paid via PayPal, then had the sellers contact me to say the fees are too high, cancel the auction, and deal separately in cash.
For anything that you're picking up in person anyway, very little reason to use ebay vs. FB marketplace.
> You won't pay final value fees or regulatory operating fees
Of course, they will likely find some other way to extract their fees.
It would be nice, however, if the final value fee went away for US non-professional sellers.
There seems to be no indication (at least on the page you linked) of how they define "private seller", which also opens up the possibility of them defining it so narrowly that, say, only five UK residents ever qualify.
Even pretty small ones do. My wife has worked at Amazon as an account manager for both 1P and 3P sellers; some of those don't even make $100K a year on Amazon but still have an internal contact.
Depends on the category. My brother runs an Amazon store that nets more than that but his category is one of the strictest on the site and he gets no support.
I mean those programs aren't free for 3rd party sellers, so if he doesn't pay I'm not surprised, but it likely doesn't have much to do with the category.
What I've heard about having the "Premium" (paid) version of an Amazon account manager: it's just another layer of the same awful seller support. Since the "manager" can't actually do anything, having one is worse than not having one.