Yeah, plus it's a bit... single-minded. A static single-page site is _quite_ "agent ready". Scores 0 here. It's not like it'll need an MCP or whatever.
So AI generated code doesn't benefit from stable foundations maintained by third parties? Fascinating take I don't currently agree with. Whether it's AI or hand written, using solid pre-existing components and having as little custom code as possible is my personal approach to keep things maintainable.
Agreed, it is different in that there's no guarantee a specific piece of software even has an exploit. But if you aren't set on breaking into one specific piece of software, or even one specific system, I would argue the law of averages applies: if you just invest enough, you'll likely find _something_ worth exploiting.
In other terms, I feel the argument from TFA generally checks out, just on a different level than "more GPU wins". It's one up: "more money wins". That's based on the premise that more capable models will be more expensive, and that using them more will increase both the likelihood of finding an exploit and the total cost. What these model providers pay for GPUs vs R&D, or what their profit margin is, I'd consider less central.
But then again, AI didn't change this: if you have more money, you can find more exploits, whether a model looks for them or a human does.
They are, wow. I had this age-old yen conversion wired into my brain: 100 yen is one euro. Boy did that change in the last decade or so; it's only half that now.
I typically install both systems on the same disk, different partitions. Then work with additional SSDs strictly for game storage. Only annoying bit is that some games _need_ to be on C, but very few in my experience. If you have enough space to shrink your Windows partition, that could work without waiting for an SSD. Though I guess the one OS per disk setup is ultimately cleaner.
Been dual booting for >20 years now. It's nice that some games work on Linux pretty well these days, and of course I had fun messing with Wine manually to get some stuff to work decades ago. But it really doesn't bother me too much to reboot when switching between gaming and literally anything else.
The issue that has occurred a few times is that some Windows updates will decide that they 'own' the disk Windows is installed on, or know better than whoever is running the system, and overwrite any other boot manager with Windows' own; you may then need to break out a live boot to recover it. Using a single isolated disk at OS install time (if you can have multiple physical drives) and using a motherboard boot selection hotkey means that risk likely goes away.
I use BIOS boot selection to dual-boot. MS has broken it twice. I've turned off Secure Boot now and just don't run games that require it.
Apparently you can get a mobo with a switchable BIOS config (or was it just a switchable SSD?) so the OS didn't even know there's a second OS around. If the other OS isn't even visible, MS can't break it [as easily]!
IMO it must be malicious, because otherwise it would be caught with routine testing. I can't believe MS don't include dual-boot setups in their testing.
Microsoft got rid of QA years ago. If it was targeted sabotage they could break dual boot setups every single Patch Tuesday. It's just disrespect for users. Like how Copilot and other shovelware such as Candy Crush keep getting reinstalled every few updates, and privacy settings reset every once in a while. Dual booting is likely not even on their radar.
Many newer computers now have a rudimentary boot loader integrated in the EFI. Some are actually quite nice, allowing you to browse partitions to choose which image to boot. HPs have this: you just hit a key during UEFI POST and voilà.
The functionality is present on my new Lenovo laptop, various generations of HP EliteBooks/ProBooks and desktops, an old ASUS mobo, a newer cheap Gigabyte mobo, and a 7th-gen Intel NUC.
> It's nice that some games work on Linux pretty well these days
This description doesn't really do it justice: ~75% of the top 100 games work well out of the box or with minimal tinkering according to https://www.protondb.com/dashboard (it varies a bit based on the rating scale).
Many work perfectly and many work even better than they do on Windows. Valve's work really changed the game over the past few years.
> Isolated QA should not exist because anything a QA engineer can do manually can be automated.
Well, sort of, maybe, but it's not always economical. For a normal web app, yeah, I guess. It depends on the complexity of the software and the environment / inputs it deals with.
And then there's explorative testing, where I've always found a good QA invaluable. Sure, you can automate that to some degree too. But someone who knows the software well and tries to find ways to get it to behave in unexpected ways is also valuable.
I would agree that solid development practices can handle 80% of the overall QA, though, mainly regression testing. But those last 20%? Well, I think about those differently.
> it's not always economical. For a normal web app - yeah I guess
What do you define as "normal"? I can't think of anything harder to test than a web app.
Even a seemingly trivial static HTML site with some CSS on it will already have inconsistencies across every browser and device. Even if you fix all of that (unlikely), you still haven't done your WCAG compliance, SEO, etc.
The web is probably the best example case for needing a QA team.
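To make the point concrete, here's a toy sketch of one tiny slice of automated WCAG checking: flagging `<img>` tags with no `alt` attribute, using only Python's stdlib parser. A real audit covers vastly more than this (contrast, focus order, ARIA, and so on), which is exactly why a QA team earns its keep on the web.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects the positions of <img> tags that have no alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs; convert for lookup
        if tag == "img" and "alt" not in dict(attrs):
            self.missing.append(self.getpos())  # (line, column) of the offender

checker = MissingAltChecker()
checker.feed('<p><img src="a.png" alt="logo"><img src="b.png"></p>')
print(len(checker.missing))  # 1 -- the second <img> lacks alt text
```
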
> And then there's explorative testing, where I always found a good QA invaluable.
Yes, I agree. We do this too. Findings are followed by a post-mortem-like process:

- fix the problem
- produce an automated test
- evaluate why the feature wasn't autotested properly
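The middle step above can be sketched like this (the function and the whitespace bug are made-up illustrations, not from the source):

```python
# Hypothetical example: the fix, and the regression test that pins it down.
def parse_quantity(raw: str) -> int:
    """Parse a quantity field; the pre-fix version crashed on padded input."""
    return int(raw.strip())  # the fix: strip whitespace before parsing

def test_parse_quantity_tolerates_whitespace():
    # Exploratory testing found that " 3 " raised ValueError; keep it fixed.
    assert parse_quantity(" 3 ") == 3

test_parse_quantity_tolerates_whitespace()
```

The point of writing the test before closing the finding is that the exploratory discovery becomes part of the automated regression suite, so the same class of bug can't silently return.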
> I think there's also a pretty good chance that if a robot that could mine the same cobalt with no human intervention appeared tomorrow, many folks would complain about "hard working cobalt miners in Africa losing their livelihood to automation".
Well, yeah? Just because the current work-safety situation is bad doesn't mean being out of a job couldn't be worse. I'd love a world where more automation meant less, safer, higher-paying work for everyone. Our world never worked like that, to my knowledge, and I'm not sure it ever will.
> I'd love a world where more automation meant less, safer, higher-paying work for everyone. Our world never worked like that, to my knowledge, and I'm not sure it ever will.
I'm not sure what you mean because that's literally what happened. The only remaining caveat is that it's not yet "everyone", but even that part is improving. If I was born in feudal Europe I would have spent my life planting, weeding, and de-pesting potatoes by hand instead of sitting at a computer in a climate-controlled office.
1. Check frequency (anywhere from every single time to occasional spot checks).
2. Check thoroughness (anywhere from antagonistic and in-depth to high-level).
I'd agree that, if you're at the lax end of both dimensions, the checking isn't generating any value.
A lot of folks are taking calculated (or, I guess in some cases, reckless) risks right now by moving along one or both of those dimensions. I'd argue that in many situations the risk is small and worth it. In many others, not so much.
> Especially if they are earning 5k per year as the title suggests.
Not sure that's how the math goes. TFA mentions that every employed worker has a team behind them, and is often successful in their job as a result.
Kinda fascinating. Here we are, usually dreaming about how one person could do multiple jobs. There they are, having multiple people do one job in the best (looking) way.
I'd consider shipping LLM-generated code without review risky. Far riskier than shipping human-written code without review.
But it's arguably faster in the short run. Also cheaper.
So we have a risk vs speed-to-market / near-term-cost situation. Or, in other words, a risk vs gain situation.
If you want higher gains, you typically accept more risk. Technically it's a weird decision to ship something that might break, that you don't understand. But depending on the business making that decision, their situation and strategy, it can absolutely make sense.
How to balance revenue, costs and risks is pretty much what companies do. So that's how I think about this kind of stuff. Is it a stupid risk to take for questionable gains in most situations? I'd say so. But it's not my call, and I don't have all the information. I can imagine it making sense for some.
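As a toy model of that trade-off (every number here is made up purely for illustration), you can frame it as expected value:

```python
# Toy expected-value model of the risk-vs-gain trade-off.
# Gains, probabilities, and costs below are invented for illustration only.
def expected_value(gain: float, p_incident: float, incident_cost: float) -> float:
    return gain - p_incident * incident_cost

# Shipping unreviewed: faster to market (higher gain) but riskier.
unreviewed = expected_value(gain=100_000, p_incident=0.10, incident_cost=500_000)
reviewed = expected_value(gain=80_000, p_incident=0.01, incident_cost=500_000)
print(unreviewed, reviewed)  # 50000.0 75000.0
```

With these particular made-up numbers, review wins; crank the incident cost down or the speed advantage up and the unreviewed path starts to look rational, which is exactly the business judgment call described above.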