This isn't TOR, though it's not completely unfounded to worry that legislators could broaden the definition of CSAM in the future to include things that, by current definitions, are not CSAM, e.g. works of fiction that include scenes of abuse.
Wow, it’s even worse than I thought. I thought the convincing morphing would be the only problem. The nonsense and inconsistent arrowheads, the missing annotations, the missing bubbles. The “tirm” axis…
That this was ever published shows a supreme lack of care.
This passage from the post by the original creator of the diagramme summarises our Bruh New World:
"What's dispiriting is the (lack of) process and care: take someone's carefully crafted work, run it through a machine to wash off the fingerprints, and ship it as your own. This isn't a case of being inspired by something and building on it. It's the opposite of that. It's taking something that worked and making it worse. Is there even a goal here beyond "generating content"?
That reminds me of (earlier) Apple and people saying that Apple just copies from its competitors. Well, they took the good parts and improved the bad ones. That's the level of excellence you can achieve when copying.
This here is just so cheap, I would not even dare to call it a copy.
What do we “need” more of? Here in France we need more doctors, more nurseries, more teachers… I don’t see AI helping much there in the short to medium term (and for teachers, all the research points to AI making things massively worse).
Globally I think we need better access to quality nutrition and more affordable medicine. Generally cheaper energy.
Counter-argument: what if LLMs could help alleviate a doctor's workload by providing quick diagnoses for simple cases?
How much time does a doctor spend writing prescriptions for cough-like symptoms?
How much time does an ophthalmologist spend measuring eye sight?
I totally agree that this is a somewhat radical opinion, and not everybody would be pleased with the idea of a program making diagnoses, so I am not fully advocating for it. But I think we should not limit the potential of AI.
Also, to speak to France specifically: we need more teachers, yet new teachers are treated as commodities (you have to relocate to wherever the Education nationale tells you to go, and in most cases that means new teachers are sent to difficult areas).
We need more doctors, yet the number of new doctors each year is capped by the number of people who are allowed to pass the exam.
Wait, my job is not cushy. I think hard all day long, I endure levels of frustration that would cripple most people, and I do it because I have no choice: I must build the thing I see or be tormented by its possibility. Cushy? Right.
How is that 1st world? There are plenty of people who "think hard" and deal with really hard problems in the "3rd World".
Give compiler engineering for medical devices a whirl for 14 hours a day for a month or so and let me know if you think it's "cushy". Not everything is making apps and games, sometimes your mistakes can mean life or death. Lots of SWE isn't cushy at all, or necessarily well paid.
Go get a bachelor's and a master's in EE while eating just two bowls of rice and lentils every day for 5 years, and let me know if that's cushy.
Compared to risking life and limb every day in a mine, breathing in carcinogenic dust, finding yourself with most of your joints fucked at 45, likely carrying PTSD from accidents that happened to you or your colleagues... yes, "hard thinking" looks pretty cushy.
Have you any idea how many people die every day at their workplace in manufacturing, construction, or mining, or how many develop chronic issues from agriculture? And all for salaries that are a tenth of the average developer's (in the developed world; elsewhere, more like a hundredth). Come on now.
Everyone has problems and everyone is entitled to feel aggrieved by their condition, but one should maintain a reasonable degree of perspective at all times.
I don’t think that’s the real dichotomy here. You can either produce 2-5x more good, maintainable code, or 10-50x more dogshit code that works 80-90% of the time and will be a maintenance nightmare.
The management has decided that the latter is preferable for short term gains.
> You can either produce 2-5x more good, maintainable code, or 10-50x more dogshit code that works 80-90% of the time and will be a maintenance nightmare.
It's actually worse than that, because really the first case is "produce 1x good code". The hard part was never typing the code; it was understanding it and making sure it works. And with LLMs as unreliable as they are, you have to carefully review every line they produce, at which point you haven't saved any time over doing it yourself.
Look at the pretty pictures AI generates. That's where we are with code now. Except with images you have ComfyUI instead of ChatGPT, so you can work with precision.
I'm a senior SWE with 500k TC. I write six-nines, active-active, billion-dollar-a-day systems, and I'm no stranger to writing thirty-page design documents. These systems can work in my domain just fine.
> Look at the pretty pictures AI generates. That's where we are with code now.
Oh, that is a great analogy. Yes, those pictures are pretty! Until you look closer. Any experienced artist or designer will tell you that they are dogshit and have no value. Look no further than Ubisoft and their Anno 117 game for proof.
Yep, that's where we are with code now. Pretty - until you look close. Dogshit - if you care to notice details.
Not to mention how hard it is to actually get what you want out of it. The image might be pretty, and kinda sorta what you asked for. But if you need something specific, trying to get AI to generate it is like pulling teeth.
Since we’re apparently measuring capability and knowledge via comp, I made 617k last year. With that silly anecdote out of the way, in my very recent experience (last week), SOTA AI is incapable of writing shell scripts that don’t have glaring errors, and also struggles mightily with RDBMS index design.
Can they produce working code? Of course. Will you need to review it with much more scrutiny to catch errors? Also yes, which makes me question the supposed productivity boost.
The problem is not that it can’t produce good code if you’re steering it. The problems are:
There are multiple people on each team, and you cannot know how closely each teammate monitored their AI.
Somebody who does not care will vastly outproduce you, by orders of magnitude. And with the current unicorn-chasing trends, that approach tends to be rewarded more.
This creates an incentive to not actually care about quality, which will cause issues down the road.
I quite like using AI. I do monitor what it’s doing when I’m building something that should work for a long time. I also do totally blind vibe-coded scripts when they will never see production.
But for large programs that will require maintenance for years, these things can be dangerous.
One thing LLMs are really good at is translation. I haven’t tried porting projects from one language to another, but it wouldn’t surprise me if they were particularly good at that too.
As someone who has done that in a professional setting: it really does work well, at least for straightforward things like data classes/initializers and average biz logic with if/else statements. Things like code annotations and other more opaque constructs can get unreliable, though, because there are fewer 1-to-1 representations. It would be interesting to train an LLM on each newly encountered pattern and slowly build up a reliable conversion workflow.
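To give a feel for the "straightforward" end of that spectrum, here is a minimal, hypothetical sketch (the Invoice names are invented, not from any real project): a Java-style data class and the near-mechanical TypeScript port that LLMs tend to handle reliably, including one spot where the 1-to-1 mapping already breaks down:

```typescript
// Hypothetical source, a Java-style data class (shown as a comment):
//   record Invoice(String id, BigDecimal amount, boolean paid) {}

// The near 1-to-1 TypeScript port; the field-by-field mapping is mechanical:
interface Invoice {
  id: string;
  amount: number; // BigDecimal has no exact TS equivalent: a judgment call, not a 1-to-1 mapping
  paid: boolean;
}

// "Average biz logic with if/else statements" also translates almost verbatim:
function invoiceStatus(inv: Invoice): "settled" | "void" | "outstanding" {
  if (inv.paid) {
    return "settled";
  } else if (inv.amount === 0) {
    return "void";
  }
  return "outstanding";
}
```

The interesting failure modes are exactly the parts with no counterpart on the other side (annotations, decorators, reflection), where the model has to invent a mapping instead of copying one.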
This depends heavily on your current skill level and motivation. AI is not a private tutor: it will not actually verify that you have learned anything unless you prompt it to. That means you must not only know exactly what to search for (arguably already an advanced skill in CS) but also know how tutoring works.