No, no, no, you don't understand, he was joking. You see, when Trump says something I like, it's earnest, but if he says something that makes me look bad, then he's joking.
Repeated statements by Trump and his circle claiming he'll run in 2028. Statements by Trump that his supporters won't ever need to vote again. That little insurrection they tried on January 6th, 2021. Their current weaponization of ICE, staffing it with people of questionable backgrounds and morals and deploying them against their political enemies under the pretext of illegal immigration (Texas has a bigger problem than Wisconsin, for what it's worth). Constantly praising dictatorial leaders like Putin and Xi while threatening and talking shit about democratic allies.
So whether or not it metastasizes to that point, pretending this concern has no grounding in actual actions taken and statements uttered is wild, because this playbook isn't new and the intended direction seems more clear than not.
You can only tax people so much before it's too much.
Effective tax rates (a rough bracket sketch follows below):
0% ... not realistic outside very unusual circumstances.
25% ... feels fair to me.
33% ... still fair, but yeah, when 1 out of every 3 days is worked for taxes, you start to feel it.
50% ... the border of fair and unfair. If I keep less than half of what I make, that feeling of fairness wears thin.
Now, when you are near that border of fair and unfair and you see John Q. Artist getting his whole list comped using tax money, that pushes the somewhat-fair into unfair territory real fast.
We already have situations similar to this in most countries, whether from subsidies, government spending you don't agree with, corruption, waste, etc.
All of that should be reduced, but when you see your neighbor living free while you slave away, you feel it differently.
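To make the "effective" part concrete, here is a rough Haskell sketch with invented brackets (illustration only, not any real tax code): under progressive marginal brackets, the effective rate, total tax divided by total income, always sits below the top marginal rate.

```haskell
-- Invented progressive brackets (illustration only, not a real tax code):
-- (upper bound, marginal rate); the last bracket is unbounded.
brackets :: [(Double, Double)]
brackets = [(20000, 0.10), (60000, 0.25), (1 / 0, 0.40)]

-- Tax owed: each slice of income is taxed at that slice's marginal rate.
taxOwed :: Double -> Double
taxOwed income = go 0 brackets
  where
    go lower ((upper, rate) : rest)
      | income <= upper = (income - lower) * rate
      | otherwise       = (upper - lower) * rate + go upper rest
    go _ [] = 0

-- The effective rate is total tax over total income.
effectiveRate :: Double -> Double
effectiveRate income = taxOwed income / income

main :: IO ()
main = mapM_ (print . effectiveRate) [30000, 100000, 1000000]
-- 0.15, 0.28, 0.388: the effective rate climbs toward, but never
-- reaches, the 40% top marginal rate.
```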
Which tax rates? We have dozens. What determines fairness is how the resources in our society are allocated once all is said and done. Income, tax rates, and even money itself are just abstractions.
> If I keep less than half of what I make, that feeling of fairness wears thin.
How fairly you made that money in the first place, and what you get in return in the form of government services, make all the difference.
> What determines fairness is how the resources in our society are allocated once all is said and done.
I propose allocating the work upfront, so that those who disagree don't have to contribute to the "done" part of those who allocate it in a weird way.
This is in no way an "if you're homeless, just buy a house" argument; it's a "you can't have it both ways, pick a lane and stick to it" argument.
You want to unilaterally decide that you don't want to pay much tax on income; billionaires decide that they don't want to pay much tax on capital gains; yet both of you want to continue living in a society where you can buy cheap bread baked from flour milled from wheat grown on subsidized farms, heavily reliant on public infrastructure, and you want to drink clean water and drive on public roads, all of which is paid for through the taxes you want to opt out of. And somehow you don't see a problem with that?
You can't pick and choose parts of society that benefit you and opt out of your duties, that's not how society works. All of those parts that you don't see value in are essential to someone else.
For select megacorps that have the luxury of being in a business that lets them structure themselves that way, sure.
For the laboring peasantry it's a very different story (though the actual rates vary, this goes for most "tax havens"). Ireland in particular has a high VAT, so if you spend a lot of your income on consumption (which most individuals do), you will get very screwed by that.
Yeah if you want to keep your edge you have to find other ways to work your programming brain.
But as far as output goes: we all have different reasons for enjoying software development, but for me it's more about making something useful and less about the coding itself. AI makes the fun parts more fun and the less fun parts almost invisible (at small scale).
We'll all have to wrestle with this going forward.
This argument has been used against every new technology since forever.
And the initial gut reaction is to resist by organizing labor.
Companies that succumb to organized labor get locked into that speed of operating. New companies get created that adopt 'the new thing' and blow old companies away.
> And the initial gut reaction is to resist by organizing labor.
Yeah, as if tech workers have rights similar to union workers. We literally have zero power compared to any previous group of workers. Organizing labour can't even happen in tech, because tech has a large percentage of immigrant labour who have even fewer rights than citizens.
Also, there is no shared pain like union workers had; we have all been given different incentives, working under different corporations, so without shared pain it's impossible to organize. AI is the first shared pain we've had, and even this has caused no resistance from tech workers. Resistance has come from the users, which is the first good sign. Consumers have shown more ethics than workers, and we have to applaud that. Any resistance to buying chatbot subscriptions has to be celebrated.
Labor organizing is (obviously) banned on HackerNews.
This isn't the place to kvetch about this; you will literally never see a unionization effort on this website because the accounts of the people posting about it will be [flagged] and shadowbanned.
Just a regular senior SDE at one of the Mag7. I can tell you everyone at these companies is replaceable within a day. Even within an hour. Even department heads have no power against those above them; they can be fired on short notice.
This website is literally a place for capitalists (mostly temporarily embarrassed) to brag about how they're going to cheat and scam their way to the top.
So, a race to the bottom where you work more and make less per unit of work? Great deal, splendid idea.
The only winners here are CEOs/founders who make obscene money, liquidate/retire early while suckers are on the infinite treadmill justifying their existence.
You should qualify your statement with "amongst the few people I talk to and the narrow spectrum of media I consume."
Also, do you mean minority of the total US population or minority of the voting population?
For one reference point I fully support ICE. And I think it's wild you have local and state politicians encouraging actions against federal agents who are enforcing federal law.
Seriously. I've known for a very long time that our community has a serious problem with binary thinking, but AI has done more to reinforce that than anything I can think of in recent memory. Nearly every discussion I get into about AI is dead out of the gate because at least one person in the conversation holds the binary view that code is either handwritten or vibe coded. They have an insanely difficult time imagining anything in the middle.
Vibe coding is the extreme end of using AI, while handwriting everything is the extreme end of not using it. The optimal spot is somewhere in the middle. Where exactly that spot is, I think, is still up for debate. But the debate is not advanced in any way by latching on to the extremes and assuming they are the only options.
The "vibe coding" term is causing a lot of brain rot.
Because when I see people downplaying LLMs, or people describing their poor experiences, it feels like they're trying to "vibe code": they expect the LLM to automatically do EVERYTHING. They take it as a failure that they have to tell the LLM explicitly to do something a couple of times, or as a problem that the LLM didn't "one-shot" something.
I'd like it to take less time to correct than it takes me to type out the code I want, and so far I haven't had that experience. Now, I don't do Python or JS, which I understand the LLMs are better at, but there's a whole lot of programming that isn't in Python or JS...
I've had success across quite a few languages, more than just Python and JS. I find it insanely hard to believe you can write code faster than the LLM, even if the LLM has to iterate a couple of times.
But I'm thankful for you devs that are giving me job security.
And that tells me you're on the dev end of the devops spectrum while I'm fully on the ops side. I write very small pieces of software (the time it takes to type them is never the bottleneck) that integrates in-house software with whatever services they have to actually interact with, which every LLM I've used does wrong the first fifteen or so times it tries (for some reason rtkit in particular absolutely flummoxes every single LLM I've ever given it to).
I pretty well span the devops spectrum, from building/maintaining services to running/integrating/monitoring them in prod. LLMs are definitely better at the dev side than the ops side, no doubt about that. And when it comes to firewalld and many other sysadmin tools, I agree it can often be faster to just hand-type than to have the LLM do it. Even just writing Dockerfiles, it's often faster to do it by hand, because the LLM will screw it up 6 to 12 times before getting it right, and usually "getting it right" is because I told it something like, "dude, you can't mount, you need to copy." It's especially insanely stupid when it comes to rootless podman.
But that said, there are still plenty of ops-y situations where AI can be very helpful. Even just "here's 125k lines of prod logs, can you tell me what is going wrong?" has saved me lots of time in the past, especially for apps I'm not super familiar with. It's (sometimes) pretty good at finding the needle in the haystack. The most common workflow I have now is to point an agent at it, and while it's grinding on that I'll do some hand greps and things. I've gotten to the bottom of some really tricky things much faster because of it. Sometimes it points me in the wrong direction (for example, one time it noticed that we were being rate-limited by the Cloudflare API, and instead of adding a single flag to the library calls it wrote its own very convoluted queue system. But it was still helpful, because at least it pinpointed the problem).
The other "small pieces of software" I find it very helpful for are bash functions or small scripts to do things. The handwritten solution is usually quick, but rarely as resilient/informative as it could be because writing a bunch of error handling can 5x or 10x the handwritten time. I will usually write the quick version, then point AI at it and have it add arg passing/handling, error handling, and usage info/documentation. It's been great for that.
As a former “types are overrated” person, Typescript was my conversion moment.
For small projects, I don’t think it makes a huge difference.
But for large projects, I'd guess that most die-hard dynamic people who have tried TypeScript have now seen the light and find lots of benefits in static typing.
I was on the other side: I thought types were indispensable. And I still do.
My own experience suggests that if you need to develop a heavily multithreaded application, you should use Haskell: MVars are enough if you are working alone, and you need software transactional memory (STM) if you are working as part of a team of two or more people.
STM makes stitching different parts of a parallel program together as easy as writing a sequential program; the coordination is delegated to STM. But STM needs control of side effects: one should not write a file inside an STM transaction, only before the transaction starts or after it finishes.
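A minimal sketch of what that composition looks like (my illustration, with invented names, not the parent's code), using the stm package. Note that the transaction's type is STM (), so file or console IO inside it simply does not typecheck:

```haskell
import Control.Concurrent.STM

-- Move funds between two shared balances. The whole block commits
-- atomically; other threads never observe a half-finished transfer.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  check (balance >= amount)      -- block and retry until funds suffice
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  -- Two transfers compose sequentially into one atomic transaction.
  atomically (transfer a b 30 >> transfer a b 20)
  print =<< atomically ((,) <$> readTVar a <*> readTVar b)  -- (50,50)
```

An MVar version of this would need explicit lock ordering to stay deadlock-free; STM sidesteps that, which is the "team of two or more" point above.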
Because of this, C#, F#, C++, C, Rust, Java, and most other programming languages do not have a proper STM implementation.
For controlling (and combining) (side) effects one needs higher-kinded types and partially applied type constructors. These had already been available in Haskell (GHC 6.4, 2005) for four years by the time Rust was conceived (2009).
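To illustrate what the type system has to support (my example, not the parent's): the combinator below abstracts over the effect type m itself, a type variable of kind `* -> *`, and is then reused at both STM and IO; Rust's generics cannot quantify over such a type constructor.

```haskell
import Control.Concurrent.STM

-- 'm' ranges over type constructors (IO, STM, Maybe, ...), which is
-- exactly what "higher-kinded" means: m gets applied to 'a' later.
twice :: Monad m => m a -> m (a, a)
twice action = do
  x <- action
  y <- action
  pure (x, y)

main :: IO ()
main = do
  counter <- newTVarIO (0 :: Int)
  -- The same combinator instantiated at STM ...
  pair <- atomically (twice (modifyTVar' counter (+ 1) >> readTVar counter))
  print pair                    -- (1,2)
  -- ... and at IO, with no new code written.
  strs <- twice (pure "effect")
  print strs                    -- ("effect","effect")
```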
Did Rust do anything to get these? No. The authors were a little too concerned with reimplementing what Henry Baker did at the beginning of the 1990s, if not before that.
Do the Rust authors have plans to implement these? No; they have other urgent things to do to serve the community better. As if complex coordination of heavily parallel programs were not a priority at all.
I'm only writing 5-10% of my own code at this point. The AI tools are good, it just seems like people that don't like them expect them to be 100% automatic with no hand holding.
Like people in here complaining about how poor the tests are... but did they start another agent to review the tests? Did they take that and iterate on the tests with multiple agents?
I can attest that the first pass of testing can often be shit. That's why you iterate.
> I can attest that the first pass of testing can often be shit. That's why you iterate.
So far, by the time I’m done iterating, I could have just written it myself. Typing takes like no time at all in aggregate. Especially with AI assisted autocomplete. I spend far more time reading and thinking (which I have to do to write a good spec for the AI anyways).
One might argue the difference is that they are ignorant of the suffering caused by their behavior, and that the knowing and doing anyways is the moral problem, not just the doing.
Alternately, one might argue the difference is that they have no alternative to inflicting suffering, and that having the option to reduce suffering and choosing to inflict it anyways is the moral problem, not just inflicting it.
I don’t think that mammals are, in general, ignorant of the character of harm, violence, and death. Animals even kill to end suffering. Life is short, brutal, and violent. We do what we can to make it less so.
That does track with those who are most stridently Good and Moral and Kind and Right having some glaring blind spots when it comes to understanding the consequences of their actions.
Or crows that attack a member of the flock that misbehaved toward a juvenile of the flock? (Crows are one of the animals that seem to have their own morals.)
Anyway: humans should not project our sense of morality onto animals.
And humans are not carnivores. Most likely we're omnivores (like our close animal relatives the primates, who prefer fruit over meat any day, just like human babies).
Morality is a human construct and applies to humans. Arguments that try to ground morality in naturalistic claims about humans do exist, but I don't think they have much credence in modern moral frameworks.
I'm sure the concept of self-restraint exists in the animal kingdom among apex predators: don't hunt too much, or you will destroy your habitat.
This applies to humans too, and not just in the context of eating meat.
It does not. One predator eats all the prey, because if he doesn't, the other predators will. The next year they all starve. This is a documented effect. No reference to geopolitics intended.
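For reference (my addition, not the parent's), the textbook formalization of this boom-and-bust dynamic is the Lotka-Volterra predator-prey model:

$$\frac{dx}{dt} = \alpha x - \beta x y, \qquad \frac{dy}{dt} = \delta x y - \gamma y$$

with prey $x$, predators $y$, and positive constants $\alpha, \beta, \gamma, \delta$. When predation $\beta x y$ outpaces prey reproduction $\alpha x$, the prey population crashes, the predators' growth term $\delta x y$ collapses with it, and the predators starve in the following cycle.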
Where do you get this from?