I sidestep this by using neovim as my environment for pretty much everything, and you can bridge the SPICE virtio clipboard channel to Wayland, so clipboard sharing works natively on wlroots compositors.
Considering that for the average office worker I know switching from outlook to outlook (new) is a major hurdle within the same ecosystem, I can only imagine what they were thinking coming up with a name.
That is a very fair point. There are quite a few businesses and government agencies where I live that are deeply entrenched in complex, decade-spanning VBA-based workflows and would need absolute and full compatibility before a switch away from "MS 365 Copilot" could even be considered, and the name may give false expectations.
Now, I really dislike how often discussions on sites like this one get derailed by someone bringing up an utterly unrelated, overhyped topic, so feel free to dismiss this, but I could honestly see LLMs providing a potential path to smoothing out such issues. Some models have gotten rather robust at making targeted changes to pre-existing Excel files dating back to before I was using a computer, including very specific modifications to ancient macros across multiple sheets. Perhaps this could be leveraged to some extent. Though, being honest and trying not to overhype: much as with plans to use agentic coding to rewrite decades-old, tested, crucially important COBOL in a more modern language, I suspect there are many edge cases that will be hard to cover properly, and unless such a solution is both absolutely reliable and seamless for users, large-scale adoption by such entities will likely be impossible in the short term.
> Strict limits on governmental regulation wherein any restrictions must be demonstrably necessary and narrowly tailored to a compelling public safety or health interest.
> Mandatory safety protocols for AI-controlled critical infrastructure, including a shutdown mechanism and compulsory annual risk management reviews.
Read: industry can do whatever we want, but the government also has to put up barriers to entry that favor large incumbents.
This has nothing to do with rights or even computing, it's just regulatory capture.
Annual risk management reviews definitely favor large incumbents. Large incumbents have the ability to hire and maintain compliance teams. That burden is definitely a barrier to entry to new competitors (though not an insurmountable one).
But it only applies to AI controlling critical infrastructure; do you think this is an issue in practice?
I would think if a power plant deploys some AI model to optimize something or other, it would be on the plant operator to perform the reviews, regardless of who they get the AI from.
In practice, there will only be one or two "safe" AI vendors approved for such infrastructure. On one hand, that's probably a good thing. On the other hand, it's deeply anti-competitive and it's pretty much a recipe for indefinitely renewable contracts at arbitrary high prices that get passed on to taxpayers.
The shutdown mechanism would have existed anyway, and a "risk management review" sounds exactly like the sort of toothless policy that's supposed to make people feel better without actually putting any limits or enforcement on the industry.
It's not just "small business". If the barriers to entry are high enough, you can keep out pretty much any company that isn't already part of your oligopoly, pretty much indefinitely. That could be anything from a well funded subsidiary of another technology company to a foreign competitor.
Well, there probably are some in there. Data centre designers, comms experts, architects, electricians, etc. Lot of smaller organisations benefiting from the work.
You know if we're gonna pass laws to make it illegal for the government to interfere with the Torment Nexus, the least they could do is not gaslight us with the fucking name of the law. Just tell us the billionaires get to fuck the planet in the eye and the rest of us have to deal with it, at least it's honest that way.
Practically every law, and lobbying organization, is named for exactly the opposite of what it does. If I see the Puppies and Orphans Protection Act of 2028, I assume its purpose is to use puppies to strangle orphans. Proponents will point to the limitation on how many puppies you can use per orphan.
Similarly, if I see the People For X organization, I assume they are against X. The Committee for Green Spaces and Clean Air is guaranteed to be an oil company.
Once you develop that reflex, everything calms down. Though admittedly, I passed a sign for Fidos for Freedom. I'm not quite sure what Fidos Against Freedom does. I think they give dogs to disabled people, and they bark at you if you try to leave the house.
There is something that this tactic misses: when people try to do good things, the name of their organization or policy is usually pretty honest. In an environment like ours, though, that still means that your strategy of assuming the opposite meaning has something like a 95% expected success rate.
The second term for the "drain the swamp" president implies otherwise (it did take another cycle, but that arguably had more to do with covid than corruption).
I find it hard to imagine any evidence-based viewpoint in 2024 that would have led to a conclusion that Trump would be better for Gaza. The two party system doesn't give any room for choices on some issues, but that's hardly an argument that the two choices are equivalent overall.
Evidence? No. But 2024 wasn't an election (IMO) lost on failing to appeal to the centrists and R's. It was one lost by failing to energize the D's. I still assert that a lot of D's simply stayed home as opposed to "changed to R", and that's the most effective form of vote suppression: telling them that "both sides are the same, nothing matters so why bother?"
I've seen this claimed, but I'm not convinced narratives that emerge before another presidential election cycle hold up to scrutiny in the long run. The common narrative post 2012 was that Republicans needed to move to the left on immigration to stay viable, but that didn't happen and Trump won in 2016. The narrative post 2016 was that the Democrats needed to move right on social issues, and that didn't happen (at least not to the extent that people claimed it needed to), but Biden won in 2020. My perception post 2020 is that a lot of people felt Biden won only because of unhappiness with Trump's handling of covid, but Biden didn't last through the next cycle to another election that might have provided more data on that theory.
You're not wrong that Gaza probably affected things, but the larger issue is that there was no primary at all. Nobody challenged Biden's viability until too late, and at that point the party coalesced around a single candidate almost immediately. I'd argue that even if people were happy with her on that one issue, there would still likely be plenty of others they were not happy with, especially when she was essentially starting from behind due to the baggage of being the VP of a president who couldn't even retain the confidence of the party through the election (not to mention how much she was sidelined for the first 3.5 years of the administration).
>My perception post 2020 is that a lot of people felt that Biden won only because of people being unhappy with Trump's handling of covid,
I agree with that. COVID was the breaking point of breaking points and Trump fumbled it especially badly. I certainly agree Trump would have won 2020 had it not been for his handling of COVID.
>You're not wrong that Gaza probably affected things, but the larger issue is that there was no primary at all.
That was a factor too. I see Gaza and the lack of primaries as the same factor: maintaining an unpopular establishment that didn't energize the party. For better or worse (much much much worse), Trump does energize his install base.
The core issue these past 10 years is that "what analysts say" has diverged much further from what people actually want. So getting a pulse on the ground is much more important these days than traditional means of surveying and reporting opinions.
It’s a good thing that businesses can make investment plans with legible rules to follow. Too many communities are blocking data centers for no good reason, and this preempts NIMBYs and unreasonable local opposition.
“What about my water?”- not an issue in this area.
“What about my electric bill?”- we’re signing long term contracts with local power companies or building out our own capacity; we eat the marginal costs and don’t increase your bill.
“What about noise?”- we’re far enough away from the nearest person that they cannot hear us; fans are x decibels at y distance; not a problem.
“I saw on Facebook that data centers poison the water and spy on me”- seek help, you cannot block us from building out and giving you oodles of tax money for this nonsense reason.
I don’t think it counts as NIMBYism if you don’t want it in yours or anybody’s backyard, ever. I would describe that as principled opposition.
Also, what happens when we don’t need such enormous data centers anymore? How many communities in the U.S. are saddled with enormous dead malls while the developers walk away with zero liability?
There is an incredibly good reason not to have datacenters in Montana: a whole lot of the additional load will be served from Colstrip, one of the dirtiest coal operations left in the United States.
This research presentation from Benn Jordan will hopefully change your mind on the noise issue and its consequences. I highly recommend it. https://www.youtube.com/watch?v=_bP80DEAbuo
Long term contracts are routinely broken in bankruptcy without some sort of surety bond if things go sideways. This leaves localities footing the bill on maintenance if things do not turn out.
So it should be renamed Right to Datacenter Act. And here I thought they were giving people power over their private computers and being surveilled on them…
Reminds me of some bill in my state about Right to Farm and when you looked deeper it was about rights for huge corporate hog farms to dump waste in the rivers. The slimiest corps always do this 1984 level double talk when they name their bills. It’s a dead giveaway. Citizens United, oh wow cool this is about protecting citizens!
They just want to fire up Colstrip again, 4 huge coal power generation plants that literally print money. If you’re burning shit for AI it’s fine now I guess.
At the very least, it's a bit odd to call it a Right to Compute act if the actual goal is to enable investments. It's hard not to wonder whether it's about establishing a right at all, or whether that's entirely posturing to build support for something that isn't really about rights. At that point, it's hard to trust anything else they say about their motives, since they've shown they're willing to fudge things to make the bill harder to argue against.
The point isn't whether it's bad or good, but that it establishes a pattern of inconsistency.
Ah, management without managing. It's depressing and engaging at the same time. Depressing because palace intrigue is exhausting and fraught with peril. Engaging because I love explaining things to people and watching everything click into place for them (see the "Ten Thousand" xkcd comic).
> Perhaps better to let people go so that they can be productive elsewhere?
True. Joining thousands of other unemployed developers sending applications into a job posting for a nonexistent role online is very productive. Probably good for the economy too now that I think about it.
> What does it mean? Does clicking on a link counts as labor.
I think we might be seeing what happens when people are paid too much to spend all day emailing each other and jockeying Excel sheets, Gantt charts, and org charts. Yeah, for some definition of "work" I guarantee that an LLM could perform 3.25 years' worth in four weeks.
> people are being paid too much to spend all day emailing each other
Hmm, this does not sound exactly right. Also, does anybody seriously think that communication is not work, or is not important? A number of really impactful things started from people emailing each other. (Hell, Linux kernel development is still largely people emailing patches to each other.)
The problem with human labor is that, as an organization scales, the amount of work any individual in the system can do shrinks due to the coordination problem.
Coordination consumes a larger and larger amount of employee time to the point that, in the absolute largest organizations, the vast majority of employee time is internal coordination vs. actual improvement/selling of the customer offering.
So if you go from 100 employees to 1,000 employees, they can MAYBE do 4X the work. Not 10X like you'd think. And this effect gets even worse as you scale further.
So if an AI can do 10X more labor in a human day, and can coordinate instantaneously via a central context ledger (say a git repo), it doesn't just create 10X gains in productivity for large orgs. It creates a multiple of that 10X due to also removing the human coordination overhead.
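The scaling penalty being described can be sketched with a toy model (the per-channel cost constant below is my own arbitrary assumption, not from the comment): pairwise communication channels grow roughly as n(n-1)/2, so per-head output shrinks as headcount grows.

```python
# Toy model of coordination overhead: each worker nominally produces
# 1 unit/day, but every pairwise communication channel imposes a small
# fixed cost on each worker. Channels grow as n*(n-1)/2, so overhead
# grows quadratically while nominal output grows linearly.
def effective_output(n, channel_cost=1.2e-6):
    channels = n * (n - 1) / 2
    overhead_per_worker = channels * channel_cost   # time lost coordinating
    per_worker = max(0.0, 1.0 - overhead_per_worker)
    return n * per_worker

print(effective_output(100))    # ~99: overhead barely matters at 100 people
print(effective_output(1000))   # ~400: 10x headcount buys only ~4x output
```

With this (made-up) constant, going from 100 to 1,000 employees yields roughly 4x the output rather than 10x, matching the comment's illustration.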
Don't you think AI itself adds coordination overhead? A 1000-strong team with AI agents will feel like a 5000-person company where more than 30% are not even at exception level, i.e. they need to be pulled along.
This is why having less people and more agents actually makes sense but the coordination problem remains either way.
And you cannot escape it because it is simply mathematical.
The coordination problem absolutely can be escaped with technology, hence why productivity gains exist and why the economy grows and isn't a fixed pie over time.
Here's an easy non-AI example:
In the past, a 'computer' was literally a person [1]. If you needed to synthesize large amounts of data, you needed to split the task among a team of people writing things down and then a team of people to check their work after the fact and then a team of people to combine all the work and then a team to double-check the combined work.
Tasks that in the past would have taken a room full of people coordinating with pencils are absolutely done by 1 machine today (what we know as computers) that no longer needs to split that task and coordinate, which is exactly what will happen with 'agents' who can take on vastly more work per unit of time.
Look up Amdahl's Law and Universal Scalability Law.
The math doesn't care whether the nodes are people, CPUs or language models. If agent A's next action depends on what agent B decided, you've introduced a sequential dependency.
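For reference, Amdahl's law fits in one line (the 95%-parallel figure below is an arbitrary illustration, not a claim from the thread):

```python
def amdahl_speedup(p, n):
    """Amdahl's law: maximum speedup with n workers when a fraction p of
    the work is parallelizable and (1 - p) is inherently sequential."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelizable, 1000 agents give under 20x:
print(amdahl_speedup(0.95, 1000))   # ~19.6
print(amdahl_speedup(0.95, 10**9))  # asymptotic ceiling: 1/(1-p) = 20
```

The sequential fraction sets a hard ceiling of 1/(1-p) no matter how many nodes you add, which is exactly the point about dependencies between agents.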
The point is that we don't need an equivalent number of nodes (agents) as we needed people.
The computer flattened the coordination dependencies of that room full of people by doing all the calculations by itself. As they get smarter, you can theoretically assume 1 agent could eventually run the entire US federal government.
In the historical [human] computer example; if 15,000 calculations needed to be done, a CPU doesn't need to wait on Bob to come back from lunch to do the next 20 calculations...and doesn't need to wait on Alice to combine his work with the 20 calculations done by Jane...and doesn't need Bill to wait for everybody to be done to double check Jane's work.
The CPU does all 15,000 calculations instantly, by itself. This will be similar with AI agents.
Note that Amdahl's law doesn't capture the practical situation.
1) The purpose of algorithms is ultimately to create value, not to compute some fixed value X. This is important as it gives flexibility to choose different value-producing tasks where parallelism dominates over serial tasks, whenever the latter becomes a bottleneck.
2) In terms of producing value, perfect accuracy or the best possible solutions are not always necessary. Many serial tasks can become very parallel tasks when accuracy or certainty do not have to be complete.
3) Reusable solutions change the math further. No matter how serial a calculation is, if the result can be reused, that serial part becomes effectively O(1) after the first computation when reused exactly; and, as neural networks demonstrate, many serial tasks become highly parallel after training a model that can be reused across a wide class of specific problems, heavily amortizing the serial computing cost.
It doesn't matter how many steps something takes, if those steps are now in the past and the value is "forever" reusable.
4) The economics of serial and parallel computation are not static, but improve relative to economic value achieved. Meaning that demand for cheaper serial time and currency costs result in improved scaled up hardware that delivers cheaper serial costs. This may have less impact than the previous points, but over years makes a tremendous difference on top of all those points.
This can go on.
The point being that Amdahl's law certainly applies to specific algorithms, but it is not the dominant determinant of computing in general, where problems can be strategically chosen, strategically weakened or altered, and strategically fashioned to create O(V) of value to balance any O(S) cost of serial computing, via direct reuse and generalization.
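Point 3 above can be sketched in a few lines of Python (a toy example with arbitrary constants of my own choosing, not from the thread):

```python
from functools import lru_cache

# A strictly sequential computation, once cached, costs O(1) on every
# reuse, so its serial cost amortizes away. The thread generalizes this
# idea to trained model weights reused across a class of problems.
@lru_cache(maxsize=None)
def serial_chain(n):
    acc = 0
    for i in range(n):          # each step depends on the previous one
        acc = (acc * 31 + i) % 1_000_003
    return acc

first = serial_chain(1_000_000)   # pays the full serial cost once
again = serial_chain(1_000_000)   # cache hit: effectively free
assert first == again
```

However serial the first call is, every subsequent identical call is a lookup, which is the sense in which the serial part "becomes O(1)" under reuse.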
In an organization, the number of sequential steps doesn't really scale with number of participants, does it? Rather with dependent steps of the tackled process; say, devise building permit request, await approval, purchase materials, move materials to site, hire workforce, etc.
Theoretically, each of those steps is parallelizable to some extent. Amdahl's law equivalent here would be that some delays are outside the reach of an organization to improve. For instance, a building permit will take the time it takes to be examined based on an external public administration.
While it might sound like the ultimate hack, it’s not.
I’ve been in that situation and died a little inside every day. It’s not like being a rentier, because you still have to lose most of your day at the office and pretend to work, and be available in case some higher-up needs something so you don’t get caught.
Some type of emailing is important, what most people do, however, is not. Same with meetings, calls etc. Most of it is filling the day so they don't get fired.
The amazing thing is that soon (actually already) we will be seeing people being paid way too much to prompt an LLM to email other people or respond to other people's emails. And then turn those emails into presentations, which will be turned into meeting transcripts, again followed by emails.
The lingering question is if the intermediate LLM translation steps will actually make our communication more efficient - or just amplify the already inefficient parts.
Inefficiency all too often is celebrated by our society, as I wrote in 2010: https://pdfernhout.net/beyond-a-jobless-recovery-knol.html
"Also, many current industries that employ large numbers of people (ranging from the health insurance industry, the compulsory schooling industry, the defense industry, the fossil fuel industry, conventional agriculture industry, the software industry, the newspaper and media industries, and some consumer products industries) are coming under pressure from various movements from both the left and the right of the political spectrum in ways that might reduce the need for much paid work in various ways. Such changes might either directly eliminate jobs or, by increasing jobs temporarily eliminate subsequent problems in other areas and the jobs that go with them (as reflected in projections of overall cost savings by such transitions); for example building new wind farms instead of new coal plants might reduce medical expenses from asthma or from mercury poisoning. A single-payer health care movement, a homeschooling and alternative education movement, a global peace movement, a renewable energy movement, an organic agriculture movement, a free software movement, a peer-to-peer movement, a small government movement, an environmental movement, and a voluntary simplicity movement, taken together as a global mindshift of the collective imagination, have the potential to eliminate the need for many millions of paid jobs in the USA while providing enormous direct and indirect cost savings. This would make the unemployment situation much worse than it currently is, while paradoxically possibly improving our society and lowering taxes. Many of the current justifications for continuing social policies that may have problematical effects on the health of society, pose global security risks, or may waste prosperity in various ways is that they create vast numbers of paid jobs as a form of make-work."
Philosophy territory now... you wrote about technology making labor unnecessary 15 years ago; Aristotle did ~2000 years ago too (same text where he tried to justify slavery, but nvm that): "For if every instrument could accomplish its own work, obeying or anticipating the will of others, [...] if, in like manner, the shuttle would weave and the plectrum touch the lyre without a hand to guide them, chief workmen would not want servants, nor masters slaves."
I bet in 2000 years they will still be writing about it - yeah, technology changes our lives (for better or worse).
It's pretty fascinating to look at the impacts this has had in the last 2000 years, or even just the last 200.
Take construction work. Incredible improvements through power tools, gasoline-powered mobile cranes, etc. The productivity per worker has exploded. A lot of this has been captured by induced demand: we build bigger, taller, grander. But the improvements aren't distributed equally, which means crafts that haven't seen much improvement are now more expensive relative to everything else. That has contributed to our buildings having less elaborate facades and becoming more "bland".
The same in clothing. Clothing has become dirt cheap. Even the poorest people can afford new clothing multiple times a year. But in the same transition we have gone from everything being custom tailored to most things only kind of fitting, being made for variations of the most common body shapes. Not necessarily because tailored clothing has become much more expensive (though higher labor costs from higher average productivity haven't helped), but because every other step has become cheaper and tailoring hasn't.
I wonder what we will say about the trajectory of software in a couple decades
That's a great angle - will handcrafted software of the future become the equivalent of a tailored suit today? One might argue it already is, most companies and individuals do just fine using cloud/SaaS offerings and COTS apps. So on first glance it seems like automating software engineering would mainly benefit exactly those providers. The other side of the coin is that it also allows for cheaper/faster in-house DIY solutions and competition.
Yeah, I could see a world where it swings exactly the opposite way for software. Writing software for yourself is becoming cheap, but gathering requirements, getting alignment between stakeholders or marketing your software isn't getting much cheaper. Maybe everyone will end up with their own in-house solution? Or maybe we end up with configurable SAP-like behemoths, but instead of an army of expensive consultants configuring the software for your use case you have AI agents taking that part
I'm sure whatever path this takes will seem obvious in hindsight.
I see how this can boost productivity... for those who today already produce value voluntarily. They will move one level higher. The rest will produce 100x the amount of performative work. Everyone will be busy creating presentations and charts that no one needs and no one will read. Managers will ask for new presentations and reports every sync, and hours will be spent discussing things that don't actually matter.