Hacker News | throwaway27448's comments

For those who have never heard of catalyst: https://developer.apple.com/documentation/uikit/mac-catalyst

> If you write software using GTK, Qt, or FLTK then you are writing Wayland software.

Why is it so complicated if it's just a common backend? Surely you don't need a tenth of that complexity just to render GNOME or KDE (I'd never heard of FLTK before).
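To illustrate the "common backend" point: GTK and Qt select their display backend at runtime via environment variables (GDK_BACKEND and QT_QPA_PLATFORM are the real variable names; the rest of this is just a sketch), so the same application code runs against either X11 or Wayland unchanged.

```shell
# Force the Wayland backends for any toolkit app launched from this shell.
# GDK_BACKEND is read by GTK, QT_QPA_PLATFORM by Qt; the application
# code itself never mentions X11 or Wayland.
export GDK_BACKEND=wayland
export QT_QPA_PLATFORM=wayland

# To run the very same binary against X11 instead (e.g. via XWayland):
#   export GDK_BACKEND=x11
#   export QT_QPA_PLATFORM=xcb
echo "GTK backend: $GDK_BACKEND, Qt platform: $QT_QPA_PLATFORM"
```

This is why "writing GTK/Qt software" is effectively "writing Wayland software": the toolkit owns the protocol details, and applications only see the toolkit's API.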


Why is Wayland so complicated? I thought half the reason for breaking with X11 was to produce a simpler window server. I was flabbergasted when I realized that there were competing compositors for seemingly no benefit to anyone.

> Why is Wayland so complicated?

It's not particularly complicated, and certainly a lot simpler and cleaner than X11 in almost every way.

The reality of the situation is that there's sort of a hateful half-knowledge mob dynamic around the topic, where people run into a bug, do some online search, run into the existing mob, copy over some of its "arguments" and the whole thing keeps rolling and snowballing.

Sometimes this is innocent, like the OP discovering that UIs are actually non-trivial and that there are different types of windows for different things (as in really any production-grade windowing system). So they share their new-found knowledge in the form of a list. Then the mob comes along and goes "look at this! they have a list of things, and it's probably too long!" and then in the next discussion it's "Did you know that Wayland has a LONG LIST OF THINGS?!" and so on and so forth.

It's like politics, and it's cyclic. One day the narrative will shift again.

The mob will not believe me either, for that matter, but FWIW, I've worked on the Linux desktop for over 20 years, and as the lead developer of KDE Plasma's taskbar and a semi-regular contributor to its window manager, I'm exposed to the complexity of these systems in a way that only relatively few people in the world are. And I'd rather keep the Wayland code than the X11 one, which I wrote a lot of.


Making each compositor implement its own input handling was also a dazzlingly bizarre design choice.

> I don't understand how we're still using fossil fuels.

These fit an energy niche that can't be replaced by any one thing. China is only now investing in an electric military, for instance. Shipping will remain difficult to electrify entirely (surmountable, but certainly not in production yet). Coal and natural gas plants provide on-demand power that is not straightforward to guarantee with renewable sources. And many (likely almost all) grids are simply not up to the task of transmitting energy that used to be moved physically as fossil fuel. Air travel has no renewable alternative as of today; technically renewable forms of jet fuel do exist, but they're extremely expensive.

And of course we will need fossil-fuel byproducts for the foreseeable future for fertilizer, materials, chip production, and so on.

It'll take a couple of generations. Of course we should be paying poor countries not to use fossil fuels, but instead we're trying to force a switch back to fossil fuels ourselves for no explicable reason (speaking as an American, obviously).


> to distract from internal scandals

This is ignoring fifty years of trying to start this war. Even blaming Israel doesn't entirely make sense. A large segment of capital in the US truly wants this war to happen (as foolish as that may seem to rational humans). It is not simply a distraction.

Or to put it another way, the Trump administration is characterized by dozens of scandals of bungled governance, each distracting from the next. Determining which is the "root" thing being distracted from is pointless.


I'm not sure there is a "normal" tendency to reach for AI. But there is certainly a parallel in that, say, JavaScript and PHP have a reputation of being preferred by barely capable people who nevertheless make interesting and useful things with atrocious code.

I've seen Rust codebases that would make you cry, along with perfectly well-architected applications written in both Perl and PHP. You're just playing into common language silo stereotypes. A competent developer can write code in their language of choice, whatever that may be. I'm not sure "reaching for AI" implies anything besides that some folk prefer that tool for their work. I personally don't have a tendency to reach for AI, but that doesn't somehow imply that they or I are "lesser" because of it.

> You're just playing into common language silo stereotypes.

Yes, the stereotype is what I brought up on purpose.

> A competent developer can author code in their language of choice whatever that may be. I'm not sure "reaching for AI" implies anything besides that some folk prefer that tool for their work.

More relevantly, a competent developer can use AI just like one can use PHP. It buys enormous value in the short term.

> I personally don't have a tendency to reach for AI, but that doesn't somehow imply they or I are "lesser" because of it.

Yes, just like people who use PHP can make excellent programs. Nobody in this conversation implied anyone was lesser than another.


It does to the executives who sign the checks for AI usage contracts.

The implication being that execs want folks who "reach for AI" to meet some arbitrary contract targets? Sounds like optimizing for the wrong things, but I've seen crazier schemes.

In my opinion the end goal of those execs pushing AI is the age old goal of seizing the means of production (of software in this case) by reducing the worker to a machine. It'll likely play out in their favor honestly, as it has many times in the past.


I don't know what an AI usage contract is but it sounds like corporate suicide.

An agent is still attached to an accountable human. If it is not, ignore it.

How do you figure out which is the case, at scale?

You don't.

The problem is that it acts as an accountability sink even when it is attached.

I've had multiple coworkers over the past few months tell me obvious, verifiable untruths. Six months ago, I would have had a clear term for this: they lied to me. They told me something that wasn't true, that they could not possibly have thought was true, and they did it to manipulate me into doing what they wanted. I would have demanded, and their manager would have agreed, that they be given a severe talking-to.

But now I can't call it a lie, both in the sense that I've been instructed not to and in the sense that it subjectively wasn't. They honestly represented what the agent told them was the truth, and they honestly thought that asking an agent to do some exploration was the best way to give me accurate information.

What's the replacement norm that will prevent people from "flooding the zone" with false AI-generated claims shaped to get people to do what they want? Even if AI detection tools worked, which I emphasize that they do not, they wouldn't have stopped the incidents that involved human-generated summaries of false AI information.


> Increased speed only gets us where we want to be sooner if we are also heading in the right direction.

A proper capitalist system will tend toward the right direction as directed by the market, yes? All of this neuroticism about AI doesn't matter.


Yes, the true market; shareholders.

What incentive would Iran have to lie? Their entire security model revolves around believable deterrence—apparently far more believable than either Israel or the US understood.

> The question of whether the world can assume its security on some religious rulings of some Ayatollas

I don't think much of the world has processed that Iran's ostensible lack of nuclear weapons is purely a matter of will and not capability.

