
“Get your phone number flagged by the authorities in minutes! Don’t wait, start today!”

It can't be worse than Grok. In fact, OP, please tell me this is Grok.

Flagged because…

interest in the Epstein legal case?

interest in the supposed personality of “success”?

interest in intelligence agencies?

interest in technical projects on HN?

What’s theoretically getting someone flagged here, in your opinion? Your comment’s a little light on detail.


Everyone is talking about moving to Linux lately, it’s a bit of a trend. I wish they’d stop, for one simple reason: I’ve been using Linux exclusively (when I’m not forced to use macOS by work) for several years now, and I rather enjoy the lack of malware, spyware and other bullshit on the platform.

If the general public comes over, this situation might end. Desktop Linux isn’t a target right now because it’s niche, and I’d prefer that didn’t change.

Oh well. Maybe nothing lasts forever.


While I sympathize with this angle, there's another side to this coin: if more people make the switch, maybe some applications will finally get Linux versions.

I'm a Sunday photographer and quite like Lightroom and Photoshop (I know about the drama, but I get enough value from them compared to Darktable and the GIMP not to switch just yet). It's the only reason I still have a Windows PC hanging around the house.


I am in a similar boat; my media editing machine runs Windows 10 so that I can use Lightroom. But I would dearly love to ditch Windows, so I'm currently looking to try out running Lightroom under WinApps to see if it's usable. There's no way of passing the GPU through without something like SR-IOV, so I'll have to see how it goes.

https://github.com/winapps-org/winapps


I was thinking of doing that, but since it would require me to switch the monitor and whatnot, it would be just like using two PCs. And since I only use my desktop for LR and not much else, jumping through hoops with virtualization doesn't make much sense.

How so? WinApps lets you run Windows applications as if they were native to Linux; you interact with them the same way you would anything installed by apt/pacman/dnf etc. Unless I'm very much misunderstanding things (which I don't believe I am).

In the general case, I think you're right. WinApps seems to use RemoteApp functionality on Windows to export just the window you're interested in from the virtualized guest VM to the host, which should behave mostly like a "native" app.

But you were talking about SR-IOV, which is a whole different matter. Presumably, the goal is to have LR use that GPU for some of its functions. But LR doesn't support multiple GPUs: it does its computation on the same GPU that handles the output. For that, you need to connect the display to the passed-through GPU. Now, aside from Intel, I don't think any mainstream GPU actually supports SR-IOV, so you'd need to pass through the entire GPU to the guest VM (the host wouldn't see it at all anymore). This isn't how RemoteApp works, and I doubt WinApps handles this case.
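(If you want to check whether a given card supports SR-IOV at all, the kernel exposes this in sysfs. A minimal Python sketch, assuming a Linux host with sysfs at the usual /sys path; devices that lack the attribute simply don't support it:)

    import glob, os

    # Each PCI device that supports SR-IOV exposes sriov_totalvfs in sysfs.
    # A value > 0 means the device can be split into virtual functions.
    for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
        vf_path = os.path.join(dev, "sriov_totalvfs")
        if os.path.exists(vf_path):
            total = int(open(vf_path).read().strip())
            print(f"{os.path.basename(dev)}: up to {total} virtual functions")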

I remember a project (Looking Glass?) that tried to somehow "bring back" the output to the host machine, but it didn't seem too robust at the time. I haven't followed it, so I have no idea if it's any better now, or if it's still alive. If it is, this could possibly work if you had two GPUs (which I happen to have, since my CPU has an integrated GPU). But you'd still get the whole Windows desktop of the VM, not an RDP connection.


There's a lot of servers running Linux that are regularly targeted by malware.

There is a big difference in what software a desktop user runs versus what runs on a server, but the great thing about Linux is that you can put just as much distance between your install and the average desktop user's.

Your best bet for security is probably running OpenBSD, but within Linux, if you avoid common optional applications and services like GNOME, KDE, PulseAudio, systemd, etc., you'll present a significantly different attack surface. Avoiding Python and Node package managers and sticking to your distribution's package manager would be great, too.


Thanks, and that probably is a good security posture, but having to stop using everything good and switch to OpenBSD is exactly what I want to avoid!

Not that OpenBSD isn't good, it's just different priorities.

Better spread the Linux word, because with enough users more developers will be attracted, and the race between good and bad hackers in OSS will be won by the former. "Nothing is hidden under the sun." Closed source is made to push malware secretly.

> Closed source is made to push malware secretly.

That is factually incorrect flamebait. Closed source is made primarily due to a desire to retain control. While one can use control for malicious reasons, the predominant use is to make money.


You overestimate how influential HN is. Everyone on HN is talking about moving to Linux. Which means, uh, nothing really changed for the general public.

Folks on Reddit and Hacker News aren't normal people. Outside of this bubble few people have heard of Linux. Hell, so few people I know use Firefox, which makes me mad. You are safe from that fear.

Incidentally, anyone know what is going on with this image - “Cryo-EM map of a center slice of the ushikuvirus particle”: https://journals.asm.org/cms/10.1128/jvi.01206-25/asset/1357...

It’s one quarter of an image flipped horizontally and then vertically; you can see it in the patterns.

It’s a bit odd to do that? Shouldn’t it just be the original EM image?
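(For what it's worth, here's a rough way to check the mirroring yourself; a numpy sketch with a hypothetical filename, assuming you've saved the figure and loaded it as a grayscale array:)

    import numpy as np
    from PIL import Image

    # Hypothetical filename for the downloaded figure.
    img = np.asarray(Image.open("cryo_em_slice.png").convert("L"), dtype=float)

    def mirror_similarity(a, axis):
        # Correlation between the image and its mirror along the given axis;
        # values near 1.0 mean that side really is a reflected copy.
        b = np.flip(a, axis=axis)
        return np.corrcoef(a.ravel(), b.ravel())[0, 1]

    print("horizontal mirror:", mirror_similarity(img, axis=1))
    print("vertical mirror:  ", mirror_similarity(img, axis=0))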


https://www.sciencedirect.com/science/article/pii/S104784772... - there are similar results in this paper, too.

After a bit of digging, it looks like it's done to sharpen features, as one of the standard steps in producing these images. Where there are rotational symmetries in the things they're looking at, they focus on the smallest unit and then rotate accordingly. If you had threefold symmetry, or a hexagonal structure, they'd rotate 3 or 6 times around the center.

You're not getting a real image of the thing; the data from those other segments gets mixed in with the rotations, so you're getting a kind of idealized structure that makes the details being studied pop out. But if there's some significant deviation, damage, or non-symmetric feature, it'll still show up as well.

It's called "imposed symmetry" https://discuss.cryosparc.com/t/what-is-actually-occuring-wh...
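(To get a feel for what that imposed symmetry does, here's a toy 2D sketch with numpy/scipy; real cryo-EM pipelines apply this to 3D maps, so this is just an illustration of the averaging idea:)

    import numpy as np
    from scipy import ndimage

    def impose_symmetry(img, n):
        # Average n copies of the image rotated by 360/n degrees about
        # its center, reinforcing features with n-fold symmetry.
        acc = np.zeros_like(img, dtype=float)
        for k in range(n):
            acc += ndimage.rotate(img, 360.0 * k / n, reshape=False, order=1)
        return acc / n

    # Toy example: impose 6-fold symmetry on random noise; any feature
    # that isn't shared across the six segments gets averaged away.
    rng = np.random.default_rng(0)
    symmetric = impose_symmetry(rng.random((256, 256)), 6)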

Neat stuff, cool thing to catch!


So kind of like taking a picture of a human, and then taking each half, flipping along the midline, and blending to get an idealized Symmetrical Human?

Humans aren’t symmetrical though.

This would be more like zooming into one edge of a snowflake and then rotating it.


> Humans aren’t symmetrical though.

Perhaps you assumed a "radially" which wasn't part of my analogy? :p

Land animals have a pretty consistent trend of exterior bilateral symmetry which is very noticeable. (Naturally, a completely normal Hunam such as myself cannot speak for how it may work in places other than my home planet Dirt.)


I understood you meant bilateral symmetry. And yes, there are similarities, but we are not bilaterally symmetric. At least not to the extent where you can flip an image and have that look normal.

Even faces look weird when flipped that way (there have been studies on this effect too). And that’s before you get into the issue that it’s common to have differently shaped breasts, different sized hands or feet. Ears shaped differently. Non-uniform teeth. And so on and so forth.



According to this article, the image is computed rather than directly captured: https://www.chemistryworld.com/news/explainer-what-is-cryo-e...

I think that might just be the original and it simply is symmetrical to that degree. I found a few more examples of "cryo-em center slices" and I've yet to find one that doesn't have really strong symmetry down to the small dot patterns.

In a different paper, this figure shows a number of cryo-EM images, including a simulation, and they all show the same degree of pattern symmetry: https://www.researchgate.net/figure/Central-sections-through...

The first figure in this third paper also shows the same symmetry in the small patterns: https://journals.asm.org/doi/10.1128/jvi.00990-22


Thanks, those examples make it pretty clear.

I still think it’s super weird that it looks exactly like an EM image, but is generated. Anyway, good to know!


Rampant fraud in science papers has reached the point where hobbyists can point out obviously fake charts and graphics even in prestigious journals.

Publish or perish needs to end.


This isn't fraud ... see the informative comments nearby.

The majesty of nature.

Thank you for your service.


It does though. That’s a separate issue from the inevitable layoffs and any bugs introduced along the way, but he’s not wrong.


Speak for yourself. I think he's extremely wrong

I think if all you care about is the outcome then sure, you might enjoy AI coding more

If you enjoy the problem solving process (and care about quality) then doing it by hand is way, way more enjoyable


If you don’t care about the outcome, then all you’re doing is playing a video game.


Sure, but the headline wasn't "Google CEO says ‘vibe coding’ made software development ‘so much less like a video game.’" In fact since many people think video games are enjoyable, making software development less gamelike might make it less enjoyable.

(But would further gamification make it more enjoyable? No, IMO. So maybe all we learn here is that people don't like change in any direction.)


If writing code by hand is like playing a videogame, then vibe coding is like playing a slot machine

Argue about the value of video games all you like, I would still place them above slot machines any day


I think we’re mixing our metaphors here. What I mean is that, at the end of the day, you write code to get some result you actually care about, or that matters for some material reason. Work is labor. If you don’t care about that outcome or about optimizing for it, then you may as well play a video game or do code golf or something. What you want now is a hobby.


> If you don’t care about that outcome or optimizing for it,

I do care about the outcome, which is why the thought of using AI to generate it makes me want to gouge my eyes out

In my view using AI means not caring about the outcome because AI produces garbage. In order to be happy with garbage you have to not care


It depends on how you use it. I was running 15 agents at once, 12 hours a day for a month straight, because adding more was the more efficient way to work, and that wasn't very enjoyable. Now I'm back to writing code the enjoyable way, with minor LLM assistance here and there.


This is a weird article. How many times in your career have you been handed a grossly under-specified feature and had to muddle your way through, asking relevant people along the way and still being told at the end that it’s wrong?

This is exactly the same thing but for AIs. The user might think that the AI got it wrong, except the spec was under-specified and it had to make choices to fill in the gaps, just like a human would.

It’s all well and good if you don’t actually know what you want and you’re using the AI to explore possibilities, but if you already have a firm idea of what you want, just tell it in detail.

Maybe the article is actually about bad specs? It does seem to venture into that territory, but that isn’t the main thrust.

Overall I think this is just part of the cottage industry that’s sprung up around agile, and an argument, not well supported by anything, for that industry to stay relevant in the age of AI coding.


I sometimes wonder how many comments here are driving a pro-AI narrative. This very much seems like one of those.

The agent here is:

Look on HN for AI-skeptical posts. Then write a comment that highlights how the human got it wrong. And command your other AI agents to upvote that reply.


It has nothing to do with AI, the article is just plain wrong. You have to be either extremely dumb, extremely inexperienced or only working solo to not understand this.


A lot of negativity towards this and OpenAI in general. While skepticism is always good, I wonder if this has crossed the line from reasoned criticism into socially reinforced dogpiling.

My own experience with GPT-5 Thinking and its predecessor o3, both of which I used a lot, is that they were super difficult to work with on technical tasks outside of software. They often wrote extremely dense, jargon-filled responses that contained fairly serious mistakes. As always, the problem was/is that the mistakes were peppered in with some pretty good assistance and knowledge, and it’s difficult to tell what’s what until you actually try implementing or simulating what is being discussed, and find it doesn’t work, sometimes for fundamental reasons that you would think the model would have told you about. And of course, once you pointed these flaws out to the model, it would then explain the issues to you as if it had just discovered them itself and was educating you about them. Infuriating.

One major problem I see is that RLHF seems to have shaped the responses so they only give the appearance of being correct to a reasonable reader. They use a lot of the social signalling that we associate with competence and knowledgeability, and usually the replies are quite self-consistent. That is, they pass the test of looking like a correct response to a regular person. They just happen not to be one. The model has become expert at fooling humans into believing what it’s saying rather than saying things that are functionally correct, because the RLHF didn’t rely on testing anything those replies suggested; it only evaluated what they looked like.

However, even with these negative experiences, these models are amazing. They enable things that you would simply not be able to get done otherwise, they just come with their own set of problems. And humans being humans, we overlook the good and go straight to the bad. I welcome any improvements to these models made today and I hope OpenAI are able to improve these shortcomings in the future.


I feel the same: a lot of negativity in these comments. At the same time, OpenAI is following in the footsteps of previous American tech companies in making themselves indispensable, to the extent that life becomes difficult without them, at which point they are too big to control.

These comments seem to be almost an involuntary reaction where people are trying to resist its influence.


Precisely: o3 and GPT-5 Thinking are great models, super smart and helpful for many things, but they love to talk in this ridiculously overcomplex, insanely terse, handwavy way. When they get things right, it's awesome. When they confidently get things wrong, it's infuriating.


I was expecting something about how to protect your consciousness from (or during) AI use, but I got a short 200-word note rehashing common sentiments about AI. I guess it’s not wrong, it’s just not very interesting.


Yeah, I found it slightly ironic that an argument against using AI is made as an empty social-media-style post. AI could probably have written a better one.


it'd be worse, just longer


It is very interesting, because it tackles things people love to forget when using AI. A little over a decade ago there was a scandal over how big tech companies were using people's data; now people knowingly hand it to them via all kinds of bullshit apps. So you have to repeat the obvious over and over, and even then it won't click for many.


So wild to think Cambridge Analytica was a scandal worthy of congressional hearings. LLMs are personalized persuasion on steroids.


I still feel "weird" trying to reason about GenAI content or looking at GenAI pictures sometimes. Some of it is so off-putting in a my-brain-struggles-to-make-sense-of-it way.


To me the answer was fairly obvious: default to using your own thinking first.


More insane than specifically developing AIs to write software, creating competition from machines as well? As a group we’re not exactly rational.

