You already trust third parties, but there is no reason why that third party can't be the very same entity publishing the distribution. The role corporations play in attestation for the devices you speak of can be filled by an open source developer; it doesn't require a paid certificate, just a trusted one. Furthermore, attestation should be optional at the hardware level, allowing you to build distros that don't use it, though distros should use it by default, as they see fit of course.
I think what people are frustrated with is the heavy-handedness of the approach, the lack of opt-out, and the corporate-centric feel of it all. My suggestion would be not to take the systemd approach. There is no reason why attestation-related features can't be turned on or off at install time, much like disk encryption. I find it unfortunate that even something like Secure Boot isn't configurable at install time, with custom certs, distro certs, or certs generated at install time.
Being against a feature that benefits regular users is not good, it is more constructive to talk about what the FOSS way of implementing a feature might be. Just because Google and Apple did it a certain way, it doesn't mean that's the only way of doing it.
What skills are atrophying that would be useful in the future?
If you're letting LLMs do more than assisting, don't. That's my advice. But if, like your title says, they're just assisting you, then what skills are atrophying? You still review the code and understand it, right? You still second-guess the LLM's proposed solutions and look for better approaches, right?
Articulating how LLM assistance is different from junior programmers writing code and assisting would be useful. Everyone has different setups and workflows, so it's hard to say, in my opinion.
Where is that HN thread where everyone was saying how bad it would be for Biden to ban TikTok? Let's see how it can be used to manipulate the upcoming elections in the US this year. Except now they can't be banned; they're American!
This shouldn't have happened, but my advice is to have a backup of your PCs that can be easily restored, regardless of the OS. I've had boot device hardware failures, corrupted file systems, etc. with Linux and Windows alike (not yet with the little Mac, though; I assume it's just a matter of time).
Why don't they work the same way PCs do with UEFI and Secure Boot, where users decide which certificates go in as trusted roots, so they can install their own OS? I'm surprised there haven't been any antitrust suits over this by competing ROM makers.
There are almost endless reasons why. It's like asking why would you want a self-driving car. Having a drone to transport things would be amazing, or to patrol an area. LLMs can be helpful with object identification, reacting to different events, and taking commands from users.
The first thought I had was those security guard robots that are popping up all over the place. If they were drones instead, and an LLM talked to people, asking them to do or not do things, that would be an improvement.
Or a waiter drone that takes your order in a restaurant, flies to the kitchen, picks up a sealed and secured food container, flies it back to the table, opens it, and leaves. It would monitor for gestures and voice commands to respond to diners, get their feedback (or abuse), take the food back if it isn't satisfactory, etc.
This is the type of stuff we used to see in futuristic movies. It's almost possible now. Glad to see this kind of tinkering.
You could have a program, not LLM-based but possibly an ANN, for flying, and an LLM for overseeing; the LLM could give instructions to the pilot program as (x, y, z) directions. I mean, current autopilots are typically not LLMs, right?
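Roughly what I have in mind, as an untested sketch (ask_llm, the prompt format, and the gains are made up, not any real API): the LLM only emits coarse (x, y, z) direction requests, and a conventional, non-LLM controller turns them into motor outputs.

    import json

    def ask_llm(prompt: str) -> str:
        """Stand-in for whatever LLM API you use; assumed to return JSON text."""
        raise NotImplementedError

    def overseer_step(mission: str, telemetry: dict) -> tuple[float, float, float]:
        # The LLM reasons about the mission and current state, but only outputs
        # a coarse direction vector, never low-level motor commands.
        reply = ask_llm(
            "Mission: " + mission + "\n"
            "Telemetry: " + json.dumps(telemetry) + "\n"
            'Answer with JSON like {"dx": 0.0, "dy": 1.0, "dz": 0.0}.'
        )
        d = json.loads(reply)
        return d["dx"], d["dy"], d["dz"]

    def pilot_step(direction: tuple[float, float, float], telemetry: dict) -> dict:
        # Conventional pilot program (PID, ANN, vendor SDK...) turns the coarse
        # direction into control outputs; no language model involved here.
        dx, dy, dz = direction
        return {"roll": 0.1 * dx, "pitch": 0.1 * dy, "throttle": 0.5 + 0.1 * dz}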
You describe why it would be useful to have an LLM in a drone to interact with it but do not explain why it is the very same LLM that should be doing the flying.
I'm not OP, and I don't know what specific roles the LLM should play, but LLMs are great at object recognition and at using both text (street signs, notices, etc.) and visual cues to predict the correct response. The actual motor control, I'm sure, needs no LLMs, but the decision making could use any number of solutions. I agree that an LLM-only solution sounds bad, but I didn't do the testing and comparison to be confident in that assessment.
An LLM that can't understand the environment properly can't properly reason about which command to give in response to a user's request. Even if the LLM is a very inefficient way to pilot the thing, being able to pilot means the LLM has the reasoning abilities required to also translate a user's request into commands that make sense for the more efficient, lower-level piloting subsystem.
That’s a pretty boring point for what looks like a fun project. Happy to see this project and know I am not the only one thinking about these kinds of applications.
We don't need a lot of things, but new tech should also address what people want, not just needs. I don't know how to pilot drones, nor do I care to learn, but I want to do things with drones; does that qualify as a need? Tech is there to do things for us that we're too lazy to do.
You're considering "talking to" a separate thing; I consider it the same as reading street signs or using object recognition. My voice or text input is just one type of input. Can other ML solutions or algorithms detect a tree (the same as me telling it "there is a tree, yaw to the right")? Yes. Can LLMs detect a tree and determine what course of action to take? Also yes. Which is better? I don't know, but I won't be quick to dismiss anyone attempting to use LLMs.
I don't think you understand what an "LLM" is. They're text generators. We've had autopilots since the 1930s that rely on measurable things... like PID loops and direct sensor input. You don't need the "language model" part to run an autopilot; that's just silly.
You seem to be talking past them and ignoring what they are actually saying.
LLMs are a higher level construct than PID loops. With things like autopilot I can give the controller a command like 'Go from A to B', and chain constructs like this to accomplish a task.
With an LLM I can give the drone/LLM system a complex command that I'd never be able to encode for a controller alone: "Fly a grid over my neighborhood, document the location of and take pictures of every flower garden."
And if an LLM is just a 'text generator', then it's a pretty damned spectacular one, as it can take free-form input and turn it into a set of useful commands.
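A rough illustration of that (untested; the llm callable and the allowed command names are invented, not a real SDK): the LLM's only job is turning the free-form request into a structured plan, which gets validated before an ordinary controller executes it.

    import json

    # Hypothetical action vocabulary the flight controller actually understands.
    COMMANDS = {"goto", "photograph", "note_location", "return_home"}

    def plan_mission(request: str, llm) -> list[dict]:
        reply = llm(
            "Turn this request into a JSON list of steps. Allowed actions: "
            + ", ".join(sorted(COMMANDS)) + ".\nRequest: " + request
        )
        plan = json.loads(reply)
        # Validate before handing anything to the flight controller.
        for step in plan:
            if step.get("action") not in COMMANDS:
                raise ValueError(f"unsupported step: {step}")
        return plan

    # plan_mission("Fly a grid over my neighborhood and photograph every "
    #              "flower garden", llm=some_llm_client)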
They are text generators, and yes, they are pretty good, but that really is all they are: they don't actually learn, they don't actually think. Every "intelligence" feature from every major AI company relies on semantic trickery and managing context windows. It even says it right on the tin: Large LANGUAGE Model.
Let me put it this way: What OP built is an airplane in which a pilot doesn't have a control stick, but they have a keyboard, and they type commands into the airplane to run it. It's a silly unnecessary step to involve language.
Now what you're describing is a language problem, which is orchestration, and that is more suited to an LLM.
Give the LLM agent write access to a text file to take notes and it can actually learn (rough sketch below). Not really reliable, but some seem to get useful results. They ain't just text generators anymore.
(but I agree that it does not seem the smartest way to control a plane with a keyboard)
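A loose sketch of the note-file idea mentioned above (call_llm stands in for whatever model API is used); the "memory" is just text the agent reads back on the next turn, which is also why it isn't terribly reliable.

    from pathlib import Path

    NOTES = Path("agent_notes.txt")  # hypothetical notes file the agent may write to

    def run_turn(task: str, call_llm) -> str:
        notes = NOTES.read_text() if NOTES.exists() else ""
        reply = call_llm(
            "Your saved notes:\n" + notes + "\n\n"
            "Task: " + task + "\n"
            "After your answer, add lines starting with NOTE: for anything "
            "worth remembering next time."
        )
        # Append any NOTE: lines so they survive into future turns.
        with NOTES.open("a") as f:
            for line in reply.splitlines():
                if line.startswith("NOTE:"):
                    f.write(line + "\n")
        return reply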
My confusion, maybe? Is this simulator just flying point A to B? It seems like it's handling collisions while trying to locate and identify the targets. That seems quite a bit more complex than what you describe as having been solved since the 1930s.
LLMs can do chat completion, but they don't do only chat completion. There are LLMs for image generation, voice generation, video generation, and possibly more. The camera of a drone feeds images to the LLM, which then determines what action to take based on them. It's similar to asking ChatGPT, "There is a tree in this picture; if you were operating a drone, what action would you take to avoid a collision?", except the "there is a tree" part is done by the LLM's image recognition, and the system prompt is "recognize objects and avoid collisions". Of course I'm simplifying a lot, but it is essentially generating navigational directions in a visual context using image recognition.
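Very roughly, the loop I mean looks like this (ask_vision_llm and the action set are placeholders, not any real SDK): a vision-capable model looks at the latest camera frame and picks one coarse maneuver, and everything lower-level stays conventional.

    # Coarse maneuvers the lower-level flight code knows how to execute.
    ACTIONS = {"hold", "climb", "descend", "yaw_left", "yaw_right", "forward"}

    def avoidance_step(frame_jpeg: bytes, ask_vision_llm) -> str:
        action = ask_vision_llm(
            image=frame_jpeg,
            prompt="You are guiding a small drone. Identify obstacles in the image "
                   "and reply with exactly one of: " + ", ".join(sorted(ACTIONS)),
        ).strip()
        # Never trust free-form output blindly; fall back to a safe default.
        return action if action in ACTIONS else "hold"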
Yes it can be, and often is. Advanced voice mode in ChatGPT and the voice mode in Gemini are LLMs. So is the image gen in both ChatGPT and Gemini (Nano Banana).
"You don't need the "language model" part to run an autopilot, that's just silly."
I think most of us understood that reproducing what existing autopilots can do was not the goal. My inexpensive DJI quadcopter has impressive abilities in this area as well. But I cannot give it a mission in natural language and expect it to execute it. Not even close.
People keep forgetting that it's possible to legally migrate, work for a while, and so on, and then "become illegal" due to deadlines or administrative issues.
An example every tech worker should understand is H-1B, where, as an added bonus, your employer can make you illegal.
The migration was legal. You're not an "illegal" when you drive with an expired license, are you? So quotes are appropriate when using the term as a label instead of a verb.
> you're not an "illegal" when you drive with an expired license are you?
You are. Why do you think licenses have expiration dates? A license legally authorizes you to perform a specific activity within a specific timeframe. Any such activity without a license is illegal.
By the same logic, you can't stay in the house you legally rented previously.
No, you're being intentionally oblivious to justify something negative. You have done something illegal; that is not in contest. But people are not labeled "illegal" when their license expires. They're called out based on the specific thing they did. You can say "this person migrated illegally"; that's different from saying "this person is an illegal", as if their very existence and presence were illegal. That's the insinuation, and you're intentionally avoiding it. The fact that migrating illegally is indeed illegal, and that illegal migrants, or those who stay here illegally, must be removed, has not been contested by anyone serious. You're advocating a stance that goes further than that and dehumanizes these people. You should look up the videos of wailing children in detention camps with no heating in winter, migrants being strangled to death and beaten in black sites, even US citizens being abducted and removed from the country - that's the propaganda you're supporting by claiming the people themselves are illegal, as opposed to having committed something illegal and needing to face lawful consequences. You don't need to be cruel and inhumane to enforce the law (in fact, this is specifically prohibited by the constitution). There are people who enjoy and revel in the inhumanity and cruelty; I hope you're not on that side of things. It might cost a lot, but it is reasonably possible to locate, lawfully process (courts/lawyers), and remove every person that is not present in the US lawfully.
> By same logic you can't stay in the house you legally rented previously.
If you did, you'd be called a squatter, not "an illegal". Even squatters who take over someone's home have rights. Everyone gets due process. Your foolishness is thinking that because they're migrants, however they're treated won't affect you. I don't care what demographic group you're in; you'll be called an illegal soon enough. Words matter; the whole law is just a bunch of words.
Normally I wouldn't dignify the emotional word salad with a response, but it is important to state a few things.
You conceal substance beneath a pile of semantic shenanigans. If someone stays in the country illegally, their presence in the country is illegal, and law enforcement on that matter is warranted. You can call them saints if you like; it still doesn't make their presence legal. It doesn't matter whether they entered the country legally and overstayed their visas, or plainly entered the country illegally. No matter how much the leftist media make emotional appeals and frame it as a "child dying", or any other sort of manipulation you are trying to parrot as well - it remains illegal.
There are NO US citizens detained or "abducted" by ICE, provided they comply with due procedure for establishing their legal status. You are lying. There are possibly cases where ICE had to do checks on people who decline to confirm their status, which warrants further investigation.
I appreciate your concern about me being called illegal, but let me assure you I am totally fine and will be totally fine, even though I am not a US citizen.
They're not just going after the so-called "illegal aliens", something made clear after the numerous extrajudicial killings by ICE officers recently, such as the one that occurred yesterday.
Lol, no, guns don't just magically go off when in a holster. Yes, mechanical failures do happen, but they require very specific types of impact in very specific ways that cannot happen in a holster and are so rare as to happen on decade timescales across tens of thousands of the gun. Also, I saw zero evidence of that guy's gun going off in the video; the first shot heard comes out of the ICE goon's gun, which he is pointing at that guy, and he then mag dumps him while he is on the ground.
The Sig Sauer P320, which is what Alex Pretti had, is notorious for unintentionally discharging. Various law enforcement agencies and militaries have stopped using it for that reason.
"the firearm may discharge when it is dropped and the back of the slide hits the ground at a 33-degree angle"
That is pretty hard to accomplish while it's in a holster, unless the guy was suplexed and his entire spine turned to jello, giving the gun a multi-foot uncushioned drop.
"misfire was due to "a partial depression of the trigger by a foreign object combined with simultaneous movement of the slide"
Which is irrelevant when in a shielded holster like this guy has.
On top of all this, even had the gun gone off, which I have found zero evidence to support, how would that guy know whose gun went off to start with? Guns don't light up with a bunch of LEDs to show you they have been fired. If you aren't staring directly at the gun, which isn't really possible in the scenario that played out, you wouldn't know whose gun went off. And even if someone was staring at the gun and saw it go off, how does a holstered gun that nobody is holding represent any sort of threat? You think the guy is controlling his gun with his mind powers?
I don't even know why I'm bothering to argue with you, because this entire thing is ludicrous. I find it hard to believe you have watched any video of the incident at all and came to this conclusion.
If it misfired, it likely misfired as it was being taken, not while in his holster.
If you're detaining someone who has a gun and a gun goes off, it's incompetent, maybe negligent, but not murder, to react by shooting the guy who had the gun.
I don’t think anyone can draw definitive conclusions from the videos.
How is that not murder? In your scenario the guy is still innocent and he was shot to death because of ICE being scared by their own incompetence. If someone claps their hands and I reflexively mag dump you on the street, am I not guilty of murder?
Obviously because murder requires intent. It might be negligent homicide though.
There’s a big difference between someone randomly clapping their hands and an agent seeing/hearing that a detainee has a firearm, then hearing the firearm discharge as they’re struggling to restrain him.
> If someone claps their hands and I reflexively mag dump you on the street, am I not guilty of murder?
Comparing hearing a clap to a GUNSHOT is wild.
Ninety-nine percent of people, including you and everyone on HN, if involved in a scuffle with an aggressive armed man, would respond to a sudden gunshot by shooting the armed guy.
We’re talking about the restrained guy who had been trying to help a woman and not once during the whole encounter had a gun in or near his hands? No, I would not murder that man, and I hope others wouldn’t either.
The guy that was trying to physically interfere with an arrest, and that was now resisting arrest, that you were fighting with, and had a gun near his left/our right hand?
Yes you would respond to sudden gunshots with gunfire.
You are surrounded by people with guns; it could have been any one of them that took a shot at something else. It is a pretty massive leap to assume the guy being manhandled on the ground is the one shooting. That close to a gunshot, you would have no idea where the sound came from unless you directly saw the gun firing, and if they did, they would know it wasn't the guy without a gun in his hand.
The person who starts shooting him has full visibility of the gun the entire time.
Even if he doesn't realize it is a misfire, why would he believe that it was Pretti who shot? How can you reasonably believe a dude that is dogpiled with a gun not in his control is the shooter?
Again, the officer that begins the shooting can literally see Pretti is disarmed. He has no gun. He watches the other agent take his gun off of him.
A more reasonable take in that situation would be thinking that some other protestor has decided to start shooting at them, not that the guy dogpiled by a half dozen agents and visibly fuckin' disarmed is the one doing it.
I am not a gun control person. I think we'll never realistically get guns away from criminals, and as long as that's the case, law-abiding citizens should be allowed to have firearms to be on even footing. Full stop.
But if we can't hold our law enforcement agencies, however nominal in nature they are, to high enough standards that they don't create the entire situation that causes them to kill someone who was never a threat to them, then they shouldn't be armed. Because we can't trust them not to slaughter US citizens.
Well, that's an interesting take. Even if a holstered weapon did discharge (no idea how likely this is for the specific weapon in question), why would someone suspect they are being fired at by a person with a holstered weapon? Poor/no training is the most charitable explanation.
The only person suggesting the gun went off while holstered was the sibling comment by ‘AngryData’. After ICE discovers the gun and yells “Gun! Gun!”, the Sig discharges into the ground (visible in some of the videos) before he is shot three times.
You saw the videos: the guy only had a phone in hand, he got tear gassed, pinned to the ground, and then they unloaded their guns on him. Stop lying about what you saw, or we'll start to believe you're actually pro-murder.
The phone was in his right hand (our left) and the gun was holstered near his other hand. The gun went off into the ground, as P320s are known to do, when they removed it from him, and the officers reacted.
It's fascinating how Trump voters are able to reshape their reality to fit the Party's official line. All these years I thought Orwell was exaggerating...
It's usually the balance and middle ground that is most beneficial. You can't deny the value LLM code generation and research provide. But the extremes of using only or mostly LLMs, or not using LLMs at all, are self-harming.
So far, LLM-generated code hasn't lived up to my standards. I'll use it for things that aren't critical as-is, but mostly I use it as a reference, an example, a starting point. Essentially, where in the past I'd find a code base that does something and try to do something similar, now I let the LLM generate the code base. There are two questions it helps me answer:
1) What are the possible ways of solving problems?
2) What are the pros and cons of each approach?
That said, there are people successfully deploying apps that are entirely vibe coded. How many fail or succeed, I don't know. But there are enough, and you can't deny the evidence.
This is what I find amusing: we're in tech, how can you ask that? We used to rub stones together to start fires too, but if you stick to that and refuse to use electricity, that's self-harming because you lose out on all the benefits.
You can't keep doing things the same way; that's not how technology as an industry works. The whole point is to come up with newer and better things. If you're a user of technology, then you can think like that and keep using old tech until the natural balance of things forces you.
I fully expect LLMs to be obsolete in a few decades, and I'm now wondering if people then will say how LLMs have been serving them fine for a few decades.
"just fine" isn't good enough in tech, "better" is always the goal.
> that's self-harming because you lose out on all the benefits.
This is some big stretch in reasoning. Four years ago we were not harming ourselves by not having AI and writing code ourselves.
At most you're hurting your ability to compete with "vibe coders", whose metric so far has been lines of code and $ spent per day running agents, not successful products.
Have you considered how relying on AI affects your own programming skills?
I think you have to keep in mind that what you write is more important than how you write it. Can you write better programs with this tool or not? All the time I used to spend on Stack Overflow, asking people questions, scouring bad documentation and source code - now AI can do all that for me. Now, if you're a 10x engineer or whatever the term is these days, and you're telling me you have no need for research, for PoC code, for critical analysis of your code, then more power to you. But most people aren't in that boat.
> Have you considered how relying on AI affects your own programming skills?
Yes, about as much as it did before, but now I have more time to think more deeply about solutions instead of cosmetic things, syntactic nonsense, tedious b.s., etc. It's like asking how using an IDE with auto-complete affects your programming skills when you could be writing in vim or Notepad. Doing tedious things, knowing how to beg right on the correct forum, or hunting down the best library, the best doc, etc. - these are not programming skills. Like I said in my original comment, I'm not using it to write the code for me; I'm using it to do things that have nothing to do with solving the problem at hand.
With AI's assistance, you can become a better problem solver. Heck, you might even get good at spotting mistakes other people make, just by force of habit learned from correcting the mistakes AI makes. It's just a utility: take all the benefits from it and discard the things you don't like about it. Nothing forces you to use the code it generates. Think of it as a junior programmer doing PRs for menial tasks for you, and that's the best-case scenario. It's just a really good Google search that gives you the results you're looking for instead of making you jump through hoops to get something of lesser quality. Many times I've mistrusted the AI and done things the old way, and its solution contained more nuance, details, and considerations than I managed to glean with my initial attempts.
Before AI, people copy-pasting things from Stack Overflow, including bugs, and including good code that gets buggy in certain situations, was a major trend. AI does that, except better, both in terms of result quality and in helping you avoid pitfalls. But IMHO you still shouldn't trust it; you still have to vet everything it does and stop using it if that effort takes more time than doing things manually would have.
I don't understand this; it's actually baffling. Why was the question being asked to begin with, let alone a whole post being made about it? If they get a legal request from a law enforcement agency of any country they operate in, they either comply or see executives in prison.
Is how BitLocker works perhaps not well known? I don't think it's a secret. The whole schtick is that you get to manage Windows computers in a corporate fleet remotely, and that includes being able to lock out or unlock volumes. The only other way to do that would be for the person using the device to store the keys somewhere locally, but the whole point is that you don't trust the people using the computers; they're employees. If they get fired, or if they lose the laptop, them being the only people who can unlock the BitLocker volume is a very bad situation. Even that aside, the logistics of people switching laptops, the help desk getting a laptop and needing to access the volume, and similar scenarios have to be addressed. Nothing about this and how BitLocker works is new.
Even in the safer political climates of pre-2025, you're still looking at prosecution if you resist a lawful order. You can fight gag orders, or the legality of a request, but without a court order to countermand the feds' request, you have to comply.
Microsoft would do the same in China, Europe, the Middle East, etc. The FBI isn't special.
Sure, I don't disagree, but that isn't what this discussion is about. It's about a lawful, publicized request. With Microsoft, the feds don't need any leverage; they can just use a FISA order and force you to keep it a secret. Their leverage is federal prison.