> 4 core Skylake (Mac 2016) would be well beyond human capabilities
Not if the computer's time limit is set at 15 microseconds. It's not a question of whether the computers have "enough power"; just whether they are more powerful now than they were previously.
And yes, obviously that's a very sloppy and error-prone way to implement a difficulty control.
I'm also over 6' and I don't understand the problem? The seats only recline a few degrees, it's not like they're lying in my lap! Even fully reclined there's plenty of space in front of my face, and leg room is barely impacted at all. (Like probably an inch max?)
Granted, I've only flown American and Delta, maybe other airlines are worse in this respect?
I'm 6'4" with a lot of my height in my legs. Sitting comfortably (not slouching, mind you), my knees already barely rub against the seat in front of me. As soon as that seat is reclined, my knees get crushed and I have to either sit up even straighter or twist to the side, neither of which are comfortable. Or, I have to pay to be in a higher fare class with more space.
Have you tried the exit row instead? Sure, you might have to agree to help others, but if you aren't willing to do that regardless of the row, then that just says a lot about you.
Yep, I generally will try for the exit row or the first row in a section (sacrificing under-seat storage, since there's no seat in front), but those tend to be the first seats booked. Since I'm usually traveling with multiple other people and we prefer sitting together, it makes it pretty difficult to reliably select those seats with extra leg room. I haven't seen any airlines that charge "+$25 for the extra leg room" on 12+ hour international flights, but if they exist I'd love to know which ones they are!
It's been a while (2017-ish), but I used to book flights for a team of photographers that traveled a lot. They all had their individual preferences for aisle/window, exit row. Maybe it was because they all had lots of butt-in-chair miles, but their upgrades were typically $25 for domestic US travel. Maybe I'm conflating that as the price for everyone when it was really the price for their status level only?
The physical requirements are an issue for a lot of people. E.g. a tall senior citizen, anyone flying with a small child, anyone with a visible disability (temporary or otherwise).
I know American at least has some rows with extra leg room that aren't the exit row. (Though obviously if you want more space you have to pay for it.) Not sure about others.
Yes, it's usually called "premium economy" or something like that. I was resistant for a long time, but eventually decided that being able to walk the next day without pain was worth the extra cost. That said, they tend to fill up quickly -- so not always an option.
Many airlines don't let you choose your seat without paying extra. But yeah, maybe if you're that tall that's just an unfortunate extra cost you have to bear.
At some point you have to do the math. Is +$25 for the extra leg room worth it for a 3 hour flight? 6 hour flight?
I flew from DFW to Sydney on a flight that was not fully booked. They made an announcement for a $150 upgrade to have an entire row to yourself. Once in the air, all of the armrests could be raised to allow you to lay flat. $150/17hours ~= $9/hour for a comfortable-ish sleep on a long haul flight. That's better math than the app subscription model threads have.
Those few degrees matter if your knees are already brushing the back of the seat in front of you. It matters how tall you are, how much of that is in your legs, how big your feet are (the more you need to bend your knees, the higher they will be), and it also varies depending on seat design and layout.
For others like me, one trick is to at most minimally use the under seat storage: small handbags only. No backpacks, briefcases, or anything else big enough to hold a laptop. Then, you can put your feet in that space. This lowers my knees by 1-2 inches depending on the plane, which really matters. It's the only thing that helps significantly, aside from paying for premium economy. Doesn't help with the claustrophobia, but there's not much to be done about that.
The other things I've tried (that don't reliably work) are leaning forward from the seat back (to pull my knees back) and slouching slightly (so that the inevitable recline compresses the seat back into my knees rather than bashing them). The former saves my knees, but sacrifices my back. The latter kind of helps during the flight, but walking will still hurt the next day.
> one trick is to at most minimally use the under seat storage [...] Then, you can put your feet in that space
Oh, interesting. I've always done that, it never really occurred to me that others might not. Even if you have a bigger bag you can always take it out during the flight to make space for your feet. That, plus crossing my legs allows me to have my legs flat against the chair (and therefore my knees well below the level where the person in front reclining would make much difference).
Well, it can be annoying to limit oneself to a smaller under-seat bag. Taking the bigger bag out during the flight uses up even more of the available space. I've generally got nowhere to put it except behind my legs (which cramps things a lot), and keeping it on my lap doesn't work if I want to actually use anything in that bag.
It's easier to just pack my laptop (plus anything I might use during the flight) in my overhead bin carry-on. It's a real pain to actually get anything out of there, but a paperback book or ebook reader will fit in a coat pocket or small handbag -- and that's all I truly need on the plane. Plus, the airline won't be able to force you to check your overhead carry-on that way since the laptop has lithium batteries in it.
Why does leg length matter? Reclining doesn't impact leg room much since only the upper part of the seat is moving backwards any significant distance, and the space under the seat where my feet go is completely unaffected.
Are your legs so long you have to sit with your knees pressed against the back of the seat in front of you or something? If so I suppose that's understandable.
"Are your legs so long you have to sit with your knees pressed against the back of the seat in front of you or something? If so I suppose that's understandable."
Yes. And also, for people with long legs seated in a typical airline seat, the knees sit significantly higher than the top of the seat cushion, so they get caught in the sweep of the reclining seatback ahead.
My legs are long enough there isn't room for them to press against the back of the seat. I'm either manspreading into the crevices between seats or in foetal position with my knees halfway up the seat in front of me. A person reclining is excruciating in the former position, but in the latter at least the person in front can't recline, as there's no physical space for my body to become more compact. Flying is hell.
Yes, my knees often/always bump into the seat in front of me, even without it being reclined. If/when it is reclined it means my knees are pressed harder backwards.
When I can, I pay for extra leg room or get an aisle seat.
My strongly held opinion is that seats should not be reclined. It is inconsiderate.
> I agree that sounds frustrating. Respectfully though, it sounds like you're a special case
It would be interesting to know the numbers on this. Height alone won't tell you the answer, though, as people of the same height have wildly variable limb lengths.
I know half a dozen people who have the same issue and they vary from 1.9-2.1m tall.
I think once you get past the 95th percentile in any metric like that things start to get more difficult. I'm not even that tall and I sometimes have trouble finding pants that fit me. I imagine there are probably similar difficulties on the other end of the spectrum being below the 5th percentile.
I used to have so much trouble with pants (I need a 30x34 in inches: about a 76cm waist and an 86.4cm inseam). No store had that size. I once got to the point where I considered leaning into my Scottish heritage and just wearing a kilt.
The internet has alleviated that for me, but if it hasn't for you -- look for pants with a large hem, and learn some basic sewing skills. It's occasionally possible to add an inch or more of length with the right pair.
Sure, my femurs are longer than most people's, but they are with me on _every_ flight I take.
So it is kind of frustrating when people like those in this thread explicitly say "I do not care, I will recline my seat, it is not my problem if someone else suffers, they are just being entitled".
Everything on a ZFS/BTRFS partition with snapshots every minute/hour/day? I suppose depending on what level of access the AI has it could wipe that too but seems like there's probably a way to make this work.
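A minimal sketch of the snapshot side, assuming a ZFS dataset named tank/work and root privileges (the dataset name and schedule are invented for illustration):

    import datetime
    import subprocess

    DATASET = "tank/work"  # hypothetical dataset the agent works in

    def take_snapshot():
        # ZFS snapshots are cheap copy-on-write markers, so taking one
        # every minute from cron or a systemd timer is practical
        name = datetime.datetime.now().strftime("auto-%Y%m%d-%H%M")
        subprocess.run(["zfs", "snapshot", f"{DATASET}@{name}"], check=True)

    def rollback(snapshot):
        # -r also destroys any snapshots newer than the target, which is
        # effectively the "revert filesystem state to time x" button
        subprocess.run(["zfs", "rollback", "-r", f"{DATASET}@{snapshot}"], check=True)

The catch is that the snapshots (and the zfs command itself) need to sit behind privileges the AI doesn't have, or it could wipe them along with everything else.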
I guess it depends on what its goals at the time are. And access controls.
It may just trash some extra files due to a fuzzy prompt, or it may go full psychotic, decide to self-destruct while looping "I've been a bad Claude", and intentionally delete everything, or even the partitions, to "limit the damage".
A "revert filesystem state to x time" button doesn't seem that hard to use. I'm imagining this as a potential near-term future product implementation, not a home-brewed DIY solution.
Reverting the filesystem state to a point in time is VERY complicated to use if you are reverting the whole filesystem. A granular per-file revert should not be that complicated, but it needs to be surfaced easily in the UI, and people need to know about it (in the case of Cowork I would expect the agent to use it as part of its job, so it would be transparent to the user).
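For what it's worth, on ZFS the per-file case really is simple, because every snapshot is exposed as a read-only directory tree under the dataset's mountpoint. A rough sketch (mountpoint, snapshot name, and path are invented):

    import shutil

    MOUNTPOINT = "/tank/work"        # hypothetical dataset mountpoint
    SNAPSHOT = "auto-20240101-1200"  # hypothetical snapshot to restore from

    def restore_file(relative_path):
        # ZFS exposes snapshots read-only at <mountpoint>/.zfs/snapshot/,
        # so a per-file revert is just a copy, no whole-filesystem rollback
        src = f"{MOUNTPOINT}/.zfs/snapshot/{SNAPSHOT}/{relative_path}"
        shutil.copy2(src, f"{MOUNTPOINT}/{relative_path}")

The hard part is exactly what you say: surfacing this in the UI so people know it exists.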
What makes you think Meta will do a better job of policing this than the actual police?
Granted, they might be able to crack down harder in some respects since unlike the police they don't have to worry about due process, but is that really "doing a better job" on balance?
Because it'd give companies like Meta a strong incentive to stop profiting from it, or else people start getting fined/jailed.
It also addresses the problem at (what is often) the source. Police in Ireland don't have the ability to march into Facebook's server rooms and start removing posts, so requests have to be made to Meta anyway, which takes additional time. Making Facebook clean up their own mess directly would mean cutting out the middle man and all the red tape and hoops police have to jump through to get them to take action.
> What makes you think Meta will do a better job of policing this than the actual police?
Meta announced they will stop political/electoral advertising in the EU, so this is proof that Meta can do it; we just need to force them to act. Otherwise Meta makes money from all the scams in the ads that are published. In fact, I remember Meta looked into the scam problem and decided to stop looking, since solving the problem would reduce their profits.
Now, in case all those well-paid engineers at Meta can't find a solution, here is an idea I had just by thinking about it for a few seconds (roughly sketched in code after the list); those geniuses should be able to find better ones if they want.
1. When a scam ad is reported, block that account and its ads.
2. Before the ad is published, have an AI scan it; if it looks to be related to politics, crypto, or other scam-friendly domains, have someone review it. Do not allow fresh accounts to publish this kind of shit without a human review.
3. For Facebook content, when someone shares fake shit, like a proven fake document or a scam or faked video, block the account and then notify all the people that liked or shared the scam that they were scammed/tricked... When your users get 10 daily notifications of "You are an idiot, you shared this fake shit", you might realize you should do something about scams, or users will stop engaging with your stuff.
I am talking here about provable scams and fakes, so not about some gray area. I mean scams, faked videos/images/documents, etc.
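Here is that rough sketch of the review gate in steps 1 and 2; the classifier, account fields, and thresholds are all made up for illustration:

    from dataclasses import dataclass

    RISKY_TOPICS = {"politics", "crypto", "investment"}

    @dataclass
    class Account:
        age_days: int
        banned_for_scams: bool

    def classify_topic(text: str) -> str:
        # stand-in for the AI scan; a real system would call a model here
        for topic in RISKY_TOPICS:
            if topic in text.lower():
                return topic
        return "other"

    def triage_ad(ad_text: str, account: Account) -> str:
        if account.banned_for_scams:
            return "reject"        # step 1: reported scammers stay blocked
        if classify_topic(ad_text) in RISKY_TOPICS:
            if account.age_days < 30:
                return "reject"    # fresh accounts can't run risky ads at all
            return "human_review"  # step 2: a person looks before it runs
        return "publish"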
These are public posts we're talking about, right? Or are we saying Meta should be cracking down on content in private communication too? (And if so, isn't that the same concern I just mentioned about due process, but for privacy instead?)
If we're talking about profiting from fraud, then we're talking ads. Those are semi-public: you only see them if you fall within the targeting bucket, which definitely wouldn't include "law enforcement officer".
The problem is that on Windows or your typical Linux distro "how much you trust" needs to be "with full access to all of the information on my computer, including any online accounts I access through that computer". This is very much unlike Android, for example, where all apps are sandboxed by default.
That's a pretty high bar, I don't blame your friend at all for being skeptical.
Right, which goes back to the main point; "total control of your computing environment" fundamentally means that you are responsible for figuring out which applications to trust, based on your own choice of heuristics (FOSS? # of downloads/Github stars? Project age? Reputation of maintainers and file host? etc...) Many, maybe most people don't actually want to do this, and would much rather outsource that determination of trust to Microsoft/Google/Apple.
> Right, which goes back to the main point; "total control of your computing environment" fundamentally means that you are responsible for figuring out which applications to trust, based on your own choice of heuristics
Hard disagree. Total control of my computing environment would be to allow an application access to my documents, a space to save a configuration, perhaps my Videos folder or even certain files in that folder. Or conversely, not.
At the moment, none of the desktops give me the ability to set a level of trust for an application. I can't execute Dr. Robotnik's Ring Run (or whatever the example was) and specify what it can, or cannot, access. There may be a request for permission at the system level, but that could be explained away, as it usually is for iOS apps and Android when requesting some scary-sounding permission groups.
And it also doesn't stop malware from accessing my documents. Sometimes my Mac asks if an application is allowed to access Documents, but it isn't consistent.
> they are hidden away inside the settings, and they are not granular.
The switches default to off though, with a prompt on first attempt at accessing the protected resource.
The problem is that they're leaky like a sieve, and how the permission model and inheritance work is unclear (I once had the Terminal app ask me for permission - does that now mean anything I run from the terminal automatically inherits it? - and so on).
Perhaps in the generic sense that most things a business does are to build up market share that's true. But Costco isn't selling hot dogs for $1.50 in an effort to capture a large share of the hot dog market.
The top comment of that thread points out exactly the same thing this Cloudflare article does: that there doesn't really seem to be any indication this was anything nefarious.
They ban users responsible for misusing the tool, and refer them to law enforcement when appropriate. The whole point of this article is to say that's not good enough ("X blames users for [their misuse of the tool]") implying that merely making the tool available for people to use constitutes support of pedophilia. (Textbook case of appealing to the Four Horsemen of the Infocalypse.) The prevailing sentiment in this thread seems to be agreement with that take.
Making the tool easy to use and allowing it to just immediately post on Twitter is much different than simply providing a model online that people can download and run themselves.
If you are providing a tool for people, YES you are responsible to some degree.
Think of it this way. I sell racecars. I'm not responsible if someone buys my racecar and then drinks and drives and dies. Now suppose I run an entertainment venue where you can ride along in racecars. One of my employees is drunk, and someone dies. Now I am responsible.
In, like, an "ask a bunch of people and see what they think" way. Consensus. I'm not talking legality because I'm not a lawyer and I also don't care.
But I think, most people would say "uh, yeah, the business needs to do something or implement some policy".
Another example: selling guns versus running a shooting range. If you're running a shooting range then yeah, I think there's an expectation you make it safe. You put up walls, you have security, etc. You try your best to mitigate the bad shit.
Misuse in this case apparently doesn't include harassing adult women with AI-generated porn of them. "Oh, we banned the people doing this with children" doesn't cut it, in my mind.
As of May, posting AI-generated porn of nonconsenting adults is a federal crime [1], so I'd be very surprised if they didn't ban users for that as well. The article conflates a bunch of different issues, which makes it difficult to understand exactly what is and is not being talked about in each individual paragraph.
Any mention of Musk on HN seems to cause all rational thought to go out the window, but yeah I wonder in this case how much of this wild deviation from the usual sentiment is attributable to:
1. Hypocrisy (people expressing a different opinion on this subject than they usually would because they hate Musk)
vs.
2. Selection bias (article title attracts a higher percentage of people who were already on the more regulation, less freedom side of the debate)
vs.
3. Self-censorship (people on the "more freedom, less regulation" side of the debate being silent or not voting on comments because in this case defending their principles would benefit someone they hate)
There might be other factors I haven't considered as well.
Been thinking about this more, and regarding #1 I wonder if perhaps part of what we're seeing is that a significant number of people just weren't thinking in terms of principles to begin with. (Probably most people in fact; it's not really something that comes naturally with system 1 thinking.)
We see stories on HN about companies forcing guardrails on the models they release to the public, see a bunch of people in the comments talking about how terrible that is, and think "cool, looks like the majority has a principled stance in favor of open models without guardrails". But really only a small percentage of commenters were thinking in those terms. What most actually support is just the idea of themselves and people they like getting access to open models without guardrails. When a different story comes along about a company not imposing those guardrails and people they don't like doing bad things with that freedom they have a completely different opinion.
You could call it hypocrisy, except it's not really hypocrisy to go against principles you never had to begin with.
It feels a little snobbish to talk about it this way so I'll inject a bit of humility by adding I'm probably guilty of this too sometimes. Like I said, it's a natural product of system 1 thinking. But it's probably healthy to give people a little grief over this, because having consistent principles is important to a well-functioning society.
Gee, I wonder why people would take offense at an AI model being used to generate unprecedented amounts of CSAM from real children, or objectify millions of women without their consent. Must be that classic Musk Derangement Syndrome.
The real question is how can the pro-Musk guys still find a way to side with him on that. My leading theory is that they're actually pro-pedophilia.
That's fundamentally what LLMs are, an imitation of humanity (specifically, human-written text). So if that's the line, then you're proposing banning modern AI entirely.