If they were using this compression for storage on the cache layer, it could let them keep more videos closer to where they serve them, but they'd decode them back to webm or whatever before sending them to the client.
I don't think that's actually what's up, but I don't think it's completely ruled out either.
That doesn't sound worth it: storage is cheap and encoding videos is expensive. Caching videos in a more compact form but having to rapidly re-encode them into a different codec every single time they're requested would be ungodly expensive.
The law of entropy appears to hold for TikToks and Shorts: the content becomes so generic that it all merges into one. It would make sense to take advantage of this.
Tried 3/4 of the tools, and none helped me reattach neovim.
Ended up using dtach. It needs to be run ahead of time, but it's a very direct and minimal stdin/stdout piping tool that's worked great with everything I've thrown at it.
https://github.com/crigler/dtach
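For anyone curious, the basic workflow looks something like this (the socket path is just an example name; dtach's real modes are -c to create, -a to attach, and -A to attach-or-create):

```shell
# Start nvim inside a detachable dtach session (run this ahead of time)
dtach -c /tmp/nvim.sock nvim

# Detach with the default key (Ctrl-\), then later reattach:
dtach -a /tmp/nvim.sock
```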
Whenever I use LLM-generated content, I get another LLM and pre-bias it by asking whether it's familiar with common complaints about LLM-generated content. Then I ask it to review the content, identify those patterns in it, and rewrite it to avoid them. Only after that do I bother to give it a first read. That clearly didn't happen here. Current LLM models can produce much better content than this if you do that.
There's also dynamodb-fips.us-east-1.amazonaws.com if the main endpoint is having trouble. I'm not sure if this record was affected the same way during this event.
> Initially, we were unable to add capacity to the metadata service because it was under such high load, preventing us from successfully making the requisite administrative requests.
It isn't about spinning up EC2 instances or provisioning hardware; it's about logically adding capacity to the system. The metadata service is a storage service, so adding capacity necessitates data movement. A lot of things need to happen to add capacity while maintaining data correctness and availability (mind you, at this point it was still trying to fulfill all requests).
I agree that financial viability is critical to the long-term prospects of a technology. It must deliver an ROI above other options. I'd recommend getting off the sidelines and jumping in to see what's happening. At the least, you'll have another perspective to inform your position. It's a pretty minimal investment to try it out.
You’re right to think that I probably do sound more like a cautious observer than I actually am. For what it’s worth, I’ve been experimenting with AI tools on the side (mostly in coding and writing workflows), and I’m planning to dive deeper soon, especially around integrating agents into my SaaS.
The post was more about the hype and attention surrounding AI, which can feel mentally exhausting at times, mostly because of how fast everything is moving. Not a complaint, really. If anything, that might be a good sign. I totally get why people are excited; it just takes effort to stay grounded in the middle of it all.
Appreciate the comment! Hopefully next time I’ll be jumping in with war stories instead of sideline takes.
I learned several languages before Python, but the one that made it the most difficult was Ruby. After having done Ruby for years and becoming really familiar with idioms and standard practices, coming to python feels like trying to cuddle with a porcupine.
List (or set, or dict) comprehensions are really cool... one level deep. The moment you do 'in...in' my brain shuts off. It takes me something like 30 minutes to finally get it, but then ten seconds later I've lost it again. And there are a bunch of other things that just feel really awkward and uncomfortable in Python.
There's a pretty easy trick to nested comprehensions.
Just write the loop as normal:
    for name, values in map_of_values.items():
        for value in values:
            yield name, value
Then concatenate everything on one line:
[for name, values in map_of_values.items(): for value in values: yield name, value]
Then move the yield to the very start, wrap it in parentheses (if necessary), and remove colons:
[(name, value) for name, values in map_of_values.items() for value in values]
And that's your comprehension.
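To make it concrete, here's the trick applied to a small example dict (map_of_values is just the stand-in name from above):

```python
# A small concrete dict to flatten into (name, value) pairs.
map_of_values = {"a": [1, 2], "b": [3]}

# The comprehension produced by the rewrite steps above.
flat = [(name, value) for name, values in map_of_values.items() for value in values]
print(flat)  # [('a', 1), ('a', 2), ('b', 3)]
```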
Maybe this makes more sense if you're reading it on a wide screen:
[ for name, values in map_of_values.items(): for value in values: yield name, value]
^^^^^^^^^^^^^^^^^
/------------------------------------------------------------------------------/
vvvvvvvvvvvvv
[(name, value) for name, values in map_of_values.items() for value in values ]
I wrote it on one line to make the example make sense, but most Python comprehensions are plenty readable if they're split into many lines:
    new_list = [
        (name, value)
        for name, values in map_of_values.items()
        for value in values
    ]
The normal loop needs either a yield (which means you have to put it inside a generator function, and then call and list() the generator), or you have to make a list and keep appending things into it, which looks worse.
agree on the appending to a list. that’s an obvious code smell of “i didn’t know how to do idiomatic python iteration when i wrote this”.
i disagree that the format of the list comprehension above is easier to read than a generator function personally.
however, it’s definitely easier to read than most multi-loop list comprehensions i’ve seen before (even with HN formatting sticking the x.items() on a new line :/ ).
so i see what you’re getting at. it is definitely better. i just find it weird that the tuple definition comes before the thing i’m doing to get the tuple. sometimes i have to think “backwards” while reading it. i don’t like thinking while reading code. it makes it harder to read the code. and that “thinking” effect increases when the logic is complicated.
Wow, this is nice. I've been doing Python for quite a few years and whenever I needed a nested comprehension I'd always have to go to my cheatsheet. Now that I've seen how it's composed that's one less thing I'll need to lookup again. Thank you.
It's a fairly clunky idiom and there's no reason to use it if you prefer more explicit code.
I can see the attraction for terse solutions, but even the name is questionable. ("Comprehension?" Of what? It's not comprehending anything. It's a filter/processor.)
i have a rule of thumb around “one liner” [0] comprehensions that go beyond two nested for loops and/or multiple if conditions — turn it into a generator function
    def gen_not_one_y_from_xs(xs):
        for ys in xs:
            for y in ys:
                if y != 1:
                    yield y

    # … later, actual call
    not_one_ys = list(gen_not_one_y_from_xs(my_xs))
the point of the rule being — if there are two nested for loops in a comprehension then it’s more complicated than previously thought and should be written out explicitly. list comprehensions are a convenience shortcut, so treat them as such and only use them for simple/obvious loops, basically.
edit — also, now it’s possible to unit test the for loop logic and debug each step, which you cannot do with a list comprehension (unless a function is just returning the result of a list comprehension… which is… yeah… i mean you could do that…)
[0]: 9 times out of 10, multiple for loop comprehensions are not one liners and end up sprawling over multiple lines and become an abject misery to debug.
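for comparison, here's the equivalent nested comprehension next to the generator form, on some made-up data (the generator is restated so the snippet runs on its own):

```python
# Two nested loops plus a filter: right at the threshold where the rule
# of thumb says to prefer a generator function.
def gen_not_one_y_from_xs(xs):
    for ys in xs:
        for y in ys:
            if y != 1:
                yield y

my_xs = [[1, 2], [3, 1, 4]]  # hypothetical input

# The "one liner" comprehension that does the same thing.
comprehension = [y for ys in my_xs for y in ys if y != 1]

assert list(gen_not_one_y_from_xs(my_xs)) == comprehension == [2, 3, 4]
```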
Yes, it feels wasteful compared to my previous driving, in specific circumstances. I drove a manual transmission and made it a goal to not touch the brakes at all for red lights (given acceptable traffic conditions - sometimes people would just be annoyed or confused if I did it every time). I'd see the light from far back, understand its timing, and how many cars were stopped behind it and gauge when I should disengage the clutch and coast. It frequently worked out that I was arriving at the stopped car in front of me just as they were starting to move forward. It feels particularly satisfying not to lose all your momentum just to have to accelerate again from a dead stop.
Overall, however, regenerative braking probably is better given the conditions don't always allow for this game.
I think this is a solvable problem with a control alteration. There should be a zone right at the start of the accelerator pedal's travel that means "no braking without brake input, but no acceleration either" - a coast zone.
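A toy sketch of that pedal mapping, with all the names and thresholds invented for illustration: fully released regen-brakes, a small band just above that coasts, and the rest of the travel drives.

```python
def pedal_to_torque(position, regen_end=0.05, coast_end=0.20):
    """Map accelerator position in [0, 1] to a torque request in [-1, 1].

    Thresholds are hypothetical, not from any real vehicle:
    below regen_end -> regenerative braking, between regen_end and
    coast_end -> coast (no brake, no drive), above -> drive torque.
    """
    if position < regen_end:
        return -0.3  # mild regen when the pedal is fully lifted
    if position < coast_end:
        return 0.0   # coast zone: no braking, no acceleration
    # scale the remaining pedal travel to 0..1 drive torque
    return (position - coast_end) / (1 - coast_end)

print(pedal_to_torque(0.1))  # 0.0 -> coasting
```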