If they were using this compression for storage on the cache layer, it could allow more videos closer to where they serve them, but they'd decode them back to webm or whatever before sending them to the client.

I don't think that's actually what's up, but I don't think it's completely ruled out either.


That doesn't sound worth it: storage is cheap and encoding videos is expensive. Caching videos in a more compact form but having to rapidly re-encode them into a different codec every single time they're requested would be ungodly expensive.

The law of entropy appears to hold for TikToks and Shorts: the content becomes so generic that it all merges into one. It would make sense to take advantage of this.

Storage gets less cheap for short-form tiktoks where the average rate of consumption is extremely high and the number of niches is extremely large.

If you like this tool, you might also be interested in reptyr, which lets you reparent a process to a different tty.

https://blog.nelhage.com/2011/02/changing-ctty/



Tried 3/4 of the tools, and none helped me reattach neovim.

Ended up using dtach. It needs to be run ahead of time, but it's a very direct, minimal stdin/stdout piping tool that's worked great with everything I've thrown at it. https://github.com/crigler/dtach


have you tried diss, shpool or abduco?

also vmux appears to be specifically tailored to vim/neovim

https://github.com/yazgoo/vmux


Whenever I use LLM-generated content, I get another LLM and pre-bias it by asking if it's familiar with common complaints about LLM generated content. And then I ask it to review the content and ask for it to identify those patterns in the content and rewrite it to avoid those. And only after that do I bother to give it a first read. That clearly didn't happen here. Current LLM models can produce much better content than this if you do that.
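A minimal sketch of that two-pass flow, assuming a hypothetical ask() helper that wraps whatever LLM API you use (the helper and both prompts are illustrative, not a real client library):

    def ask(prompt: str) -> str:
        # Hypothetical: plug in your actual LLM client here.
        raise NotImplementedError

    def review_and_rewrite(draft: str) -> str:
        # Pass 1: prime the reviewer on known failure modes of LLM prose.
        complaints = ask(
            "What are the most common complaints about LLM-generated "
            "writing (filler phrases, hedging, repetitive structure, etc.)?"
        )
        # Pass 2: flag those patterns in the draft and rewrite around them.
        return ask(
            f"Known complaints about LLM prose:\n{complaints}\n\n"
            "Identify these patterns in the draft below and rewrite it "
            f"to avoid them:\n{draft}"
        )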


There's also dynamodb-fips.us-east-1.amazonaws.com if the main endpoint is having trouble. I'm not sure if this record was affected the same way during this event.


I don't remember an event like that, but I'm rather certain the scenario you described couldn't have happened in 2017.

The very large 2017 AWS outage originated in s3. Maybe you're thinking about a different event?

https://share.google/HBaV4ZMpxPEpnDvU9


Sorry, the 2015 one. I misremembered the year.

https://aws.amazon.com/message/5467D2/

I imagine this was impossible in 2017 because of actions taken after the 2015 incident


Definitely impossible in 2015.

If you're talking about this part:

> Initially, we were unable to add capacity to the metadata service because it was under such high load, preventing us from successfully making the requisite administrative requests.

It isn't about spinning up EC2 instances or provisioning hardware. It is about logically adding the capacity to the system. The metadata service is a storage service, so adding capacity necessitates data movement. There are a lot of things that need to happen to add capacity while maintaining data correctness and availability (mind you, at this point the service was still trying to fulfill all requests).
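As a loose illustration (a toy model, not AWS's actual design) of why adding a node to a storage system implies data movement: placement is a function of the node set, so changing the node set changes which node owns which keys, and every reassigned key is data that must be copied and verified before the new capacity helps at all.

    import hashlib

    def owner(key: str, nodes: list[str]) -> str:
        # Toy placement: hash each key onto a node. Real systems use
        # consistent hashing or partition maps to limit how much moves.
        h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
        return nodes[h % len(nodes)]

    keys = [f"object-{i}" for i in range(10_000)]
    before = {k: owner(k, ["n1", "n2", "n3"]) for k in keys}
    after = {k: owner(k, ["n1", "n2", "n3", "n4"]) for k in keys}

    # Each moved key must be migrated while the service keeps answering
    # requests, which is the hard part under overload.
    moved = sum(1 for k in keys if before[k] != after[k])
    print(f"{moved}/{len(keys)} keys change owners when one node is added")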


I’m referring to impact on other services


I agree that financial viability is critical to the long-term prospects of a technology. It must deliver an ROI above other options. I'd recommend getting off the sidelines and jumping in to see what's happening. At the least, you'll have another perspective to inform your position. It's a pretty minimal investment to try it out.


You’re right to think that I probably do sound more like a cautious observer than I actually am. For what it’s worth, I’ve been experimenting with AI tools on the side (mostly in coding and writing workflows), and I’m planning to dive deeper soon, especially around integrating agents into my SaaS.

The post was more about the hype and attention surrounding AI, which can feel mentally exhausting at times, mostly because of how fast everything is moving. Not a complaint, really. If anything, that might be a good sign. I totally get why people are excited, it just takes effort to stay grounded in the middle of it all.

Appreciate the comment! Hopefully next time I’ll be jumping in with war stories instead of sideline takes.


I learned several languages before Python, but the one that made it the most difficult was Ruby. After having done Ruby for years and becoming really familiar with idioms and standard practices, coming to python feels like trying to cuddle with a porcupine.

List (or set, or dict) comprehensions are really cool... one level deep. The moment you do 'in...in' my brain shuts off. It takes me something like 30 minutes to finally get it, but then ten seconds later, I've lost it again. And there are a bunch of other things that just feel really awkward and uncomfortable in Python


There's a pretty easy trick to nested comprehensions.

Just write the loop as normal:

    for name, values in map_of_values.items():
        for value in values:
            yield name, value
Then concatenate everything on one line:

    [for name, values in map_of_values.items(): for value in values: yield name, value]
Then move the yielded value to the very start, wrap it in parentheses (if necessary), and remove the yield keyword and the colons:

    [(name, value) for name, values in map_of_values.items() for value in values]
And that's your comprehension.

Maybe this makes more sense if you're reading it on a wide screen:

    [              for name, values in map_of_values.items(): for value in values: yield name, value]
                                                                                   ^^^^^^^^^^^^^^^^^
           /------------------------------------------------------------------------------/
     vvvvvvvvvvvvv
    [(name, value) for name, values in map_of_values.items()  for value in values                   ]


I write Python every day and I would just write the "normal" loop, which is the only one that is really readable in your examples, at least to me.


I wrote it on one line to make the example make sense, but most Python comprehensions are plenty readable if they're split into many lines:

    new_list = [
        (name, value)
        for name, values in map_of_values.items()
        for value in values
    ]
The normal loop needs either a yield (which means you have to put it inside a generator function, then call it and wrap the result in list()), or you have to make a list and keep appending things into it, which looks worse.
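For comparison, the two non-comprehension versions being described (same result, more ceremony):

    # Option 1: generator function, then materialize it with list().
    def pairs(map_of_values):
        for name, values in map_of_values.items():
            for value in values:
                yield name, value

    new_list = list(pairs(map_of_values))

    # Option 2: build the list by appending.
    new_list = []
    for name, values in map_of_values.items():
        for value in values:
            new_list.append((name, value))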


(not the parent)

agree on the appending to a list. that’s an obvious code smell of “i didn’t know how to do idiomatic python iteration when i wrote this”.

i disagree that the format of the list comprehension above is easier to read than a generator function personally.

however, it’s definitely easier to read than most multi-loop list comprehensions i’ve seen before (even with HN formatting sticking the x.items() on a new line :/ ).

so i see what you’re getting at. it is definitely better. i just find it weird that the tuple definition comes before the thing i’m doing to get the tuple. sometimes i have to think “backwards” while reading it. i don’t like thinking while reading code. it makes it harder to read the code. and that “thinking” effect increases when the logic is complicated.


> i just find it weird that the tuple definition comes before the thing i’m doing to get the tuple.

I'll have to give you that one, after ten years of professional development it still feels backwards to me as well.

I think we can blame the math people for that one: {x + 2 ∣ x ∈ N, x < 10}


That's indeed much better on many lines. I don't know why some people insist on making long lines.


Wow, this is nice. I've been doing Python for quite a few years and whenever I needed a nested comprehension I'd always have to go to my cheatsheet. Now that I've seen how it's composed that's one less thing I'll need to lookup again. Thank you.


It's a fairly clunky idiom and there's no reason to use it if you prefer more explicit code.

I can see the attraction for terse solutions, but even the name is questionable. ("Comprehension?" Of what? It's not comprehending anything. It's a filter/processor.)


The first example is a generator. If you wanted to keep that just use `()` instead of `[]`:

    ((name, value) for name, values in map_of_values.items() for value in values)
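The upside is that you can then consume it lazily instead of building the whole list up front, e.g.:

    pairs = ((name, value)
             for name, values in map_of_values.items()
             for value in values)
    for name, value in pairs:  # items are produced one at a time, on demand
        print(name, value)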


i have a rule of thumb around “one liner” [0] comprehensions that go beyond two nested for loops and/or multiple if conditions — turn it into a generator function

    def gen_not_one_y_from_xs(xs):
        for ys in xs:
            for y in ys:
                if y != 1:
                    yield y

    # … later, actual call
    
    not_one_ys = list(gen_not_one_y_from_xs(my_xs))
the point of the rule being — if there are two nested for loops in a comprehension then it’s more complicated than previously thought and should be written out explicitly. list comprehensions are a convenience shortcut, so treat them as such and only use them for simple/obvious loops, basically.

edit — also, now it’s possible to unit test the for loop logic and debug each step, which you cannot do with a list comprehension (unless a function is just returning the result of a list comprehension… which is… yeah… i mean you could do that…)
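e.g. a quick test against the generator (hypothetical inputs, obviously):

    def test_gen_not_one_y_from_xs():
        xs = [[1, 2], [3, 1]]
        assert list(gen_not_one_y_from_xs(xs)) == [2, 3]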

[0]: 9 times out of 10, multiple for loop comprehensions are not one liners and end up sprawling over multiple lines and become an abject misery to debug.


The loops are in standard order, with the yield value up front. Write the loops, surround with brackets, move the value up to the first line.


Buildroot is easy to use even without vendor support


Yes, it feels wasteful compared to my previous driving, in specific circumstances. I drove a manual transmission and made it a goal to not touch the brakes at all for red lights (given acceptable traffic conditions - sometimes people would just be annoyed or confused if I did it every time). I'd see the light from far back, understand its timing, and how many cars were stopped behind it and gauge when I should disengage the clutch and coast. It frequently worked out that I was arriving at the stopped car in front of me just as they were starting to move forward. It feels particularly satisfying not to lose all your momentum just to have to accelerate again from a dead stop.

Overall, however, regenerative braking probably is better given the conditions don't always allow for this game.


I think this is a solvable problem with a control alteration: there should be a zone right at the start of pushing down the accelerator pedal that means "don't brake without brake input, but don't accelerate either".
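A minimal sketch of that pedal mapping, with made-up thresholds (COAST_ZONE and the torque values are illustrative, not any vehicle's actual tuning):

    REGEN_TORQUE = -0.3  # hypothetical regen braking when pedal fully released
    COAST_ZONE = 0.10    # hypothetical: first 10% of pedal travel just coasts

    def motor_torque(pedal: float) -> float:
        """Map accelerator position (0.0 released .. 1.0 floored) to torque."""
        if pedal <= 0.0:
            return REGEN_TORQUE  # foot off: regen brake, as today
        if pedal < COAST_ZONE:
            return 0.0           # the proposed zone: no braking, no drive
        # Beyond the zone: scale the remaining travel to forward torque.
        return (pedal - COAST_ZONE) / (1.0 - COAST_ZONE)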


You can also zoom in

