I think a good use case for recover is in gRPC services, for example. One wouldn't want to kill the entire service just because some path panics while handling a single request.
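A minimal sketch of that, as a unary server interceptor (the logging and the generic "internal error" message are just placeholders, not anything prescribed):

    import (
        "context"
        "log"
        "runtime/debug"

        "google.golang.org/grpc"
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // recoverUnary turns a panic in any handler into codes.Internal for that
    // one request instead of taking down the whole process.
    func recoverUnary(
        ctx context.Context,
        req interface{},
        info *grpc.UnaryServerInfo,
        handler grpc.UnaryHandler,
    ) (resp interface{}, err error) {
        defer func() {
            if r := recover(); r != nil {
                log.Printf("panic in %s: %v\n%s", info.FullMethod, r, debug.Stack())
                err = status.Errorf(codes.Internal, "internal error")
            }
        }()
        return handler(ctx, req)
    }

    // registered once when the server is built:
    //   s := grpc.NewServer(grpc.UnaryInterceptor(recoverUnary))

Streaming handlers need the same treatment via grpc.StreamInterceptor; this only covers unary calls.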
Corporate gRPC services are written with "if err != nil" for every operation at every layer between the API handler and the db/dependencies, with table-driven tests mocking each one for those sweet sweet coverage points.
I would love a community norm that errors which fail the request can just be panics. Unfortunately that's not Go as she is written.
One thing that `if err != nil { return err }` lets you do, which panic/recover doesn't, is annotate errors with context. If you're throwing from 5 layers deep in the call stack, and two of those layers are loops that invoke the lower layers for each element of a list, you probably really want to know which element it was that failed. At that point, you have two options:
1. Pass a context trace into every function, so that it can panic with richer meaning. That's a right pain very quickly.
2. Return errors, propagating them up the stack with more context:
    for i, x := range listOfThings {
        y, err := processThing(x)
        if err != nil {
            return fmt.Errorf("thing %d (%s) failed: %w", i, x, err)
        }
        // ... use y ...
    }
Go panic isn't really usable for catch/rethrow because recover only works at function scope. To make panics useful for that pattern, you need a scoped `try { }` block where you can tell which part failed and continue from there. Either that, or you need lots and lots of tiny functions to form scopes around them.
You don't need "lots and lots" of tiny functions; most of the time it's totally fine to just let the panic propagate as is. When you do need to add information, you will have to use a function, yes; it's an unfortunate feature of golang. Same with defers: the only way to scope them is to wrap them in a function. It's stupid, but this is golang for you.
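Concretely, the "wrap it in a function" version of the loop from the earlier comment might look roughly like this (reusing the made-up listOfThings/processThing names), with an anonymous function giving recover a scope to attach the extra context before re-panicking:

    for i, x := range listOfThings {
        func() {
            defer func() {
                if r := recover(); r != nil {
                    // re-panic with the context attached; whatever recovers
                    // further up (or the runtime) sees which element failed
                    panic(fmt.Sprintf("thing %d (%s) failed: %v", i, x, r))
                }
            }()
            processThing(x)
        }()
    }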
That’s mostly just a stacktrace. You can add other information that wouldn’t be in a stacktrace, but you shouldn’t do it by string concatenation because then every instance of the error is unique to log aggregators. Instead, you need to return a type implementing the error interface. Which is not all that different from throwing a subclass of exception.
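As a rough sketch (the type name is made up), the loop above could return a structured error instead of a formatted string:

    // ThingError carries the fields as data, so a log aggregator can group on
    // the error type rather than on a unique formatted message.
    type ThingError struct {
        Index int
        Name  string
        Err   error
    }

    func (e *ThingError) Error() string {
        return fmt.Sprintf("thing %d (%s) failed: %v", e.Index, e.Name, e.Err)
    }

    func (e *ThingError) Unwrap() error { return e.Err }

Callers can then pull it out of a wrapped chain with errors.As and log the fields structurally rather than parsing the message.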
I mean yes, but I don't really like hand writing exception tracebacks via error wrapping.
That said... I did like a clever bit I did where you can use a sentinel error to filter entire segments of the wrapped errors on prod builds. A dev build gives full error stacks.
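The comment doesn't say how that was wired up, but one way to get a similar effect, using a wrapper whose Unwrap cuts the chain rather than a bare sentinel value, might look roughly like this (prodBuild and all the names here are assumptions):

    // prodBuild would be set via build tags or -ldflags; assumed here.
    var prodBuild = false

    // redacted marks everything wrapped below it as internal-only detail.
    type redacted struct{ err error }

    // markInternal wraps err so the segment below this point is hidden on prod.
    func markInternal(err error) error { return redacted{err: err} }

    func (r redacted) Error() string {
        if prodBuild {
            return "internal error" // prod: the inner segment disappears
        }
        return r.err.Error() // dev: full error stack
    }

    func (r redacted) Unwrap() error {
        if prodBuild {
            return nil // also hide the segment from errors.Is / errors.As
        }
        return r.err
    }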
Yes that is common. I was more talking about the case where someone perhaps introduces a bug causing a nil pointer dereference on some requests, so the panic is not explicitly called in code. In which case you would definitely want the recover in place.
Some are of the opinion that that should be handled a layer up, such as a container restart, because the program could be left in a broken state which can only be fixed by resetting the entire state.
Given that you can’t recover from panics on other goroutines, and Go makes it extremely easy to spawn new goroutines, oftentimes it’s not even an opinion: you have to handle it a layer up. There’s no catch-all for panics.
This is a major pain in the ass. I was trying to solve the problem of how to emit a metric when a golang service panics. The issue is that there is no way to recover panics from all goroutines, so the only way to do it reliably is to write a wrapper around the ‘go’ statement which recovers panics and reports them to the metrics system. You then have to change every single ‘go’ call in all of your code to use this wrapper.
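A sketch of what that wrapper ends up looking like, with reportPanicMetric standing in for whatever the real metrics client exposes:

    import "runtime/debug"

    // reportPanicMetric is a stand-in for the actual metrics call.
    func reportPanicMetric(v interface{}, stack []byte) { /* increment a counter, log the stack */ }

    // Go replaces the bare `go` statement: it recovers a panic on the child
    // goroutine and reports it before deciding whether to crash anyway.
    func Go(fn func()) {
        go func() {
            defer func() {
                if r := recover(); r != nil {
                    reportPanicMetric(r, debug.Stack())
                    panic(r) // optional: rethrow so the process still dies after reporting
                }
            }()
            fn()
        }()
    }

    // ...and every `go doWork()` in the codebase becomes Go(doWork)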
What I really want is either a way to recover panics from any goroutine, or to be able to install a hook in the runtime which is executed when an unhandled panic occurs.
You can kind of fudge this by having the orchestration layer look at the exit code of the golang process and see if it was exit code 2, which usually means a panic. But I noticed that sometimes the panic stack trace doesn’t make it to the process’s log files, most likely due to some weird buffering of stdout/stderr which causes you to lose the trace forever.
Graph DBs are generalized relationship stores. SQL can work for querying graphs, but graph DB DSLs like Cypher become very powerful when you're trying to match across multiple relationship hops.
For example, to find all friends of a friend, or friends of a friend of a friend:
`MATCH (user:User {username: "amanj41"})-[:KNOWS*2..3]->(foaf) WHERE NOT((user)-[:KNOWS]->(foaf)) RETURN user, foaf`
Vector clocks are very cool. Having read through how they were initially used in Riak, I was blown away that such an implementation could scale. I guess this is why Cassandra took a different approach?
Vector clocks are certainly cool but fundamentally premised on the idea of having multiple 'live' versions of a value at once. Amazon's original Dynamo paper required conflict resolution at the application level, which is a very strange framework to build applications on. (Notably DynamoDB has moved away from this, I believe to Last Write Wins.) Cassandra takes the latter approach by default as well, I believe.
Yes, there's that idiosyncrasy, as well as the client ideally needing to read the previous clock from the DB before writing an update for that key, unless it's OK with the write being viewed as concurrent. Plus the extra memory overhead of storing the clocks in the client.
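For anyone who hasn't seen one: a vector clock is just a per-key map from actor ID to counter, and the comparison is what makes "concurrent versions" a first-class outcome. A rough sketch in Go:

    // Clock maps a node or client ID to a logical counter.
    type Clock map[string]uint64

    // Descends reports whether a has seen everything b has (a >= b). If
    // neither Descends(a, b) nor Descends(b, a) holds, the two versions are
    // concurrent and something has to resolve the conflict.
    func Descends(a, b Clock) bool {
        for id, n := range b {
            if a[id] < n {
                return false
            }
        }
        return true
    }

    // Tick records a write by actor id. This is why a client wants to read the
    // latest clock before writing: ticking a stale clock yields a version that
    // compares as concurrent rather than as a successor.
    func (c Clock) Tick(id string) {
        c[id]++
    }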
Wasn't Madoff's plan the whole time to pocket investor money for himself, whereas Sam's was to arrogantly (and illegally) gamble it with the thought being he could return the principal from Alameda back to FTX in future? Genuine question, I'm not super informed on these.
Sam paid and loaned himself $2.2B (and another $1B to top execs) out of the $8B that was lost. That doesn't include the Bahamas real estate, Trabucco's yacht, the celebrity endorsements, the stadium naming, or the private planes delivering Amazon packages to the Bahamas.
The idea that Sam was just a risky investor is the story Sam wants to tell. But he already told that story in court, and it was rejected, because the evidence doesn't support it.
No, SBF did directly steal from FTX users. He used an intermediary (Alameda) to do so. The end result is the same: he stole from customers to enrich himself, his family and his friends.
I'm sure both SBF and Madoff originally thought they would be able to make enough money to cover up the theft.
From what I read, Madoff didn't seem to; he never actually tried to invest the money, and he couldn't really have expected his Ponzi scheme to become a permanent engine of the economy or anything.
I don't really think that has the moral implication people try to draw from it, though. Thinking you could get away with it as a permanent secret gap that no one need ever know about actually reflects worse moral character.
Madoff was supposedly on the level for some 25-31 years. He started business as a broker-dealer and money manager in 1960. Madoff claimed that he started just putting fund contributions in a personal account in 1991, but prosecutors think he was at it sometime in the 1980s, maybe since Black Monday in 1987?
From what I’ve read, it’s not exactly clear that Madoff had a plan, it seems more like he lied to a few people about being able to make them a lot of money, and that snowballed into him taking a lot of “investments” that he didn’t know what to do with.
From what I’ve seen about SBF, it does seem like he has a very high risk tolerance, was betting big to win big, and thinking the reward was worth the risk.
In the short term, sure, but SBF liked to take big risks, and it seems inevitable that he would eventually have been on the losing side of a bet he'd made with money that was supposed to be safer than a Treasury bond.
There's no difference in intent described there. Sam's plan was to pocket investor money for himself (and maybe return some of it, eventually, if his gamble paid off).
I'm pretty sure Madoff returned some money before being found out, too.
A concern I've had with past studies is the dosage of processed foods. It's better to avoid them altogether, but I haven't seen many that try to establish limits on how much of one's diet can safely come from ultra-processed foods while still being healthy.
It seems like the biggest correlation between overall health and processed food intake is that processed foods are calorie dense and lead to weight gain. If excess calories are coming from sugars and unhealthy fats, and one is consuming large amounts of bad additives, that will surely lead to bad outcomes.
What I would like to see is isocaloric studies, where individuals get plenty of fiber, micronutrients and a good macronutrient balance from "healthy" foods, but are also allowed some reasonable percentage of their daily intake to be processed.
Maybe it's the guilt of feeling like I'm a weaker person for being addicted to having desserts or chips on a regular basis, but I also feel like they are pleasures of modern life that are worth enjoying in moderation. I mainly strive to eat healthy for all my core meals, and allow myself ultra-processed snacks, while not gaining weight.
There are guidelines for how much saturated fat, sugar, and other things to have in your diet per day (on average). These are based on many health studies over time.
I try to stay at or below those numbers on average, and I do treat myself sometimes. But I can't do it very much, because it would lead to overeating things that drive up my cholesterol and cause other problems. You have to eat lots of fruit and vegetables.
A problem is that most people eating ultra-processed foods blow those category numbers away. I was looking at the label on some food this week. One serving, which is about half of what an American would usually eat, had over half the saturated fat and sugar someone should have per day. For people who eat processed food for most meals, it's easy to hit many times these numbers, which has a lot of long-term consequences.
I have Crohn’s disease and while there’s still a lot that isn’t understood, it’s clear that it’s linked to highly processed food. Crohn’s is more prevalent in western populations and it’s rising in places that are adopting western diets. There are studies showing that patients who don’t see results from medication can still achieve remission through a strict Mediterranean diet.
Obesity is a huge problem and the spotlight on it is warranted, but it’s also a very simple issue to understand and deal with. The effects of ultra processed foods are more complex and far less understood.
I have IBD and definitely believe there is an association. But it still goes back to my question about "is there a safe limit". I think in particular for IBD, IBS, Crohn's etc, my suspicion is that there are two mechanisms at play with ultraprocessed diets:
1. Additives + microplastics in modern diets doing bad things to the microbiome.
2. Fiber intake is being reduced by ultra-processed diets. I think the lack of pre- and probiotics is a big deal, and ultra-processed foods contribute to their reduced intake.
The Vision Pro is hardly differentiated from the top end Meta VR model. There are a few UX enhancements, and I think they'll take the lead in the space eventually. But I wouldn't say this is a big innovation in the VR space from what I've seen.
The iPhone was hardly different from other touch screen phones at the time, except for the UX enhancements. And it turns out good UX changes the world.
Pay attention to how people talk about the experience of using Vision Pro vs other headsets. The eye tracking interface is widely praised and described by third parties (not just Apple) as feeling like magic.
I haven't used it yet, but I can imagine what using a computer would be like if I didn't have to actually point my mouse anywhere and instead it could effectively read my mind about where I want to click. IMO this is the interface revolution that will become ubiquitous over the next 5-10 years, and Apple is once again leading the charge.
> The Vision Pro is hardly differentiated from the top end Meta VR model.
I think their approach is unique. They are branding it as a replacement for your computer. The one and only device you need to be productive. And don't underestimate the value of not needing controllers.
The idea is fine, but the inevitable outcome here is that Apple and Google do the same thing in their respective smartphones and this device (if it's even good at all) is obviated.
Good. It’s disappointing how bad Siri is. I don’t need it to be an LLM trained to know everything in the Encyclopedia Galactica… I just need Siri to understand basic sentence structure so I don’t have to reword something five times before giving up because I don’t know the magic word order.