dastbe's comments (Hacker News)

They invented a language to avoid you imperatively updating infrastructure, but that's not what CDKTF does; it just makes it easier to materialize that declarative output.

It also makes it easier to reason about that output as you can avoid awkward iteration in your declarative spec.


my expectation is that they would either sell Crucial RAM at such low volume or such a high price that it would do more damage to the brand than sunsetting it and returning to it when the slowdown occurs.


litestream makes very few consistency guarantees compared to other datastores, and so I would expect most any issues found would be "working as intended".

at the end of the day with litestream, when you respond back to a client with a successful write you are only guaranteeing a replication factor of 1.


By "replication factor of 1" you mean your data is stored on local disk only, right? That matches my understanding: Litestream replication is asynchronous, so there's usually a gap of a second or two between your write being accepted and the resulting updated page being pushed off to S3 or similar.


Yes. The acknowledgement you're getting in your application code is that the data was persisted in sqlite on that host. There's no mechanism to delay acknowledgement until the write has been asynchronously persisted elsewhere.
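To make the write-then-replicate ordering concrete, here's a toy model of the pattern (not Litestream's actual code; all names here are illustrative): the client is acknowledged as soon as the local write lands, and a background thread ships the change to a "remote" copy afterwards.

```python
import queue
import threading
import time

class AsyncReplicatedStore:
    """Toy model of asynchronous replication: ack after the local write,
    replicate to the remote copy in the background."""

    def __init__(self):
        self.local = {}          # stands in for the local SQLite file
        self.remote = {}         # stands in for the S3 replica
        self._pending = queue.Queue()
        threading.Thread(target=self._replicate, daemon=True).start()

    def write(self, key, value):
        self.local[key] = value          # durable locally
        self._pending.put((key, value))  # shipped later, asynchronously
        return "ok"                      # ack: replication factor is 1 here

    def _replicate(self):
        while True:
            key, value = self._pending.get()
            time.sleep(0.05)             # simulated replication lag
            self.remote[key] = value

store = AsyncReplicatedStore()
store.write("k", "v")
print("k" in store.local, "k" in store.remote)   # True False (usually)
time.sleep(0.2)
print("k" in store.remote)                       # True after the lag
```

In the window between the ack and the background copy completing, a host failure loses the write, which is exactly the replication-factor-of-1 point above.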


An interesting comparison here is Cloudflare's Durable Objects, which provide a blocking SQLite write that doesn't return until the data has been written to multiple network replicas: https://blog.cloudflare.com/sqlite-in-durable-objects/#trick...


I wonder if it would be possible to achieve this using a SQLite VFS extension - maybe that could block acknowledgment of a write until the underlying page has been written to S3?
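A conceptual sketch of that idea (a real implementation would register an actual SQLite VFS - `sqlite3_vfs` in C, or a VFS class in a binding like apsw - and override the write/sync hooks; `upload_page` and the class below are hypothetical stand-ins):

```python
def upload_page(offset, data):
    """Stand-in for a synchronous S3 PUT; returns True once the page
    is durable remotely. A real version would call the S3 API here."""
    return True  # pretend the PUT succeeded

class BlockingReplicationFile:
    """Models a VFS-style file wrapper: every page write blocks until
    the remote upload is confirmed, so SQLite can't acknowledge the
    transaction before the data exists off-host."""

    def __init__(self, path):
        self._f = open(path, "r+b")

    def write_page(self, offset, data):
        self._f.seek(offset)
        self._f.write(data)
        self._f.flush()
        # The key idea: don't return (i.e. don't let SQLite acknowledge
        # the write) until the remote copy is confirmed.
        if not upload_page(offset, data):
            raise IOError("remote replication failed; write not acknowledged")
```

The obvious trade-off is that every commit now pays a network round trip, which is essentially what the Durable Objects post above describes managing.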


> the last place couldn’t because datadog apparently bills sidecar containers as additional hosts so using sidecar proxy would have doubled our datadog bill.

that seems like the tail wagging the dog


the problem is that they want to apply a number of stateful/lookaside load balancing strategies, which become more difficult to do in a fully decentralized system. it’s generally easier to asynchronously aggregate information and either decide routing updates centrally or redistribute that aggregate to inform local decisions.
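A minimal sketch of that "aggregate asynchronously, decide locally" pattern (names are illustrative, not from any particular system): a control plane merges per-host load reports into one snapshot, and each proxy routes against its last snapshot instead of coordinating on every request.

```python
def aggregate_snapshot(reports):
    """Merge per-host load reports (e.g. outstanding requests per
    backend) into a single aggregated view."""
    snapshot = {}
    for report in reports:
        for backend, load in report.items():
            snapshot[backend] = snapshot.get(backend, 0) + load
    return snapshot

def pick_backend(snapshot):
    """Local routing decision against the (possibly stale) aggregate:
    least-loaded backend wins."""
    return min(snapshot, key=snapshot.get)

# Two proxies report what they see; the aggregate informs routing.
reports = [{"a": 3, "b": 1}, {"a": 2, "b": 5}]
snap = aggregate_snapshot(reports)   # {"a": 5, "b": 6}
print(pick_backend(snap))            # → a
```

The staleness of the snapshot is the price paid for avoiding per-request coordination, which is why fully decentralized setups struggle with these stateful strategies.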


My cheeky answer to "how should this be regulated?" is that sports betting isn't materially different from other high-risk private investments, so it should only be available to accredited investors. Imagine if FanDuel/DraftKings had to verify assets and income before taking a single bet?!


I was interested in this, so, perusing, I found https://govinfo.library.unt.edu/ngisc/reports/2.pdf which estimated in the late 90s:

"Estimates of the scope of illegal sports betting in the United States range anywhere from $80 billion to $380 billion annually, making sports betting the most widespread and popular form of gambling in America."

which seems surprising even at the low end.

similarly from https://www.americangaming.org/new-aga-report-shows-american... in 2022

"AGA’s report estimates that Americans wager $63.8 billion with illegal bookies and offshore sites at a cost of $3.8 billion in gaming revenue and $700 million in state taxes. With Americans projected to place $100 billion in legal sports bets this year, these findings imply that illegal sportsbook operators are capturing nearly 40 percent of the U.S. sports betting market."
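As a sanity check on the AGA's "nearly 40 percent" arithmetic (figures in billions of dollars, taken from the quote above):

```python
illegal = 63.8   # AGA estimate of wagers with illegal bookies/offshore sites
legal = 100.0    # projected legal sports bets that year
share = illegal / (illegal + legal)
print(f"{share:.1%}")  # → 38.9%, i.e. "nearly 40 percent" of the total market
```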

What would be more interesting to me is an estimate of the number of unique citizens betting. Is it up? If so, how appreciably?


Go? I haven't seen this particular unit before.


Force of habit, o is octet, the French equivalent of bytes.


I believe it originates as the French translation of GB


the article is a bit breathless, which seems par for the course for security blogs these days. And while "containers are not a security boundary" is evergreen and something AWS has been trumpeting since the beginning, they IMO should also try to make it a bit harder for you to get access to the host credentials.

I do know the ECS team indexes heavily on maintaining backwards compatibility and minimizing migrations wherever possible, but this seems like a case where it's warranted.


why would they have access?


Why wouldn’t they have access?!

