Yeah I'm honestly not sure why people would be surprised at this comparison - Whole Foods and Trader Joe's are specialty stores. Nobody goes into them just to buy a head of lettuce.
Trader Joe's has their own in-store brands for tons of products, which has spawned various guides on what should/shouldn't be bought there (https://www.thepennyhoarder.com/save-money/what-to-buy-at-tr... for example). CVS, on the other hand, is a far more general store. They both sell food, but they're not in the same market segment - so the comparison is strange.
>Nobody goes into them just to buy a head of lettuce.
wtf are you talking about? I do this (with other greens and vegetables). If anything, the people shopping at WF/TJ don't want a head of lettuce because it's a stupid product - I assume you're just using it as an example, but picking a head of lettuce makes me think you don't understand the demographic.
I don't live in the US, but I've been to Whole Foods when visiting, and it certainly felt like a perfectly normal grocery store, not too different from most other grocery stores I've been to. What makes it a specialty store, and why wouldn't you go there to buy a head of lettuce?
Honestly? That would be awesome. I already have security cameras set up like that (although not wireless) - just home devices that do things for me and report back to a central server in my home. That server can connect to the internet (when I want it to), grab new firmware for the cameras, then disconnect and send out the firmware.
That's pretty much all I want from home automation as well. I see the value in being able to say "Alexa, turn down the lights" and having AWS do the voice recognition - but given that I can't trust services to stick around more than a few years, I'm not going to invest in it.
Yes, and those people might visit a website, which asks for...shudder...cookies. If you can show that the cookies do something nefarious, I'd be interested. Do you think the general population would even get to the point of installing an APK?
"Consider what that means for Mossad"
At this point, you can't even prove that the APK does anything nefarious - and it would be dangerous for the Mossad if it did, because the challenge is literally to decompile the APK.
Level 5: You've downloaded Droid4X specifically for this, installed Java and everything, and then you come back to HN to check whether somebody is already on the next challenge (in order to save time), and then you start again with the new challenge :)
Does anyone have insight on why they're making this change? All they say in this post is "In our effort to continuously improve customer experience". From my point of view as a customer, I don't really see an experiential difference between a subdomain style and a path style - one's a ".", the other's a "/" - but I imagine there's a good reason for the change.
First to allow them to shard more effectively. With different subdomains, they can route requests to various different servers with DNS.
Second, it allows them to route you directly to the correct region the bucket lives in, rather than having to accept you in any region and re-route.
Third, to ensure proper separation between websites by making sure their origins are separate. This is less AWS's direct concern and more of a best practice, but doesn't hurt.
I'd say #2 is probably the key reason, and perhaps #1 to a lesser extent. It actively costs them money to have to proxy the traffic along.
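To make #1/#2 a bit more concrete, here's a rough sketch (just Python's standard socket module; the bucket name below is made up) of why the two URL styles can resolve so differently:

```python
import socket

# Path style: every request hits the one generic endpoint, so AWS has to work
# out the bucket's region server-side and proxy or redirect internally.
print(socket.gethostbyname("s3.amazonaws.com"))

# Virtual-hosted (subdomain) style: the bucket is part of the hostname, so DNS
# itself can hand back an IP in the bucket's home region.
# "examplebucket" is a made-up name; substitute one of your own buckets.
print(socket.gethostbyname("examplebucket.s3.us-east-1.amazonaws.com"))
```

With the subdomain form, the routing decision happens at resolution time instead of after the TCP connection lands in whatever region you happened to hit.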
I think they should explain this a bit better. That said:
For core services like compute and storage a lot of the price to consumers is based on the cost of providing the raw infrastructure. If these path style requests cost more money, everyone else ends up paying. It seems likely any genuine cost saving will be at least partly passed through.
I wouldn't underestimate #1, not just for availability but for scalability. The challenge of building some system that knows about every bucket (as whatever sits behind these requests must) isn't going to get any easier over time.
Makes me wonder when/if DynamoDB will do something similar
Yeah you basically do. Sure you can reroute the traffic internally over the private global network to the relevant server, but that's going to use unnecessary bandwidth and add cost.
By sharding/routing with DNS, the client and public internet deal with that and allow AWS to save some cash.
Bear in mind, S3 is not a CDN. It doesn't have anycast, PoPs, etc.
In fact, even _with_ the subdomain setup, you'll notice that before the bucket has fully propagated into their DNS servers, it will initially return 307 redirects to https://<bucket>.s3-<region>.amazonaws.com
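You can watch that happen with a fresh bucket yourself - a minimal sketch using the Python requests library (the bucket name and key here are hypothetical):

```python
import requests

# Hypothetical, recently created bucket outside us-east-1 that hasn't fully
# propagated into S3's DNS yet.
url = "https://examplebucket.s3.amazonaws.com/some-key"

# Don't follow redirects, so the temporary 307 stays visible.
resp = requests.get(url, allow_redirects=False)
print(resp.status_code)              # 307 while propagating, 200/403/404 afterwards
print(resp.headers.get("Location"))  # the region-specific endpoint S3 points you at
```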
I'm not sure you understand how anycast works. It would be very shocking if Amazon didn't make use of it and it's likely the reason they do need to split into subdomains.
Anycast will pull in traffic to the closest (hop distance) datacenter for a client, which won't be the right datacenter a lot of the time if everything lives under one domain. In that case they will have to route it over their backbone or re-egress it over the internet, which does cost them money.
Google Cloud took a different approach based on their existing GFE infrastructure. It does not really seem to have worked out: there have been a couple of global outages due to bad changes to this single point of failure, and they introduced a cheaper networking tier that is more like AWS's.
I don't think that's true. Route53 has been using Anycast since its inception [0].
The Twitter thread you linked simply points out that fault isolation is tricky with Anycast, and so I am not sure how you arrived at the conclusion that you did.
Got it, thanks. Are there research papers or blog posts by Google that reveal how they resume transport layer connections when network layer routing changes underneath it (a problem inherent to Anycast)?
I do understand how it works and can confirm that AWS does not use it for the IPs served for the subdomain-style S3 hostnames.
Their DNS nameservers which resolve those subdomains do of course.
S3 isn't designed to be super low latency. It doesn't need to be the closest to the client - all that would do is cost AWS more to handle the traffic. (Since the actual content only lives in specific regions.)
Added to my comment, but basically S3 is not a CDN - it doesn't have PoPs/anycast.
They _do_ use anycast and PoPs for the DNS services though. So that's basically how they handle the routing for buckets - but relies entirely on having separate subdomains.
What you're saying is correct for Cloudfront though.
They could do that, but they have absolutely no incentive to do so - all it would do is cost them more. S3 isn't a CDN and isn't designed to work like one.
Currently all buckets share a domain and therefore share cookies. I've seen attacks (search for cookie bomb + fallback manifest) that leverage shared cookies to allow an attacker to exfiltrate data from other buckets.
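A minimal sketch of why that sharing happens (Python with the requests library; both bucket names and the cookie are made up for illustration):

```python
import requests

s = requests.Session()

# Pretend a page served out of an attacker-controlled bucket managed to set a
# cookie scoped to the shared path-style domain.
s.cookies.set("attacker", "payload", domain="s3.amazonaws.com")

# Both (hypothetical) buckets live on the same origin, so both requests carry it.
for url in ("https://s3.amazonaws.com/victim-bucket/index.html",
            "https://s3.amazonaws.com/attacker-bucket/index.html"):
    prepared = s.prepare_request(requests.Request("GET", url))
    print(url, "->", prepared.headers.get("Cookie"))
```

With virtual-hosted style, each bucket gets its own subdomain (and its own origin), so a cookie set by one bucket's content doesn't ride along to another's.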
The only obvious thing that occurs to me is that bringing the bucket into the domain name puts it under the same-origin policy in the browser security model. Perhaps there are a significant number of people hosting their buckets and compromising security this way? Not something I have heard of but it seems possible. Makes me wonder if they are specifically not mentioning it because this is the reason and they know there are vulnerable applications in the wild and they don't want to draw attention to it?
I can't read what you're replying to, but it absolutely bothers me. The current scheme has this completely random double reversal in the middle of the URL; it would have been so trivial to just make it actually big-endian, but instead we have this big-little-big endian nonsense. Far too late to change it now, but it is ugly and annoying.
Probably because they want to improve the response time with a more precise DNS answer.
With s3.amazonaws.com, they need to have a proxy near you that downloads the content from the bucket's real region.
With yourbucket.s3.amazonaws.com, they can give you the IP of an edge in the same region as your bucket.
I would guess cookies and other domain-scoped spam/masking 'tricks'? I've never tried, but perhaps getting a web push auth on that shared domain could cause problems.
At 32 seconds into the video, you can see a few frames of a Google Sheet where one column is the name of the sport (you can see "Quidditch" quite a few times) and a second column is a rule, like "The Chasers are there to try and keep possession of the Quaffle...". Looks like one row per rule. The filename is "AKQA AI SPORT".
No idea if that's what they used for the actual training, though.
First, amazing post, I want to see a blog post of someone's day entirely like this.
> That isolation is great though because I went straight back to enjoying my beer.
Second, I find the isolation to be a bit leaky - rarely am I immediately back to my normal operation. Maybe I need to reimplement my VM.
Third, isolation in theory is great, but in practice requires far too many resources. I agree that the "internal virtual machine" metaphor is fantastic, but if everyone is running it to fit in with society...we're wasting a ton of brain space at the societal level.
I think you'd get far more traffic if you linked to a page that actually has information (https://code.headmelted.com/) instead of to the list of builds!
In any case, this is awesome and I'll definitely give it a try
Windows will complain if you download and try to run an unsigned exe. While APT packages can be signed, wget|sh can't be, so it's comparatively easier for someone to trojan the website and distribute malware.
I've written this type of script before - I didn't need too much in the way of debugging. Mine (similar to GP, it seems) didn't actually do the registration, just saved the current state of known divs or even the whole page. Then it would send me an update if that changed.
So, I didn't need to debug what happens when the class shows "open" - I just saved the div that said "closed" and sent myself an email/text any time it didn't say exactly that.
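Something along these lines - a minimal sketch assuming Python with requests and BeautifulSoup; the URL, CSS selector, addresses, and local mail relay are all hypothetical:

```python
import smtplib
import time
from email.message import EmailMessage

import requests
from bs4 import BeautifulSoup

# Hypothetical page and selector; adapt to the actual registration site.
COURSE_URL = "https://registrar.example.edu/course/CS101"
STATUS_SELECTOR = "div.enrollment-status"

def current_status() -> str:
    html = requests.get(COURSE_URL, timeout=30).text
    div = BeautifulSoup(html, "html.parser").select_one(STATUS_SELECTOR)
    return div.get_text(strip=True) if div else html  # fall back to the whole page

def notify(new_status: str) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"Course status changed: {new_status}"
    msg["From"] = "watcher@example.com"
    msg["To"] = "me@example.com"
    msg.set_content(f"{COURSE_URL} no longer says 'Closed'.")
    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)

last = current_status()
while True:
    time.sleep(300)                          # poll every 5 minutes
    status = current_status()
    if status != last:                       # e.g. "Closed" -> anything else
        notify(status)
        last = status
```

No need to handle the "open" case at all - any deviation from the saved state triggers the notification, and the human takes it from there.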
You could test it by having it sign up for undesirable classes like underwater basket weaving that had plenty of openings. The terminal interface was kind of gross to script, but the underlying data was pretty easy.
You basically had to just send the correct number of arrow key presses to get the cursor to the correct field, send the digits, and then send the enter key. Parse the data that comes back, and add the routine to cursor over to the "add course" prompt when it says there is an availability. The script was totally gross looking but it worked.
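Those screen-scraping scripts usually end up looking something like this pexpect sketch - everything here (the host, the prompts, the number of key presses to reach each field) is invented for illustration:

```python
import pexpect

DOWN, ENTER = "\x1b[B", "\r"  # cursor-down escape sequence, carriage return

# Hypothetical terminal-based registration system.
child = pexpect.spawn("telnet registrar.example.edu")
child.expect("Course number:")
child.send("10123" + ENTER)          # type the course digits and submit
child.expect("Status:")
screen = child.before.decode(errors="replace")

if "OPEN" in screen:
    # cursor over to the (hypothetical) "add course" prompt and confirm
    child.send(DOWN * 2 + ENTER)
    child.expect("Added")
```

Gross to look at, but the underlying data really is just fixed prompts and cursor positions, so it tends to keep working.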
Clickbait title - the "this" referenced is Instagram, and other social media accounts.
"...hiring managers are more likely to check your Instagram account."
"...38% search for social media accounts..."
"When evaluating a candidate, I check for a Twitter profile to see what types of articles are shared, where he or she gets news, what content is of value to the candidate, and how he or she engages with other people"