dpryden's comments | Hacker News

I think you might be thinking of https://cdsmith.wordpress.com/2011/01/09/an-old-article-i-wr...

It says:

> I give the following general definitions for strong and weak typing, at least when used as absolutes:

> Strong typing: A type system that I like and feel comfortable with

> Weak typing: A type system that worries me, or makes me feel uncomfortable


I was actually expecting a discussion of the PC Magazine humor page, and was disappointed when I clicked through.

It did make me go pick up an old 1995 copy of PC Magazine from the other side of the room and start leafing through it, though.


I recall from my time in Google Geo years ago that the idea of integrating Search and Maps was a big part of the "New Maps" release that happened around 2014. The rumor I heard was that someone (possibly even Larry himself) wanted to be able to have interactive maps directly on the search results page, so that the navigation from a search query to a map wouldn't involve even a page reload. So the big Maps frontend rewrite actually ended up merging MFE into GWS, the web search frontend server. I recall seeing maps hosted at google.com/maps around that time, but I don't know if that was ever launched fully or if it was just an experiment.

In any case, though, my understanding is that the technical capacity for this has existed for nearly 10 years now, just behind a configuration setting. So it's possible that this change is just a code cleanup. It's also possible that someone is trying to increase the percentage of searches that have location information; that doesn't seem terribly far-fetched either, and I can imagine lots of ways people could try to rationalize it as actually benefiting users. (Whether it actually does benefit users is of course debatable.)


It is absolutely bizarre to me how half-assed Google is with integrating its products.

I have a week of events coming up in Google Calendar each with a different event location. Why can't I see a map of all those event locations alongside the calendar with all the same event details listed? Why can't I associate a Google Calendar event with a specific album or set of photos in Google Photos and see those in the map and calendar as well?

This is why I'm building https://visible.page with my brother. We have all these capabilities of visualizing data on the web, yet no one has actually put them together in a convenient and consumer friendly way to visualize any type of information together in one place.

All these big tech companies seem to just give up on any kind of significant innovation as soon as they reach a certain level of monopoly on their market. Twitter, Spotify, Facebook, Google, etc. I can think of a dozen significant feature experiments they could try that would make my daily life better using those tools yet they don't.


> It is absolutely bizarre to me how half-assed Google is with integrating its products

The answer can be summed up in one word: "privacy".

There are two forces at play here. One side wants privacy. When they give data to Google Calendar, they don't want Google Maps or Ads to know about it. The other side (your opinion above) wants more integration between services.

In this political climate, the privacy side has an edge. This means that if Google Photos wants to access data in Google Calendar to provide the integration you asked for above, they will have to jump through multiple quarters of privacy reviews, with a very high chance of being shut down.

> All these big tech companies seem to just give up on any kind of significant innovation as soon as they reach a certain level of monopoly on their market

After seeing how the sausage is made, I think claims like these are naive. It's worth learning more about the factors at play before criticizing something. More often than not, the agents are acting pretty rationally based on the situation.


> privacy

Do not give Google credit for privacy.

First, for showing maps for a location it is already showing on the screen, the data is already all there. It is pure and simple: the Calendar team didn't want to bother using the Maps team's API. Nothing else. Nobody had a meeting and decided against it because of user privacy.

Second, no matter the product, the only integration all of them MUST have is with both advertising and the profile. Those two internal APIs respectively serve ads against your profile (SSP) and add events to your profile to later target ads.

So no, absolutely nothing at Google deserves the privacy argument.


The privacy argument doesn’t make sense to me. The addresses are already in Google Calendar. They don’t need to be saved into a different service to be viewed anonymously in Google Maps. You can already do it in Google Calendar for one event/address at a time.

Yes, there are business/internal-politics reasons why some obvious features or experiments don't happen, but those aren't necessarily good reasons beyond short-term benefit to specific individuals at a company.

But I do think some of it can generally be blamed on large companies losing their ability to be nimble due to the inherent friction of the politics and logistics that build up as an organization grows.


FWIW, I worked on the integration with calendar and maps - the GP comment is exactly right, it was due to privacy concerns. The terms of service for Workspace say that user data can never be used for anything not related to Workspace, so moving any user data from Workspace to another service has to be done very carefully.

In the example of this integration, allowing it to open in the sidebar was okay because it was a user action, and there is some data anonymization that happens (I don't recall the details, this was a few years ago).

But we couldn't share a list of your appointments with maps ahead of time to allow them to generate the view you describe, because there wasn't a way that guaranteed that the data wouldn't be associated back with the original user.


I don't think privacy has anything to do with it. Google Maps doesn't need to capture any user data to implement OP's suggestion. Google Calendar just needs to render a map with a set of locations marked on it using Google Maps. It doesn't need to tell Google Maps what or who the locations are for. This is something Google Calendar should already be able to accomplish using a public API. All other aspects of the feature could be implemented as part of the Google Calendar service without any further integration with Google Maps.

Further, I don't think users are generally against services using the information - which the user has presumably already provided intentionally - to better serve them. The problem is when that information is shared with third parties or used for purposes which are not obviously in the users' best interests. IMO, any user data stored externally should be subject to an opt-in permissions system which strictly defines how the data can be used. That doesn't stop companies like Google from being able to offer me useful services that I might actually be interested in. The notion that privacy discourages innovation is just silly.


> After seeing how the sausage is made, I think claims like these are naive. It's worth learning more about the factors at play before criticizing something. More often than not, the agents are acting pretty rationally based on the situation.

All of these concerns could be trivially addressed by leaving them up to the user. Add the necessary controls to the user account page, pick default settings biased in favor of privacy, and allow users to change them if they prefer.


IMO you’re spot on. The catch being that between showing an ad and matching photo locations, the former has a nearly direct impact on the bottom line while the latter is murkier. When both are going through reviews, that’s a lot of weight difference in the arguments, and we’ll see more of one than the other.


> The answer can be summed up in one word: "privacy"
I don't understand this.

Once Google has my data, how does it affect my "privacy" if Google Service A shares it with Google Service B?

I'm somewhat privacy conscious, but I don't understand the concern there. I assume that once I give them my data, they're already doing whatever with it internally.


It's amazing to me that people have already forgotten that Google had in fact already successfully done that with Google Inbox. It's not that they weren't able to do it.

It's that in their infinite wisdom they shut it down. Just like they shut down hangouts in their infinite wisdom.


What even was Inbox? At most five people used it.

Hangouts is now integrated into Meet, which is integrated into Gmail.

It's Google doing a Microsoft/Apple and trying to be the leader in video calls/remote work/remote classes by forcing people to have it ready just by having the Gmail app.

Just like Apple with FaceTime (but they have no idea how to expand on it) or Microsoft adding Teams to the Windows taskbar, whether you like it or not.


Both AOL and Google had, around the same time, secondary mail interfaces that provided extra features. Google's was Inbox; I've forgotten the name of AOL's. They were quite similar to each other, with each slightly better than the other in some ways. Both sites were slower* to load than AOL's and Google's standard email interfaces. Neither reached the market penetration or current-account conversion that management wanted, and those of us who used them were sad to see them go.

* Google eventually added so many features to Gmail that they had to add a progress bar during page load.


An example of poor google integration that bugs me from time to time - when you search for a geographic feature, the info panel shows a great preview map with the outline of the feature. E.g. https://www.google.com/search?q=rhine+river

If you click into google maps, the outline is gone. Searching "Rhine River" just puts a marker at one point along the river.


This is not the case for me. I just now searched in mobile Chrome for "Lakeview Chicago" and the mini-map static image has a purple outline around the neighborhood. Clicking on that took me to Google maps with the neighborhood outlined in a red dotted line (which is harder to see, but obscures less of the other features/labels on the map). This was on Android, in the maps app, just now, but I've seen the same thing in a desktop browser.


Ah, you're right. It looks like the issue I'm complaining about only happens for "line" features - e.g. a river, or a road (https://www.google.com/search?q=route+66).


FWIW, OpenStreetMap can do it. I went to https://osm.org, entered "Rhine" into the search and clicked on the first result. Deeplink: https://www.openstreetmap.org/relation/123924


oh wow, it's actually worse for me: there's no marker at all, just a map of western europe: https://www.google.com/maps/place/Rhine+River/@49.34645,7.87...


Innovation, oh my, sometimes it feels like the fat ones (and, by proxy, everyone else) are living in some alternate fantasy world where the mantra "you're not gonna need it" is taken to the extreme, so they're not even trying.

The pendulum should swing back to complex and more complicated interfaces sometime — but right now these are the dark times where, for example, Netflix, this huge, popular movie and show library, doesn't even have a way to find out exactly what movies with some actor or director it has available. It's hard for me to wrap my head around that.

Your project does look useful and on point though!


The rumor/theory I have heard about Netflix is that increasing discoverability too much would allow people to see two negative traits of Netflix: How often things come and go from the platform (which other apps like Criterion Collection embrace), and just how limited their library actually is at a given time.

Scroll through recommendations. It looks like they have hundreds of great movies for you to watch! And yes, technically they do. But look how many times they try suggesting the same movies in different categories, inflating the view in a way to make the library seem bigger. One movie might show up "Because you liked comedy..." then "Because you watched <comedy movie>" then "Light-hearted movies".

TLDR money and masking their poor library quality.


I wonder if AppleTV's atrocious single-line onscreen keyboard fits into this picture of making things less discoverable, or if it's just an extreme of form over function.


Definitely not, because Apple gives users the ability to type search in on an iPhone or iPad instead of using the apple TV remote. They also let you do voice-to-text, which is nice.


It is entirely possible to both provide a useable onscreen interface and the iPhone connection option.


Whatever the reason (and I can think of many) it just shows how Apple is past the point of caring for their users.


I’m about to enable the new Facetime Live Transcription feature in iOS 16 so my wife can have conversations with her father, who is rapidly losing his hearing. For this reason (and I can think of many) I strongly disagree.


Fair enough, but that’s also a cool new feature that drives sales.

I meant it more like: why wouldn’t they fix this objectively bad input mechanism? It would take a tiny effort, but it wouldn’t improve their sales; they might even calculate that it drives usage of iPhones and is therefore good for them even though it’s bad for the users.

For the record I own both an Apple TV and an iPhone, inasmuch one can pretend to own these devices.


I had to look up this linear keyboard they created, pretty unique.

Seems like it can be changed.

Settings > General > Keyboard, then switch from “Automatic” to “Grid”.


Wow! Thanks!

For some reason I wasn’t getting automatic updates but after a manual update I can now see and set this option.

I retract my earlier statements, thank you Apple devs!


Your app looks beautiful. This is something that I've wanted to build some time. Would love to help out if possible.


This makes perfect sense product-wise. If I'm searching "bakery" on my mobile phone, I probably want the ones around me and not the generic location-agnostic Google search for it, just like I would if I were searching on the map. As a matter of fact, this is actually something I do a couple of times a month: search, then click the Maps tab to see localized results, then from there click the website result to find their webpage.

As a techie I hate any direct change to the user-agnostic absolute search, but as a user I get it.


> if I'm searching "bakery" on my mobile phone I probably want the ones around me

And yet for me, even in google maps on my iphone, when I search for bakery, the first one is almost always one that's ~40 miles away, and the closest one is almost always the second in the list. The rest of the list is definitely not sorted descending by distance. If I've searched for a _particular_ ABC bakery, I get other bakeries commingled in the list even if I know damn well there are other ABC bakeries closer than those.


The first one is the one that put the most coins into the AdWords slot, I'd guess.


I live in the UK. I recently searched for “pizza” and the top result was in Thailand.


This behavior works exactly the way you would expect in Apple Maps. A search for a bakery returns relevant nearby results.

The fact that Google doesn’t see the blatantly obvious problem, or that they try to argue that the users are wrong is a textbook case of why Apple has been doing OK in the market downturn while Google’s business continues to crash. Apple prioritizes their core products and human interface design, Google prioritizes short-term (advertising) revenue, while neglecting their core products in favor of the latest shiny thing.


Somehow DuckDuckGo has taken this to absurd extremes. Almost any search that doesn’t get many natural hits shows branches of my local government toward the bottom of the first page of results.


I have seen this too, also on bing. Not just government though, sometimes it manages to find a local house for sale instead.


You do realize that DuckDuckGo is primarily a frontend for Bing, right?

https://en.wikipedia.org/wiki/DuckDuckGo#Search_results


What we see is likely an attempt to squeeze even more juice from advertising, over which Google virtually has a monopoly. Google is trying to continue its exponential growth while relying on selling advertisements. The market has already been saturated and optimised to crazy levels. The smart thing would be to expand to other sources of revenue, but other projects inside Google fail, as they cannot compete internally for resources against that crazily optimised source of revenue.

It is doubtful that Google can overcome that internally. Perhaps regulators should break up the monopoly in advertisement and search.


> if I'm searching "bakery" on my mobile phone I probably want the ones around me

Only when you're using a phone? Only if you're not at home? What if you want to find out what a bakery is?

(Apologies for rapid fire, I'm not having a go at you, just curious)


> Only when you're using a phone?

No, eg when I'm at the office, and we talk about where to go eat and I type restaurant, or I need a new stapler and I type office supply, etc ...

> Only if you're not at home?

Not really, eg "movie theater" or "flower shop" come to mind for things I would request while at home

> What if you want to find out what a bakery is?

I would type "what is a bakery" or "define bakery"?

I'm a long-time tech user, and I miss the days of keyword-centric search, as I felt I could more easily communicate to the search engine what I wanted. But let's be honest, those days have passed; most people type sentences, and thus the engine interprets sentences.


There isn't a necessity for an "or"

One could show a map preview of local results, which can be expanded, as well as generic search results below/aside/...


Or a header along the lines of

We're showing you local results. To search the internet for "bakery" click here

It'd be great if they did that for anything personalized as well while they're at it


This is achievable with geolocation based on IP address, which is how it works on, e.g. a desktop web browser.


Not in my country - unless your ISP is in the business of selling customer PII to advertisers (coughvirgincough) your IP geolocation will often be a completely different city.

Of course, personally if I wanted to search for nearby bakeries on my phone I'd have just opened the google maps app....


Less than half the population has decent geolocation by IP. For most people, the IP address will only identify the country, or nothing at all.

Not much use if you want to search for bakeries.


Coming from CDN land, this isn't true. We didn't put too much effort into precision, but on the order of 99% of IP addresses get down to metro area. Cheap commercial providers like MaxMind get to the right postcode on the order of 90-95% of the time. Building your own latency and peering maps bridges that gap to 99% or better. Simply based on network topology and latency, we should be able to get you down to postcode or the general area of a city.


Google is my ISP. My geolocated IP is accurate within a 15 mile radius. It doesn't matter if I have location services turned off or I'm using my desktop, searching "bakeries near me" finds them without issue.

I suspect that isn't all just one big coincidence.


Google has, what, 3 or 4 cities where they operate as an ISP, each with a pretty small footprint. It's no surprise anyone knows where you are.

A cable or telephone company has generalized coverage measured in states; some of them organize their network and customer IPs by small geographies, but sometimes all of southern California is in a single pool of IPs.


"Achievable" is quite charitable from my experience. With the previous ISP I would get located in a city some 2000kms away, sometimes the scam ads would detect my location as null.

Maybe it's more effective in places like the US.


No, I’m randomly placed 2 states away. A solid day of driving.


Funny how that works. I never ever allow location access to anything Google, or any website for that matter, and have the muscle memory to hit deny when the browser prompts me. The other day I was searching something and then clicked my bookmarked Google News, and suddenly all the news was UK-specific, and my search results for "heatpumps" were all UK companies and products. I was confused until I noticed that my work VPN chose a UK endpoint because the NL one where I am had higher latencies. So, Google heavily tailors the results based on where it thinks you're at. Also, I was delighted to learn that in spite of all the tracking Google probably does on me, it was easily fooled into thinking I was in the UK :-)


IP-based location is mostly usable for country. I've rarely found it gets the city right, often it doesn't even get the county right.


It gets really annoying when you are trying to search for some specific term in English and Google keeps guessing that you wanted something that sounds similar in your native tongue.


I have links to google.com/maps in my IRC logs dating back from June 2014, so this absolutely tracks.

I actually remember google.com/maps being launched at IO in 2014 -- the presentation had a broken link in it for the new version of Maps, and a few of us DoS SRE watching the livestream were able to hack together a config change in a few minutes to fix it without waiting for a urlmap push :)


> It's also possible that someone is trying to increase the percentage of searches that have location information, that doesn't seem terribly far-fetched either, and I can imagine lots of ways people could try to rationalize it as actually benefiting users.

Could you speak more to how this kind of thing figuratively plays out? With privacy on most of our (tech-focused) minds, I’m mostly curious how openly an initiative like this is/would be carried out. Would you imagine it as a buried lede or as a very transparent, explicit OKR?


It's easy to rationalize it as benefiting the users, so I'd imagine it's an explicit OKR, maybe even a few levels up in the org.

Like, one thing I've wanted on occasion is the ability to search for brick and mortar stores in a given radius who have the thing I want -- either because I want to physically inspect it before committing to a purchase or because for whatever reason the time/cost of shipping wouldn't be practical.

That sort of query is hard for Google to serve right now though for reasons including the lack of relevant location information in both the search results and the queries whose user behavior would help drive relevance rankings for those location-specific results.

Location information is a bit of a double-edged sword too though, even ignoring privacy concerns. I have to spoof my location and change my search language to get some results because of aggressive filtering happening behind the scenes. If a given query doesn't match Google's current understanding of the user then the right results existing in the corpus often won't imply that the user is able to find them with _any_ search operators.


With the document policy changes over the last 5 years, most decisions are now very opaque. Google TTLs everything except Docs and code history & reviews, at this point: emails, chats, bug reports, ...

There's probably a tech debt focused OKR for this work, but some other teams probably have OKRs that indirectly benefit from the data, and they're probably providing staffing support, tied to the tech debt OKR. OKRs are for telling people why you're great, if you're at the bottom of the pyramid, and for giving the rank-and-file some direction, if you're at the top. The top level OKRs are usually very precise and very vague at the same time.

So there's probably an OKR in search to improve the quality of the location signals. It can be vague on how. Plus, having more and better data filters into your downstream systems, so even without an OKR for the data you know it will make your models more powerful.


I remember the spiffy demo where the thumbnail in search results morphed into the full Maps UI without reloading.

But unification had started even earlier than that. Pretty much since Larry became CEO again, he pushed this mantra of "One Google", which brought the infamous Kennedy redesign across all services, as well as making more of them available under the google.com host (e.g. maps as discussed here, but also flights and more). One of the ideas behind the latter was that you had to log into your Google account just once, which gradually made it all the way to YouTube(!). I vaguely recall other factors, such as compensating for the increased latency from going HTTPS everywhere, but also discussions about securing and hardening cookies.

As far as I know, google.com/maps has been around the entire time, but perhaps now it might be simply the canonical URL in a larger number of cases.


Funny, because there is a crummy form of Google Maps present in the SERP, and it behaves completely differently from actual Google Maps. It constantly annoys me, usually when searching for a business, that something that looks exactly like Google Maps, in Google, doesn't behave the same as Google Maps.


100%! I always ascribe it to some PM somewhere, but when I click on the "search maps" I would _love_ to be taken to the "real Google Maps".

The search maps is just a terrible experience, half implemented, doesn't do what I want, even down to little things.

My hack is to pick directions, which will get me to Google Maps, then cancel the directions. This loses all state, but you're still in the location you want and can usually then just click the business you were looking for.


This reminds me of how Google integrated Maps into Calendar as a sidebar a while ago, a move that I absolutely hated. And instead of providing a preference setting to disable it, you have to “hide” the sidebar in a non-intuitive way [0]. I had to search to figure it out.

0: https://www.howtogeek.com/695504/how-to-stop-google-calendar...


The K&R book is primarily focused on Unix and similar operating systems, and they didn't have threads when the book was written.

The first edition of K&R was published in 1978.

Wikipedia says that OS/360 had a notion of threads as early as 1967, although the distinction between threads and processes wasn't as clearly defined at that time.

In 1978, threading and concurrency were still an area of academic research. There are some great papers from Dijkstra and Lamport from that era, but much of the research at that time considered each concurrent "thread" to be a separate program; that is, each process was conceptually single-threaded, and if you wanted to do multiple things you made multiple processes for them.

The idea of multiple mini-processes inside a single process came later and took a while to stabilize into its modern form. POSIX didn't standardize thread-related APIs until 1995 and Linux didn't get modern POSIX threads support until Linux 2.6 in 2003.


This article is naive to the point of being flat-out wrong, since it makes extremely naive assumptions about how a garbage collector works. This is basically another C++-centric programmer saying that smart pointers work better than the Boehm GC -- which is completely true but also completely misleading.

I'm not saying that GC is always the best choice, but this article gets the most important argument wrong:

> 1. Updating reference counts is quite expensive.
>
> No, it isn't. It's an atomic increment, perhaps with overflow checks for small integer widths. This is about as minimal as you can get short of nothing at all.

Yes, it is. Even an atomic increment is a write to memory. That is not "about as minimal as you can get short of nothing at all".

Additionally, every modern GC does generational collection, so for the vast majority of objects, the GC literally does "nothing at all". No matter how little work it does, an RC solution has to do O(garbage) work, while a copying GC can do O(not garbage) work.
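
As a rough illustration of that asymptotic point, here's a toy Python sketch. It's purely illustrative, not a benchmark or a real collector, and the numbers are made up: it just counts which objects each scheme has to visit in one young-generation collection cycle.

    # Toy model, not a real collector: count the objects each scheme must visit.
    def rc_work(heap):
        # Refcounting runs a decrement/free for every object that dies,
        # so its work is proportional to the amount of garbage.
        return sum(1 for alive in heap if not alive)

    def copying_gc_work(heap):
        # A copying collector traces and evacuates only live objects; dead
        # objects are never visited, so its work is proportional to survivors.
        return sum(1 for alive in heap if alive)

    # 1,000,000 short-lived temporaries and 1,000 survivors: a typical young generation.
    heap = [False] * 1_000_000 + [True] * 1_000
    print("RC work units:        ", rc_work(heap))          # ~1,000,000
    print("Copying GC work units:", copying_gc_work(heap))  # ~1,000

The constant factors obviously differ wildly in real systems; the sketch only shows which side of the heap each scheme has to touch.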

Now, that's not to say that GC is automatically better. There are trade-offs here. It depends on the workload, the amount of garbage being created, and the ratio of read to write operations.

The article says:

> I've already stated I'm not going to do benchmarks. I am aware of two orgs who've already run extensive and far-reaching experiments on this: Apple, for use in their mobile phones, and the Python project.

I can counterpoint that anecdata: Google extensively uses Java in high-performance systems, and invented a new GC-only language (Go) as a replacement for (their uses of) Python.

The right answer is to do benchmarks. Or even better yet, don't worry about this and just write your code! Outside of a vanishingly small number of specialized use cases, by the time GC vs RC becomes relevant in any meaningful way to your performance, you've already succeeded, and now you're dealing with scaling effects.


> [...] and invented a new GC-only language (Go) as a replacement for (their uses of) Python.

That's not true. Go was invented with the intention of replacing C++ at Google. That didn't really work out, and in practice Go became more of a replacement for Python for some applications at Google.

Also, there are some indications that Go didn't gain traction necessarily on the merits of the language itself, but more on the star power of its authors within Google.

(I mostly agree with the rest of what you wrote.)


>Even an atomic increment is a write to memory. That is not "about as minimal as you can get short of nothing at all".

An atomic reference count update may trigger a cache flush in other CPUs, or stall waiting for them to do so, so it's indeed not so minimal.


This is similar to the old debate about the meaning of "frontend" and "backend". I maintain that something cannot be a "frontend" or a "backend" in isolation, it can only be a frontend or a backend to some other system. In a stack with many layers, (nearly) every layer is a frontend to one thing and a backend to another.

That said, I think I tend to use "upstream" in a similar sense to how package maintainers talk about packages: "upstream" is the source (where the thing originally came from) and "downstream" is the sink (where I will send it to). This implies that "upstream" for a request is "downstream" for a response, and vice versa. I don't know if that's necessarily correct usage, but that's how I've done it.


This hints at an (IMO underrated) skill: being able to look at a problem and decompose it into sub-problems that are solved by known algorithms. Topological sort is a pretty fundamental building block for solving many problems that involve dependencies between components, since those can be modeled as a directed graph.

Knowing how to write a topological sort isn't the key skill here (although, I would argue, it's a good skill to know, and it's probably much simpler than you're imagining). The key skill is knowing that a topological sort will solve the problem, or perhaps it's simply knowing that "finding an order of actions that satisfies these dependency constraints" is called a topological sort.
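
To give a sense of how small that building block actually is, here's a minimal sketch of Kahn's algorithm in Python. The function name and the example dependency graph are mine, just for illustration.

    from collections import deque

    def topo_sort(deps):
        """deps maps each task to the set of tasks it depends on.
        Returns an order in which every task appears after its dependencies;
        raises ValueError if the dependencies contain a cycle."""
        deps = {task: set(reqs) for task, reqs in deps.items()}
        for reqs in list(deps.values()):
            for req in reqs:
                deps.setdefault(req, set())  # tasks that appear only as dependencies

        remaining = {task: len(reqs) for task, reqs in deps.items()}  # unmet dependency counts
        dependents = {task: [] for task in deps}                      # reverse edges
        for task, reqs in deps.items():
            for req in reqs:
                dependents[req].append(task)

        ready = deque(task for task, count in remaining.items() if count == 0)
        order = []
        while ready:
            task = ready.popleft()
            order.append(task)
            for waiter in dependents[task]:
                remaining[waiter] -= 1
                if remaining[waiter] == 0:
                    ready.append(waiter)

        if len(order) != len(deps):
            raise ValueError("dependency cycle detected")
        return order

    # "app" needs "lib" and "config", both of which need "setup".
    print(topo_sort({"app": {"lib", "config"}, "lib": {"setup"}, "config": {"setup"}}))
    # e.g. ['setup', 'lib', 'config', 'app'] (any order satisfying the constraints is valid)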

I have always been under the impression that the goal of leetcode-style interviewing was actually to measure a candidate's ability to recognize what algorithmic tool to use, rather than to measure a candidate's ability to implement an algorithm from scratch. When I've been in the position of interviewing in the past, I've always been more interested in that problem-solving approach than in the actual code.

In my experience, most software jobs are 90% figuring out the problem that needs to be solved and coming up with tactics for solving the problem, and 10% actually implementing those tactics. In that context, investigating the ability of a candidate to analyze problems seems like an excellent interview technique.


I am finding this article and the comments here very surprising. Do a large number of people actually believe that the word "crypto" means exclusively "cryptocurrency"? Does anyone believe it means exclusively that?

To me it seems similar to how "auto" as a noun is generally short for "automobile", but most people are aware that other things can also be called "auto". When a camera says it is "auto focus" I cannot imagine that any normal person would assume that phrase has anything to do with automobiles.

It is incredibly common for the same word to have different meanings in different contexts. I personally have literally never had a conversation about cryptocurrency in which any person used the word "crypto" to mean "cryptocurrency", so I am clearly out of this loop. But if people decide to use it that way as slang in a certain context it certainly doesn't change the meaning of related words, or even mean it's impossible to use a different slang meaning in different contexts.


> Does anyone believe it means exclusively that?

I'd say about 90% of people believe that since they don't know cryptography is a thing. Now if you're talking to people that work in tech it's a different story, and they'd probably accept both definitions.


Yes. A large number of people who are not software engineers or mathematicians currently think "crypto" means "cryptocurrency", and don't know what "cryptography" is or think about it at all in and of itself. They think about cryptocurrency a lot, and call it "crypto".

I see it all the time on my social media feeds.


Overall this seems like a very positive change. However, I wonder how it will affect local development of servers that participate in API flows with public-facing systems.

As an example: imagine I am developing my application locally, at http://localhost:8080/ , and this application supports an OpenID Connect identity flow with an identity provider, https://idp.corp.example . Today I can test the login flow by telling the IDP that localhost:8080 is a valid URL to redirect back to, so that I can click a "login" button in my application, log into the real IDP, and get a token posted back to something like http://localhost:8080/idp-callback . This makes it easy to develop a system locally that also communicates with various backend microservices which require authentication with a common IDP.

I can't imagine that this is a rare scenario: it seems pretty normal to me. But if I understand the proposal, it sounds like the long-term goal is to prevent this kind of environment for local development, and instead force you to either run your dev stack remotely (with a public IP!) or else run your entire IDP stack locally (so that it has a local IP). Neither of those seem like good ideas to me.

Of course, the other option would be to modify local dev servers to accept CORS preflight requests and respond correctly, but I'm always slightly uncomfortable adding code into a local dev stack that would be unsafe if enabled in production. At the very least it makes it harder to debug when something inevitably goes wrong with this API flow.
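
For what it's worth, if you did go down that road, the shim can at least be small and explicitly gated so it's inert outside of local development. Here's a rough sketch using only Python's standard library; the DEV_CORS_ORIGIN environment variable and the handler names are made up for illustration, and the Access-Control-Allow-Private-Network response header is my understanding of what the proposal expects.

    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Illustrative env var: set it to the public origin you develop against,
    # e.g. "https://idp.corp.example". Leaving it unset disables the shim entirely.
    DEV_ORIGIN = os.environ.get("DEV_CORS_ORIGIN")

    class DevCallbackHandler(BaseHTTPRequestHandler):
        def _maybe_allow_cors(self):
            # Only emit CORS headers when the dev-only flag matches the request's Origin.
            if DEV_ORIGIN and self.headers.get("Origin") == DEV_ORIGIN:
                self.send_header("Access-Control-Allow-Origin", DEV_ORIGIN)
                self.send_header("Access-Control-Allow-Headers", "Content-Type")
                # Header from the private network access proposal, as I understand it.
                self.send_header("Access-Control-Allow-Private-Network", "true")

        def do_OPTIONS(self):
            # Answer the preflight that the browser sends before the cross-origin request.
            self.send_response(204)
            self._maybe_allow_cors()
            self.end_headers()

        def do_POST(self):
            # Stand-in for an endpoint like the /idp-callback example above.
            self.send_response(200)
            self._maybe_allow_cors()
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"callback received\n")

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), DevCallbackHandler).serve_forever()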

There are probably ways to solve this by introducing more proxies into a local dev stack, but I worry that these kinds of little papercuts will just make development that much harder for microservices-architecture applications, which are already hard enough to develop and debug as it is.


If you have a domain that you can control the DNS of, you could temporarily stand up an internet-facing server for localhost.yourdomain.com, get a certificate, then change the DNS for localhost.yourdomain.com to point to 127.0.0.1 (or put it in your HOSTS file), then address it with localhost.yourdomain.com rather than just localhost.

But yeah, I think really, browsers should be able to allow self-signed certs for localhost.


Let's Encrypt also allows you to do DNS based validation (DNS-01) where you just set a TXT record on the domain and update it from time to time.

The acme.sh script actually supports quite a few providers, so you can cron it up and never even have to run an outward-facing HTTP server. Useful if, say, your ISP blocks port 80 or if you're behind a NAT you cannot control.


It looks like this is still allowed. https://idp.corp.example is a secure context, and you can add proper CORS headers to your local endpoint, and then you meet the criteria for the request.


Non-Googler: What do all those words mean?

Noogler: Haha, this video is so funny!

L4 SWE: (Crying because the video is so true)

L5 SWE: Haha, this video is so funny! I should show it to my interns, this will be a good training for them.

L6+ SWE: Why do people think this is funny? This Broccoli Man guy makes some really good points...


Where does "these are really good points, but why don't we have tooling which sets everything up automatically?" fit on the scale?


That would be 'Xoogler' because Google's engineering and broader corporate culture does not reward work like that and so when you realize that, you leave.

In general, Googlers have very little idea how far behind the rest of the industry they are when it comes to tooling.

I am a Xoogler.


I got the impression, based on a blog post by Eric Lawrence [1], that Google's developer tooling was top-notch (except for devs working on open-source projects like Chromium). Did it get worse since 2017, or are you talking about a different kind of tooling?

[1]: https://textslashplain.com/2017/02/01/google-chrome-one-year...


Or: These are really good points for a visibly-user-facing post-alpha service, but isn't it a bit overengineered for an experimental internal service whose clients can tolerate the risk of occasional downtime?


L5 Xoogler who left for a startup.


Yeah, that was my reaction. I get the need for all this reliability/failover, but it's a horrible failure of abstraction/separation of concerns.

There's no reason the serving team should have to learn how to do all of those things on the checklist, since it can be done by anyone who's already learned the infra. You're expecting them to learn all kinds of stuff outside of their specialty, when they should be able to kick the app over the wall and let infra ensure that the app is deployed in two separate PCR zones with the failover plan etc, which should itself be mostly automated.


Mega-Caps suffer from the following problem:

1. There are more engineers making more divergent architectural solutions such that there is never a single place where you can make changes across the group.

2. Failures keep happening, so process is instituted with many checkboxes for engineers to work through.

3. Engineers on the small scale stuff get stack ranked against the engineers on the big scale stuff. Everyone needs to show that they can do the work and are "fungible". This leads to small internal systems having the same operational standard as large public facing systems.


I don't see what that's replying to. Nothing in that list would justify demanding that the app's team have knowledge or preference about which PCR zones to pick and which will just have to be corrected when they inevitably pick the wrong one.


The point is that every team gets to set their own failure modes. I know of multiple tier-1 services which diverge from at least one best practice.

Think of the scenario where a cloud provider needs to evacuate an AZ. There is no API which would allow the compute team to force-migrate tens of thousands of apps and guarantee that they are both unaffected and maintain their redundancy guarantees.

Internal services at Google are in the same boat. However, Google knows about the hard edges and forces everyone to deal with all of that complexity; there is no API which the serving team could plug into which would avoid this overhead.


That still at no point requires the application's team to make decisions about which two PCR zones to pick and which cells within it to pick, which [decision] can still be cleanly abstracted away, and would still be a mixing of unrelated concerns, and so your comments are still orthogonal to the point I was bringing up here.

Edit: It might help to check out my comment here, where I clarify what a dev should vs shouldn't have to worry about: https://news.ycombinator.com/item?id=29085638


While what you say is true, I think GP is ultimately correct. You can have a system define a convention and allow bypassing it, instead of forcing everyone to start from scratch. In fact, this is the approach that pretty much any modern service at Google will use.


> when they should be able to kick the app over the wall and let infra ensure that the app is deployed in two separate PCR zones with the failover plan etc, which should itself be mostly automated

Not entirely - the developers should actively participate in designing the actual failover scenario and making sure the application can handle that (anything from being okay with some downtime due to the failover happening to designing an actual multi-region multi-master application). Making assumptions like 'infra will handle it' is a great way to not only get unexpected outages (because the developers assumed there would be no downtime because failover is magic, or that writes will never be lost) but to also introduce tensions between teams (because you now have an outside team having to wrangle an application into reliability when the original authors don't give a crap about it).

I get and agree with your point, the tooling and processes should definitely be simplified/automated when possible, and developers deserve a working platform that just works. The whole point of a platform team is to abstract away the mundane to let people do their job. But reliability is everyone's job, not just the infra's team, and developers must understand the tradeoffs and technology involved in order to not design broken systems.


If that's the point:

A) It's doing a horrible job conveying it. A dev does need to be concerned with how to handle failover, but only at a certain abstraction level. They should be required to specify something in the form "given server A fails and has to fail over to B, what do you do?" That does not require you to know the terminology about PCRs, how to make decisions about which cells (or whatever) to pick on deployment, or how to avoid the "gotcha" about making sure the two servers are in different PCR zones.

At that point, it's just following a checklist that needs no knowledge of the specifics of the app, and, to the extent that it's accurately representing how Google was, is indicative of bad processes.

B) Many things should be infra's job, as they're cleanly orthogonal to what dev's are doing. For example, how to apply a security patch to a DB. That's unrelated to the operation of the app.

I do get your point though, and I wouldn't say something like this about e.g. testing (which was the short, "reasonable" part of the video!) -- the devs have intimate knowledge of what counts as passing and failing and should be writing tests, and not 100% passing it over to QA. But that's precisely because such concerns are deeply tied in to the thing they are concerned with. "SQL 3.4.1 vs 3.4.2" is not.


Yeah, it seems like we agree :).


Because you have to get it working before you can make it better. Abstraction is quite secondary


Yes but the video is in the context of a mega-scale mega-corp that should have been able to set up clean abstraction boundaries at this point by now.


They already have done that; this video is 11 years old. At that point Google was half the age it is now and a fraction of the size.


Google was still huge in 2010. Everyone seems to think that everything was a hundred percent different just <small number> of years ago...


> <small number> of years ago

Half a company's lifetime isn't "<small number> of years ago" for that company. You can't compare tech ecosystems today to those in 2010; so many things have gotten standardized since then, and Google was at the forefront back then.

Unlike modern companies, Google had to build out everything themselves, since nobody had built those systems or even had experience building such systems. That takes time, but today all of the things Google learned are common knowledge in papers and similar.

If you disagree, name one company in 2010 that had a one-button script that abstracted away things like where the data is stored to ensure it is failsafe, data replication, etc. I don't think there was any. Google made it relatively easy to launch such services; the fact that you had to manually configure the replication script and the zones your data should be stored in wasn't really a big deal.


imho the Google interview process selects for people who thrive on organizational challenges.


I think that was more or less the intended response. And ten years on, most of these things are automated. This video was a kick in the pants internally.


L9+


T7-T9 Vision


Is there a page that documents this anecdote? I’d like to link to it next time i use this phrase. Ironically, googling for it doesn’t turn up anything relevant.


Google has officially apologized. The person in question had to take their blog offline due to bad behavior from readers (unrelated to Google AFAIK). Overall, this is a dark chapter. It's also been scrubbed internally. Not obliterated, but you won't accidentally bump into it as a Noogler.

As much as I think people should take responsibility for their own actions, it's probably for the better to let this one rest now. Who caused it is irrelevant at this point, though. We (Xooglers, and Googlers) can take responsibility for our actions, and not continue perpetuating it.


I use the phrase to describe how leveling is not about what junior people think it’s about. I’m not clear about the responsibility you’re talking about.


> Where does "these are really good points, but why don't we have tooling which sets everything up automatically?" fit on the scale?

My guess is it fits nowhere, because the L5s don't have the ability to automate it, and the L6s think it's trivial and, as it's done sparingly, it doesn't justify the work to do things differently.

And this is why we can't have nice things.


And yet it's been a decade since this video and practically everything it mentions is a non-problem now.

No one is spinning up new borgmon instances. Spanner is replicated by default. Only very low level services need to care about PCRs. If you use one of the approved frameworks it will set up practically all the production configuration for you. Basic alerting for your service is automated, just turn it on, picking cells to run in is automated, scaling your service is automated, etc.

Actually getting quota remains a problem... :-p

Anyway I would argue we can and do have nice things, and that has happened precisely through the efforts of a huge number of people at all levels.

Edit to add: of course, there are always new problems to complain about! It's the march of progress after all.


Yes. If someone were to make this video today, it wouldn't be about production jobs and PCRs, it would be about privacy reviews and branding approvals.

But the quota issues haven't changed a bit.


More like you aren't going to get promoted for automating someone else's toil. Also, now who's going to support it? Better deprecate it since the library changed / got deprecated / it's Tuesday.


> More like you aren't going to get promoted for automating someone else's toil.

Lots of people were promoted for automating these things. They built easy to use services, got extra headcount since they became important and climbed the ranks. So not sure why you'd think that.

It may be different at other companies, but at Google building stuff that many other engineers depends on is a major way to get promoted. Of course if you automate something and nobody uses your automation tooling then you wont get promoted, but if your work gets used by basically every new engineer you'll climb the ranks quickly.


L7+ SWE

My life is a waste but the money is too good...


This applies to every level, particularly the lower levels.


it ain't much, but it's honest crying into piles of money


How good?



/me falls from the chair.


Not sure if "SWE" stands for software engineer, or "Sweden" as in Stockholm Syndrome


Random synapse activation:

A few years ago there was a Swedish tourist at a hotel where I was on vacation. He had a blue-yellow hat with "SWE" written on it in Courier font. I felt an urge to steal his hat because it looked better than most of the Google-branded swag I got as a Google SWE :)


Why not both? :sob:


Oh definitely Sweden.


> Non-Googler: What do all those words mean?

Exactly. This wasn't too relatable, even though I have the GCP Certified Architect cert.


I can't tell if this comment is implying that my comment is unclear, or if you're agreeing with the first line of my comment.

In either case, though, it's an inside joke precisely because it's more relatable to those who are (or were) inside. In particular, I think it would be most funny to someone who was at Google about a decade ago; when I left Google in 2017 things had already changed enough that this didn't ring quite as true for new hires.

That said, GCP is not very representative of what the internal platform looked like circa 2010. (Or even of what the internal platform looks like now, as far as I know.)


I agree that, as a non-Googler, I don't get the video; that is all. No negative connotation toward your comment.


Why would internal tooling mean anything to you? And why would GCP knowledge be useful in any way?

It's fairly simple to extract the gist of what these systems are from the script.


As an ex-Apple person, I'd say it means there's way too much hierarchy at Google? Not sure I'm reading it right, though.


IMO/IME it's the clash between tooling, systems, and processes designed for running long-term, highly scalable and reliable services maintained by teams in multiple geographical locations and used by billions of people; and greenfield projects that just want to get things done at an early stage.

Requiring multi-cluster/region, the quota/resource economy system, handling PCRs, code review, readability approval for complex configuration languages (and the existence of such complex languages in the first place) ... all of that makes sense in a vacuum and all were built to handle real problems and are likely written in the blood of a near-miss outage. But it also all comes crashing down on you when you're doing things from scratch for a relatively simple usecase that no-one really designed for.


We still had our processes, though. Radar was my least favorite, but they replaced the ant eater app with one that was at least partially usable right before I left.


We'd say, about spoken mad-scientist-style requests: if it's not in Radar, it never existed. :)

