Hacker News | embedding-shape's comments

Neither are they gonna lose the potential of getting the data of any of their visitors, hence they're in this catch-22.

Really? How long have you been a developer? I've been almost exclusively doing "agent coding" for the last year plus a few months, and been a professional developer for a decade or so. Tried just now to write some random JavaScript, C#, Java, Rust and Clojure "manually", and it seems my muscle memory works just as well as two years ago.

I'm wondering if this is something that hits new developers faster than more experienced ones?


Probably depends on the individual. Senior developer here and I've always offloaded boilerplate and other "easy to google" things to search engines and now AI. Just how my brain and memory work. Anything I haven't used recently isn't worth keeping (in my subconscious mind's opinion anyway).

Yeah, having to look up the "basic boilerplate" stuff is not worse for me after starting to use AI than it was beforehand.

Experience isn't the problem. I have 20+ years of C++ development, built commercial software in Java, Rust, Python, played with assembly, Erlang, Prolog, Basic.

Played with these coding agents for the last couple of weeks and instantly noticed the brainrot when I was staring at an empty vim screen trying to type a skeleton hello world in C.

Luckily the right idioms came back after a couple of hours, but the experience gave me a big scare.


Same for me. Been fully agentic for half a year or so, still remember the myriad of programming languages and things just as well if there's no AI present at all. Hard to shake 15 years of experience that quick, unless maybe that experience never fully cemented?

Maybe the difference between actually knowing stuff vs surface level? I know a lot of devs just know how to glue stuff together, not really how to make anything, so I'd imagine those devs lose their skills much faster.


> random JavaScript, C#, Java, Rust and Clojure "manually"

Right, sounds very credible to me. What did you write, an addition function in each of those?


> I'm wondering if this is something that hits new developers faster than more experienced ones?

Almost certainly, at least according to Ebbinghaus' forgetting curve.
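For the curious, Ebbinghaus' curve is usually summarized today as simple exponential decay (an assumption on my part that this simplified modern form, rather than his original "savings" measure, is what's meant here), with S the stability of the memory:

```latex
R(t) = e^{-t/S}
```

More practice and deeper encoding raise S, so the same elapsed time t costs an experienced developer less retention than a novice, which matches the intuition above.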


I can tell you that I can still code Python and Haskell just fine (I did those in vim without bothering to set up any language assistance), but Rust I only ever did with AI and IDE and compiler assistance.

It's a side effect of using AI.

People using AI for tasks (essay writing in the MIT study linked below) showed lower ownership, brain connectivity, and ability to quote their work accurately.

> https://arxiv.org/abs/2506.08872

There was an MSFT and Carnegie Mellon study that saw a link between AI use, confidence in one's skills, confidence in AI, and critical thinking. The takeaway for me is that people are getting into "AI, take the wheel" scenarios when using GenAI and not thinking about the task. This affects novices more than experts.

If you managed to do critical thinking, and had relegated sufficient code to muscle memory, perhaps you aren’t as impacted.


It's probably too much inside baseball to merit a study, but I'm curious if the results would change for part-time coders. When I'm not coding, I'm writing patents, doing technical competitive analysis, team building, etc.

My theory is that if you're not full-time coding, it's harder to remember the boilerplate and obligatory code entailed by different SDKs for different modules. That's where the documentation reading time goes, and what slows down debugging. That's where agent-assisted coding helps me the most.


> [...] and ability to quote their work accurately.

I guess that's an advantage? People shouldn't have to burden their memory with boilerplate and CRUD code.


The task was essay writing, and the three groups were no tools, search, and ChatGPT.

The people who used ChatGPT had the most difficulty quoting their own work. So not boilerplate or CRUD, but yes, the advantage is clear for those types of tasks.

There were definite time and cognitive effort savings. I think they measured roughly 60% time saved and a ~32% reduction in cognitive effort.

So it's pretty clear people are going to use this all over the place.


I think your environment plays a big role. With AI you can kind of code first, understand second. Without AI, if you don't fully understand something then you haven't finished coding it, and the task is not complete. If the deadline is too aggressive, you push back and ask for more time. With AI, that becomes harder to do. You move on to the next thing before you're able to take the time to understand what it has done.

I don't think it is entirely a case of voluntary outsourcing of critical thinking. I think it's a problem of 1) total time devoted to the task decreasing, and 2) it's like trying to teach yourself puzzle-solving skills when the puzzles are all solved for you quickly. You can stare at the answer and try to think about how you would have arrived at it, and maybe you convince yourself of it, but it should be common sense that the learning value of a puzzle evaporates if you are given the answer.


I love seeing this, and love seeing regulations working exactly as wanted! What I see is basically "We're unable to serve this website without compromising your privacy, so instead of pretending or giving you a choice, we give you this message so you can turn around".

> "We're unable to serve this website without compromising your privacy... "

More accurately, "we do not have the staff or funds to figure out what every single random law around the globe requires of us, and since foreign countries are not a realistic advertising market for a local Michigan newspaper, there's really no reason for us to try."


Well, you don't have to do any of that stuff if you either are upfront about selling user data and ask if it's OK, or if you just don't do that stuff at all.

But to know that you would have to study the laws of other countries or in this case EU which costs money and in this case is not an obviously beneficial investment.

They blocked a continent without seeking any advice?

European law imposes a great deal more obligations on a business than that. This claim is simplistic to the point of disingenuousness.

>since foreign countries are not a realistic advertising market for a local Michigan newspaper

This may be true for in-house ads, but there are ad networks that are already able to personalize ads and have ad inventory for such foreign countries.


It's illegal for us to steal from you, so we won't invite you inside.

What does GDPR get you that browser settings and an extension don't? I'm genuinely curious how random websites refusing to serve content / spamming cookie banners is a good thing?

The data download and removal side of GDPR seems useful for more "entrenched" use cases where you have an account and a long history on a service but... fly-by website visits should not be this heavily regulated. Blocking cookies and scripts is trivial.


I should not need extensions for a business to respect my privacy. It's as simple as that.

If you look at it through an equity angle, needing extensions relegates the negative effects to those that are already not "well off" — the technologically illiterate who don't know what to do or know someone who does.


So someone's refusal to make a couple clicks to install an extension necessitates: 1) millions of users having to click to get the annoying popup off their screen, 2) installing an extension to block those anyway, and 3) a more fractured internet where website operators outright refuse to serve content because of liability? I'd bet a very large sum of money that the technologically illiterate don't read anything on those popups and click "Accept all cookies"

Right... as if you can trust some random American or other non-European website to really respect the law. What are you gonna do if it breaks GDPR? GDPR ruined the Internet.

I'd argue greedy capitalists ruined it. They were also the cause of GDPR

They also built it out.

Basically all up to the training data, as things often are.

I wonder if that might be another reason to just completely disable this feature rather than make it a permission: otherwise people could use it to build training sets for geoguessing models.

People already uploaded tons of images and data while playing Pokemon GO. Probably a model has already been built and is being tested right now.

You still need some smarts, since the picture you just took won't be in the training data.

> If Google feels that the location tags and filenames are unacceptably invasive, it can stop writing them that way.

Something can be "not invasive" when only done locally, but turn out to be a bad idea when shared publicly. It's not hard to imagine that a lot of users want to organize their libraries by location in an easy way, but still not share the location of every photo they post online.


Definitely. I want to be able to search my Google Photos for "Berlin" and get all the pictures I took there.

> I’d wager 99.9% of the users didn’t realize that they are effectively sending their live GPS coords to a random website when taking a photo.

I'd wager 90% of the people whose photos are associated with various Google Maps listings don't actually know their photos are public. I keep coming across selfies and other photos that look very personal, but somehow someone uploaded them to Google Maps; the photo is next to a store or something and Google somehow linked them together, probably by EXIF.


Google prompts you in Google Maps if you want to upload your picture to Maps.

I sometimes do that for random pictures, even like selfies, which I don't mind popping up there.


Wait... You post selfies on Google Maps? The thought never crossed my mind. What would the purpose be? Sorry I'm probably thick...

I can say for me that after my father died I posted pictures of him at some of his favorite places or from favorite trips.

The Google Maps app sees that you took a photo near a POI and later in the day asks you in a notification if you want to share it on Maps.

You review the photo and go "lol, sure".

At least for me that doesn't even feel like posting due to how frictionless it is and that it's about natural discoverability (someone has to click that POI and scroll through photos to find it).


About the latter: that's why Google Maps is my favourite social medium. It's hyper-local.

I had a popup on my iPhone one day: "You were in City Park last weekend, would you like to share those photos?". I stopped allowing Google access to my photos after that. A little late though; they had apparently scraped all of my data already.

I had a similar moment a few years ago. That Google Maps pop-up was what caused me to first switch to de-googled Android, and once that turned out to be a hassle after a couple of years, switch to an iPhone without Google stuff. (On Android, Google is a location provider, so blocking their access is much harder.)

I suspect there used to be a flow which made it far too easy to share directly to Google Maps. I was browsing the map once and found a picture of a credit card in a hotel room. I guess the guy intended to send it to his PA or something.

I have friends that do that and it’s intentional. Had a good time at a store or restaurant? Take a selfie and upload to Google Maps. Also take a selfie video and upload to Instagram stories. It’s a way of life that defaults to more sharing.

Couldn't you use <input type="file" accept=".jpg,.jpeg"> (different from the image/jpeg MIME type I think; not sure if that also strips EXIF?), then manually parse the EXIF in JS? Shouldn't be that complicated to parse, and I'm guessing there are a bunch of libraries for doing just that should you not want to do it yourself.
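For what it's worth, the `accept` attribute only filters the file picker; it never modifies the chosen file, so the EXIF bytes arrive intact either way. A rough sketch of spotting a GPS IFD in the raw bytes (`hasGpsData` is a hypothetical helper of my own, not a full EXIF parser, and it assumes a well-formed JPEG):

```javascript
// Detect whether a JPEG (as a Uint8Array) carries an EXIF GPS IFD.
// Sketch only: walks the JPEG segments, finds the APP1 "Exif" block,
// then scans IFD0 for the GPS Info pointer tag (0x8825).
function hasGpsData(bytes) {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  if (view.getUint16(0) !== 0xffd8) return false; // no SOI marker: not a JPEG

  let offset = 2;
  while (offset + 4 <= bytes.length) {
    const marker = view.getUint16(offset);
    const size = view.getUint16(offset + 2); // segment length, incl. these 2 bytes
    if (marker === 0xffe1) { // APP1: where EXIF lives
      const exifStart = offset + 4;
      const sig = String.fromCharCode(...bytes.slice(exifStart, exifStart + 4));
      if (sig === 'Exif') {
        const tiff = exifStart + 6; // TIFF header follows "Exif\0\0"
        const little = view.getUint16(tiff) === 0x4949; // 'II' = little-endian
        const ifd0 = tiff + view.getUint32(tiff + 4, little);
        const entryCount = view.getUint16(ifd0, little);
        for (let i = 0; i < entryCount; i++) {
          const tag = view.getUint16(ifd0 + 2 + i * 12, little); // 12-byte entries
          if (tag === 0x8825) return true; // GPS Info IFD pointer tag
        }
      }
      return false; // APP1 present but no GPS pointer found
    }
    offset += 2 + size; // skip to the next segment
  }
  return false;
}
```

In a page you'd feed it `new Uint8Array(await file.arrayBuffer())` from the input's change handler; for production I'd still reach for one of those libraries rather than hand-rolling this.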

From another submission (https://news.ycombinator.com/item?id=47738827), there was a screenshot of Google Docs/Drive showing a popup saying "You cannot do copy/cut/paste with the mouse" whenever you try to right-click and copy.

Some months ago, I saw that very popup, and finally started working on something I've been wanting to do for a long time: a spreadsheet application. It's cross-platform (looks and works identically across Windows, macOS and Linux), lightweight, and does what a spreadsheet application should be able to do, in the way you expect, forever. As an extra benefit, I can finally open some spreadsheets that have grown out of control (100MB+ and growing) without having to go and make a cup of coffee while the spreadsheet loads.

I don't really have anything concrete to share; I guess it'll be a Show HN eventually. But I thought it was funny that the same thing brought up in that article was the motivation for me to build yet another spreadsheet application.


I mean, once in a different country, you either experience the locale shock once then adapt, or you've seen it before and kind of know what to expect.

And for the rest of the users who have no idea about locales, using whatever locale they have on their computer might be technically incorrect for some of them, but at least they're somewhat used to that incorrectness already, as it's likely been their locale for a while and will remain so.


Well, the issue is when the applications look at the wrong configuration to set this up.

Think about traveling to a different country for a limited time. I want my location, time zone, etc. to be set to where I am. I traveled across the US a few years ago, and I would rather not have had to mentally track which time zone I was in. Heck, I don't even know where the boundaries are. Bonus points for DST happening on a different date than in Europe, and extra bonus for there being no DST in Arizona, except for the Navajo Nation. I remember signs saying it was illegal to carry alcohol, but I don't recall anything about time zones.

But as a European, I don't want my date to suddenly appear in US format; I'm only there for a few weeks.


> And for the rest of the users who have no idea about locales, using whatever locale they have on their computer might be technically incorrect for some of them, but at least they're somewhat used to that incorrectness already, as it's likely been their locale for a while and will remain so.

Not really. A lot of computers are set to US locale (probably because it's the default) and the user just has no idea why some programs have dates in some crazy middle-out format and avoids those programs.


> One of my pet peeves is that increasingly frequently, pressing Enter to submit a web form doesn’t even universally work anymore. Instead you have to tab to the submit button, and (depending on the web page), have to press Space or Enter to actuate it.

The other day I used Safari on a newly setup macOS machine for the first time in probably a decade. Of course wanted to browse HN, and eventually wanted to write a comment. Wrote a bunch of stuff, and by muscle memory, hit tab then enter.

Guess what happened instead of "submitted the comment"? Tab in macOS Safari apparently jumps up to the address bar (???), and then of course you press Enter so it reloads the page, and everything you wrote disappears. I'm gonna admit, I did the same thing just minutes later, then I gave up using Safari for any sort of browsing and downloaded Firefox instead.


I would argue that behavior is idiomatic for macOS but not idiomatic for web browsers. Keyboard navigation of all elements has never been the default in macOS. Tab moves between input fields, but without turning on other settings, almost never moved between other elements because macOS was a mouse first OS from its earliest days. Web browsers often broke this convention, but Safari has from day one not used tab for full keyboard navigation by default.

And this highlights something that I think the author glosses over a little, but which is part of why idioms break for a lot of web applications. A lot of the keyboard commands we're used to are commands to the OS, so their idioms are generally defined by the idioms of the OS. A web application, by nature of being an application within an application, has to try to intercept or override those commands. It's the same problem that Linux (and Windows) face with key commands shared by their terminals and their GUIs. Is Ctrl-C copy or interrupt? Depends on what has focus right now, and both are "idiomatic" for their particular environments. macOS neatly sidesteps this for terminals because Ctrl-C was never used for copy; it was always Cmd-C.

Incidentally, what you're looking for in Safari is the "Press Tab to highlight each item on a webpage" setting in the Advanced settings tab. With that off (the default), you would use Opt-Tab to navigate to all elements.


System Settings -> Keyboard -> and toggle Keyboard navigation.

I'm not sure why this isn't the default, but this allows for UI navigation via keyboard on macOS, including Safari.

