Hacker News | new | past | comments | ask | show | jobs | submit | theteapot's comments | login

I have a vaguely unrelated question re:

> You do what your supervisor did for you, years ago: you give each of them a well-defined project. Something you know is solvable, because other people have solved adjacent versions of it. Something that would take you, personally, about a month or two. You expect it to take each student about a year ...

Is that how PhD projects are supposed to work? The supervisor is a subject matter expert and comes up with a well-defined achievable project for the student?


I think it really depends. There is no fixed rule for how PhD programs are supposed to work. Sometimes your advisor will suggest projects he finds interesting and wants to see done but doesn't have time to do himself. That's pretty common. Other advisors don't do that, and/or want students to come up with their own project proposals, etc.

It depends on the program, and even more so, the student and the mentor. It can also vary over time, with more direction early on in a graduate program, and less direction later. Some mentors are very directive, and basically treat students as labor executing tasks they don't have time or want to do. Other times, the student is coming up with all the ideas and the mentor is facilitating it with resources or even nothing but uncertain advice or permissions now and then.

This can lead to a lot of problems, since in some fields, for some academics, the default assumption is the former when it's really the latter. This leads to a kind of overattribution of contribution to senior faculty, or conversely, an underappreciation of less senior individuals. The tendency for senior faculty to be listed last on papers, and therefore for the first and last authors to accumulate credit, is a good example of how twisted this logic has become.

It's one tiny example of the enormous problems with credit in academia (but also maybe far afield from your question).


It is a spectrum. My advisor was very hands-off. He didn't, ultimately, even really understand my PhD. He knew the problem, but he had no path in mind to solve it; that was up to me. I'm now working (as a software engineer) with a person who is very hands-on with his students (and even postdocs), to the point of giving them specific tasks and then discussing the results every week. He defines the problems and the structure of the solutions; the students are at least partially an extension of himself, doing work he simply doesn't have time to do himself.

And there is everything in between.


Often at the start, yes. That way the student gets a bit of recognition, a bit of experience, and a bit of knowledge.

From the cases I've observed directly in the area I work in, yes.

Roughly 20 years from Deep Blue to AlphaZero. I don't think that's comparable, though. The use of deep neural networks is what made machines, starting with AlphaZero, dominant again. I.e., we're already in the new paradigm.

> While I’m certain that this technology is producing some productivity improvements, I’m still genuinely (and frustratingly) unsure just how much of an improvement it is actually creating.

I often wonder how much more productive I'd be if just a fraction of the effort and money poured into LLMs were spent on better API documentation and conventional coding tools. A lot of the time, I resort to using an AI because I can't get information about how the current API of something works into my brain fast enough, because the docs are nonexistent, outdated, or scattered and hard to collate.


This is facts. All of this talk about putting agent skills directly into repos (as Markdown!) is maddening. "Where were LITERALLY ALL OF YOU whenever the topic of docs as code came up?"

This is doubly maddening with NotebookLMs. They are becoming single sources of knowledge for large domains, which is great (except you can't just read the sources, which is very "We will read the Bible to you" energy), but, in the past, this knowledge would've been all over SharePoint, Slack, Google Drive, Confluence, etc.


I've chosen to embrace the silver lining: there is now business backing to prioritize all the devx/documentation work, because the "value" is easier to quantify. LLM sessions provide a much larger sample size than inconsistent new-hire onboarding (which was also a one-time process rather than per session).

I do think people are going way overboard with markdown, though, and that'll be the new documentation debt. Docs need to be relatively high-level, with pointers rather than duplicated details; agents can parse code at scale much faster than humans.


> Where were LITERALLY ALL OF YOU whenever the topic of docs as code came up?

Docs as code is still writing, not coding. Those are simply different skills. As programmers, we find coding fun and glamorous and writing difficult. Emotionally, it's much easier to finish a piece of code and feel genuinely happy with it (you are proud of your achievement) than it is to write a paragraph of docs and feel genuinely happy with it (you can feel in your bones that it's not good, but you don't know how to improve it and you just want it over with). We have not built anywhere near the level of skill for writing that we have for coding; we honed coding on our own little programs, written for ourselves, and never built a habit of thinking about how other people would interact with our code.

(For me, this is exacerbated by having been more isolated from other people than the average population, partly due to neurodivergence and partly because the hobby was niche at the time, and I assume this is also true of a lot of people currently employed as professional programmers.)


Haha indeed. At work suddenly documentation and APIs are important, but it's all for/behind "skills". Before it was always "sure, that would be nice"...

I do welcome the improvements to doc and APIs this brings though!


My favorite thing is when some projects now have better documentation in their Claude skills or MCPs than they ever did for users.


There is a natural incentive for engineers working on a project to keep Claude skills up to date. I cannot say the same for general documentation.


But maybe not for long. When we get long-running AIs, the knowledge locked inside the AI's thinking might supplant docs once again. Like having an engineer who has worked at your company for a long time and knows everything. With all the problems that implies, of course.


At any time you can ask the model to produce documents from the latest state of the code base, at whatever altitude you choose.


That's the weird best thing about LLMs - there is finally incentive for projects to create documentation, CLIs, tests, and well organized modular codebases, since LLMs fall flat on their face without these things!


Yeah, I joined a project a couple of months ago, felt completely lost.

Last week, a colleague finally added for Claude all the documentation I'd have needed on day one. Meanwhile, I'm addressing issues from the other direction, writing custom linters to make sure that Claude progressively fixes its messes.


But that documentation itself is likely AI-generated


At least it saves me from having to generate the docs myself!


Why continue involvement with a project that clearly devalues their “customers” or “users” who care about documentation?


Projects that spend time on documentation for my robots have shown me they care about my use case!


I feel like Google search results have gotten tremendously worse over the past 2 years too. It's almost like you have to use AI search to find anything useful now.

Which of course reduces traffic to sites and thus the incentives to create the content you're looking for in the first place :(


I actually think the AI Overviews from Google have improved a lot in the last 2 years. They used to be trash. And now they are often good enough that I do not even switch to ChatGPT anymore.

The traditional search results suffer a lot because AI and AI content generation have enabled a lot of aggressive SEO/spam plays.


There are many groups that "win" by making search results worse. It's an ongoing battle between them, and anyone blaming solely Google for it is way oversimplifying.


I totally agree with you; this reduces traffic to sites. But there were also lots of websites whose information wasn't true or correct.


Does anyone know which tool can be best used instead of Google for "classic", non-AI googling?


Pure non-AI googling will not work, since many websites now use AI to create content. And so far, no search engine has managed to reliably detect and filter that out.


Kagi


> I often wonder how much more productive I'd be if just a fraction the effort and money poured into LLMs was spent on better API documentation and conventional coding tools.

Probably negligible. It's not a problem you can solve by pouring in more money. Evidence: configuration file formats. I've never seen programmers who enjoy writing YAML. And pure JSON (without comments) is simply not a format that should be written by humans. But as far as I know, even in the richest companies these formats are still common. And the bad thing they were supposed to replace, XML config, was popularized by rich companies too...!


Programmers don't enjoy writing things they have no good understanding of, and no good way to ascertain or predict in advance how exactly they will behave. That's at least partly due to poor documentation. Good documentation gives you a reliable conceptual model and makes you confident about how to use a tool.


I love YAML, so there is at least one weirdo out there on the internet who is bitter that TOML and JSON won


As a TOML and JSON fan, I must say those formats definitely didn't win :). YAML did, and by a really long shot too, unfortunately.


JSON is not designed as a configuration file format.


Yeah I get this impression too. AI feels like it's papering over overwrought and badly designed frameworks, tech stacks with far too many things in them, and also the decline of people creating or advocating for really expressive languages.

Pragmatic sure, but we're building a tower of chairs here rather than building ladders like a real engineering field.


As someone who does broad activities, it supercharges a lot of things. Having a critical eye is required though. I estimate 40%-60% improvements on basic coding tasks.

I don't bring huge codebases to it.


And hilariously, the worst offenders are AI frameworks themselves. A couple months ago I was helping a client build out some "agentic" stuff and we switched from OpenAI Agents library to Agno. Agents is messy enough, like making inconsistent use of its own enums etc, but with Agno you can really feel that they are eating their own dog food. Plenty of times I literally could not find the API for some object, and of course their docs page pushes you toward chatting with their goddamn docs chatbot, which barfs up some outdated function signature for you.


I can't speak for your efficiency, but for me it's now often easier to create a tool than find if one exists and learn how to use it.

I was able to one-shot a parameterized SVG template creator for a laser cutter. Unlikely I could have achieved the same with 40 hours of pure focus.


> better API documentation and conventional coding tools

Agreed, and it depends on the language, I suppose. I'm a C++ developer, and when you start working with templates at even a non-casual level, the compiler errors, whether from genuine syntax mistakes or from "seems correct but the standard doesn't support it", can be infuriatingly obtuse. The LLM "just knows" the standard (kind of, all 2k pages of it) and can figure out and fix most of those errors far faster than I can. In fact, one of my preferred usages is to point Codex at my compiler output and have it do nothing more than fix template errors.

Kotlin, for example, is much more in your face: the IDE does a correctness pass before you even invoke the compiler (in the traditional sense), and the language spec is considerably leaner, with less (no?) UB, unlike C++.


Depends on which C++ we are talking about.

You can have the Kotlin experience with a mix of static asserts, constexpr and concepts.

C++ IDEs also offer many goodies that those who insist on using vi and emacs keep missing out on.


Can’t you make the LLM write API documentation?


I agree. I think of AI as a search engine on steroids.

But I think it IS the best way to search for information, to be able to put a question in natural language. I'm always amazed just how exactly on-point the answer is.

I mean, even the best docs out there with a great search bar, like the Vue docs, still only match your search term and surface relevant topics.


Then you should be delighted we have LLMs; one of the use cases they are best suited to is writing documentation, much better than humans can.


Good is debatable. The docs I want point out the weird shit in the system. The AI docs I've read are all basically "the get user endpoint can be called with HTTP to get a user, given a valid auth token". Thanks, it would have been faster to read the code.


They write good _looking_ documentation. How good those docs are is entirely on the person/people who prompted them into existence.


Please don't inflict LLM docs on people


Why is this interesting?


The LLM content piracy to isomorphic plagiarism business loop is unsustainable. Yet for context search it is reasonably useful. =3

https://www.youtube.com/watch?v=T4Upf_B9RLQ


I dunno. I trained as a software engineer, pivoted to civil laborer. I just can't see a robot doing 90% of the stuff I do anytime soon. Same goes for plumber, electrician, ... even most mobile plant operations. As a supplement around the edges, sure. But replace? Not in the near term. And that's not even considering the safety certification moats around skilled labor roles.


I think event photography is another.

It's one thing to use AI to touch up photos, but in the end, you probably still want photos that match your memories and good photography still has an element of taste and creativity.


Yeah I think with all the AI slop around, people are going to value 'real' a lot more.


> For the next eight hours, every developer who installed or updated Cline got OpenClaw - a separate AI agent with full system access - installed globally on their machine ...

Except those with ignore-scripts=true in their npm config ...
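For reference, the setting mentioned above is a one-line entry in npm's ini-style config file (shown here for the per-user file; a project-local `.npmrc` works the same way):

```
# ~/.npmrc — never run package lifecycle scripts (preinstall/postinstall/etc.)
ignore-scripts=true
```

The same thing can be set from the CLI with `npm config set ignore-scripts true`. Note that it is a blunt instrument: it also skips scripts that legitimate dependencies rely on for building, so some installs may need manual follow-up steps.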


Or those who use pnpm


I’ll do you one better. I refuse to install npm or anything like npm. Keep that bloated garbage off my machine plz.

A guaranteed way for me to NOT try a piece of software is if the first setup step is "npm install…"


Sure, but throwing the baby out with the bathwater tends to not be a solution that people will find clever or reasonable.


I guess it's because I do C++ and robotics, but npm is just not part of my world. The only time I come across it is when someone gets really lazy and doesn't ship a proper single-exe distributable. Claude Code and Codex CLIs were both naughty on initial release, but they are now single-file distributables the way the lord intended.


> Most abstractions in software exist because humans need help. We couldn't hold the whole system in our heads, so we built layers to manage the complexity for us.

Kind of a sloppy statement; I don't think it's accurate to say abstraction or layering exists in software just because humans need help comprehending it. Abstractions often exist to capture the essence of some aspect of the real world, and to allow for software reuse. Won't AIs still find reusing software useful? Secondly, you equate "abstractions" with "layers", which aren't really the same thing. Layers are more about separation of concerns. Maybe it could be argued that layering is a type of abstraction.


Right now I'm trying to get an AI (actually two: ChatGPT and Grok) to write me a simple HomeAssistant integration that blinks a virtual light on and off, driven by a random boolean virtual sensor. I just started using HomeAssistant and don't know it well. Two hours and a few iterations in, it still doesn't work. Winning.


Forget both of them and throw everything at my boi Opus 4.6.


HomeAssistant is probably doing too much for what you need. Imo it's not a good piece of software. https://nodered.org/ is maybe a better fit. Or just some plain old scripts.


It looks like the point is the Home Assistant integration. I seriously doubt they need a light blinked on and off based on a mock sensor; that's either "for the integration test" or "a placeholder for something more". Either way, the AI is failing.


Nah, HA is defs what I want. I agree it's terrible software. All the more motivation for me to try throwing AI at it. If the docs were better I'd just grind the docs instead and would probably be ahead, but the HA docs suck almost as badly as the code, which may have something to do with why the AIs are sucking, now that I think about it...


You need to combine this with LVM or BTRFS or similar to get a true snapshot. Rsnapshot supports LVM snapshots pretty well.


Once you have btrfs, you don't really need rsync anymore; its snapshot + send/receive functionality is all you need for convenient and efficient backups, using a tool like btrbk.


I feel like AppArmor is getting there, very, very slowly. Every package just needs to ship a declarative profile, or fall back to a strict default profile.
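To illustrate what "declarative" means here, a hypothetical AppArmor profile for an imaginary binary `/usr/bin/foo` (paths and rules are invented for the example, not taken from any real package) looks roughly like:

```
# Hypothetical profile for an imaginary /usr/bin/foo
/usr/bin/foo {
  #include <abstractions/base>

  /usr/bin/foo mr,           # map and execute its own binary
  /etc/foo/** r,             # read-only access to its config
  owner /var/lib/foo/** rw,  # writable state, owner-restricted
  deny /home/** rw,          # no access to user home directories
}
```

Everything the program may touch is enumerated up front, which is what makes a strict per-package default plausible.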

