
<< It would have taken "less than a year" with or without AI. They just spent 10 years not trying.

I suppose we can mark this statement as technically true. I can only attest to my experience using o4 for Python mini projects ( a popular language, so there is lots of functional code to train on ).

The thing I found is that without it, all the interesting little curve balls I encountered likely would have thrown a serious wrench into the process ( yesterday, it was the unraid-specific way of handling a VM's XML ). All of a sudden, I am not just learning how to program, but learning how qemu actually works, and it is a lot more seamless than having to explore it 'on my own'. That little detour took half a day when all was said and done. There was another little detour with docker ( again, unraid-specific issues ), but it was all overcome, because now I had 4o guiding me.

It is scary, because it can work and work well ( even when correcting for randomness ). FWIW, my first language was BASIC way back when.



Lots of people just went the traditional way of learning things from first principles. So you don't suddenly learn docker; you learn how virtualization works. And it's easy because you already know how computer hardware works and its relation to the OS. And that networking course was done years ago, so you have no issue talking about bridges and routing. It's an incremental way of learning, and before realizing it, you're studying distributed algorithms.


Eh, it works in the abstract, when you are intentional about your long-term learning path, but I tend to ( and here I think a lot of people are the same way ) be more reactive and less intentional about those, which in practice means that if I run into a problem I don't enroll in a course, but do what I can with the resources available. It is a different approach, and both have their uses.

Incremental is obviously the ideal, especially from a long-term perspective if the plan for it is decent, but it is simply not always as useful in the real world.

Not to search very far for an example: I can no longer spend more than a day pursuing random threads ( or intentional ones, for that matter ).

I guess what I am saying is: learning from first principles is a good idea if you can do it that way. And no to the docker example; you learn how things should work. When playing in the real world, you quickly find out there are interesting edge cases, exceptions, and issues galore. Knowing how things should work only gets you so far.


My philosophy is something like GTD, where you have tasks, projects, and areas of responsibility. Tasks are the here and now, and they're akin to the snippets of information you have to digest.

Projects have a more long-term objective. What's important is the consistency and alignment of the individual tasks. In learning terms, that may be a book, a library's docs, or some codebase. The most essential aspect is that they have an end condition.

Areas are just things that you have to do or take care of. The end condition is not fully set. In learning terms, these are my interests, like drawing or computer tech. As long as something interesting pops up, I will keep consuming it.

It's very rare for me to have to learn stuff without notice. Most of it will fall under an objective or something that I have been pursuing for a while.
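
To make the distinction concrete, here is a minimal sketch of that hierarchy as plain Python dataclasses. All names and fields are my own invention ( not anything from GTD itself ); the only structural difference is that a project carries an end condition while an area doesn't:

    from dataclasses import dataclass, field

    @dataclass
    class Task:
        # the "here and now": one concrete thing to do or digest
        description: str
        done: bool = False

    @dataclass
    class Project:
        # longer-term objective built from aligned tasks; the key is a defined end condition
        name: str
        end_condition: str                      # e.g. "finished the book"
        tasks: list[Task] = field(default_factory=list)

        def is_done(self) -> bool:
            return all(t.done for t in self.tasks)

    @dataclass
    class Area:
        # ongoing interest or responsibility; no fixed end condition, it just keeps collecting items
        name: str                               # e.g. "drawing", "computer tech"
        projects: list[Project] = field(default_factory=list)
        inbox: list[Task] = field(default_factory=list)  # interesting things that pop up

So a random thread that pops up lands in an area's inbox, and only gets promoted to a project once I can state an end condition for it.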



