
for a book that surveys pretty much all of it, see "Empire of AI" by Karen Hao


thanks

do you happen to know if there are groups talking about how societies will rebalance after the GPT era?


good question, I'm not sure. Maybe check out the new Eliezer Yudkowsky book? He definitely talks about something akin to a "post-GPT era" in there.


thanks a lot


seems to me when this kind of stuff happens, there's usually something else going on that's completely unrelated, and your comment was simply the first one they happened to latch onto. surely by itself it is not enough to elicit that kind of reaction


I do see the point a bit? and with a reasonable comment to that effect, sure, I'd probably not respond and just take it into account going forward

but accusing me of being deficient in English, or of being some AI system, is…odd…

especially while doing (the opposite of) the exact thing they’re complaining about. upvote/downvote and move on. I do tend to regret commenting on here myself FWIW because of interactions like this


Some people seem to think that everything they say or write has to somehow be an argument or counterpoint, or find something to correct, or point out a flaw.

So when they see a piece of writing that is in agreement and concisely affirms the points being made, they don’t understand why they never get invited to parties.


open source code is a minuscule fraction of the training data


I'd love to see a citation there. We already know from a few years ago that they were training AI based on projects on GitHub. Meanwhile, I highly doubt software firms were lining up to have their proprietary code bases ingested by AI for training purposes. Even with NDAs, we would have heard something about it.


I should have clarified what I meant. The training data includes, roughly speaking, the entire internet. Open source code is probably a large fraction of the code in that data, but it is a tiny fraction of the total data, which is mostly non-code.

My point was that the hypothetical of "not contributing to any open source code," to the point where LLMs would have had no code to train on, would not have made as big an impact as that person thought, since the vast majority of the internet is prose, not code.


I'm sorry but your point doesn't make sense to me. Training on all the world's text but omitting code means that your machine won't know how to write code. That's an enormous impact, not a small one.

Unless you're in the camp that believes ChatGPT can extrapolate outside of its training data and do computer programming without having ever trained on any computer programming material?


fair point


Where did most of the code in their training data come from?


i think your comment actually mostly makes sense, except the part about neural network guys needing to familiarize themselves with Chomsky, which is not the case at all


Ergo, my initial claim that "modern approaches have zero overlap with Chomsky's deterministic methodology." Statistical token prediction began with the Dragon folks, the CMU guys, and Yorktown Heights, many of whom had encountered the Chomsky formalism as undergrads.


yeah, your parent comment appears to be just nonsensical name-dropping. happens a lot here. A different, equally annoying type is the "X is just Y" comment, like "all of AI is just curve fitting", which the commenter wants readers to think is some kind of profound insight.


when Elon bought Twitter, I incorrectly assumed that this was the reason. (it may still have been the intended reason, but it didn't seem to play out that way)


is there a more accepted connotation of the lone word "computation" that means something different from "theory of computation" (in the sense of Turing machines, computability, decidability, complexity classes, Sipser, etc.)?


I could see someone interpreting "computation" to be more practical.


Yeah, actually computing things imo


the theory is mainly about uncomputable things tho


One time in around 2008, when I was in undergrad, I got facebook-requested by a guy named David Liu (same name as me) who was going to a school many thousands of miles away, and I noticed that he had facebooked about 20-30 other David Lius.

About 3-4 years later, when I was in grad school, I met this guy in person (he happened to be going to the same grad school), recognized his name and face, and let him know that he had facebooked me back then, and he got a chuckle out of it.


Reminds me of this prank where you call two people with the same name, conference them together, and have them figure it out.


I got the same impression. I think I've become so cynical about these kinds of things that whenever I see one, I immediately assume bad faith / woo and just move on to the next article.


i watched the lecture series during the pandemic and commented on many of the youtube videos. in at least one instance, a library function used on the board is no longer compatible with the current function signature in MIT Scheme.


Oh no.

I suppose it has something to do with the fact that it has been, what, almost 40 years since the lectures?

The fact that most of the code would still work is a miracle. That wouldn't work for, say, Java (which didn't exist in 1986). Nor C++. Nor JavaScript (also not there back then). Fortran and C might be able to pull it off (but barely).

Remember, we didn't have computers worth the name back then. Shoot, we didn't even have dirt yet, just rocks.


> That wouldn't work for, say, Java

The java.util.Date* methods, deprecated for ~29 years now, would like to have a word. ;-)

*https://docs.oracle.com/en/java/javase/25/docs/api/java.base...
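
A minimal illustration (my own sketch, not something from the thread): the java.util.Date accessors that were deprecated back in JDK 1.1 still compile and run on a current JDK, which is the point being made above.

    import java.util.Date;

    public class DeprecatedDateDemo {
        @SuppressWarnings("deprecation") // silence the decades-old deprecation warnings
        public static void main(String[] args) {
            Date now = new Date();
            // getYear() and getMonth() have been deprecated since JDK 1.1 (1997),
            // yet they still work: getYear() is years since 1900, getMonth() is 0-based.
            System.out.println("Year:  " + (now.getYear() + 1900));
            System.out.println("Month: " + (now.getMonth() + 1));
        }
    }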



Which function?

