I've been writing interactive math and computer science articles at https://growingswe.com/blog. The past few months, I have been obsessed with interactive learning experiences, and I am currently building https://math.growingswe.com for learning probability.
I really like https://math.growingswe.com, nice job! I did the foundations page. I will work through some more lessons and give you feedback later this week. I am also working on some math projects; take a look at my other comment in this Ask HN.
I see. The problem with me writing these is that even though I'm not an expert, I do have some knowledge of certain things, so I'm prone to saying things that make sense to me but not to beginners. I'll rethink it.
One of the downsides of using an expert LLM to write for you is that it knows all that perfectly well, even if you don't, and isn't bothered by such a chunk. It's like reading any Wikipedia article on mathematics... This is the kind of thing the LLM-user literature is documenting as creating an illusion of expertise (or an 'illusion of transparency'): because the LLM explains it so fluently, you feel like you understand, even though you don't. Hence new phrases like 'cognitive debt' to try to deal with it.
Probably because names kind of obfuscate the ridiculous impracticality of this exercise. This microGPT produces a random sequence of letters, and by chance it might look like a name. If it outputs, say, "Kianna", you just think "wow, it IS a name", but is it, though? (I don't know if it's a real name; at least it isn't one in Spanish.) It isn't a normal word either, so the randomness of names helps hide the fact that this GPT just outputs random stuff that looks like names. If you trained it on ordinary words instead, you would get mostly random output that doesn't resemble any real words. Just my hypothesis. I can see the convenience of using names: the output looks like real names, but you could achieve the same result with old-school AI and very basic algorithms.
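To make the "very basic algorithms" point concrete, here is a minimal sketch of a character-bigram Markov chain that generates name-like strings, with a tiny made-up training list standing in for the real names dataset Karpathy uses (the list, function names, and markers are all my own assumptions, not anything from his script):

```python
import random

# Hypothetical stand-in for a real names dataset (Karpathy's uses ~32k names).
names = ["emma", "olivia", "ava", "isabella", "sophia", "mia", "amelia", "harper"]

def build_bigram_model(words):
    """Record which character can follow which, with '^' and '$' as start/end markers."""
    model = {}
    for w in words:
        chars = ["^"] + list(w) + ["$"]
        for a, b in zip(chars, chars[1:]):
            model.setdefault(a, []).append(b)
    return model

def sample_name(model, rng):
    """Random-walk the bigram chain from '^' until '$', emitting one character per step."""
    out, ch = [], "^"
    while True:
        ch = rng.choice(model[ch])  # choice is frequency-weighted, since duplicates are kept
        if ch == "$":
            return "".join(out)
        out.append(ch)

rng = random.Random(42)
model = build_bigram_model(names)
print([sample_name(model, rng) for _ in range(5)])
```

Every adjacent letter pair in the output was seen somewhere in the training names, so the samples tend to look pronounceable and name-ish, which is exactly the illusion described above; no transformer required.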
I tried to include tooltips in some places that go into more depth, but I understand there's a jump. I'm not sure what the best way to go about it is, tbh.
I had to look up what a content mill is. I'm not one, I think. The posts look "random" because my interests are varied. They were not written sequentially; I've been working on them (except for this MicroGPT one) for weeks and am only publishing them now.
> Andrej Karpathy wrote a 200-line Python script that trains and runs a GPT from scratch, with no libraries or dependencies, just pure Python.
Almost immediately afterwards, you have a section titled "Numbers, not letters". Need I go on?
Interestingly, despite all the AI tics, the opening passes Pangram as 100% human... though all the following sections I randomly checked come back as 100% AI. The simplest explanation is that you are operating adversarially and tweaked the opening to target Pangram, perhaps through an anti-AI-detection service. Such services now exist and are being used by the cutting edge, as Pangram is known to be relatively easy to beat, similar to how people started search-and-replacing em dashes when those got a little too well known. Unfortunately, that means I now expect you to lie to me in your response, since you apparently went that far to start building up clout.
(BTW, how did you accidentally pick 4 rare names which were in the dataset? "Thanks, will fix" is not a real response to that observation. Are you also going to remove all of the 'just pure X' and 'Y, not X' constructions from your posts now that I've pointed it out?)
This already exists in visual art: timelapses of the drawing process were being used to prove that pictures weren't AI-generated, until someone made a program that takes a finished picture and generates a fake progress video.
This is likely because the blog is AI-generated and keys off this point from Karpathy: "As a preview, by the end of the script our model will generate ("hallucinate"!) new, plausible-sounding names." The LLM repackaged that into something that is obviously wrong, which is kind of ironic.