(Not the person you replied to, but) I just re-read it, and the "canned retort" still looks completely accurate and relevant. Can you elaborate on why you think that AI's (known, admitted, and inherent) propensity for hallucination _wouldn't_ be disastrous in the context of pedagogy?
If the original comment had _just_ proposed to direct students to locations _within_ the original content ("filter"), it would have been less impactful - being directed to the wrong part of a (non-hallucinated) textbook would still be confusing, but in the "this doesn't look right...?" sense, rather than the "this looks plausible (but is actually incorrect)" sense. But given that the comment referred to "Conversational AI", and to "modulat[ing]" the content (i.e. _giving_ answers, not just providing pointers to the original content), hallucination is still a problem.
Hey it’s the original commenter himself! I appreciate you taking my comment seriously enough to analyze, but I think I missed the mark; I totally agree that LLMs shouldn’t be giving the answers to literal arithmetic problems, or be anywhere near designing the materials (digital textbooks) themselves.
I was indeed referring mostly to something like filtering, but I think there’s plenty of room for an LLM to help out there. With something as relatively complex as simulation parameters, an LLM can support the user’s choices by making changes to machine-readable formats.
Thus the LLM would be “tweaking” or “framing” or “instantiating” the content without getting near the fundamental signal, which here is the specific pedagogical intent of that diagram in the context of the current lesson. I used “modulate” to express this idea, somewhat clumsily; I’d love suggestions for a better word from any lurkers!
IMO simulations are hard to justify as embedded content on a pedagogical site because they’re so engaging, which makes them dangerous in situations where close attention to the teacher/problem set/text is the far more important goal. They’d have to be low cognitive load to use individually during class time, ideally so low they’re practically ambient, and I think LLMs are the only practical path in that direction.
TL;DR I didn’t mean writing LaTeX pedagogical content; I meant writing JSON objects that do stuff like highlighting, variants, scaling, inputting specific equations to a general sim, etc.
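To make that concrete, here’s a rough sketch of the kind of JSON object I imagine the LLM emitting (the field names are made up for illustration, not from any real sim):

```typescript
// Hypothetical shape of the control object the LLM would emit for a
// general-purpose sim embedded in a lesson -- field names are illustrative only.
interface SimControl {
  highlight?: string[]; // element IDs in the diagram to emphasize
  variant?: string;     // which preset variant of the sim to load
  scale?: number;       // scaling factor applied to a displayed quantity
  equation?: string;    // a specific equation to feed into the general sim
}

// e.g. "show the spring-mass variant with double the spring constant,
// and highlight the force arrows"
const example: SimControl = {
  variant: "spring-mass",
  scale: 2,
  highlight: ["force-arrow-1", "force-arrow-2"],
  equation: "F = -2 * k * x",
};
```

The point is that the object only selects and scales things the sim already knows how to do; the pedagogical content itself never passes through the LLM.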
Oh, fascinating! OK, yeah, I fully misunderstood your intent, then - I thought you were suggesting the LLMs should be summarizing the content in response to queries from students ("How do I find the determinant of a matrix?" // "Well, first you..."), which I think we both agree they're not ready for (and, while hallucination remains a problem, never will be).
So if I'm understanding it right, your proposal is for the LLM instead to be a "control layer" over the simulation object: a student could say something like "what happens if I increase the scale factor by 2?", and the LLM interprets that natural-language request and outputs the simulation control variables that correspond to it (and then either feeds them into the simulation directly, or displays them for the student to read, understand, and input)? Makes sense to me!
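Something roughly like this flow, I guess (callLLM and applyToSim are hypothetical stand-ins, not real APIs - just sketching the shape of the idea):

```typescript
// Minimal sketch of the "control layer" flow, assuming a hypothetical callLLM()
// and applyToSim() -- neither is a real API, this is just the shape of the idea.
type SimControl = { highlight?: string[]; variant?: string; scale?: number; equation?: string };

async function handleStudentRequest(
  request: string, // e.g. "what happens if I increase the scale factor by 2?"
  callLLM: (prompt: string) => Promise<string>,
  applyToSim: (control: SimControl) => void,
): Promise<void> {
  const raw = await callLLM(
    `Translate this request into a SimControl JSON object: ${request}`,
  );

  let control: SimControl;
  try {
    control = JSON.parse(raw) as SimControl;
  } catch {
    console.warn("LLM output was not valid JSON; ignoring it:", raw);
    return;
  }

  // Only accept the expected keys, so the LLM can tweak the sim's parameters
  // but never inject new pedagogical content of its own.
  const allowed = new Set(["highlight", "variant", "scale", "equation"]);
  for (const key of Object.keys(control)) {
    if (!allowed.has(key)) {
      console.warn(`Unexpected key "${key}"; ignoring the request.`);
      return;
    }
  }

  applyToSim(control); // or render the values for the student to read and enter by hand
}
```

Which, if I'm reading you right, also bounds the hallucination problem: the worst case is a nonsensical parameter change the student can see and discard, not a plausible-but-wrong explanation.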