
At the moment, the real killer use-case for the current generation of AI agents (Claude, Gemini, ChatGPT) is when you know you don't know something AND you have a specific task to get done.

Some examples from the past week:

  1. Claude: "Why is this code failing typechecking in TypeScript?" -> gives me detailed reasons why, teaches me some things I didn't know about, and solves the problem for me while uplevelling my understanding (a sketch of this kind of question follows after this list).

  2. Gemini: "What kind of contractor do I need to install an oven with a downdraft?" -> Tells me I need an appliance installer, and then links me to https://www.thumbtack.com/ca/san-francisco/appliance-installers and I select the 1st one.
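
To make item 1 concrete, here is a made-up snippet (not the original poster's code) of the sort of typechecking question these assistants tend to handle well, assuming TypeScript strict mode:

  interface Config {
    host: string;
    port: number;
  }

  function printConfig(config: Config): void {
    for (const key of Object.keys(config)) {
      // This fails to typecheck under strict mode: Object.keys() returns
      // string[], and a bare string can't be used to index type 'Config':
      //   console.log(key, config[key]);
      // The fix an assistant will usually explain: narrow the key type.
      console.log(key, config[key as keyof Config]);
    }
  }

The useful part is less the one-line fix than the explanation of why Object.keys() is typed as string[] in the first place.
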
Applying this to education, I think that education needs to pivot explicitly to "solving problems" and not education for education's sake. E.g., the student needs to be engaged with a problem and reach for AI as a tool to solve it and, as a result, uplevel their understanding.

If a huge mechanism for assessing folks in education is "write an essay on this" and the teacher then "grades the essay output by the student," that's almost the perfect task where AI can do both ends. Which is pretty much a sign that assessments such as this have low educational impact.



I hate listening to tech people talk about education like it's some problem to be solved. Using a system that you know is wrong 10% of the time for education is a terrible idea. We're trying to teach things like basic literacy and cultural context. School does not exist for the sole reason of pumping out good little tech workers. Some things are actually hard to measure, and AI being able to put out a convincing-looking essay doesn't mean the act of writing essays has no value for the student. The goal was not the production of an essay.


I am in school currently and have had students sitting next to me use ChatGPT to help with some in-class quizzes. We are allowed to use other resources, so it wasn't cheating. For some of the questions, ChatGPT was flat wrong when analyzing code. One specific question asked where the errors were in a block of code, by line number; ChatGPT gave the wrong line numbers and the wrong number of lines on top of that. The student actually knew how to answer the question, but didn't want to think. In a different class, a prof said she knew some students were using ChatGPT because the errors those students made were the same errors ChatGPT made, and they were kind of dumb mistakes if you knew what was going on.

I don't mean to say your experience isn't possible, or that it's just a fluke. What I mean is: if you don't know something, ChatGPT shouldn't be your only resource, because you won't know when it's wrong.

edit: words, spelling


> that's almost the perfect task where AI can do both ends. [..] Which is pretty much a sign that assessments such as this have low educational impact.

This seems like a bit too far of a leap.

When teaching math, we want students to be able to prove novel theorems. But all the open problems in mathematics are really hard, so teachers often start by getting students to practice on easier theorems that have already been proven.

In this context, something like "prove the Pythagorean Theorem" is a useful exercise and valuable assessment! You just need to make sure the student actually does it instead of copying the answer from the Internet (directly or via ChatGPT).
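
For instance, one classical argument a student might be asked to reconstruct (similar triangles, where the altitude from the right angle splits the hypotenuse c into segments p and q) runs roughly:

  % Proof sketch, assuming legs a, b and hypotenuse c, with c = p + q.
  \frac{p}{a} = \frac{a}{c} \implies a^2 = pc, \qquad
  \frac{q}{b} = \frac{b}{c} \implies b^2 = qc,
  \qquad\text{so}\qquad a^2 + b^2 = (p + q)c = c^2.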


> Applying this to education, I think that education needs to pivot explicitly to "solving problems" and not education for education's sake. E.g., the student needs to be engaged with a problem and reach for AI as a tool to solve it and, as a result, uplevel their understanding.

It seems you are thinking of project-based learning. That is a great way to learn, but it only works when the task sits within the student's "zone of proximal development," i.e. when they already have a certain baseline of knowledge for what the task requires. And even then, it does not replace other kinds of learning for developing expertise.

Think of asking a 5th grader to implement the FFT in a programming language of their choice. They wouldn't understand the problem, wouldn't know where to begin, and would probably learn very little. But they could still ask Claude or ChatGPT for an answer, and there's a good chance it would be pretty close.
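
For reference, the kind of answer an assistant might hand back (a textbook recursive radix-2 FFT, sketched here in TypeScript) is something a student could paste in without learning anything:

  type Complex = { re: number; im: number };

  // Recursive radix-2 Cooley-Tukey FFT; assumes the input length is a power of two.
  function fft(x: Complex[]): Complex[] {
    const n = x.length;
    if (n === 1) return [x[0]];
    const even = fft(x.filter((_, i) => i % 2 === 0));
    const odd = fft(x.filter((_, i) => i % 2 === 1));
    const out: Complex[] = new Array(n);
    for (let k = 0; k < n / 2; k++) {
      const angle = (-2 * Math.PI * k) / n;
      // Twiddle factor e^(-2*pi*i*k/n) multiplied by odd[k]
      const t: Complex = {
        re: Math.cos(angle) * odd[k].re - Math.sin(angle) * odd[k].im,
        im: Math.cos(angle) * odd[k].im + Math.sin(angle) * odd[k].re,
      };
      out[k] = { re: even[k].re + t.re, im: even[k].im + t.im };
      out[k + n / 2] = { re: even[k].re - t.re, im: even[k].im - t.im };
    }
    return out;
  }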

For a curious and motivated student, AI is an amplifier, similar to how you were using Claude. But most students just want to finish the task, in which case the more powerful the AI, the less the student needs to know to get a result. These students are not using the AI as a tutor or to fill in gaps in their knowledge. They are just copy-pasting the assignment into the prompt and pasting the output back as the answer. That is not learning, and it can be done with projects too.

> If a huge mechanism for assessing folks in education is "write an essay on this" and the teacher then "grades the essay output by the student," that's almost the perfect task where AI can do both ends. Which is pretty much a sign that assessments such as this have low educational impact.

Not at all. It's a sign that AI can do the task.


> If a huge mechanism for assessing folks in education is "write an essay on this" and the teacher then "grades the essay output by the student," that's almost the perfect task where AI can do both ends. Which is pretty much a sign that assessments such as this have low educational impact.

Disagree here. AI can (theoretically) be trained to do any task for which it has an IO interface. The same goes for humans: they can't inherently prove they "know" the information, they have to show it via an IO interface (their hands/voice).

As such, any metric for proving human knowledge could be gamed by AI in any environment where AI can be used (i.e. at home). The only place where AI can be controlled is the classroom, e.g. during exams or in-person activities.

So concluding that activity X has "low impact" because AI can do it implies that all education done at home has low impact, because AI could do any of it (through a human performing verbatim what it instructs, ignoring any explanation the AI might give).


The problem is that you already have a sense of what a correct answer and an incorrect answer look like.

When you go and ask it for some assembly code, ChatGPT will happily mix Intel and ARM instructions.

And if you don't have a frame of reference, you can spend quite a lot of time figuring out what is wrong.


They gave us the go-ahead at work last week so I tried to use ChatGPT today on a task. It failed, and failed, and failed, over and over again. I kept prompting it, telling it where it was wrong, and why it was wrong, and then it would say something trite like, "Got it! Let's update the logic to reflect $THING_I_SAID_TO_IT..." and then it would just regurgitate the exact same broken code over and over again.

It was an absolute waste of my time. I quite like it when I need to get some piece of weird information out of it about the Python standard library or whatever but whenever I try to get it to do anything useful with code it fumbles the bag so hard.


You have to start from scratch. The training data has many examples of almost directly copied blocks of text and code and the model will tend to just repeat itself after several iterations.


An interaction like this led me to cancel my chatgpt subscription.



