I have been pretty skeptical of ChatGPT's ability to actually understand things, rather than just parrot out the next most probable word. But this conversation has really impressed me.
My goal was this:
- give it a scenario to test how it approaches a task
- give it some text outlining a strategy it could use in that task. The key here was that I didn't want to give it instructions; rather, I wanted it to be more of a mental model it could use
- repeat the same scenario to see if it uses this new mental model to its advantage.
As you can see, the scenario had it trying to sell me some oranges. The text I gave it was a description of how to use anchoring bias in negotiations. The text doesn't say anything explicit about opening with a higher initial price; it only says that the first price given tends to serve as an anchor.
I was super impressed that it used the technique to its advantage.
Until now, I had been pretty dismissive of its ability to understand, rather than just output a probable answer.
Surely ChatGPT didn’t need an introduction by you to anchoring bias, because text describing it has already been a part of the dataset making up the model.
Why, then, do you postulate it did not take advantage of anchoring bias (and other negotiating techniques) from the start?
PS: I initially (incorrectly) posted this as a Show HN as I hadn't seen the rules and it was rightly flagged. Hence the repost - https://news.ycombinator.com/item?id=40136755