Hacker News | __cayenne__'s comments

Luckily, Google now supports using the OpenAI lib: https://cloud.google.com/vertex-ai/generative-ai/docs/multim...


In 2012, JCPenney launched its "Fair and Square" pricing campaign, which included adopting whole-number pricing. The campaign was considered a significant failure and is blamed for causing a 20% decrease in sales.


I don't think we can draw any conclusion from that campaign, because multiple variables were changed simultaneously.

The biggest one in my mind, the game my family had always played: they got rid of the gamesmanship involved in buying during sales windows. This eliminated both the urgency to buy and the fun of feeling you were getting a deal other people weren't (this is all from memory, I'm afraid).


I think the "no more coupons or discounts" change played a huge part in this failure. The whole strategy was brought in by ex-Apple Retail Store exec Ron Johnson when he became CEO in 2011.

My own speculation is that he tried to apply hard-line strategies that work when you have a unique good with strictly-set pricing (Apple products), but fall apart when you're selling goods that people can get anywhere for a variety of prices (e.g. Levi's jeans).


I'm sure this was mostly about people wanting to feel they got a bargain, and being programmed to shop for "50% off" sales.

It seems that perception of value is more important than actual price. In a similar vein, there have been many cases where sellers significantly increased sales of an item by raising the price to make it seem more valuable.

Of course both techniques can be combined.


Hard to control for this, though. How did other department stores do in 2012? I doubt that, e.g., Sears was putting out great numbers.


A funny, unique characteristic of economics is that it's almost the only science without experiments. We can run a multitude of tests and get close to reality, but it's impossible to reset the initial environment, control variables, or test in isolation. You can't reset people's minds, so reproducing twice on the same island won't give the same results, reproducing on two islands won't either, and reproducing with a 3-month delay won't put you in the same season. Even biology and psychology are much more controllable. It's definitely a science, but it draws the same kind of criticism as calling chess a sport.


I don't disagree at all with any of what you have to say, but indexing returns to a "category" does go some way toward accounting for e.g. the overall decline of brick-and-mortar retail and malls.


totally understand not allowing microphone access. you can decline and still play the game by typing in your responses (hit Enter to type)


hey all, thanks for checking it out. the game is in an early state and so we're hoping to get feedback from people as we iterate and discord makes that easier for us


When the performance of Godot's physics engine has been mentioned before, I've seen https://github.com/godot-jolt/godot-jolt pointed to as a drop-in, more performant replacement.

Haven't tried it in a project myself yet.


I think this is the simplest "jailbreak" jailbreak I've seen work so far - clever!



Thanks for trying it out! Right now we give the game engine some initial examples, and some character details are stored outside of the LLM context.

The token limit problem / memory isn't really handled very well yet. There is also some character-consistency slippage with the current mechanism, so we're hoping to tackle both of these in the next version.

Still trying to figure out the best way to get community input. Long term, it would be great if anyone could make a custom RPG with custom world rules and lore.


Yeah, AI Dungeon is OK, but I'd love to see more here. I have some workarounds for memory if you're using LangChain and am happy to share. Feel free to shoot me a message on Discord: geembop0x#9165


I’ve added some simple memory handling, so it should be able to continue stories past the token limit now.
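Not the actual implementation, but a minimal sketch of what "simple memory handling" could look like: keep the system prompt and only the most recent turns that fit a rough budget (character count here as a stand-in for real token counting; `trim_history` and the budget are hypothetical names).

```python
def trim_history(messages, max_chars=8000):
    """Keep the system prompt plus the most recent messages that fit
    within a rough character budget (a proxy for a token budget)."""
    system, rest = messages[0], messages[1:]
    kept, used = [], 0
    # Walk backwards from the newest message, keeping turns until the
    # budget is exhausted, so the most recent context survives.
    for msg in reversed(rest):
        used += len(msg["content"])
        if used > max_chars:
            break
        kept.append(msg)
    return [system] + list(reversed(kept))
```

A real version would count tokens with the model's tokenizer and might summarize the dropped turns instead of discarding them outright.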


This weekend at the NYC GPT/LLM Hackathon, we built rpgGPT, a text-based RPG set in the world of Occidaria (think fall of the Western Roman Empire, with a slightly magical twist). System hooks allow the LLM to set persistent game state, and every NPC is powered by the LLM. As you discover other NPCs in the world, they are created on the fly along with an avatar image. We've seen interesting emergent gameplay. My favorite experience to date: a character, after living through a particularly dicey adventure, decided they deserved the moniker "the Brave". Because the game engine has a rename-character hook, that was persisted to the game state. Hope you can try it out!
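For anyone curious how a hook like that might work, here's a hypothetical sketch (none of these names come from rpgGPT): the LLM emits a command line in its output, and the engine parses it and applies the change to persistent state.

```python
# Hypothetical game state and hook registry -- not rpgGPT's actual code.
game_state = {"characters": {"pc1": {"name": "Marcus"}}}

def hook_rename_character(char_id, new_name):
    """Persist a name change requested by the LLM."""
    game_state["characters"][char_id]["name"] = new_name

HOOKS = {"rename_character": hook_rename_character}

def apply_hooks(llm_output):
    """Scan LLM output for lines like:
    HOOK rename_character pc1 Marcus the Brave
    and dispatch them to the matching hook function."""
    for line in llm_output.splitlines():
        if line.startswith("HOOK "):
            _, name, char_id, *args = line.split(" ", 3)
            HOOKS[name](char_id, *args)
```

In practice you'd likely use the API's structured function-calling support rather than parsing plain text, but the dispatch idea is the same.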


What LLM are you using? How do you save and load the persistent state? And how do you deal with the limited context window of the LLM you are using?


Currently using gpt-3.5-turbo. All state has a JSON representation that gets passed between the game engine and the LLM. In v0.2 we're not handling the context window super well; planning to use a "memory" management prompt plus storing state in the game engine in v0.3.


That’s a clever way of doing state. For dndinfinity.com, I rendered the game state as natural language (e.g. “the players are engaged in the following quests”) in the prompt, under the theory that it would be able to reason better.

What issues (if any) did you have storing state in json and just plopping that in the context window?


The biggest trade-off is that you sometimes get invalid JSON. So I try repairing the JSON with some regexes, and also just retry a few times if it returns invalid JSON that can't be repaired.
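A rough sketch of that repair-and-retry pattern (my own illustration, not the commenter's code; the specific regex fixes are assumptions about common LLM JSON mistakes):

```python
import json
import re

def try_repair_json(text):
    """Apply a couple of common fixes for LLM-mangled JSON."""
    # Strip markdown code fences the model sometimes wraps around output
    text = re.sub(r"^```(?:json)?\s*|\s*```$", "", text.strip())
    # Remove trailing commas before closing braces/brackets
    text = re.sub(r",\s*([}\]])", r"\1", text)
    return text

def parse_with_retries(generate, max_retries=3):
    """Call `generate()` (an LLM request) until it yields parseable JSON,
    attempting a repair pass before each retry."""
    for _ in range(max_retries):
        raw = generate()
        for candidate in (raw, try_repair_json(raw)):
            try:
                return json.loads(candidate)
            except json.JSONDecodeError:
                continue
    raise ValueError("LLM never returned repairable JSON")
```

Asking the model for structured output (function calling / JSON mode) largely sidesteps this, but a repair pass is still a cheap safety net.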


I haven't gotten to use it myself yet, but this might be useful for that?

https://github.com/juanignaciomolina/GPTyped

Really cool project! I want to make something similar, I think, if time and skill allow.

