The problem is the promotion of values and behaviors that plague a low-trust society. I think making excuses for it is truly inappropriate and immoral.
Indeed. And even among those who do drink, many don't seem to understand how widely tolerance can vary between people. Even someone who only drinks occasionally can still have a much higher tolerance than someone else. Two drinks might make one person tipsy or even drunk, while another person might barely feel them.
The job isn't finished if a complaint has to go viral every time Slack has an oversight and threatens to delete a nonprofit's chat data within 7 days. The comment you're calling unprocessed rage and tar-and-feathering just said it's more than a simple mistake. If anything, your comments have been harsher to individuals.
Yeah I see how I could have given that impression, now that you say it. We do try to be careful not to single out or pile on specific users, but there's always more to learn about that.
Nah, this was one of those evil customer service reps who will do anything to make their employer look bad. No corporation would ever leverage data custody to extract a quick balloon payment. Devilish customer service reps reading from false scripts are the real problem.
Humans have a direct connection to our world through sensation and valence: pleasure, pain, then fear, hope, desire, up to love. Our consciousness is animal, and as much pre-linguistic as linguistic, if not more. This grounds our symbolic language and is what attaches it to real life. We can feel instantly that we know or don't know. Yes, we make errors and hallucinate, but I'm not going to make up an API out of the blue; I'll know by feeling that what I'm doing is mistaken.
It's insane that this has to be explained to a fellow living person. There must be some mass psychosis going on if even seemingly coherent and rational people can make this mistake.
We’re all prone to anthropomorphizing from time to time. It’s the mechanizing of humans that concerns me more than the humanizing of these tools; those aren’t equivalent.
Perception and understanding are different things. Just because you have wiring in your body to perceive certain vibrations in spacetime in certain ways, does not mean that you fully grasp reality - you have some data about reality, but that data comprises an incomplete, human-biased world model.
Yeah, we'll end up at a "yes and no" level of accord here. Yes, I agree that understanding and perception aren't always the same, or maybe I'd put it that understanding can go beyond perception, which I think is what you mean when you say "incomplete." But I'd say, "Sorry, no, I respectfully disagree," in that at least from my point of view we can't equate human experience with "data." Doing so, or viewing people as machines, the cosmos as a machine, everything as merely material in a dead way out of which somehow springs this perhaps-even-illusion of "life" that turns out to be a machine after all, risks extremely deep and dangerous, eventually even perilous, error. If we debated this, and assuming I'm not mischaracterizing your position (though it does seem to lead in that direction), I'd shore up my arguments with support from the phenomenologists; from recent physics of various flavors, where I'm very much out of my depth but know at least enough to puncture the scientific-materialism bias; from Wittgenstein; from the likes of McGilchrist and other neurological and psychological sources; even from Searle's "Seeing Things as They Are," which argues that perception is not made of data. I'd be arguing against someone like Daniel Dennett (though I'm sure he was a swell fellow) or Richard Dawkins. Would I prevail in the discussion? Of course I'm not sure, and I realize now that I might, in LLM style, sound like I know more than I actually do!
Humans do many things that are not remembering. Every time a high school geometry student comes up with a proof as a homework exercise, or every time a real mathematician comes up with a proof, that is not remembering; rather, it is thinking of something they never heard. (Well, except for Lobachevsky--at least according to Tom Lehrer.)
The same when we make a plan for something we've never done before, whether it's a picnic at a new park or setting up the bedroom for a new baby. It's not remembering, even though it may involve remembering about places we've seen or picnics we've had before.
Do you genuinely believe that humans just hallucinate everything? When you or I say my favorite ice cream flavor is vanilla, is that just a hallucination? If ChatGPT were to say its favorite ice cream flavor is vanilla, would you give it equal weight? Come on.
I genuinely believe that human brains are made of neurons and that our memories arise from how those neurons connect. I believe this is fundamentally lossy and probabilistic.
Obviously human brains are still much more sophisticated than the artificial neural networks that we can make with current technology. But I believe there’s a lot more in common than some people would like to admit.
All you’re doing is calling the same thing hallucination when an LLM does it and memory when a human does it. You have provided no basis that the two are actually different.
Humans are better at noticing when their recollections are incorrect. But LLMs are quickly improving.
So when I tell you I like vanilla ice cream, I am just hallucinating and calling it a memory? And when ChatGPT says it likes vanilla ice cream, it is doing the same thing as me? Do I need to prove to you that they are different? Is it really baseless of me to insist otherwise? I have a body, millions of different receptors, a mouth with taste buds, a consciousness, a mind, a brain that interacts with the world directly, and it's all just words on a screen to you, interchangeable with a word pattern matcher?
I’m not calling what you’re doing a hallucination. I’m saying that what an LLM does is in fact memory.
But it’s a memory based on what it’s trained on. Of course it doesn’t have a favorite ice cream. It’s not trained to have one. But that doesn’t mean it has no memory.
My argument is that humans have fallible memories too. Sometimes you say something wrong or that you don’t really mean. Then you might or might not notice you made a mistake.
The part LLMs don’t do great at is noticing the mistake. They have no filter and say whatever they’re thinking. They don’t run through thoughts in their head first and see if they make any sense.
Of course, that’s part of what companies are trying to fix with reasoning models. To give them the ability to think before they speak.
Can you just train one to have a favorite ice cream? You think training on a bunch of words saying I like vanilla ice cream is somehow equivalent to remembering times you ate ice cream and saying my favorite is vanilla? Just because an LLM can do recall when prompted to based on training data doesn’t make it the same as human memory, in the same way a database isn’t memory the way humans do it.
When people made Studio Ghibli versions of themselves for free, were they creating hundreds of dollars worth of value, since that's how much it would've cost to commission such a picture from a freelancer? I would say rather that the value of the pictures themselves became very cheap.
I'm confused by your explanation. You originally spelled out MCP and then edited it. Did you originally have it as model context protocol and then edited it to model control plane? Or did you originally have it spelled out as model control plane and missed it in editing?