Intuition is highly personal. Many people believe that abandoning monism feels intuitively wrong and that dualism is an excuse for high-minded religiosity.
Leibniz seems to get to high-minded religiosity fine with monadology and still dodge dualism. I'm probably overdue to try to grapple with this stuff again, since I think you'd have to revisit it pretty often to stay fresh. But I'll hazard a summary: phenomena exist, and both the soul of the individual and God exist too, necessarily, as a kind of completion or closure. A kind of panpsychism that's logically rigorous and still doesn't violate parsimony.
AI folks honestly need to look at this stuff (and Wittgenstein) a bit more, especially if you think that ML and Bayes are all about mathematically operationalizing Occam. Shaking down your friendly neighborhood philosopher for good axioms is a useful approach.
I think you misunderstood GP; they don't seem to be a fan of dualism either, and are in fact defending it as a valid position. The point about intuitive feeling was just a polite concession.
> Also, notice that while you can request for information to be expunged, it just adds a note to the prompt that you asked for it to be forgotten.
Are you inferring that from the is_redaction_request flag you quoted? Or did you do some additional tests?
It seems possible that there could be multiple redaction mechanisms.
That, and the part of the instructions referring to user commands to forget. I replied to another comment with the specifics.
It is certainly possible there are other redaction mechanisms -- but if that's the case, why is Gemini not redacting "prohibited content" from the user_context block of its prompt?
Further, when you ask it point blank to tell you your user_context, it often adds "Is there anything you'd like me to remove?", in my experience. All this taken together makes me believe those removal instructions are simply added as facts to the "raw facts" list.
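To make what I mean concrete, here's a purely hypothetical sketch -- the field names and structure are my guesses, not Gemini's actual prompt format -- of how a "forget this" request could end up recorded as just another fact:

```python
# Purely hypothetical sketch -- field names and structure are my guesses,
# not Gemini's actual prompt format.
user_context = {
    "raw_facts": [
        {"fact": "User is planning a surprise birthday party."},
        {"fact": "User asked to forget the surprise birthday party.",
         "is_redaction_request": True},
    ]
}

def render_raw_facts(ctx):
    # A "forgotten" fact is annotated rather than removed, so the model
    # still sees the original data in every new conversation.
    lines = []
    for f in ctx["raw_facts"]:
        note = " [user requested removal]" if f.get("is_redaction_request") else ""
        lines.append("- " + f["fact"] + note)
    return "\n".join(lines)

print(render_raw_facts(user_context))
```

If it works anything like this, the "removed" item never actually leaves the list; the model is just told the user asked for it to be forgotten.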
>Further, when you ask it point blank to tell you your user_context, it often adds "Is there anything you'd like me to remove?", in my experience. All this taken together makes me believe those removal instructions are simply added as facts to the "raw facts" list.
Why would you tell the chatbot to forget stuff for you, when Google themselves have a dedicated delete option?
>You can find and delete your past chats in Your Gemini Apps Activity.
I suspect "ask chatbot to delete stuff for you" isn't really guaranteed to work, similar to how logging out of a site doesn't mean the site completely forgets about you. At most it should be used for low level security stuff like "forget that I planned this surprise birthday party!" or whatever.
That settings menu gives you two relevant options:
1. The ability to delete specific conversations, and
2. The ability to not use "conversation memory" at all.
It doesn't provide the ability to forget specific details that might be spread over multiple conversations, including details it will explicitly not tell you about while still remembering them. That's the point -- not that it's using summaries of user conversations for memory purposes (which is explicitly communicated), but that if you tell it "Forget about <X>", it will feign compliance without actually removing that data. Your only "real" options are all-or-nothing: have no memories at all, or have all your conversations collated into an opaque `user_context` which you have no insight into or control over.
That's the weird part. Obviously, Google is storing copies of all conversations (unless you disable history altogether). That's expected. What I don't expect is this strange inclusion of "prohibited" or "deleted" data within the system prompt of every new conversation.
We’ve been using mail ballots for decades; as a voter I find this system convenient, and afaik it hasn’t been seriously challenged.
Your suggestion for its abolition aligns with treasonous players like Vought.
You could interface with an Artisan LCD maybe? I’ve got my 1 lb drum roaster hooked up to that and watch the degrees-per-minute display to estimate the roast curve.
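The degrees-per-minute figure is just the rate of rise of the bean temperature over a trailing window; if you wanted to estimate it yourself from logged readings, it's roughly this (a rough sketch, not Artisan's actual code -- the window length and sample format are my assumptions):

```python
# Rough sketch of a degrees-per-minute (rate of rise) calculation over a
# trailing window -- not Artisan's actual implementation.
def rate_of_rise(samples, window_s=30):
    """samples: list of (seconds, bean_temp) tuples, oldest first.
    Returns degrees per minute over the trailing window."""
    cutoff = samples[-1][0] - window_s
    recent = [s for s in samples if s[0] >= cutoff]
    if len(recent) < 2:
        return 0.0
    (t0, temp0), (t1, temp1) = recent[0], recent[-1]
    return (temp1 - temp0) / (t1 - t0) * 60.0

# e.g. bean-temp readings logged every 10 seconds mid-roast
readings = [(0, 300.0), (10, 305.2), (20, 310.1), (30, 314.8)]
print(rate_of_rise(readings))  # ~29.6 degrees/min over the last 30 s
```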