I have no idea about the journey the atoms of my body took to reach where they are now (as me, myself). I wish them good luck on their future endeavours. I think we should get acclimatized to a similar process for the "ideas and concepts that we think originate from us". These concepts will be ground up into large LLMs and hopefully help someone in the future.
Is the solution to sycophancy just a very clever prompt that forces logical reasoning? Do we want our LLMs to be scientifically accurate and truthful, or to be creative and exploratory in nature? Fuzzy systems like LLMs will always have these kinds of tradeoffs, and there should be a better UI with accessible "traits" (devil's advocate, therapist, expert doctor, finance advisor) that one can invoke.
I disagree. As many intellectuals and spiritual mystics attest from their personal experience, knowledge actually liberates the mind. Imagine a mind which truly understands that it is embedded inside a vastness which spans from the Planck scale to black holes. It would be humble, or more likely amoral.
Okay, sure, at least according to a certain interpretation, but...
> Imagine a mind which truly understands that it is embedded inside a vastness which spans from the Planck scale to black holes. It would be humble, or more likely amoral.
This is just gobbledygook. The conclusion does not even follow from the premises. You are begging the question, assuming that moral nihilism is actually true, and so naturally any mind in touch with the truth would conclude that morality is bullshit.
The man couldn't emotionally recognise his mother after seeing her and started calling her an imposter. But when he heard her voice over the telephone, he felt the emotional connection and said the person on the other end was indeed his mother. Emotional pathways provide salience information in conjunction with sensory pathways; any disruption to the emotional pathways can override even correct sensory data.
If your answer requires clustering and assembling disparate facts strewn about on the internet or in company data / documents, and reasoning over them, then LLMs can help with that. At least that's what I did when I used to answer questions on stackoverflow.
My point was that before AI, when I used to answer stackoverflow questions out of curiosity, I would manually search around on the internet to answer the question properly. This is exactly the process LLMs help with.
There are people who are experts in a generalist sense. When a new field opens up, they quickly snatch up the opportunity and make immense progress, and a name for themselves, in the evolving field. So in this case the first author is the mouse who ate the cheese and died.
Sorry, I should have said he died in the process of getting the cheese while the second mouse got the cheese.
The phrase "the second mouse gets the cheese" means that it can be beneficial to let others take the initial risk, as the first to act might trigger a negative outcome, leaving the opportunity for the second person to succeed without the same danger. I
Is there a way to convert documents into a hierarchical, connected graph data structure in which documents reference each other, similar to how we use personal knowledge tools like Obsidian, with the ability to traverse this graph? Is the GraphRAG technique trying to do exactly this?
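To make the idea concrete, here is a rough sketch of the structure I have in mind, assuming Obsidian-style [[wikilinks]] and the networkx library; the sample notes and two-hop traversal are made-up illustrations, not GraphRAG itself:

```python
# Sketch: build a directed graph of notes linked via [[wikilinks]]
# and walk outward from one note. Sample notes are illustrative.
import re
import networkx as nx

notes = {
    "strategy": "Our plan builds on [[pricing]] and [[roadmap]].",
    "pricing":  "Pricing experiments feed back into [[strategy]].",
    "roadmap":  "The roadmap references [[pricing]] milestones.",
}

G = nx.DiGraph()
for name, text in notes.items():
    G.add_node(name, text=text)
    for target in re.findall(r"\[\[(.+?)\]\]", text):
        G.add_edge(name, target)  # directed edge: note -> note it references

# Traverse: everything reachable from "strategy" within two hops.
reachable = nx.single_source_shortest_path_length(G, "strategy", cutoff=2)
print(reachable)  # {'strategy': 0, 'pricing': 1, 'roadmap': 1}
```

My understanding is that GraphRAG goes further by having an LLM extract entities and relations to build the graph, rather than relying on explicit links like these.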
Not exactly what you're looking for, but Wilson Lin's search engine creates a graph from the DOM for context. Here's his write-up: https://blog.wilsonl.in/search-engine/
I am wondering: can we use LLMs to semantically encrypt our emails, so that if I am talking about my startup strategy, to a person snooping (or the NSA) it will appear as if we are talking about recipes?
We're proposing semantic steganography using LLMs as encoder/decoder pairs where startup strategy discussions appear as recipe exchanges. Unlike traditional crypto, security emerges from semantic complexity rather than mathematical hardness - the LLM maps between concept spaces (e.g., "fermentation time" ↔ "development cycles") using its world model. Both parties share a seed phrase that deterministically generates the same bidirectional mapping, eliminating key exchange over insecure channels. The core insight: natural language is already an encoder (concepts → symbols), so we're just adding a second semantic layer that looks like normal Layer-1 communication to observers. Main challenges are LLM non-determinism requiring error correction and the tradeoff between information density and plausibility. The approach essentially weaponizes the LLM's semantic understanding to create a regenerable codebook rather than storing/transmitting it.
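A toy sketch of the seed-derived codebook part (not the LLM encoder, and not a real security scheme); the two term lists are illustrative assumptions, and in practice the LLM would generate and apply the mapping inside fluent prose:

```python
# Both parties derive the same concept mapping from a shared passphrase,
# so the codebook is regenerated locally rather than transmitted.
import hashlib
import random

def build_codebook(passphrase: str) -> dict[str, str]:
    startup_terms = ["development cycle", "funding round", "user growth",
                     "pivot", "launch date"]
    recipe_terms = ["fermentation time", "yeast starter", "dough rise",
                    "new marinade", "baking day"]
    # Derive a deterministic RNG seed from the shared passphrase.
    seed = int.from_bytes(hashlib.sha256(passphrase.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    shuffled = recipe_terms[:]
    rng.shuffle(shuffled)  # permutation, so the mapping is invertible
    return dict(zip(startup_terms, shuffled))

codebook = build_codebook("our shared seed phrase")
inverse = {v: k for k, v in codebook.items()}  # receiver's decoding table

encoded = codebook["launch date"]
print(encoded)           # some recipe term, e.g. "dough rise"
print(inverse[encoded])  # "launch date"
```

The non-determinism and plausibility problems mentioned above are exactly what this sketch sidesteps: a fixed table round-trips cleanly, whereas free-form LLM paraphrase would need error correction.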