At work this week I clocked one of the chief cardiologists I work with using ChatGPT to compare literature on the ideal temperature and fixation time for finding amyloid deposits in skin biopsy samples. He is an arrogant, dry, humorless type: always grumpy, known for being a pain in my ass, and incredibly brilliant. Suddenly I have an email from him laden with emojis before bullet points and an "It's not just blah, it's blah" writing style. It was classic ChatGPT explanation style, and I KNOW it's like that because sometimes I ask ChatGPT to explain confusing work concepts to me and it produces the exact same layout, with the same emojis before bulleted sections.
It actually made me enjoy him a little bit more. Normally he is a huge arrogant jerk, so it gave me pleasure to see him display the human foible of relying on ChatGPT for his output.
Haha, great story! It's interesting how ChatGPT appears to be softening or improving his communication style, perhaps an example of how LLMs can provide a kind of baseline exemplar for good communication, especially useful for the arrogant jerks.
You need to prompt the LLM to provide the style you want. I sometimes drag a few of my rough notes into the prompt and tell it to write me an article explaining the topic in the style of a site like The Verge, Daring Fireball, the NYT, The Onion, etc.
I don't publish this kind of output or build on it, but it makes the topic I need to review more interesting, and it makes me question the material as if I were reading it on one of those sites.
Passing this kind of output off as your own work seems weird, given how hard it templates on your references, but I also don't like reading the Pollyanna GPT-speak.
It might be because millions of people interact with the same entity and then swap notes. Someone points out "delve" and “it’s not just X, it’s Y,” and other people go, “Yeah, I see that too!”
It's the same thing with celebrity impressions. Everyone’s watched Jack Nicholson enough to sense his quirks without naming them. Then a comedian highlights those quirks, and we’re like, “Haha, that’s EXACTLY it!”
I think a lot of these are artifacts of RLHF and left-to-right generation, and they might actually go away with diffusion LLMs. If anyone here has spoken to Gemini Diffusion, does it have different speech patterns?
First time I've heard about Gemini Diffusion. As soon as I saw the paper from inceptionlabs.ai, I thought it was only a matter of time before others adopted the approach.
Not sure I'm happy about LLMs getting better... but it feels like an obvious improvement in both computational efficiency and training method.
I like the clarity and easy readability, but at the same time the stylistic tics and the overuse of certain words and phrases often grate.
I wonder if this stylistic failure is something that will get fixed, or if it's an inevitable consequence of how LLMs are trained.