> Every time the LLM is slightly off target, ask yourself, "What could've been clarified?"

Better than that, ask the LLM. Better than that, have the LLM ask itself. You do still have to make sure it doesn't go off the rails, but the LLM itself wrote this to help answer the question:
### Pattern 10: Student Pattern (Fresh Eyes)
*Concept:* Have a sub-agent read documentation/code/prompts "as a newcomer" to find gaps, contradictions, and confusion points that experts miss.
*Why it works:* Developers write with implicit knowledge they don't realize is missing. A "student" perspective catches assumptions, undefined terms, and inconsistencies.
*Example prompt:*
```
Task: "Student Pattern Review

Pretend you are a NEW AI agent who has never seen this codebase.
Read these docs as if encountering them for the first time:
1. CLAUDE.md
2. SUB_AGENT_QUICK_START.md
Then answer from a fresh perspective:
## Confusion Points
- What was confusing or unclear on first read?
- What terms are used without explanation?
## Contradictions
- Where do docs disagree with each other?
- What's inconsistent?
## Missing Information
- What would a new agent need to know that isn't covered?
## Recommendations
- Concrete edits to improve clarity
Be honest and critical. Include file:line references."
```
*Use cases:* Before finalizing new documentation, evaluating prompts for future agents.
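If you want to actually wire this up, here's a rough sketch of dispatching that prompt to a fresh sub-agent, assuming an OpenAI-style chat completions API; the model name and the concatenate-the-docs approach are my own placeholders, not part of the pattern above:

```python
# Rough sketch, not the pattern author's tooling: send the Student Pattern prompt
# to a brand-new agent with no prior context. Model name and doc list are placeholders.
from openai import OpenAI

client = OpenAI()

doc_paths = ["CLAUDE.md", "SUB_AGENT_QUICK_START.md"]
docs = ""
for path in doc_paths:
    with open(path) as f:
        docs += f"\n\n--- {path} ---\n{f.read()}"

student_prompt = (
    "Pretend you are a NEW AI agent who has never seen this codebase.\n"
    "Read these docs as if encountering them for the first time, then report:\n"
    "## Confusion Points\n## Contradictions\n## Missing Information\n## Recommendations\n"
    "Be honest and critical. Include file:line references.\n"
    + docs
)

# A fresh conversation (no shared history with the main agent) is the whole point.
review = client.chat.completions.create(
    model="gpt-4o",  # placeholder
    messages=[{"role": "user", "content": student_prompt}],
)
print(review.choices[0].message.content)
```

The key detail is that the call starts a brand-new conversation with no prior context, which is what makes the "fresh eyes" part real.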
For me it's the motion clarity that I notice the most. Higher FPS is just one way to get more clarity, though: with other methods like black frame insertion, even 60 fps can feel like 240.
Set nproc_per_node=1 instead of 8 (or run the training script directly instead of using torchrun) and set device_batch_size=4 instead of 32. You may be able to use 8 with a 5090, but it didn't work on my 4090. However, it's way slower than expected (one H100 isn't 250x a 4090), so I'm not sure it's training correctly. I'll let it run overnight and see if the outputs make any sense; maybe the metrics are not accurate in this config.
I'm running it now and I had to go down to 4 instead of 8, and that 4 is using around 22-23GB of GPU memory. Not sure if something is wrong or if batch size only scales part of the memory requirements. (Edit: I restarted running the training script directly instead of torchrun, and 8 still doesn't fit, but 4 is now using 16-17GB instead.)
On my 4090 the tok/sec is 523, which is 1/2000 of the 1,000,000 tok/sec of the 8 80GB H100s. That feels too slow, so maybe something is wrong. The 4090 is about 1/3 of the raw compute. I'm sure there are other losses from less batching, but even if it were 1/10th as fast as a single H100, I'd expect something more like 1,000,000 / 10 / 8, so at least 10,000 tok/sec.
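For reference, here's that back-of-envelope as a runnable sketch; the launch commands in the comments are approximate and the script name is a placeholder, not the repo's exact invocation:

```python
# Numbers are the ones from the comments above; the launch commands and script
# name are placeholders, not the repo's exact invocation.
#
#   default:       torchrun --nproc_per_node=8 train.py   (device_batch_size=32)
#   single 4090:   python train.py                         (device_batch_size=4)

reported_8x_h100 = 1_000_000   # tok/sec reported on 8x 80GB H100
observed_4090 = 523            # tok/sec I'm seeing

# Even if one 4090 were only 1/10th as fast as one H100:
expected_4090 = reported_8x_h100 / 10 / 8   # = 12,500 tok/sec
print(f"expected ~{expected_4090:,.0f} tok/sec, observed {observed_4090}")
```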
> first time I've seen such expressiveness in TTS for laughs, coughs, yelling about a fire, etc!
The old Bark TTS is noisy and often unreliable, but pretty great at coughs, throat clears, and yelling. Even dialogs... sometimes. Same Dia prompt in Bark: https://vocaroo.com/12HsMlm1NGdv
Dia sounds much clearer and more reliable; wild what 2 people can do in 3 months.
So this is a new method that simulates a CRT and genuinely reduces motion blur on any higher-framerate display, starting at 120 Hz. But it doesn't dim the image like black frame insertion, which is the only current method that comes close to the clarity of a CRT. And it also simulates other aspects of CRT displays, right?
Can you use this method just to reduce blur without reducing brightness, on any game? They mention reducing blur for many things other than retro games in "Possible Use Cases of Refresh Cycle Shaders" but does reducing blur in a flight simulator also make it visually look like a CRT with phosphors?
They do mention that it does reduce brightness. The selling point compared to strobing seems to be less eyestrain. I'd expect it to lose more brightness than strobing, considering the lower relative pixel on-time.
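To put toy numbers on that last point (the duty cycles below are made up, purely to illustrate the reasoning): if perceived brightness scales roughly with the fraction of each refresh a pixel is lit, a shorter per-pixel on-time costs more light than backlight strobing does.

```python
# Toy model only: assume perceived brightness scales linearly with the fraction
# of each refresh that a pixel is actually emitting light. Duty cycles are made up.
def relative_brightness(on_time_fraction: float) -> float:
    return on_time_fraction

strobed = relative_brightness(0.25)   # e.g. backlight strobed for 25% of each frame
crt_sim = relative_brightness(0.10)   # e.g. each pixel lit for ~10% of the refresh cycle
print(strobed, crt_sim)               # 0.25 vs 0.10: the CRT-style rolling scan is dimmer
```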
>I love the way “take a break” is presented as an available option. I guarantee that for many caregivers it’s absolutely not.
I had the same first reaction - why didn't I think of just taking a break or hiring help? It was right in front of me!
But the article does lead with reminding people to simply ask friends or family for help, and that is both easy to forget about and hard to do even when you remember.
>But why would I buy those books or listen to those podcasts that are synthetic affectations of no substance?
A randomly selected NotebookLM podcast is probably not substantial enough on its own. But with human curation, a carefully prompted and cherry-picked NotebookLM podcast could be pretty good.
Or, without curation, I would use this on a long drive, where audio is the only option, to get a quick survey of a bunch of material.