Reproducibility is a fascinating topic for me, and today, with AI coding agents, we could have automated reproducibility in at least some fields. The concept they touch on in the paper, post-publication verification, could replace or complement existing research valorization.
I thought radiologists need to know what to look for in order to diagnose something? Do they brute force every potential condition in the body that can be detected with an MRI?
Exactly, because an MRI is not a simple "shows problems" machine. It provides a very simplified model of certain aspects of the state of the body. We very often can't know if parts of that state are a health problem or not.
To my knowledge, studies have not shown any benefit from regular full-body MRIs. You might find a problem, or you might find a non-problem and, in the process of fixing it (i.e., surgery or medication), create a real one. On average, those two effects seem to balance each other out.
> I thought radiologists need to know what to look for in order to diagnose something? Do they brute force every potential condition in the body that can be detected with an MRI?
No, when they read a scan, they're supposed to examine everything visible for any problem. Think of it this way: if you break your leg and they take an MRI, do you want the radiologist to miss a tumor because they were focused on the break?
Roughly how many "parameters" do they evaluate for a full-body scan? And is one radiologist typically qualified to evaluate the entire body, or do they specialize in different areas?
I don't know, but I've heard from doctors (many times, sometimes quite forcefully) that it's a radiologist's job to call out all abnormalities on the full image they get, and the reasoning makes sense.
I suppose a full-body MRI would be very expensive and take a long time to read.
Strong point. I'm considering tagging patterns better and adding fields like "model/toolchain-specific" and a "last validated (month/year)" date. Things change fast; for example, "Context anxiety" is likely less relevant now and should be reframed that way (or retired).
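For concreteness, a minimal sketch of what that metadata could look like; the field names and values here are hypothetical, not taken from the actual site:

```typescript
// Hypothetical pattern metadata; all field names are illustrative only.
interface PatternMeta {
  name: string;
  maturity: "emerging" | "established" | "retired";
  toolchainSpecific?: string[]; // tools/models the pattern is tied to, if any
  lastValidated?: string;       // "YYYY-MM", so stale entries are visible at a glance
}

// Example entry showing how a pattern might be flagged and dated.
const contextAnxiety: PatternMeta = {
  name: "Context anxiety",
  maturity: "emerging",
  toolchainSpecific: ["long-context models"],
  lastValidated: "2025-01",
};
```

Something this small would make it cheap to sweep the catalog periodically and retire or reframe anything whose "last validated" date has gone stale.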
Author here (nibzard). I started this back in May as a personal learning log. I agree with the skepticism about jargon and novelty. However, if something reads like overly complex common sense, that’s a bug, and I’d like to fix it. If you can point out 1–2 specific pages that feel sloppy or unactionable, I’ll rewrite them (or remove them). I’m also happy to add flags or improve the structure. Also, contributing new patterns would be grand. Of course, some or even all patterns are explicitly “emerging.”
At some point, we need to begin. My initial thought was that this is a growing and evolving resource, primarily for my own use. We are slowly but steadily learning what makes sense, and patterns are emerging. Also, if others find it interesting and contribute, that would be even better.
These kinds of things are time sinks; surprisingly, it takes a lot of time to figure everything out. I was hoping to dedicate a decent amount of time to review and structuring, but sadly life got in the way. If you have a suggestion for how to structure it better, I'm all ears.
My point was that I like your approach better than the huge lists where no one has really vetted whatever is put on them. Of course, a curated list has the drawback of someone having to curate it :)
Most of the patterns should link to the external resources they were derived from. If there's no link, it was probably obvious or I derived it from my own projects.