The SETI@Home screensaver [0] (the predecessor to BOINC) was the peak of nerd-cool when I was 11 or 12 years old. I had no idea what any of the graphs meant, and still don't, but I knew that I could help find _aliens_, and look like a hacker doing it.
SETI@home started a long, long time ago, when computers drew roughly the same amount of power whether they were working hard or sitting idle. That's why it started as a screensaver; it just didn't matter.
Yes, I remember. But even then we knew full well the chance of identifying anything via SETI was minute. We were looking at a sliver of a sliver of a sliver of what's out there.
A tiny office full of grad students could have come up with a dozen more useful ways to use that many distributed computing cycles, say for medical research. Not only folding@home: imagine what a Wolfram or a Knuth would have achieved with all those cycles.
I suspect the useful things you can do with this model are quite limited. You need:
* A problem that can be cut into chunks small enough for a low-end desktop computer
* One that's solvable without communicating with anything else
* Chunks that complete quickly enough that the user doesn't interrupt the process
* And the bandwidth usage is small
* And the solution is easily verifiable
* And the problem is big enough that it needs a vast amount of hardware to solve
* But is relatively unimportant, so nobody wants to spend any money to solve it faster
The last points especially suggest mostly quirky uses like SETI. If the problem is important, you can probably solve it faster by finding funding than by trying to convince millions of people to install a screensaver.
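On the "easily verifiable" point: BOINC-style projects handle untrusted volunteers by sending the same chunk to several hosts and accepting a quorum result. A minimal sketch of that idea, with hypothetical toy workers (real schedulers also track per-host reliability):

```python
import random
from collections import Counter

def dispatch_redundantly(chunk, workers, replication=3):
    """Send the same chunk to several untrusted volunteers and
    accept the majority answer. Illustrative sketch only."""
    results = [worker(chunk) for worker in random.sample(workers, replication)]
    value, votes = Counter(results).most_common(1)[0]
    return value if votes >= 2 else None  # no quorum -> redispatch the chunk

# Toy workers: two honest volunteers and one that returns junk.
honest = lambda chunk: sum(chunk)
cheater = lambda chunk: 0
workers = [honest, honest, cheater]

print(dispatch_redundantly((1, 2, 3), workers))  # -> 6: honest quorum wins
```

The cost is obvious: every chunk is computed two or three times, which is only tolerable because volunteer cycles are free.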
Well, we know what they could have done: nothing. Both were around and active then, and they didn't do anything. Folding@Home followed SETI@Home, so we know that offering a different project would have attracted participants.
It's actually fascinating. People back then would be eager to find ways to do things. So many people now are lamenters, not doers. Perhaps that's just mean reversion and the original population was more agentic because it was a new tech.
After the geeks and mops, perhaps the ones who follow are the lamenters.
Well, one used maybe tens or hundreds of megawatt hours over the course of a year, while the other uses tens or hundreds of terawatt hours per year.
One was a launching platform for dozens of scientific and other large scale volunteer computing projects, and the other's value is entirely based in speculation.
SETI wasn't a waste of time; we don't have any good alternatives to it. Folding@home, on the other hand, probably could have been a lot more useful if they had been training transformers on their results instead of just doing pure protein dynamics simulations.
Well, it predates deep-learning ML approaches to protein folding like AlphaFold. But people have been applying ML methods to protein folding for decades; that has long been the divide between the two camps in the protein-folding bioinformatics world: whether the best way to get structures was to simulate what the protein is actually doing, or whether there was a computational shortcut.
The goal of protein folding simulations like Folding@Home is not to predict 3D structures - it's to understand how folding actually works, and why it sometimes doesn't work. When FAH came out it was already very obvious that there were good computational shortcuts to predicting the end state (the Rosetta approach), but those don't tell you very much about the physical process. Different questions call for different approaches.
Just wait till you hear about the developers out there doing `SELECT * FROM` and then using reduce inside their API server to sum up the values of a column.
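For anyone who hasn't seen this anti-pattern in the wild, here's a minimal sketch using a hypothetical `orders` table in an in-memory SQLite database: the app-side reduce drags every row over the wire, while `SUM` lets the database return a single number.

```python
import sqlite3

# Hypothetical schema for illustration: an "orders" table with an "amount" column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders (amount) VALUES (?)",
                 [(10.0,), (25.5,), (4.5,)])

# The anti-pattern: fetch every row, then reduce over the column in the app server.
rows = conn.execute("SELECT * FROM orders").fetchall()
total_in_app = sum(row[1] for row in rows)

# Letting the database aggregate: only one value crosses the wire.
(total_in_db,) = conn.execute("SELECT SUM(amount) FROM orders").fetchone()

assert total_in_app == total_in_db
```

Both give the same answer on three rows; the difference shows up when the table has millions.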
[0] https://en.wikipedia.org/wiki/SETI@home?useskin=vector#/medi...