
To all those commenting on how terrible of an idea this is: what if it includes a kill switch that's fundamental to its existence? (Where kill switch is eg. a vaccine to the vaccine as a break glass option)


Because when the original thing doesn't work as intended (which we were pretty sure it would, otherwise we'd never have done it, given the risks), the kill switch will work 100% as intended, and also have no unexpected side effects?


Nothing is 100% without risk, but each layer of safeguard reduces it. The standard shouldn't be zero risk, but rather weighing the risk of one (in)action vs another.
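To put rough numbers on that intuition, here is a toy calculation (the per-layer failure probabilities are made up, and it assumes the layers fail independently, which real failure modes often don't):

```python
# Toy model: residual risk with stacked safeguard layers.
# Probabilities are hypothetical; independence is a strong assumption.
from math import prod

layer_failure = [0.05, 0.10, 0.20]  # made-up per-layer failure probabilities

def residual_risk(failures):
    """Probability that every layer fails, assuming independence."""
    return prod(failures)

print(round(residual_risk(layer_failure), 6))  # 0.001
```

So under those (generous) assumptions, three imperfect layers get you from 5% residual risk down to 0.1% — not zero, which is the commenter's point.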

To draw an analogy to the obvious vaccine-related situation on everyone's minds, no one who knows what they're talking about would say that mRNA covid vaccines are 100% safe. But they are far, far safer than not having a vaccine, so it makes sense to accept that risk. I'm not saying the same is necessarily true of any given 'contagious' animal vaccine, but it could be.


True.

> I'm not saying the same is necessarily true of any given 'contagious' animal vaccine, but it could be.

That's what I was trying to say: anything self-spreading / self-replicating is in a different category, be it self-reproducing true AI nanorobots, something as simple as self-replicating plants (where we already have a dozen examples of promised containment failing, with bad consequences involved, oops), or 'contagious' vaccines. It needs to be orders of magnitude safer, no?

So if taking an mRNA vaccine poses a minimal risk to my health as an individual, I can weigh that for myself against the risk of infection and its consequences, or let my doctor decide. It's also easy to run "trials" among the population for studies. But if there is a risk to our entire biosphere, the risk calculation is just very different, isn't it?


And think about the timescales we are talking about here. We might be creating something that evolves and persists for millions of years. We don’t seem anywhere close to the level of knowledge or wisdom needed to safely do anything that has that kind of long term impact.


The article discusses a safeguard that would only allow a set number of replications, so the lifespan of the intervention would be limited. (Assuming it works as expected, of course.)
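The nice property of such a replication counter is that spread is bounded no matter how infectious the virus is: after the counter runs out, the chain stops. A toy sketch (this is a hypothetical branching model for illustration, not the actual molecular mechanism from the article):

```python
# Toy model of a replication-counter safeguard: each transmission passes
# on a counter decremented by one; at zero, no further spread.
# `r` is a made-up fixed number of transmissions per host.
def total_infections(counter, r=2):
    """Total downstream infections from one host whose virus can
    replicate `counter` more generations."""
    if counter == 0:
        return 0
    # r new hosts, each of whom spreads with a decremented counter.
    return r * (1 + total_infections(counter - 1, r))

for c in (1, 2, 3, 10):
    print(c, total_infections(c))  # for r=2 this is 2**(c+1) - 2: finite
```

The spread still grows geometrically within those generations, but it provably dies out after `counter` generations — assuming, as the parent says, that the counter can't mutate away.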


>what if it includes a kill switch that's fundamental to its existence

Like my everyday flying car, that's a nice idea but may have the problem of not existing.


Jurassic Park, while a fictional story, illustrates the problem beautifully. "Life finds a way".


The article discusses a safeguard by which the virus would only be able to replicate a set number of times.


Don't cells in humans (and other animals) basically have a similar safeguard, and when it breaks due to mutations the result is cancer? And cancer arises with uncomfortably high probability...



