You're not addressing my broader point, though, just the easy-to-snipe-at version of it.
Yes, it's pretty obvious that DALL-E and similar models won't destroy humanity.
My point isn't that DALL-E is a black ball. My point is that we had better hope a black ball doesn't exist at all, because the way this is going, if one exists we are going to pick it; we clearly won't be able to stop ourselves.
(For the sake of discussion, we can imagine a black ball could be "an ML model running on a laptop that can tell you how to produce an undetectable, ultra-transmissible, deadly virus from easily purchased items")
> we can imagine a black ball could be "an ML model running on a laptop that can tell you how to produce an undetectable, ultra-transmissible, deadly virus from easily purchased items"
I think we’re already past the point where we could have done something about this. In fact, we’ve probably been past that point since humanity was born.
I think it’s probably more valuable if we think about how we’ll deal with it if we do draw something that could be/is a black ball.
That said, so far all evidence points to extreme destruction just being really hard, which leads me to believe that truly black ball technologies may not exist.
> Apart from nuclear scientists I don't know a field where participants are as conscious of the risks as AI research.
Great. Now, some of these researchers perceived some risk with this technology. Not human-extinction-level risk, but risks. So they attempted to control the technology. To be specific: OpenAI was worried about deepfakes, so they engineered guard rails into their implementation. OpenAI was worried about misinformation, so they did not release the bigger GPT models.
Note: I’m not arguing either way about whether OpenAI was right, or honest about their motivations; I’m just observing that they expressed this opinion and acted on it to guard against the risk.
Got this so far? Keep this in mind because I’m going to use this information to answer your question:
> If you know that image generators are not it, then why talk about it here?
Because it is a technology that some practitioners deemed risky, they attempted to control it, and those attempts to control the spread of the technology failed. This does not bode well for our ability to restrain ourselves from picking up a real black ball, if we ever come across one. And that is why it is worth talking about black balls in this context.
Note that it is unlikely a black ball event will completely blindside us. It is unlikely that someone develops a clone of Pac-Man with improved graphics and, boom, that alone leads to the inevitable death of humanity. It is much more likely that when the new and dangerous tech appears on our horizon there will be people talking about the potential dangers. What remains to be answered: what can we do then? This experience has shown us that if we ever encounter a black ball technology, the steps taken by OpenAI don’t seem to be enough.
This is why it is worth talking about black ball technologies here. I hope this answers your question?
I hear you. The concepts we are dealing with here are very abstract.
It sounds like you are getting hung up on the details of a particular example. That is not useful. We can’t give you exact details of a particular black ball because we haven’t encountered one yet. Sadly, the fact that we haven’t encountered one yet doesn’t mean that they don’t exist.
Think about it like this: There are technologies which are easier to stop spreading and there are technologies which are harder to stop spreading.
An example of a technology that is easier to stop: imagine that a despotic government wants to prevent space launches. All the known tech to reach orbit is big and heavy and requires a lot of people. It is comparatively easy to send out agents who look at all the big industrial installations and dismantle the ones used for space launches. There will be only a handful of them, and they are hard to hide.
Now an example of a technology that is harder to stop: imagine that the fictional despotic government has it in for cryptography. That is a lot harder to stop. One can do it alone in the privacy of their own home! All you need is some pen and paper, and that can be hidden anywhere! A much harder thing for the agents to find and disrupt.
We talked about how easy it is to stop the spread of a given technology. Now let’s think about something else: the potential of a given tech to cause harm.
An example of a risky technology: nuclear weapons. If you have them, you can level a city. That is a lot of harm in one pile.
An example of a less risky technology: ergonomic tool handles, those rubbery overmoldings that make it nicer to use a tool long term. There is no risk-free technology, but I hope you agree that these are a lot less dangerous than a nuclear bomb.
Do I have you so far? Good, because this was the easy part; we talked about things that already exist. Now comes the hard part. This requires some imagination: we have seen tech that was easy to control and tech that was harder. We have seen tech that was risky and tech that was less risky. Can these properties come in all combinations? In particular: are there technologies that are both risky and hard to control? Something, for example, where any able human can accidentally or intentionally level a city or kill all humans? I can’t give you an example; we don’t have technology like that yet.
The example you are asking about is an example for this kind of technology: high risk, hard to control.
Nobody says that you can download software from GitHub today that can help you engineer a deadly virus from household chemicals. That does not exist. It is a stand-in for the kind of tech which, if it were possible, would mean that we have a high-risk, hard-to-control technology.
Does this help explain the context better? Let me know if you still have questions.
> Does this help explain the context better? Let me know if you still have questions.
That's the problem. The thing you're scared about (dangerous technology) has nothing to do with the context (AGI) because there's no reason to think AGI is especially capable of creating any of it or is going to. Humans create general intelligences (babies) all the time and you aren't capable of, nor are you putting any effort into, "aligning" babies or stopping them from existing.
AGI being superintelligent won't give it superhuman creation powers, because creating things involves patience, real-life experimentation, and research funds, and while I'll grant you computers have the first, they won't have the other two.
Sorry, where did I mention anything about AGI? Why is that the “context”?
Some form of AGI under some circumstances might be black ball tech. There can be other black balls which have nothing to do with AI let alone AGI.
> The thing you're scared about
I’m scared of many things, but black ball tech is not one of them.
One can discuss existential risks without being scared about it.
> they won't have the other two
If you say so? I don’t agree with you on this, but it feels like this would sidetrack the conversation, since AGI and black ball tech have at most some overlap.
Exactly, and there are many pros for humanity to this too: people will be able to make funny pictures and things like that, so it's not like it's a bad deal.