I would modify that statement: there shouldn't be only a single gatekeeper. All countries (ideally democratically elected) should have a say in how that system operates.
Having gatekeepers is not inherently bad; having just one who can effectively block anything is problematic.
But there isn't a website with comment threads that are high quality. Most of the quality crypto discussions happen in Telegram and Discord.
I've realized one of the reasons programmers dislike cryptocurrency is that they value efficiency more than they understand incentive systems. They complain that crypto is inefficient and unoptimized and miss that it's an entirely new incentive system.
There's a lot to be skeptical of in the cryptocurrency space, but a community centered around outsized gains through startups should remember that pessimists get to be right and optimists get to be rich.
I disagree that this should have been flagged. The authors do claim the tasks are "general", but they have a lot in common with each other, including not-so-far-away misuse... which also happened with AlphaDogfight, and that was a transfer from playing Go to flying jets. This is clearly not as big a leap as the author points out.
Aka - notice that none of the agents in this example are folding proteins. They're all engaged in inherently combat-relevant skills. :)
This completely misses the point. The problem isn't that people do worse, but that automating warfare changes the cost function. Technologically superior and determined enemies can still be dissuaded by incurring significant numbers of casualties or losses to their infrastructure. A small number of casualties stiffens resolve; a large number, or a never-ending trickle of them, eventually breaks or wears it away.
If military actors can reliably change outcomes by the relatively low-cost expedient of throwing in autonomous weapons platforms that cost about as much as a washing machine, they will and they'll do it at scale, and (in the short term at least) their political backers will cheer and get off on it. In the longer run it will lead to a considerable increase in terrorism against the technologically advanced power.
Sure, people ultimately make these decisions and deploy such technologies, but so what? It's not like that can change in any way, because you can't take people out of the equation and you can't just wish away political forces by pinning the blame on select individuals. Rather than retreating into truisms, it's more important to assess the impact of this emerging force multiplier and develop countermeasures.
My point is that it's already automated. Those things I mentioned are already not hand-to-hand combat and don't have human soldiers risking their lives. So however scary some new automated weapon is, it should be no worse than existing automated weapons. What makes a robot soldier worse than a cruise missile or land mines or a bomber aircraft or a remote piloted drone? All those things can already be used by technologically superior enemies without incurring casualties themselves.
You mention attacks against a technologically advanced power (does an "enemy" become a "power" when it's a friend?), but obviously those powers will find ways to defend against them. Maybe it's just in the form of slightly more advanced "washing machines".
This fearful thinking seems to come from assuming no secondary advancements occur: suddenly robot soldiers are cheaply available and nobody develops any defense against them, either political or technological.
You left out the bit where the various superpowers inevitably have to try using the shiny new technology against their rivals because it's never been tried so we can't be certain it's a bad idea. That's the bit that worries me the most - I'd rather do without a rerun of World War 1.
> it's more important to assess the impact of this emerging force multiplier and develop countermeasures
What is there to do other than develop your own equivalent systems though?
There's a whole literature on the logic (and meta-logic) of deterrence called power transition theory that is worth looking into, as it sheds a lot of light on the unpleasant topic of nuclear deterrence and how that works.
In a more general sense, the solution to an elevated attack is not always a retaliatory attack, but perhaps a better defense that neutralizes it. Helmets can be used as weapons, but their primary purpose is to make weapons less effective and change the strategic calculus - now the enemy gets lesser results for the same effort, and either gives up or tires out and can be defeated with a smaller retaliation. In general, defense is thought to be somewhat stronger than offense, which is why surprise is so important. Technological edges tend to be negated over time.
Deeply understanding this takes a long time and a lot of study. Military science is a difficult but interesting subject, and tips over into systems theory.
I love this essay so much. So much that I tried to implement all of the demos in JavaScript, which you can try in interactive form here: https://www.newline.co/choc/
newline | Course author | Remote | Part Time | https://www.newline.co/write-with-us

7 out of our last 10 authors made $50k+ (each). We're the authors of Fullstack React, ng-book, and Fullstack Vue, and we're looking to work with authors like you to write a few new courses this year.

Our books & courses sell very well because:

- We go way beyond API docs and teach everything you need to know to build real apps.
- We guarantee they're up-to-date.
- We invest in marketing the books (and have an active email list of over 100k).
- We love the topics we write about and aim to create something remarkable every time.

If you decide to self-publish, you may find the marketing is more work than writing the course. We have an audience, and we know what they want to learn - so when your course is done, we already have people who want to buy it.

If you decide to go with a "traditional" publisher, you may be given a mediocre editor, write your manuscript in MS Word (ha), and earn 5-15% in royalties. With us, our editors (me) are programmers first, our tooling is dev-friendly, and royalties on profit are split 50/50. (For scale, the author of Fullstack Vue earned $20k on the opening weekend, Fullstack D3 even more.)

We're looking to write the definitive guides on programming topics - things like "The newline Guide to Authentication with React and Node in 2020" - but variations on that can be any major stack or task: not only JavaScript, but also Rust, Go, Java, AWS, DevOps, Angular, React, ASP.NET Core, Serverless, Python, Elixir, Data Science, etc.
If this sounds like something you’d be interested in, fill out the form linked below. Looking forward to hearing from you!
Sorry to hear you're going through this. I went through something similar and it wasn't fun.
As much as you have shareholder agreements etc., none of that matters much if the business fails, so it's basically about what the two of you can negotiate.
In my case, I've paid off a former business partner much like a loan. You can negotiate all sorts of parameters on this: monthly payments, grace period, cash triggers, funding triggers etc.
Basically you set a valuation for the business (at least the valuation set by the last investor's round, if not more to account for growth) and then he buys out your ownership.
Idk what "reverse" vesting is, but if you had normal vesting it sounds like your 49%, after the 1 year cliff, would be worth e.g. 12%. So you can either keep that 12% or if he wants to buy you out he could pay you your 12% vested * last valuation * growth factor.
It sounds like it's not going to work for the two of you to work together, so now it's just about negotiating the details before the conflict kills the company.
This seems like the most realistic and likely answer. My co-founder hasn't really been budging so far and I feel like they don't fully understand the situation. They think that because it was their idea, they are entitled to a lot more than me.
One issue for me is that I don't have that much faith in them being able to execute on the company vision by themself, e.g. they don't want to monetize right now or do a revenue split for reasons I'm unclear about, which makes the practicality of monthly payments tricky.
Take it from me, a negotiated buy-out is the way to go. I had to buy out a former partner and negotiated a payment over 12 months. It worked out for everyone.
Current valuation should be valuation at the time of the investment multiplied by a small growth factor (1.5x perhaps).
Your stake is the amount you would have owned as of the first cliff (and not any sooner), which is 10% after the investment round.
If the company was valued at $1M at investment, then it would be worth maybe $1.5M at the time of the 1yr cliff given the growth factor.
Your 10% of that is $150,000. The company should pay you $12,500 per month for 12 months to fully buy you out, and it should tie the payments to the reduction in your equity. If they speed up payments, it speeds up the buy-out. If they slow it down, it slows down the buy-out.
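For concreteness, here's a tiny Python sketch of that arithmetic using the hypothetical numbers above (the valuation, growth factor, and stake are just the example figures, not advice):

    # Hypothetical buy-out arithmetic from the example above (not advice).
    valuation_at_investment = 1_000_000   # $1M at the last round
    growth_factor = 1.5                   # assumed growth since that round
    vested_stake = 0.10                   # 10% vested at the cliff
    months = 12                           # negotiated payment period

    buyout_total = valuation_at_investment * growth_factor * vested_stake
    monthly_payment = buyout_total / months
    equity_per_payment = vested_stake / months

    print(f"total: ${buyout_total:,.0f}")         # $150,000
    print(f"per month: ${monthly_payment:,.0f}")  # $12,500
    print(f"equity transferred per payment: {equity_per_payment:.2%}")  # 0.83%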
Imagine there are 1,000 doors and you pick one. All of the other doors except one are opened, and you're given the offer: keep the door you picked, or switch to the other unopened door. What are the chances you picked the right door (vs. that other door)?
People seem to intuitively understand that having only one door unopened is a massive "hint" to where the prize is.
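If it helps to see it play out, here's a quick Monte Carlo sketch in Python (my own construction, purely illustrative) of the 1,000-door game, where the host always leaves the car unopened:

    import random

    def play(n_doors=1000, switch=True):
        car = random.randrange(n_doors)
        pick = random.randrange(n_doors)
        if not switch:
            return pick == car
        # Host opens every other door except one, never revealing the car.
        if car != pick:
            remaining = car   # host must leave the car's door closed
        else:
            remaining = random.choice([d for d in range(n_doors) if d != pick])
        return remaining == car

    trials = 100_000
    print(sum(play(switch=True) for _ in range(trials)) / trials)   # ~0.999
    print(sum(play(switch=False) for _ in range(trials)) / trials)  # ~0.001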
I was lucky enough to get this explanation from my high school physics teacher, who first presented the classic Monty Hall problem and then illustrated the changing of the odds by substituting all of the lockers in our school for the three doors. Switching gives a clear advantage.
The rest of this post is an anecdote from the same class that this brought to mind, and is unrelated to the topic. Maybe we can say it shows how good teachers engage their students or something, but really it’s just a good yarn.
We were learning about inelastic vs elastic collisions, and how an elastic collision transfers roughly twice the impulse of an inelastic one. The teacher asked for a volunteer, and a bright-eyed student rose to the occasion. The teacher gave him some safety glasses and told him to lie down on the floor.
The teacher took the inelastic ball and said, “Okay, I’m gonna drop this on your forehead now, ready?” PLONK. “Ow.”
“Remember that feeling! This is the elastic one, and it has the same mass, so it should hurt twice as much.” PLONK. “Ow.”
The teacher asked, “So, did the second one hurt more than the first?” The rest of us anticipated the experimental confirmation of what we’d just learned about.
“...I couldn’t really tell the difference,” said the student.
“Yeah,” said the teacher, “I knew you wouldn’t. I just wanted to see if you’d let me do it.”
In the 1,000 doors problem, my odds of being right initially were something like 1/1000, and the odds for switching become something like 998/1000 or 999/1000. I can't intuitively grasp exactly what the odds of winning become if I switch; I just know they're high. Bringing it down to 3 doors doesn't help me much: it's still something like 1/2 or 1/3.
Try thinking of it this way. It may or may not be any more useful. I've seen different people come to understand the problem from different examples.
1. Observe that 3/3 = 1. Pedantic, yes, but good for frame of mind here.
2. Pick one of three doors. (1/3 odds)
3. Gain information that one of the three doors is a loser.
4. Note your odds on choosing the original door correctly are still 1/3.
5. Note that if you change doors, there are still 2 of the 3 doors left to choose from.
6. Note you're not going to switch to the known loser door, so if you change doors you know 100% which of those two to choose.
The intuition usually is that you're down to two doors after the loser door is opened, but that's not the case. There are still three doors. The host has just told you that if you trade doors, you know which door to trade for. So trade for it.
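You can also just brute-force the three-door case. This little Python sketch (mine, purely illustrative) enumerates every equally likely (car, pick) combination and tallies staying vs. switching:

    from itertools import product

    doors = [0, 1, 2]
    stay_wins = switch_wins = 0

    for car, pick in product(doors, doors):   # all 9 equally likely cases
        # Host opens a door that is neither your pick nor the car.
        opened = next(d for d in doors if d != pick and d != car)
        switched = next(d for d in doors if d != pick and d != opened)
        stay_wins += (pick == car)
        switch_wins += (switched == car)

    print(stay_wins, switch_wins)   # 3 vs 6, i.e. 1/3 vs 2/3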
Note there's a newer version of "Let's Make a Deal", hosted by Wayne Brady, but there is no option to switch after a losing door has been shown in that version.
Choosing a door in the first place gave you a 1-in-3 chance. But your 1-in-3 choice is deducted from the 3-of-3 choices Monty could have had, so Monty is left with a 2-in-3 pool: there's a 2/3 chance that the car is among the doors from which Monty could select. Monty has a 100% chance of choosing a door without the car. Therefore, you inherit the 2-in-3 chance if you change your selection.
The first door will have the car 1/3 of the time. The second door's chances have been expanded to the remaining 2/3, thanks to Monty always opening the last 1/3 door, which is guaranteed to not have the car.
Because it isn't different really.
It is always a goat door that is opened, so you don't gain any information about your door by the opening of the goat door.
I'm thinking of a number between 1 and 10; guess it. If I now tell you a number that I promise is not the one I was thinking of and not your number, you have no more information about whether you were correct.
Because it really centers on the initial premise: Monty will always open a goat door after your choice, no matter what.
So, you make a totally random choice. That choice must be 1/3 right, right? Now the thing that you already knew would definitely happen happens: Monty opens a goat door. How can your odds suddenly jump to 1/2?
Are you saying every single time you play the game, you always have a 1/2 chance of getting it right first time?
What you knew about the door you initially picked hasn't changed at all. What you now know about the other doors has. By giving you information about one of the other two doors (which together hold a 2/3 chance), Monty has given you an extra 1/3 chance if you pick among those.
Assume there are seven billion people on the planet. One of them knows the location of a specific hidden treasure. I know who it is and I ask you to guess who it is, but you have no possible way of knowing or even getting a hint about it.
You pick some random person. I then bring in another stranger and tell you that the person who knows where the hidden treasure is is either the random person you chose or the one I brought in.
At this point, there are only two possibilities:
1. You happened to randomly choose the right person on Earth and, to my surprise, I had to pick some other random stranger to pretend they knew the secret.
2. You chose a total rando who has no idea what's going on, and the person I brought in is in fact the one who knows where the treasure is.
You don't need to know the exact odds to understand that they're higher. I think that's the main takeaway of making it a 1,000-door problem. It makes it intuitive that the correct solution is to switch. The exact probability doesn't matter.
IMO what helps is to imagine slightly changing the order.
First step is still that you pick a door. There's a 1/3 chance it has the car. Now you can either keep that single door (with a 1/3 chance of a car), or switch and get both of the other two doors (each with a 1/3 chance of the car, for a total of 2/3 chance). After you pick, I'll reveal all the goats.
Wow. I "understood" the reasoning behind the original and knew that was the right answer, but your anecdote just made it totally click. Of course if I choose one random locker there is a 1/1000 chance of getting a prize, and if Monty opens 998 other empty lockers, of course I should switch. That would make switching the correct choice 999 out of 1000 times.
For someone who doesn't get it, the question then becomes whether opening all of the doors but one is the correct extension of the original problem. With 3 doors it's the same game.
For me the most sensible explanation requires knowing that a dud door is always opened, so the 2/3 probability collapses onto the door you are switching to.
This is the most intuitive explanation by far. I'd even say 1 million doors to really drive the point home.
You choose a door with only 1-in-a-million odds of it being the door with the prize. Monty Hall knows where the prize is and will only open remaining doors that he KNOWS don't have the prize. If he then opens up 999,998 doors without a prize behind them and asks if you want to keep your original door or switch, you'd obviously know that Monty's last remaining door must be the one with the prize.
When I first heard about the problem, I struggled with the seemingly 50/50 chances: either you picked the right door and switch and lose, or you picked a wrong door and switch and win. Switching seemed a zero-sum game.
The explanation that works best for me is that you were more likely to have picked a wrong door in the first place, so while the impacts are opposite equals, the likelihoods are not equivalent.
That's the explanation that helped me, and it led me to realize something about this problem. The problem asks what the best strategy is. It doesn't ask what you should do at that moment. "At that moment" implies you should only process the information available to you right then and there. The information available at that time - two doors - ignores prior information, which I think is the counterintuitive aspect that trips people up.
That explanation never worked for me, because you can turn it around to the situation where Monty does not know where the car is. Say there are 1000 doors, and you pick door 429. On his way to open door 429, Monty stumbles, falls, and accidentally knocks open every door except door 128. If by some coincidence all opened doors happened to contain goats, you will have nothing to gain from switching. Very counter-intuitive, but just as true as the original problem.
A possible intuition here is that Universes where your first pick was the door with the car, which initially were just 1 in a 1000 compared to Universes in which you picked a goat, will suddenly become massively overrepresented. After all, in these types of Universe Monty's Fall couldn't possibly have shown a car, whereas most of the other Universes will not survive to the next "round".
Of course, if this happened in real life, Bayesian thinking would increase the likelihood of hypotheses such as, for example, "The door containing the car has a better lock" to such an extent that I would switch.
You can't really turn it around, because Monty knowing and using his knowledge of where the car is to reveal only goats is what makes switching advantageous.
In the case of the clumsy Monty of your example, it goes like this:
1. There is a 1/1000 chance door 429 has the car.
2a. If it has the car, then when Monty accidentally opens 998 doors no car will be revealed. The probability of this path is still 1/1000.
2b. If 429 does NOT have the car, then 998/999 times that Monty accidentally opens 998 doors, he will reveal a car, which presumably ends that game. There is only a 1/999 chance that he will not reveal the car and the game proceeds.
3. Thus, there are two cases where the game reaches the point of two remaining doors - 998 revealed, the car behind one of the two, and a chance to switch.
3a. Your door has the car, which happens 1/1000 games.
3b. Your door does not have the car, which happens 999/1000 x 1/999 games, or 1/1000 games.
In other words, if the clumsy Monty version is played repeatedly, 998 out of 1000 games end without even getting to the point where you get a chance to switch, and 2 get to where you get the chance. In those two, one has the car behind your door, one not. There is no advantage to switching.
In the case of the systematic Monty who knows where everything is and ALWAYS opens 998 goats, it goes like this:
1. There is a 1/1000 chance your door, 429, has the car.
2a. If it had the car, Monty opens 998 doors that do not have the car, leaving one door besides yours.
2b. If your door did not have the car, it is one of the 999, and Monty systematically opens the 998 of those 999 that do not have the car.
3. You always reach the choice stage. You can either get there via 2a, which always results in the car being behind your door, or via 2b, which always results in the car being behind the other door.
3a. You get there via 2a in 1/1000 games.
3b. You get there via 2b in 999/1000 games.
If you do not switch, you win if and only if you got there via 2a, so you only win 1/1000 games. If you always switch, you win if and only if you got there via 2b, so you win 999/1000 games.
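Since the difference between the systematic Monty and the clumsy Monty is exactly where intuition slips, here's a small simulation sketch (Python, my own phrasing of the two variants) that plays both and, for the clumsy case, only counts games that survive to the choice stage:

    import random

    def knowing_monty(n=1000):
        # Host knowingly opens 998 goat doors; the game always reaches the choice.
        car, pick = random.randrange(n), random.randrange(n)
        return True, pick == car, pick != car   # reached, stay wins, switch wins

    def clumsy_monty(n=1000):
        # 998 of the other 999 doors are knocked open at random; the game only
        # reaches the choice stage if no car was revealed.
        car, pick = random.randrange(n), random.randrange(n)
        others = [d for d in range(n) if d != pick]
        random.shuffle(others)
        opened, remaining = others[:-1], others[-1]
        if car in opened:
            return False, False, False          # game ends early
        return True, pick == car, remaining == car

    for name, game in [("knowing", knowing_monty), ("clumsy", clumsy_monty)]:
        reached = stay = switch = 0
        for _ in range(200_000):
            r, s, w = game()
            reached += r; stay += s; switch += w
        print(name, stay / reached, switch / reached)
    # knowing: ~0.001 stay vs ~0.999 switch
    # clumsy:  ~0.5 vs ~0.5 (conditioned on reaching the choice stage)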
In the original problem, Monty always opens a door to reveal a goat. This strongly implies that he knows which doors have goats.
Your alternative is different. If the doors opened are truly selected randomly, then the odds are high that he would reveal a car, especially in the 1000-door version. What are the odds that Monty could choose 998 doors randomly out of 1000 and NOT pick the one with the car?
If Monty doesn't know where the car is, then you're arguing with a completely different version of the game that could result in him opening the door with the car, so the strategy is going to be different.
Changing the scenario is exactly why I am unsure increasing the number of doors gives true intuition. In the original problem with the original scenario, people intuitively think switching doesn't matter, when it provably does. With 1000 doors and the Monty Fall scenario, again the intuition is wrong. So are we gaining true intuition for the Monty Hall scenario by expanding the setup, or is it just some subconscious Bayesian thinking accidentally steering us toward the right answer?