Just like everything else in regards to this issue - who knows? It's a noisy mess, and the only thing we can be sure of is that it's going to be a never-ending nightmare for a lot of people through absolutely no fault of their own, requiring constant vigilance for the rest of their lives, and the system that built and continues to support this monster isn't going away any time soon.
I think GP means fraud on Equifax's end (telling you you're compromised even though they shouldn't be able to tell you that, since the data is fake) to get you to buy into their protection service.
Antifragile is a popular work on risk; the description is simply accurate, and distinguishes the works she had read from Taleb's academic work.
Most academics who also write popular works aren't prickly about describing their popular works as such; I don't think Hawking would get upset at someone describing A Brief History of Time as pop cosmology.
The piece seemed like it was intended to explain the historical consensus on ethnic diversity in Roman Britain and how that consensus was achieved, which one can't really do reasonably in 140 characters. She mentions the Twitter drama for context, but I think she's trying to keep the debate focused on the point of factual contention rather than personal conflict.
If people want to get anything productive at all out of these constant Twitter flareups they need to allow for de-escalation.
"I think Prof Taleb did get annoyed when I said that I had read his ‘pop risk’ book, not the others. But I was actually trying to make clear that I had some knowledge of his work, though not a lot."
She not only mentioned that in the article, it's also the truth. And how is that a slight? It's bloody amazing that you can write books on risk that reach mass audiences. It's only a slight to Taleb because he prefers to think of himself as above the fray.
Also, even if you disregard the "pop risk books" qualification, her argument comes down to "what did you publish on Roman Britain", instead of engaging with Taleb's argument.
Taleb is disregarding the entire field of history because it is 'anecdotal' as opposed to 'statistical'. Since there are no 'statistics' from the period, he is acting like he knows as much as one of the top living scholars on Roman history. The written and archaeological record shows that auxiliaries were often stationed far from their home territories, but I guess that was all part of a 2000-year plot to make Britons accept immigrants.
No, all he's saying is that you cannot ignore genetics, and that genetics is more reliable than fragmentary historical records. And that those Roman auxiliaries would have been Mediterranean, not sub-Saharan Africans.
Jesus man, I'm in my early 20s and even I don't make sure to put my phone in my pocket just to go pick up the mail. What could possibly be so important that it couldn't wait a whole 120 seconds?
Nothing - but the same habit also compels me to carry my wallet and keys everywhere I go. It costs me nothing to carry them, and there have been times when I've returned to my apartment after stepping out for a minute to find myself locked out.
My door locks automatically when it's closed - so if I were to forget my keys, I would get locked out (which has happened, which is why I always carry them).
I do that with my wallet and keys. I rarely take my phone out of the house though, and even leave it switched off for days or weeks at a time. I'm not sure what I'm missing out on by not having it available 24/7.
It's a very liberating feeling once you have 'trained' your friends and family to know that you are not dead and do not hate them if a text goes unanswered for a few hours.
That is orthogonal to having a smartphone on you 24/7. I trained my friends and family not to expect instant replies (or me always picking up phone calls), but I also use the phone to read stuff, search for stuff, and note things down, so I do want it on me all the time.
I'm usually in the same place, and using my Linux desktop. I guess that's why I've always found it hard to get interested in phones: annoying devices with too-small screens, slow data entry and usually a locked-down OS with no updates.
If humans didn't have a capacity for reason, then observing stimuli wouldn't mean anything and we wouldn't be able to transform observations into concepts. His example was people accidentally making a camera obscura by having a hole in the wall. Without reason, you:
1. Wouldn't be able to conclude that it was even the hole in the wall causing the effect
2. Would be equally likely to attribute the image to the act of a divine power
And his failings on blank slate theory are not on the genetics side of things, but on human behavior. Much like the conception of the new socialist man, his thinking relies on behaviorism.
> So the creatures who invented the conceptual framework of reason and logic have no way to use reason and logic. Got it.
Correct. A perfect example would be casinos: a person knows the hard odds are against them, yet continues to gamble rather than using logic and reason. Perhaps our individual concepts of the words "reason" and "logic" are not synced, thus causing the disparity.
I must have misunderstood your original stance on this.
I believe we are actually in agreement on this topic.
What part of this video do you disagree with?
> If humans didn't have a capacity for reason, then observing stimuli wouldn't mean anything and we wouldn't be able to transform observation to concepts. His example was people accidentally making a camera obscura by having a hole in the wall. Without reason, you...
In his example of the hole in the wall, are you claiming it was not an accident, and that they were purposely looking for the intended output in that configuration? It seems the information Jacque was attempting to convey was that output is generated through observation and "accidental" tries. Edison and Tesla did the exact same thing, the only difference being that Tesla narrowed his "accident" tests down to fewer cases. It all comes down to if-statements. If a hand "accidentally" covers the hole, the picture disappears... then another if-statement executes in their mind: if another object obscures the frame, what happens? Everything evolves from these "accidents".
> 1. Wouldn't be able to conclude that it was even the hole in the wall causing the effect
> 2. Would be equally likely to attribute the image to the act of a divine power
- Yes, what we call Science
- Yes, what we call Religion
> And his failings on blank slate theory are not on the genetics side of things, but on human behavior. Much like the conception of the new socialist man, his thinking relies on behaviorism.
Behavior is mutable. Genetics plays a large factor as well.
We both seem to agree that the blank slate theory is blown out of the water.
His thinking can seem to rely on behaviorism, but he would more aptly refer to it as "operant conditioning".
This is a nice sentiment, but this rarely plays out in practice in my experience.
People use Python all the time to manipulate data sets >= 100GB in size, despite its speed failings at that scale. Why? Because Pandas is just so damn convenient. It would take me a grand total of 30 seconds to write Pandas code that reads a TSV and gives me the sum of two columns multiplied together, grouped by the day of a timestamp column. Doing that in C would take several orders of magnitude more time.
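For the curious, that 30-second job looks roughly like this (a minimal sketch; the file name and the column names ts/price/qty are made up for illustration):

    import pandas as pd

    # Hypothetical TSV with a timestamp column and two numeric columns.
    df = pd.read_csv("data.tsv", sep="\t", parse_dates=["ts"])

    # Sum of price * qty, grouped by the day of the timestamp.
    daily = (df["price"] * df["qty"]).groupby(df["ts"].dt.date).sum()
    print(daily)

The C equivalent means hand-rolling TSV parsing, date handling, and a hash table for the grouping before you even get to the arithmetic.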
It's a matter of optimizing people's time. You could probably spend several hours (or days) writing a C program for a specific problem. But if you can spend only 40% of that time writing the program and have it run only 20% slower, that's a definite win (these numbers are just an example).
In order to get permission to build a large building in central London, it's necessary to have a distinctive, unique design. The city does not want to look like America.
Hence, the Gherkin, Cheesegrater, Can of Ham, Walkie Talkie, Shard, Helter-skelter.
I live in a neighborhood full of slate tile roofs and tile replacement is a fairly routine occurrence. Are you positive that no tiles have been replaced in the past 100+ years?
This isn't true. All the saved are saints, even if they aren't canonized. It is reasonable to assume that they are in such a state and thus engage them in intercessory prayer like any other saint.
> The idea of AI picking up the biases within the language texts it trained on may not sound like an earth-shattering revelation
That's an understatement.
> But the study helps put the nail in the coffin of the old argument about AI automatically being more objective than humans
Again, this isn't AI, and anyone with knowledge on the subject has always known that a traditional machine learning algorithm is only as good as its training data.
This also seems like a case where the researchers are simply unhappy with the results they received, rather than being able to show that the results are wrong.
Everything you said is true, but I think it is dangerously missing the point.
Yes, everyone "in the know" knows that there's no such thing as "AI" right now, and what we actually have are just statistical models with "bias in, bias out". To us, this news is not surprising.
But that's not how these algorithms are being marketed, hyped, and sold, or how their decisions are being justified. Right now there are a lot of people selling "AI" as an unbiased and better decision-maker than humans. Where this gets really bad is when they start justifying the biased decisions of the machines with "it's an AI program, so this can't be bias: whatever icky things it decided must be the truth!" That's the real worry here: when the marketing, and hence the policy, doesn't match the reality, and starts amplifying and reinforcing the very problems it was supposed to solve.
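To make "bias in, bias out" concrete, here is a toy sketch (all data fabricated for illustration; real-world cases involve far subtler proxies than an explicit group flag):

    from sklearn.linear_model import LogisticRegression

    # Fabricated historical hiring data: [years_experience, group_flag].
    # Past decisions happened to favor applicants with group_flag == 1.
    X = [[5, 1], [6, 1], [4, 1], [5, 0], [6, 0], [4, 0]]
    y = [1, 1, 1, 0, 0, 1]

    model = LogisticRegression().fit(X, y)

    # Two equally qualified applicants, differing only in the group flag:
    print(model.predict_proba([[5, 1], [5, 0]]))
    # The predicted probabilities differ: the bias in the training data
    # survives, no matter how "objective" the algorithm itself is.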
This is how I like to think about it: the term Artificial Intelligence is like artificial sweetener. It isn't sugar.
Machine Learning is artificial intelligence. When/if computers (or whatever they evolve into) actually become intelligent, we'll have to drop the word artificial.
Artificial, adjective:
1. made by human skill; produced by humans (opposed to natural), e.g. artificial flowers.