Context: kitboga[1] is a streamer on Twitch.tv who makes a living by wasting scammers' time. He uses voice modulation and a variety of tools (such as a fake banking website) to increase his credibility in the eyes of the scammers he gets on the phone.
My take on this: if we want vision ML to succeed at recognition the way humans do, perhaps we need to pre-process and present visual information the same way the human visual system does? As far as I'm aware, our eyes give us a lot of information about lines and orientation that assists in recognizing shapes.
I'm not well-informed about the current state of visual recognition DL, perhaps someone who is can tell us more about whether that approach makes sense.
When you train a deep convolutional neural network, the first couple of layers appear to take on this role, detecting simple features like edges and textures, which the higher layers build on to recognize more complex objects.
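As a rough illustration, here's a minimal sketch (using numpy/scipy; the hand-coded Sobel kernels are a stand-in for the oriented-edge filters that trained first layers tend to converge on, and the random image is just a placeholder):

    import numpy as np
    from scipy.signal import convolve2d

    # Placeholder grayscale image in [0, 1]; in practice you'd load a real one.
    image = np.random.rand(64, 64)

    # Sobel kernels respond to vertical/horizontal intensity changes --
    # the same kind of oriented edge detectors early conv layers learn.
    sobel_x = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=float)
    sobel_y = sobel_x.T

    gx = convolve2d(image, sobel_x, mode="same", boundary="symm")
    gy = convolve2d(image, sobel_y, mode="same", boundary="symm")

    # Gradient magnitude: large wherever the image contains an oriented edge.
    edges = np.hypot(gx, gy)

The difference is that a CNN isn't given these kernels up front; it learns them from data, and the higher layers combine their responses into detectors for progressively more complex shapes.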
...which is exactly what "arguing from a position of ignorance" means. Once you attempt to verify medical advice (in good faith), you are no longer ignorant.
That is a ridiculous straw man, and I'm pretty sure you're aware of it. At some point, trust is involved. You weigh the credibility of authentication guarantees against the level of trust required for the transaction you're making.
It's funny that in a topic complaining about a company that spies on its users, someone brings up LastPass, whose ToS says outright that they monitor all your browser traffic and share that info with marketing partners.
[1] https://www.twitch.tv/kitboga