Hacker News | sdl's comments

Yes, in that case it's often called Photo-TAN or QR-TAN. See https://en.wikipedia.org/wiki/Transaction_authentication_num...

Previously there were also so-called "flicker TAN" approaches: https://de.wikipedia.org/wiki/Transaktionsnummer#chipTAN_com...



So basically the map projection problem [1] in higher dimensions?

[1] https://en.m.wikipedia.org/wiki/Map_projection


Worse. The map projection problem only means that no mapping can preserve all of the intrinsic geometry at once: angles, distances, and such.

Violating the topology means a surface is wrongly mapped onto one that intersects itself: think Klein bottle.

https://en.wikipedia.org/wiki/Klein_bottle


Can you share an actual example demonstrating this potential pathology?

Like many things in ML, this might be a problem in theory but empirically it isn’t important, or is very low on the stack rank of issues with our models.


A fold means two different regions of topology get projected across each other.

It's a problem for the simplest of reasons: information is lost. You cannot reconstruct the original topology.

In terms of the model, it now can't distinguish between what were completely different regions.

From the Klein bottle perspective: a shape that needs 4D to avoid self-intersection gets projected into 3D. On most of the bottle there is still a 1-to-1 topological mapping between the 3D and 4D versions.

But where the two surfaces intersect, there is no way to distinguish between previously unrelated information. The model won't be able to do anything sensible with that.

TL;DR: We don't like folding.
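
As a toy illustration (my own minimal NumPy sketch, not from any of the models discussed): a fold is a non-injective map, so two distinct inputs collapse onto one output and nothing downstream can tell them apart again.

    # Sketch: a "fold" as a non-injective projection. Two distinct inputs
    # land on the same output, so the original regions cannot be recovered.
    import numpy as np

    def fold(x):
        # Folds the real line at 0: the negative half lands on the positive half.
        return np.abs(x)

    a, b = np.array([-2.0]), np.array([2.0])
    print(fold(a), fold(b))  # both print [2.] -- the two regions collide
    # Any attempted inverse must pick one preimage; the other is lost for good.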


And in Reinforcement Learning:

POET (Paired Open-Ended Trailblazer): https://www.uber.com/en-DE/blog/poet-open-ended-deep-learnin...

SCoE (Scenario co-evolution): https://dl.acm.org/doi/10.1145/3321707.3321831


Evil and careless can be one and the same. They (FB) could not care less about the consequences of their actions on other people's lives.

"The opposite of love is not hate, it's indifference." - Elie Wiesel


In 2015, when friends asked me if I was worried about self-driving cars and AI, I answered: "I'll start worrying about AI when my Tesla starts listening to the radio because it's bored." ... That didn't take too long.


Maybe that's why my car keeps turning on the music when I didn't ask -- I had always thought Tesla devs were just absolute noobs when it came to state management.


With state management implemented as a sufficiently sophisticated ML model, it stops being clear whether the noob is on the outside or the inside of the system.


I like the Bayesian Surprise definition for this. It's not about predicting the exact next state of the world (or the next frame of the noisy TV) but about how much the next state changes your model of the world.

https://papers.nips.cc/paper_files/paper/2005/hash/0172d289d...
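
For concreteness, a toy sketch of my own (not code from the linked paper): surprise measured as KL(posterior || prior) over a discrete set of candidate world models. A noisy-TV frame that is unpredictable but uninformative barely moves the belief, while an informative observation does.

    # Bayesian surprise = KL(posterior || prior) over the model of the world.
    import numpy as np

    def kl(p, q):
        # Kullback-Leibler divergence KL(p || q) for discrete distributions.
        return float(np.sum(p * np.log(p / q)))

    def posterior(prior, likelihood):
        unnorm = prior * likelihood
        return unnorm / unnorm.sum()

    prior = np.array([1/3, 1/3, 1/3])            # belief over three world models

    noise = np.array([0.33, 0.34, 0.33])         # noisy TV: frame fits all models equally
    informative = np.array([0.05, 0.05, 0.90])   # frame strongly favouring model 3

    print(kl(posterior(prior, noise), prior))        # ~0: unpredictable, yet not surprising
    print(kl(posterior(prior, informative), prior))  # large: the belief actually changed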


I spent some (too much) time trying to get pretty much the same thing running using GOW [1]. It was quite a bit harder than I expected, requiring an HDMI dummy plug to get the X server config right, etc.

[1] https://github.com/games-on-whales/gow


Good call out - this does require a dummy plug as well.


See e.g. Ian Osband's work (he calls it 'risk' vs. 'uncertainty') for some good examples that help in differentiating this: https://scholar.google.com/citations?view_op=view_citation&h...
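
To make the distinction concrete, a toy sketch of my own (not Osband's code): aleatoric 'risk' is irreducible outcome noise that remains even when the model is known exactly, while epistemic 'uncertainty' lives in the unknown parameters and shrinks as data arrive.

    # 'Risk' vs 'uncertainty' with a Bernoulli coin of unknown bias p.
    import numpy as np

    rng = np.random.default_rng(0)
    true_p = 0.7

    a0, b0 = 1.0, 1.0                        # Beta(1, 1) prior over p
    for n in (10, 100, 1000):
        heads = int((rng.random(n) < true_p).sum())
        a, b = a0 + heads, b0 + (n - heads)  # Beta posterior after n flips
        post_std = np.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
        print(f"n={n:4d}  epistemic std of p: {post_std:.3f}")  # shrinks with n

    # Aleatoric risk does not shrink: even with p known, one flip has variance p(1-p).
    print("irreducible outcome variance:", true_p * (1 - true_p))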


"Probabilistic Machine Learning" by Kevin Murphy is all you need. ... only half joking ;) https://probml.github.io/pml-book/

