For all you folks who aren't ace programmer types, the Orange3[1] platform gives you a very miniaturized[2] ability to turn out these sorts of visualizations very rapidly. It's not the most stable thing in the world, but the node-based ML workflow designer is worth the price of admission all by itself.
This is some very interesting network-topology/graph-theory work, and the visualizations surface some genuinely interesting information.
Also I'd like to commend the author on his choice of alternating call center hold music and synthwave. I found it to go really well with the video and the graphical representation.
> "this game is played by ignoring links in the References and See Also sections of articles, so when I constructed the graph I also ignored links in these sections, since these are not necessarily a part of the article" [8:22]
Surely there are also some fascinating topologies amongst the references themselves?
Essentially a very long meandering joke that has all sorts of side-tracks and irrelevant bits before reaching the punchline. The British comedian Ronnie Corbett told these regularly on TV [0] in the 70s/80s.
> A "Shaggy Dog" Story is a plot with a high level of build-up and complicating action, only to be resolved with an anti-climax or ironic reversal, usually one that makes the entire story meaningless. The term comes from a type of joke (called "gildersome" in The Meaning of Liff) that worked the same way—a basic premise, a long amount of buildup, and a deliberately underwhelming punchline.
I manage to keep my viewing of shaggy-dog stories on YouTube very infrequent. For example, there are many channels I have come to trust.
I think I'm going to stop following links from HN into YT though because most of the ones I've followed till now are long mostly-pointless expositions.
I've used my same wiki handle since 2003, and actually knew what the #1 linked-to article was/would be (don't want to spoil, but it's obvious if you're a frequent visitor to wikipedia).
Half-way through OP's video, I was reminded of this really fun mapping utility, which describes itself well: sixdegreesofwikipedia.com/
Thanks. I almost want to try this as a small project, just to add degrees of separation with no target.
It would be nice to put in a single article or word and see where it leads at 1, 2, 3... degrees out, or to see everything that is 1, 2, 3... hops away and then jump over to those articles.
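That "everything N hops away, no target" idea is just a breadth-first search that records each article's layer. Here's a minimal sketch; the `links` dict is a toy stand-in (in practice you'd pull outgoing links from the MediaWiki API or a dump), and `bfs_layers` is a hypothetical helper name, not anything from the video:

```python
from collections import deque

def bfs_layers(links, start, max_depth=3):
    """Group articles by link distance from a start article.

    `links` maps each article title to the titles it links out to.
    Returns {0: {start}, 1: {...}, 2: {...}, ...} up to max_depth.
    """
    seen = {start}
    layers = {0: {start}}
    frontier = deque([(start, 0)])
    while frontier:
        page, depth = frontier.popleft()
        if depth == max_depth:
            continue  # don't expand beyond the requested radius
        for target in links.get(page, ()):
            if target not in seen:  # keep only the first (shortest) distance
                seen.add(target)
                layers.setdefault(depth + 1, set()).add(target)
                frontier.append((target, depth + 1))
    return layers

# Toy link graph standing in for real Wikipedia link data.
toy = {
    "Graph theory": ["Leonhard Euler", "Vertex (graph theory)"],
    "Leonhard Euler": ["Basel", "Graph theory"],
    "Vertex (graph theory)": ["Graph theory"],
    "Basel": ["Switzerland"],
}
print(bfs_layers(toy, "Graph theory"))
```

Each layer is exactly the "what is 1, 2, 3... away" view, and re-running from any article in a layer gives the "jump over to those" step.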
There are so many uncomfortable questions I want to ask from this data, like which editors contributed to two different language articles which disagree about the same historical event.
Perhaps it is a good thing not to have all data at our fingertips…
[1] https://orangedatamining.com/
[2] The Wikipedia extension in Text limits each search result to 25 articles, so sucking in all of Wikipedia is... well, Orange's text analytics crashes when I so much as look at it sideways with a null character, so let's not think about what would happen.