Would this not require quantifying real, objective truth? How does one compute truth without relying on human input? (Which instead trends towards truthiness/the "feeling" of whether something is true.)
I am not being dismissive, I genuinely would like to know.
No - consider it mostly a design and infrastructure problem.
When looking at social media,
it’s part public forum that needs some type of discovery/filter mechanism, and part a tool for individual users and communities to communicate and collaborate.
Current social media networks are largely skewed towards manipulative design that optimizes for data mining and addictive, gamified systems and interfaces.
Sure, you could try to build a social network for open science and peer review of research projects, but the bar is set so low right now that any interface improvement that enables a more comprehensive search/discover/filter system over datasets would be a massive improvement over what we have now.
Information needs to be discoverable, but people need to be free from propaganda.
> When looking at social media, it’s part public forum that needs some type of discovery/filter mechanism, and part a tool for individual users and communities to communicate and collaborate.
> Current social media networks are largely skewed towards manipulative design that optimizes for data mining and addictive, gamified systems and interfaces.
I think you're spot on. Even at the individual level we have to do heavy noise filtering to reach the signals that matter to us. We have heuristics that draw us towards people we find useful and away from those who are a mere nuisance. Social media is one giant noise machine that actively throws bullshit at us: ads are 99.99% noise considering their usability/mental-processing cost; ranking algorithms make the signals that keep you engaged overly salient compared to their natural incidence rates; our self-association is heavily distorted, with content from friends who rile us up shown as often as content from the friends we like; etc.
> Information needs to be discoverable, but people need to be free from propaganda.
I think the only solution to this is breaking the recommendation engine away from the rest of the product and opening it up to competition. "Use Facebook with our RecommendSmart, scientifically proven to make you less depressed than the default one." "Me and my friends use Twitter with SocratesSort, has been great at starting deep conversations on topics we care about, totally troll free."
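A pluggable recommender could be as simple as an interface the platform exposes and third parties implement. A minimal sketch of that idea in Python - the names `Recommender` and `ChronologicalRecommender` are invented for illustration, not any real platform's API:

```python
from abc import ABC, abstractmethod

class Recommender(ABC):
    """Hypothetical plug-in point: the platform hands over candidate
    post IDs; a competing third-party engine returns them reordered."""

    @abstractmethod
    def rank(self, user_id: str, candidates: list[str]) -> list[str]:
        ...

class ChronologicalRecommender(Recommender):
    """Trivial competing engine: newest first, assuming post IDs
    sort by creation time."""

    def rank(self, user_id: str, candidates: list[str]) -> list[str]:
        return sorted(candidates, reverse=True)

# The platform would call whichever engine the user has chosen.
feed = ChronologicalRecommender().rank("alice", ["p1", "p3", "p2"])
# feed == ["p3", "p2", "p1"]
```

The point of the abstract base class is that "RecommendSmart" and "SocratesSort" would just be alternative subclasses the user can swap in.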
You can’t quantify or even know “objective truth”. We can get really close for some things, but knowing objective truth is akin to being a god. At the end of the day, everything is a model relying on some axioms.
Intersubjective truth, however, can be reached, and it is what we rely on most (a dollar bill is a fancy piece of cloth, but we all agree it’s worth a dollar). It is reached through consensus-making.
Gathering a consensus is traditionally done through government or hierarchy, ultimately leading up to a single human’s or a single group’s input as “truth”. This method has been steadily disintegrating as communication tech gets better (printing press -> mobile internet).
So the solution, to me, is to create consensus systems that rely on the input of many - use the law of large numbers, economic incentives, and the kaleidoscope of subjective truths to reach the most accurate objective truth we can.
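The law-of-large-numbers part of this can be sketched concretely: if each person's subjective estimate is the true value plus independent noise, the average of many estimates converges toward the true value. A toy illustration (the function name, noise model, and numbers are all hypothetical):

```python
import random

def crowd_estimate(true_value, n_people, noise_sd=10.0, seed=42):
    """Average n_people independent noisy guesses of true_value.

    Toy model: each person's subjective 'truth' is the real value
    plus Gaussian noise; the mean tightens as n_people grows.
    """
    rng = random.Random(seed)
    guesses = [true_value + rng.gauss(0, noise_sd) for _ in range(n_people)]
    return sum(guesses) / len(guesses)

small = crowd_estimate(100.0, 10)        # a handful of voices
large = crowd_estimate(100.0, 100_000)   # the "kaleidoscope"
# The large crowd's average lands very close to 100.
```

The caveat, of course, is that this only works when errors are independent - which is exactly what manufactured consensus and echo chambers break.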
It's true that society uses consensus as a proxy for truth. Even when scientists make a new discovery, it isn't considered "truth" until they convince the community – sometimes even taking decades!
Sadly, this consensus can be manufactured by those in power. Censorship helps to a surprising degree, for example. Social media sock puppets, astroturfing, bribery, the list goes on.
How do we fight against manufactured consent? Is it even possible at this point?
Yeah this is the crux of it isn't it? And it's not just the problem of manufactured consent either, there is also the problem of mistaken consent that grows organically out of human frailties like our cognitive biases and appetites for drama.
Yes thank you. Between this and your other comment to me in this thread I think you've really gotten to the heart of it. I appreciate you putting into concise words what was rumbling around in my head when I first asked the question.
But what then of the tyranny of the majority, with lemming mentality reinforcing itself? This cannot be known as anything but an agreed truth, or one applicable only to a greater objective, but never The Objective Truth. We need to embrace conflict and mutual exclusion to recognize the more nuanced aspects which are relevant and "truthful" for a minority too.
I think this is one of "the big questions" right now.
Philosophy tells us that you can't compute truth without relying on axioms. But computer science tells us that even if we accept basic axioms, deriving all truths from them quickly becomes intractably complex.
I suspect that this all leads us to needing to rely on coarse human input as "axioms". Which of course leads to the issue of which humans we rely on as stalwarts of the truth. It's a bit of a chicken-and-egg problem.
My hope is that studies like these will tease out the nuances of networks so that we can engineer networks that nudge the better truth-telling nodes toward more centrality, and that gradually we'll master the art of building intelligent networks. After all, biology did it with the human brain.
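One concrete reading of "nudging nodes toward centrality" is link-based centrality: nodes that many others point to accumulate influence, so rewiring links toward reliable nodes raises their rank. A toy PageRank-style sketch in plain Python - the graph and node names are invented; a real system would run on the actual follow/link graph:

```python
def pagerank(graph, damping=0.85, iters=100):
    """Toy PageRank: graph maps node -> list of outgoing links.

    Each iteration redistributes rank along outgoing links, with a
    (1 - damping) teleport share keeping every node reachable.
    """
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, outs in graph.items():
            if not outs:  # dangling node: spread its rank evenly
                for m in nodes:
                    new[m] += damping * rank[n] / len(nodes)
            else:
                for m in outs:
                    new[m] += damping * rank[n] / len(outs)
        rank = new
    return rank

# Hypothetical network: most accounts link to the "factchecker" node.
graph = {
    "factchecker": ["a"],
    "a": ["factchecker"],
    "b": ["factchecker"],
    "c": ["factchecker", "a"],
}
ranks = pagerank(graph)
# "factchecker" ends up with the highest centrality score.
```

In this framing, "engineering the network" means shaping which links exist, and the centrality scores follow from the structure.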
> Philosophy tells us that you can't compute truth without relying on axioms.
Philosophy tells us no such thing. It is not the province of philosophy to give us the final word on what is what, and asserting such a claim without grounding it in any empirical exploration is mere dogmatism.
The computationalist model of "truth" (by which I think you mean reality) is dying. Embodied-embedded cognition offers an alternative in which an intelligent system has to be deeply embedded within all the other networks it interacts with, and its adaptivity and constraints define it more than anything. There is no making an intelligent network in a test tube (talking about general intelligence).
> After all, biology did it with the human brain.
Biology might have put the required machinery in place, but machinery is by no means a guarantee of either intelligence or adaptivity. You could "engineer" your own network - your body-brain - to get better at conforming to reality, which is called self-transcendence and cultivating wisdom, and arguably the same principles would work for our social networks, artificial networks, and us alike.
But going back to the notion of embeddedness: can a social network that ultimately aims to conform to the norm of making more money be wiser? Can a wiser social network really out-survive a dumber one? Aren't both ultimately embedded in the collective intelligence that is our economy? If so, both will be constrained by the limits of the intelligence/wisdom of that economy, and unless a bunch of benevolent rich people implement the engineered wiser social network, gift it to humanity, and get humanity to actually use it, there is no such place - i.e., it is a utopia.
Regarding the quote: it’s Gödel's incompleteness theorem that proves the ever-present need for more “axioms” - and it sits at the intersection of philosophy and math.
The first incompleteness theorem says that even with axioms you can't "compute" all truths of a sufficiently powerful formal system. That is a far cry from "we need axioms to compute truths".
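For reference, the theorem is usually stated roughly as follows (informal; $\mathsf{Q}$ here stands in for basic arithmetic, e.g. Robinson arithmetic):

```latex
T \text{ consistent, effectively axiomatized, } T \supseteq \mathsf{Q}
\;\Longrightarrow\;
\exists\, G_T :\ T \nvdash G_T \ \text{ and }\ T \nvdash \lnot G_T
```

So more axioms are always available to add ($T + G_T$ is strictly stronger), but no consistent, effectively axiomatized extension ever captures all arithmetic truths - which is the gap between the two readings above.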
It’s called a library in my idea of it. The sum of human knowledge, curated by experts in every field. I don’t think we can compute that last bit. We may not have to.
Maybe it's not necessary to define truth for this. Considering metrics that you want to influence might be a better way - hate crime arrests in locales, negative/divisive message content, donations/volunteering for positive causes, etc. But I'm a pessimist and I think moving these metrics in the right direction would adversely affect the $$$ metric that shareholders care about, so it's not going to happen.
It's a very big question - we almost need some level of detail about the commenter to understand their expertise, background, experience, abilities, etc. - but once again, how would you quantify it? Not everyone's voice should be considered equal on every topic.
> Interestingly, unopened packs seem to fetch a fair amount, certainly more than their “expected value”
Yeah, it's a phenomenon seen in a few different CCG markets, actually. Roughly two years after a set's release, the sealed-box price and the EV start to drift apart.
IMO you'd get a lot of mileage from the "Reserved List" or "Vintage Staples", but the site is decent for general price lookups as well.
Feel free to shoot follow-up questions. A few cards went from trash to treasure since you've been out of the game - Lion's Eye Diamond is probably the most extreme example, but basically anything on the Reserved List has gone insane in the past few years.
Even "pretty bad" condition Beta is still worth a pretty penny. Lots of people just want to have complete sets, or play "1994" League which allows only cards from that era. There will definitely be a market if you do indeed have Beta :)
They are dual lands with basic land types and no abilities (other than the mana abilities they inherit from their basic land types), which also means no drawbacks. They're just "Swamp Forest", "Mountain Forest", "Island Swamp", etc. See here:
And they are literally what their type lines say. So you basically get two lands with a basic type for the price of one (i.e. one card, or one land drop: card and tempo advantage, 2-in-1). Their only limitation is that they are not basic lands, so they are subject to the per-deck copy restriction (4 in most formats).
Every dual land created subsequently has some kind of drawback or limitation (other than the copy restriction): entering the battlefield tapped, paying some amount of life or taking damage, sacrificing a permanent, discarding a card (I think), being printed as a double-faced card, and so on.
The original dual lands are very powerful in the game and haven't been reprinted since Revised so there aren't a lot of them. And people pay that much for them.
Modern dual lands come with restrictions/penalties for taking advantage of their dualness (time delays, damage, choosing which version to use when played, etc) and they are often some of the most valuable cards in a set.
Anecdotal: the on-ramp to Gloomhaven is quite smooth, even compared to many "simpler" games. It feels like an MVP design of a game is introduced initially, and then, as you play, more features and complexity are added.
I highly recommend it, even as a solitaire game. There is a digital version on Steam as well, which approximates the mechanics and gameplay well.
I think I played Gloomhaven at varying levels of wrong (monotonically decreasing) for the first 15 or so scenarios. The classics - wrong attack-modifier deck, "elements only move up 1 when you generate", "monsters cannot move through each other", "that monster has flying" - are a smattering of my misread rules.
Who cares? The game was still fun, and now we play more correctly (God knows what I haven't noticed I'm doing wrong yet). I'm a firm believer in playing early and learning from mistakes as you go.
Yeah watching a section of code over time was what I thought this was going to be initially. That sounds really useful for sharing the story of "how we got here" to new devs.
It's scary how much this comment applies to my current job. Literally just spent today discussing with entire engineering org how to steer away from this behavior.