What makes the simulation we live in special compared to the simulation of a burning candle that you or I might be running?
That simulated candle is perfectly melting wax in its own simulation. Duh, it won't melt any in ours, because our arbitrary notions of "real" wax are disconnected between the two simulations.
If we don't think the candle in a simulated universe is a "real candle", why do we consider the intelligence in a simulated universe possibly "real intelligence"?
>If we don't think the candle in a simulated universe is a "real candle", why do we consider the intelligence in a simulated universe possibly "real intelligence"?
I can smell a "real" candle, a "real" candle can burn my hand. The term real here is just picking out a conceptual schema where its objects can feature as relata of the same laws, like a causal compatibility class defined by a shared causal scope. But this isn't unique to the question of real vs simulated. There are causal scopes all over the place. Subatomic particles are a scope. I, as a particular collection of atoms, am not causally compatible with individual electrons and neutrons. Different conceptual levels have their own causal scopes and their own laws (derivative of more fundamental laws) that determine how these aggregates behave. Real (as distinct from simulated) just identifies causal scopes that are derivative of our privileged scope.
Consciousness is not like the candle because everyone's consciousness is its own unique causal scope. There are psychological laws that determine how we process and respond to information. But each of our minds is causally isolated from the others. We can only know of each other's consciousness by judging behavior. There's nothing privileged about a biological substrate when it comes to determining "real" consciousness.
That's a fair reading but not what I was going for. I'm trying to argue for the irrelevance of causal scope when it comes to determining realness for consciousness. We are right to privilege non-virtual existence when it comes to things whose essential nature is to interact with our physical selves. But since no other consciousness directly physically interacts with ours, it being "real" (as in physically grounded in a compatible causal scope) is not an essential part of its existence.
Determining what is real by judging causal scope is generally successful but it misleads in the case of consciousness.
I don't think causal scope is what makes a virtual candle virtual.
If I make a button that lights the candle, and another button that puts it out, and I press those buttons, then the virtual candle is causally connected to our physical world.
But obviously the candle is still considered virtual.
Maybe a candle is not as illustrative, but let's say we're talking about a very realistic and immersive MMORPG. We directly do stuff in the game, and with the right VR hardware it might even feel real, but we call it a virtual reality anyway. Why? And if there's an AI NPC, we say that the NPC's body is virtual -- but when we talk about the AI's intelligence (which at this point is the only AI we know about -- simulated intelligence in computers) why do we not automatically think of this intelligence as virtual in the same way as a virtual candle or a virtual NPC's body?
Yes, causal scope isn't what makes it virtual. It's what makes us say it's not real. The real/virtual dichotomy is what I'm attacking. We treat virtual as the opposite of real, therefore a virtual consciousness is not real consciousness. But this inference is specious. We mistake the causal scope issue for the issue of realness. We say the virtual candle isn't real because it can't burn our hand. What I'm saying is that, actually, the virtual candle can't burn our hand because of the disjoint causal scope. The causal scope doesn't determine what is real; it just determines the space and limitations of potential causal interactions.
Real is about an object having all of the essential properties for that concept. If we take it as essential that candles can burn our hand, then the virtual candle isn't real. But it is not essential to consciousness that it is not virtual.
> If we don't think the candle in a simulated universe is a "real candle", why do we consider the intelligence in a simulated universe possibly "real intelligence"?
A candle in Canada can't melt wax in Mexico, and a real candle can't melt simulated wax. If you want to differentiate two things along one axis, you can't just point out differences that may or may not have any effect on that axis. You have to establish a causal link before the differences have any meaning. To my knowledge, intelligence/consciousness/experience doesn't have a causal link with anything.
We know our brains cause consciousness the way we knew in 1500 that being on a boat for too long causes scurvy. Maybe the boat and the ocean matter, or maybe they don't.
I think the core trouble is that it's rather difficult to simulate anything at all without requiring a human in the loop before it "works". The simulation isn't anything (well, it's something, but it's definitely not what it's simulating) until we impose that meaning on it. (We could, of course, levy a similar accusation at reality, but folks tend to avoid that because it gets uselessly solipsistic in a hurry)
A simulation of a tree growing (say) is a lot more like the idea of love than it is... a real tree growing. Making the simulation more accurate changes that not a bit.
This comment here, I think, has the answer. The initial returns from using the more functional and mathematical view of programs are smaller than those of the traditional approach. But the former scales better as you invest more time in it, and eventually can give you much more.
> It's pretty bizarre given there are few laws which seem more universally opposed by the open source community, and seems to me quite at odds with the values of FOSS - as is any DRM scheme.
Even if we don't like a law, it's still a law, and you live in a reality where it applies, so you should use it when needed. I don't find it bizarre at all.
I think you can prove that you can't just map the edge weights (so that an edge's new weight depends only on its original weight) in a way that makes all weights positive and also preserves all shortest paths.
You could try something more complex, e.g. where the new weight of an edge depends on some more edges around it. But then the cost of doing so may be close to that of the Bellman-Ford algorithm itself.
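For what it's worth, the "more complex" mapping does exist: Johnson's algorithm reweights each edge using per-vertex potentials, w'(u, v) = w(u, v) + h(u) - h(v), which makes every weight non-negative and preserves all shortest paths. But computing the potentials takes a full Bellman-Ford pass, which is exactly your cost point. A minimal Python sketch (the representation and names are mine):

    # Johnson-style reweighting: h is computed by Bellman-Ford from a
    # virtual source connected to every vertex with weight 0 (simulated
    # here by initializing every potential to 0).
    def reweight(vertices, edges):
        """edges: list of (u, v, w). Returns reweighted edges with all
        weights >= 0, or None if there is a negative cycle."""
        h = {v: 0 for v in vertices}
        for _ in range(len(vertices)):
            changed = False
            for u, v, w in edges:
                if h[u] + w < h[v]:
                    h[v] = h[u] + w
                    changed = True
            if not changed:
                break
        else:
            return None  # still relaxing after |V| rounds: negative cycle
        return [(u, v, w + h[u] - h[v]) for u, v, w in edges]

You'd then run Dijkstra on the new weights and recover a true distance as d(u, v) = d'(u, v) - h(u) + h(v).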
Assuming a directed graph: at the entry to each negative edge, collect all the nodes reachable in one extra hop from that edge and compute the total cost. Add all that end up with a positive weight as edges from the source of the negative edge. If any still have a negative weight, expand further. Once you're done, drop all the negative edges. This essentially replaces every negative edge with a set of 'shortcuts' that encode all the places the negative edge could have gotten you, such that the shortcuts have positive cost. Then use regular Dijkstra on the augmented graph.
This obviously has potentially exponential complexity, though it will work if there isn't too much negativity. It should at least be enough to convince you that preprocessing is possible in the absence of negative-cost loops.
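A rough Python sketch of that expansion (the names are mine, it assumes no negative- or zero-cost cycles, and the exponential blow-up conceded above is left as-is):

    import heapq

    def add_shortcuts(adj):
        """adj: {u: [(v, w), ...]}. Replaces negative edges with
        non-negative 'shortcut' edges as described above."""
        new_adj = {u: [(v, w) for v, w in nbrs if w >= 0]
                   for u, nbrs in adj.items()}
        for u, nbrs in adj.items():
            # expand each negative edge out of u until the running total
            # turns non-negative, then freeze it as a shortcut from u
            stack = [(v, w) for v, w in nbrs if w < 0]
            while stack:
                v, total = stack.pop()
                if total >= 0:
                    new_adj[u].append((v, total))
                else:
                    for x, w in adj.get(v, []):
                        stack.append((x, total + w))
        return new_adj

    def dijkstra(adj, src):
        dist, heap = {src: 0}, [(0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in adj.get(u, []):
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(heap, (d + w, v))
        return dist

One caveat the sketch makes visible: a node whose only cheap route ends partway through a still-negative prefix never gets a shortcut, so distances are only preserved to nodes reachable at non-negative running cost.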
> In my country, Sweden, such actions at a state-funded institute fall under the umbrella of “kränkande särbehandling” (victimization) and are unlawful.
Does this mean that Sweden is a good place for people who are, like the author, critical of the wokeness and "cancel culture"? I used to think they're the ones setting the trend there. I don't understand anything now.
> If I think my neighbor's a dick, I don't need to hold a trial by jury to decide if I'm allowed to disinvite him
In this case you are exempt from that principle because following it would cost you disproportionately more than the possible damage done if you're wrong. Not because universal justice principles somehow become inapplicable outside the courtroom.
It's more like: you'd win a Turing Award by finding a strategy to parallelize this problem, as you'd be able to use that approach to parallelize all problems in P, proving NC = P.
This applies both ways: you'll win a Turing Award if you prove NC ≠ P, which is kind of what you said; at least, that's the best way I see of reading your first and the few following messages.
Also: x86 is very much statically typed. Your types are 8-, 16-, 32- and 64-bit words. Each machine instruction knows the exact types of its operands, so there's no overhead in determining the types of values at runtime (as you'd need to do with dynamic typing).
What is "untyped" exactly? Google tells me it's the same as "dynamically typed".
Remember the context. The thread started with the claim that the Erlang VM makes it impossible to make code faster, due to the need to accommodate Erlang's dynamic typing features and do runtime checks for them. The top comment said that dynamic typing per se can't be the obstacle to programs being faster, and gave x86 as an example of a presumably dynamic VM that does not limit the speed of programs.
So "dynamically typed" from the article here meant that there's some runtime checks you can't unsubscribe from, and due to them there's a cap on how much faster you can make your code. It applies to some extent to x86, there's also checks there like segfault for example, and maybe you could save some space on silicon if you removed them and required the machine code to be verified to never cause segfaults. But I argue that this is too much of a stretch. Runtime checks that x86 performs are negligibly simple, and x86 is not a valid counter-example for the article's point. In my view the article's point is valid.
> 1/ there is no static checker
"return 0" is the static checker for x86 machine code.
But seriously: this is not a necessary condition for something to be statically typed, it can't be.
> 2/ if there was, the interpreter would run the failing code anyway
Failing code does not exist. All code is valid.
Haskell is undoubtedly statically typed. But division by zero will still cause a runtime error.
> 3/ memory can be integers, floats, machine instructions all at the same time
OK, disregard what I said about machine words. The only datatype in x86 is the byte. Some operations use 2 bytes, some use 4. They may interpret them differently (some see an int, some a float), but that changes nothing, since both the int and float operations are operations on bytes.
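To make that concrete (Python purely for brevity; the snippet is mine, not from the thread): the same four bytes, read once as an integer and once as a float.

    import struct

    # the four bytes a 32-bit float store would leave in memory
    raw = struct.pack("<f", 1.0)
    print(raw.hex())                    # 0000803f
    print(struct.unpack("<I", raw)[0])  # 1065353216: same bytes as a uint32
    print(struct.unpack("<f", raw)[0])  # 1.0: same bytes as a float32

An integer add and a float add applied to those bytes are both just operations on the bytes; the "type" lives in the instruction, not in the memory.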
Spent so long trying to get quote formatting right on mobile that I didn't notice I'd replied to the OP, not the subthread. Bad times.
Whether x64 is typed or untyped feels like the start of an argument about whether xmm registers have the same type as stack memory, which is kind of interesting but orthogonal to Elixir.
I would agree that static, dynamic and untyped are distinct things. You can compile from a source language with any type system to a target language with any type system, though I think there may be edge cases around excessively constrained languages.
As an extreme example, the target machine says nothing about whether you can constant fold 1+2 into 3 in the source language, but the source language can definitely block or enable that minimal optimisation.