benchmarkist's comments

Humans currently produce industrial poisons on an unimaginable scale so you're not far off.


Yet another instance of how people view the biosphere as a resource to be exploited for their own monetary profit and short-term benefit. That type of logic and thinking is a dead end. I've never had a silk robe and I doubt I would be any better off if I did have one. I'd much prefer clean air, water, and unpoisoned land for growing nutritious food.


Well said, friend. That is an important aspect of the Way. We must be gentle with our mother Earth in order to compassionately harmonize with future generations' happiness.

But what I really, really want is hemp clothing. When I was in Atlanta in the 90s, there was a hemp shop, where I bought the best jeans, shorts, and hats I've ever had. They got nothing but more comfortable over time, breathed excellently and never molded (not that I tried). I could imagine that they would've lasted my lifetime, or even more.

Sadly, they shrunk terribly over time, and by that I mean I "grew" out of them :-)

Now, all I've seen on the net are 60/40 cotton blends, IIRC, not the 100% hemp that was available back then. It doesn't look like the new cannabis acceptance here in America has produced hemp cloth, but I could be wrong. I imagine growers are more focused on the likely more lucrative drug version of the plant. That said, it's been years since I searched the net for sellers, but my daughter has the skills and the 503a to make me a sweet kilt and jumpsuit and long-sleeve, long-hanging shirt, should the funds come through.

"Hemp for victory!" --WWII American poster slogan


Could it be that it is more sustainable to have them as blends?


I doubt it. I'm pretty sure the % cotton would wear out much sooner, but that's just my intuition.


The silkworm industry is the definition of long-term sustainable exploitation of the biosphere. It has been done for thousands of years, with minimal resources and no pollution, and it captures CO2... It only takes a few worms and some leaves! The chemical alternatives are polluting and poisoning land, air, and waters!


It takes a few worms and lots of leaves. I've got a friend in that industry, and the partnership with Brazil is flourishing.


How about we leave the silk worms alone.


Silk worms would starve to death if left alone.


Did they agree to be bred and boiled, and then to have their offspring bred and boiled too? This is crazy. We all know the reason we feed them is our own interest.


Of course they agreed. It is like little boys and cutting digs. They just love that shit!


You are so fun, man. I wonder why you don't use your real account to gather lol points.


[dead]


> Domestic silk moths are entirely dependent on humans for reproduction, as a result of millennia of selective breeding.

https://en.wikipedia.org/wiki/Bombyx_mori


Pretty rich coming from someone sending virtue signals from a device I presume isn’t constructed from twigs and berries.

Must be difficult as a person who doesn't exploit the biosphere... you'll have to let us know how you mastered photosynthesis.


Can't achieve perfection, so goodness is out of the question?


I don't eat meat and don't drive cars. How about you?


As a level 99 druid, I have not moved from this spot in over 300 years. My clothes consist of centuries of accumulated dust. I eat meat, but only beings that choose to end their existence in my perpetually open mouth. I breathe four times a year. I reproduce by sporulation, prodigiously; my offspring are responsible for the ozone layer. I convey this information to you by way of telepathy through an intermediary, Shawn of Ohio, an intern in IT who maintains his WFH infrastructure by pedaling a stationary bicycle.


We use people as machines to move stuff around and make new stuff. Humans are the only creatures that have to pay to live on this planet; if they don't pay, they can't have food, shelter, etc.


With the context that "payment" is a way to make labor fungible and labor is expenditure of calories, every creature has to pay to live.

Humans probably spend fewer calories on survival (with better results) and more calories on pleasure than any other species. This is partly thanks to society.

So humans probably "pay" the least of any species just to exist.


> more calories on pleasure than any other species

That's true, we aren't cheap on the pleasure calories. On the other hand, most mammals seem to include pleasure in hunting, building their shelter, or other activities. Think of dolphins, wolves, or rabbits. We humans have created many pleasant activities to forget our workday (which is not a pleasure for most), while the lion enjoys its nap before an exciting hunt tonight with its fellows.


And we use people as sex machines, unfortunately.


"People know the part I'm playing." --"Just a Gigolo" lyrics

And people use most anything in that capacity, Rule #42 is not a theorem, after all.

Meanwhile, the callously selfish, oppressive monsters don't care how their victims feel, so long as they get their pleasure. Greed for money and greed for sex are really the same kind of selfish vice. And callousness to the misery of others, and heartlessly disrespecting a person's human right to choose what they do and with whom, are two of the primary drivers of unhappiness upon our Earth today.

Compassion is a balm for all such vices, but we must choose to learn the truth of its importance and then choose to manifest it in our ideals, attitudes, and behaviors. It, like all things human, is within our power, both individually and collectively, but, first, we must give a sh_t.


If squirrels could figure out how to make money, I'm sure someone would be happy to charge them rent.


Counterpoint: being useful to a more successful species is a staggeringly effective evolutionary strategy. Nature is chock full of symbiotic relationships and it's a perfectly valid ecological niche. Symbionts exist whether or not the host is capable of feeling bad about it.

By becoming attached to such a successful species as humans, any symbiont species has an extremely good chance of surviving for as long as the host species. Including long after they'd have gone extinct naturally.

Most species that humans like or find useful will eventually end up colonizing entire star systems along with us. Those species will continue to live on in their evolutionary descendants long after the sun expands and Earth becomes inhospitable.

Personally I call that a successful species.

Or we could just leave the worms alone and let them be hunted to extinction by predators or die out in natural climate or ecological shifts over time. I guess that's nicer than species continuity into galactic time scales.


Very interesting take. However, there are some important differences between the species that express this behavior in the wild, free world (think pilot fish) and those that are bred, used, bred again, and then killed, all while in forced captivity.

I doubt livestock would describe itself as "successful" if it could use language.



And how do they do that? Be very specific.


Alright. Suppose "meaning" (or "understanding") is something which exists in a human head.

It might seem like a complete black box, but we can get some information about it by observing human interactions.

E.g., suppose Алиса does not know English, but she has a bunch of cards with instructions like "Turn left," "Bring me an apple," etc. If she shows these cards to Bob and Bob wants to help her, Bob can carry out the instructions on a card. If they play this game, Алиса can observe the meaning each card induces in Bob's head through his actions, and thus she will be able to map these cards to meanings in her own head.

So there's a way to map meaning which is mediated by language.

Now from math perspective, if we are able to estimate semantic similarity between utterances we might be able to embed them into a latent "semantic" space.

If you accept that the process of LLM training captures some aspects of meaning of the language, you can also see how it leads to some degree of self-awareness. If you believe that meaning cannot be modeled with math then there's no way anyone can convince you.
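To make the "embed utterances into a latent semantic space" step a bit more concrete, here is a deliberately toy sketch using nothing but bag-of-words counts and cosine similarity. Real LLM embeddings are learned, dense vectors; the sentences and helper names here are invented for illustration only:

```python
import math
from collections import Counter

def embed(utterance):
    # toy "embedding": a sparse bag-of-words count vector
    return Counter(utterance.lower().split())

def cosine(a, b):
    # cosine similarity between two count vectors
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

close = cosine(embed("bring me an apple"), embed("bring me a pear"))
far = cosine(embed("bring me an apple"), embed("turn left"))
print(close > far)   # prints True
```

The point is only that similarity between utterances can be estimated mechanically, which is what lets you arrange them in a space at all; the learned spaces in actual models capture far more than shared surface tokens.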


How does math encode meaning if there is no Alice and Bob? You should quickly realize the absurdity of your argument once you take people out of the equation.


Alright, I wrote an article: https://killerstorm.github.io/2024/12/15/meaning.html

Please let me know which part you find absurd.


It's a good essay but you didn't address my point.


Not sure what you mean... A NN training process can extract semantics from observations. Those semantics can then be applied, e.g., to robots. So it doesn't depend on humans beyond the production of observations.


The function/mathematics in a neural network (NN) is meaningless unless there is an outside observer to attribute meaning to it. There is no such thing as a meaningful mathematical expression without a conscious observer to give it meaning. Fundamentally, there is no objective difference between one instance of a network with parameter θ, f(θ), evaluated on some input, f(θ)(x), and another instance of the same network with a small perturbation of the parameter, f(θ+ε), evaluated on the same input, f(θ+ε)(x), unless a conscious observer perceives the outputs and attributes meaning to the differences. The arithmetic operations performed by the two networks are the same in terms of their objective complexity and energy utilization.


How does the universe encode meaning if there is no Alice and Bob?

One common answer is: it doesn't.

And yet, here we are, creating meaning for ourselves despite being a state of the quantum wave functions for the relevant fermion and boson fields, evolving over time according to a mathematical equation.

(Philosophical question: if the time-evolution of the wave functions couldn't be described by some mathematical equation, what would that imply?)


The universe does not have a finite symbolic description. Whatever meaning you attribute to the symbols has no objective reality beyond how people interpret those symbols. Same is true for the arithmetic performed by neural networks to flash lights on the screen which people interpret as meaningful messages.


> The universe does not have a finite symbolic description

Why do you believe that? Have you mixed up the universe with Gödel's incompleteness theorems?

Your past light cone is finite in current standard models of cosmology, and according to the best available models of quantum mechanics a finite light cone has a finite representation — in a quantised sense, even, with a maximum number of bits, not just a finite number of real-valued dimensions.

Even if the universe outside your past light cone is infinite, that's unobservable.

> Same is true for the arithmetic performed by neural networks to flash lights on the screen which people interpret as meaningful messages.

This statement is fully compatible with the proposition that an artificial neural network itself is capable of attributing meaning in the same way as a biological neural network.

It does not say anything, one way or the other, about what is needed to make a difference between what can and cannot have (or give) meaning.


[flagged]


We've banned this account for repeatedly breaking HN's guidelines.

Please don't create accounts to break HN's rules with.

https://news.ycombinator.com/newsguidelines.html


Neural networks use smooth manifolds as their underlying inductive bias, so in theory it should be possible to incorporate smooth kinematic and Hamiltonian constraints, but I am certain no one at OpenAI actually understands enough of the theory to figure out how to do that.
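For a minimal numerical flavor of what "incorporating Hamiltonian constraints" can buy, compare plain Euler integration with symplectic Euler on a harmonic oscillator, H = (p² + q²)/2. This is a standalone integration sketch, not anything from OpenAI's stack; the step size and step count are arbitrary choices:

```python
def energy(q, p):
    # Hamiltonian of the harmonic oscillator: H = (p^2 + q^2) / 2
    return 0.5 * (q * q + p * p)

dt, steps = 0.1, 1000

# Plain (explicit) Euler: ignores the symplectic structure, energy blows up.
q, p = 1.0, 0.0
for _ in range(steps):
    q, p = q + dt * p, p - dt * q
plain_drift = abs(energy(q, p) - 0.5)

# Symplectic Euler: the update respects the Hamiltonian structure,
# so energy stays bounded near the true value forever.
q, p = 1.0, 0.0
for _ in range(steps):
    p = p - dt * q   # update momentum first
    q = q + dt * p   # then position, using the new momentum
sympl_drift = abs(energy(q, p) - 0.5)

print(plain_drift > sympl_drift)   # prints True
```

The structure-preserving integrator is the same amount of arithmetic per step; the conservation behavior comes entirely from how the update is arranged, which is the general spirit of building physical constraints into a model rather than hoping it learns them.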


> I am certain no one at OpenAI actually understands enough of the theory to figure out how to do that

We would love to learn more about the origin of your certainty.


I don't work there, so I'm certain there is no one with enough knowledge to make it work with Hamiltonian constraints: the idea is very obvious, but they haven't done it because they don't have the wherewithal to do so. In other words, no one at OpenAI understands enough basic physics to incorporate conservation principles into the generative network so that objects with random masses don't appear and disappear on the "video" manifold as it evolves in time.


> the idea is very obvious but they haven't done it because they don't have the wherewithal to do so

Fascinating! I wish I had the knowledge and wherewithal to do that and become rich instead of wasting my time on HN.


No one is perfect but you should try to do better and waste less time on HN now that you're aware and can act on that knowledge.


Nah, I'm good. HN can be a very amusing place at times. Thanks, though.


How does your conclusion follow from your statement?

Neural networks are largely black box piles of linear algebra which are massaged to minimize a loss function.

How would you incorporate smooth kinematic motion in such an environment?

The fact that you discount the knowledge of literally every single employee at OpenAI is a big signal that you have no idea what you’re talking about.

I don’t even really like OpenAI and I can see that.


I've seen the quality of OpenAI engineers on Twitter and it's easy enough to extrapolate. Moreover, neural networks are not black boxes; you're just parroting whatever you've heard on social media. The underlying theory is very simple.


Do not make assumptions about people you do not know in an attempt to discredit them. You seem to be a big fan of that.

I have been working with NLP and neural networks since 2017.

They aren’t just black boxes, they are _largely_ black boxes.

When training an NN, you don’t have great control over what parts of the model does what or how.

Now instead of trying to discredit me, would you mind answering my question? Especially since, as you say, the theory is so simple.

How would you incorporate smooth kinematic motion in such an environment?


Why would I give away the idea for free? How much do you want to pay for the implementation?


Cop-out... according to you, the idea is so obvious it wouldn't be worth anything.


lol. Ok dude you have a good one.


You too but if you do want to learn the basics then here's one good reference: https://www.amazon.com/Hamiltonian-Dynamics-Gaetano-Vilasi/d.... If you already know the basics then this is a good followup: https://www.amazon.com/Integrable-Hamiltonian-Systems-Geomet.... The books are much cheaper than paying someone like me to do the implementation.


Seriously... The ability to identify what physics/math theories the AI should apply and being able to make the AI actually apply those are very different things. And you don't seem to understand that distinction.


Unless you have $500k to pay for the actual implementation of a Hamiltonian video generator then I don't think you're in a position to tell me what I know and don't know.


lolz, I doubt very much anyone would want to pay you $500k to perform magic. Basically, I think you are coming across as someone who is trying to sound clever rather than being clever.


My price is very cheap in terms of what it would enable and allow OpenAI to charge their customers. Hamiltonian video generation with conservation principles which do not have phantom masses appearing and disappearing out of nowhere is a billion dollar industry so my asking price is basically giving away the entire industry for free.


Sure, but I imagine the reason you haven't started your own company to do it is you need 10s of millions in compute, so the price would be 500k + 10s of millions... Or you can't actually do it and are just talking shit on the internet.


I guess we'll never know.


Yeah I mean I would never pay you for anything.

You’ve convinced me that you’re small and know very little about the subject matter.

You don’t need to reply to this. I’m done with this convo.


Ok, have a good one dude.


There are physicists at OpenAI. You can verify with a quick search. So someone there clearly knows these things.


I'd be embarrassed if I were a physicist and my name were associated with software that has phantom masses appearing and disappearing into the void.


Why don't you write a paper or start a company to show them the right way to do it?


I don't think there is any real value in making videos other than useless entertainment. The real inspired use of computation and AI is to cure cancer, that would be the right way to show the world that this technology is worthwhile and useful. The techniques involved would be the same because one would need to include real physical constraints like conservation of mass and energy instead of figuring out the best way to flash lights on the screen with no regard for any foundational physical principles.

Do you know anyone or any companies working on that?



And yet I prefer now to early big bang era of the universe, though technically reversible.


The universe is not a Markov chain; in fact, no one knows what it is. But locally we do know that entropy increases, and the inevitable endpoint in our corner of the universe is complete annihilation. Your preferences are completely irrelevant in the local scheme of things.


This is intuitively obvious. If I give you some data x and you transform it with a non-reversible function f into f(x), then you are losing information. Repeated applications of the function, f(f(f(...f(x)...))), can only make the end result worse. The current implementations inject some random bits, b ~ N(μ, σ), but this can be thought of as convolving with the distribution g of the injected random data, giving g*f. After repeated applications, (g*f)((g*f)((g*f)(...(g*f)(x)...))), the information content of the original data is still reduced, because the transformation remains non-reversible: convolution cannot change the non-reversible character of the original function.

I'm sure there is some calculation using entropy of random variables and channels that fully formalizes this but I don't remember the references off the top of my head. The general reference I remember is called the data processing inequality.¹

¹ https://en.wikipedia.org/wiki/Data_processing_inequality?use...
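A minimal numerical illustration of the claim, with a toy quantizer standing in for the non-reversible f (this is not a diffusion model, just the information-collapse mechanism in isolation):

```python
# Count how many distinct inputs remain distinguishable after repeated
# application of a non-reversible (many-to-one) function.
def f(x):
    return x // 3   # quantize: three distinct inputs collapse to one output

data = list(range(243))   # 243 initially distinguishable values
counts = []
for _ in range(5):
    data = [f(x) for x in data]
    counts.append(len(set(data)))

print(counts)   # [81, 27, 9, 3, 1] — log2(3) bits lost per pass
```

Injecting randomness afterward can re-inflate the number of distinct values, but it cannot restore which original each value came from; the mutual information with the starting data only goes down, which is exactly what the data processing inequality formalizes.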


This seems obvious, but you're forgetting the inputs may actually have low entropy to begin with. Lossy compression is non-reversible, but usually the expectation is that we don't care about the parts we lost.

How might this cash out with recursive LLMs? Generalizing is very similar to compression: imagine recovering the Schrödinger equation from lots of noisy physical experiments. You might imagine that an LLM could output a set of somewhat general models from real data, and that training it on data generated from those models generalizes further in future passes until maybe it caps out at the lowest-entropy model (a theory of everything?)

It doesn't seem like it actually works that way with current models, but it isn't a foregone conclusion at the mathematical level at least.


So correct me if I'm wrong here, but wouldn't another way to look at this be something like re-compressing a JPEG? Each time you compress an already-compressed JPEG, you strip more and more information out of it. The same goes for any lossy compression, really.

These LLMs are inherently a bit like lossy compression algorithms. They take information and pack it in a way that keeps its essence around (at least that is the plan). But as with any lossy compression, you cannot reconstruct the original. Training a lossy compression scheme like an LLM on its own data is just taking that already-packed information and degrading it.

I hope I'm right framing it this way, because ultimately that is partly what an LLM is: a lossy compression of "the entire internet." A lossless model that could be queried like an LLM would be massive, slow, and probably impossible with today's tech.

I suspect that we will develop new information theory that mathematically proves these things can't escape the box they were trained in, meaning they cannot come up with new information that isn't already represented in the relationships between the various bits of data they were constructed with. They can "only" find new ways to link together the information in their corpus of knowledge. I use "only" in quotes because simply doing that alone is pretty powerful. It's connecting the dots in ways that haven't been done before.

Honestly, the whole LLM space is cool as shit when you really think about it. It's both incredibly overhyped yet very underhyped at the same time.
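The re-compression analogy can be sketched in a few lines: a lossy "compress" that drops every other sample and a "decompress" that interpolates them back. The codec and the signal here are invented for illustration (real JPEG quantizes DCT coefficients), but the generation-loss effect is the same:

```python
def compress(signal):
    # lossy step: keep only every other sample
    return signal[::2]

def decompress(half):
    # reconstruct the dropped samples by linear interpolation
    out = []
    for a, b in zip(half, half[1:] + half[-1:]):
        out.extend([a, (a + b) / 2])
    return out

signal = [0.0, 9.0, 0.0, 9.0, 0.0, 9.0, 0.0, 9.0]  # high-frequency detail
for _ in range(3):
    signal = decompress(compress(signal))

print(signal)   # the alternating detail is gone and cannot come back
```

One round trip already destroys the high-frequency content, and no number of further passes can recover it; feeding a model its own lossy reconstructions is the same kind of loop.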



It’s not intuitively obvious losing information makes things worse. In fact, it’s not even true. Plenty of lossy functions make the problem under consideration better off, such as denoising, optimizing, models that expose underlying useful structures, and on and on.

Also injecting noise can improve many problems, like adding jitter before ADC (think noise shaping, which has tremendous uses).

So claiming things like “can only make the end result worse” is “intuitive obvious” is demonstrably wrong.


> with a non-reversible function f into f(x) then you are losing information.

A non-reversible function f does not necessarily lose information. Some non-reversible functions, like one-way functions used in cryptography, can be injective or even bijective but are computationally infeasible to invert, which makes them practically irreversible while retaining all information in a mathematical sense. However, there is a subset of non-reversible functions, such as non-injective functions, that lose information both mathematically and computationally. It’s important to distinguish these two cases to avoid conflating computational irreversibility with mathematical loss of information.
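The distinction can be made concrete with toy numbers (p = 23 and g = 5 are illustrative; real cryptography uses enormous primes). Modular exponentiation by a generator is a bijection on the group, so mathematically no information is lost even though inverting it at scale (the discrete logarithm) is infeasible, whereas a modulus-style hash genuinely collapses inputs:

```python
p, g = 23, 5   # toy prime and a generator of the multiplicative group mod 23

def modexp(x):
    return pow(g, x, p)

# Bijective but hard to invert at scale: every exponent 1..22 hits a
# distinct residue, so nothing is lost in the mathematical sense.
images = {modexp(x) for x in range(1, p)}
print(len(images))   # 22

# Non-injective: many inputs collapse to one output; information is lost.
collide = {x % 4 for x in range(100)}
print(len(collide))  # 4
```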


On the arguments modeling inference as simply some function f: the specific expression the OP used ignores that each subsequent application would follow some backpropagation, and so implies a new f' at each application, rendering the claim invalid.

At that point, at least chaos theory is at play across the population of natural language, if not some expressed but not yet considered truth.

This invalidates the subsequent claim about the convolved functions as well; I think all the GPUs might have something to say about whether the bits changing the layers are random or correlated.


If a hash can transform an input of any size into a fixed-length string, then that implies irreversibility by the pigeonhole principle. Inversion is impossible, not just infeasible.


Hashes with that property are just a special case of one-way functions.


What about something like image improvement algorithms or NeRFs? They seem to increase information even if some of it is made up.


If the goal of an image improvement algorithm is effectively "how would this image have looked IN THE REAL WORLD if it had been taken with a better camera", then training on previous "virtual upscaled images" would be training on the wrong fitness function.


It isn't real information though. This is effectively a Chinese whispers.

The only way AI can create information is by doing something in the real world.


It is real information, it is just information that is not targeted at anything in particular. Random passwords are, well, random. That they are random and information is what makes them useful as passwords.

As said by others, there is nothing terribly insightful about making one thing estimate the output of another through an imperfect reproduction mechanism and noticing that the output is different. Absent any particular guidance, the difference will not be targeted. That is tautologically obvious.

The difference is still information, though, and with guidance you can target the difference to serve some goal. This is essentially what the gpt-o1 training was doing: training on data generated by the model itself, but only when the generated data produced the correct answer.


> The only way AI can create information is by doing something in the real world.

Everything done is done in the real world, but the only way an AI can gather (not create) information about some particular thing is to interact with that thing. Without interacting with anything external to itself, all information it can gather is the information already gathered to create it.


Is there a formalization of this idea? Would love to read more.


That's a better way of putting it, yes.


Maybe information needs to be understood relationally as in "information for a subject x". So if we have an image with a license plate that is unreadable and there's an algorithm that makes it readable to x, there is an information gain for x, although the information might have been in the image all along.


If the license plate was not readable, then the additional information is false data. You do not know more about the image than you knew before, by definition. Replacing pixels with plausible data does not mean a gain of information. If anything, I'd argue that a loss of information occurs: the fact that x was hardly readable or unreadable before is lost, and any decision later on cannot factor this in, as "x" is now clearly defined and no longer fuzzy.

Would you accept a system that "enhances" images to find the license plate numbers of cars and fine their owners? If the plate number is unreadable, the only acceptable option is to not use it. Inserting a plausible number and rolling with it even means that instead of a range of suspects, only one culprit can be supposed. Would you like to find yourself in court for crimes/offenses you never committed because some black box decided it was a great idea to pretend it knew it was you?

Edit: I think I misunderstood the premise. Nonetheless my comment shall stay.


For an example of this, see "Xerox scanners and photocopiers randomly alter numbers in scanned documents"

https://news.ycombinator.com/item?id=6156238


Eliminating the noise makes the useful information clearer, but the information describing the noise is lost.


Sure, but what if the upscaling algorithm misinterpreted a P as an F? Without manual supervision/tagging, there's an inherent risk that this information will have an adverse effect on future models.


It’s information taken from many other photos and embedded into a single one of interest no?


“Made up” information is noise, not signal. (OTOH, generated images are used productively all the time in training, but the information content added is not in the images themselves; it is in their selection and relation to captions.)


Image improvement algorithms are basically injecting statistical information (collected from other images) into one image.

The above statement applies for non-neural-network algorithms as well.


Do they gain information, or just have lower loss?

Too much information encoded in a model can lower performance (this is called overfitting).

That’s why many NN topologies include dropout layers.


Once more and more new training images are based on those upscaled images, the training of the upscaling algorithms will tend to generate even more of the same type of information, drowning out the rest.


That's assuming that the same function is applied in the same way at each iteration.

Think about this: The sum total of the human-generated knowledge was derived in a similar manner, with each generation learning from the one before and expanding the pool of knowledge incrementally.

Simply adding a bit of noise and then selecting good outputs after each iteration based on a high-level heuristic such as "utility" and "self consistency" may be sufficient to reproduce the growth of human knowledge in a purely mathematical AI system.

Something that hasn't been tried yet because it's too expensive (for now) is to let a bunch of different AI models act as agents updating a central wikipedia-style database.

These could start off with "simply" reading every single text book and primary source on Earth, updating and correcting the Wikipedia in every language. Then cross-translate from every source in some language to every other language.

Then use the collected facts to find errors in the primary sources, then re-check the Wikipedia based on this.

Train a new generation of AIs on the updated content and mutate them slightly to obtain some variations.

Iterate again.

Etc...

This could go on for quite a while before it would run out of steam. Longer than anybody has budget for, at least for now!


> The sum total of the human-generated knowledge was derived in a similar manner, with each generation learning from the one before and expanding the pool of knowledge incrementally.

Is human knowledge really derived in a similar manner though? That reduction of biological processes to compression algorithms seems like a huge oversimplification.

It's almost like saying that all of human knowledge derives from Einstein's field equations, the Standard Model Lagrangian, and the second law of thermodynamics (what else could human knowledge really derive from?), and that all we have to do to create artificial intelligence is model these forces with high enough fidelity and enough computation.


It's not just any compression algorithm, though, it's a specific sort of algorithm that does not have the purpose of compression, even if compression is necessary for achieving its purpose. It could not be replaced by most other compression algorithms.

Having said that, I think this picture is missing something: when we teach each new generation what we know, part of that process involves recapitulating the steps by which we got to where we are. It is a highly selective (compressed?) history, however, focusing on the things that made a difference and putting aside most of the false starts, dead ends and mistaken notions (except when the topic is history, of course, and often even then.)

I do not know if this view has any significance for AI.


Human knowledge also tends to be tied to an objective, mostly constant reality.


The AIs could also learn form and interact with reality, same as humans.


Not really.

The models we use nowadays operate on discrete tokens. To overly reduce the process of human learning: we take in a constant stream of real-time information. It never ends and it's never discrete. Nor do we learn in an isolated "learn" stage in which we're not interacting with our environment.

If you try taking reality and breaking into discrete (ordered in the case of LLMs) parts, you lose information.


> Think about this: The sum total of the human-generated knowledge was derived in a similar manner, with each generation learning from the one before and expanding the pool of knowledge incrementally.

Not true. No amount of such iteration gets you from buffalo cave paintings to particle accelerators.

Humans generate knowledge by acting in the world, not by dwelling on our thoughts. The empiricists won a very long time ago.


It’s not binary. Humans generate plenty of knowledge from pure abstract thought.


Do they?

When I pursued creative writing in my teens and early 20s, it became clear to me that originality is extremely difficult. I am not entirely sure I have ever had an original thought--every idea I've put to paper thinking it was original, I later realized was a recombination of ideas I had come across somewhere else. The only exceptions I've found were places where I had a fairly unusual experience which I was able to interpret and relate, i.e. a unique interaction with the world.

Perhaps more importantly, LLMs do not contain any mechanism which even attempts to perform pure abstract thought, so even if we accept the questionable assumption that humans can generate ideas ex nihilo, that doesn't mean that LLMs can.


Unless your argument is that all creative writing is inspired by God, or some similar "external" source, then clearly a closed system such as "humanity" alone is capable of generating new creative works.


Did you even read the post you're responding to?


You’re right, we obtained the knowledge externally. It was aliens! I knew it!


Externally, yes, we obtain knowledge from the world around us. We’re not brains in vats conjuring knowledge from the void of our isolated minds.


If you repeatedly apply one of three simple functions picked at random you might end up with a Sierpinski triangle.


This sounds fascinating! I know what a Sierpiński triangle is but I'm having some trouble seeing the connection from picking functions randomly to the triangle. Is there some graphics or animation somewhere on the web that someone can point me to visualize this better?


You can read section Chaos Game here:

https://en.m.wikipedia.org/wiki/Sierpi%C5%84ski_triangle

It basically uses the fact that the fractal is self-similar. Each of the three functions shrinks the whole triangle onto one of its three corner copies, so transforming a point on the fractal gives you another point on the fractal.

If you repeat this process many times you get a lot of points of the fractal.

You can even start the process at any point and it will "get attracted" to the fractal.

That's why fractals like this are called attractors.
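The chaos game described above is only a few lines of code. A minimal sketch in Python (the vertex coordinates and starting point are arbitrary choices; any starting point inside the triangle works, and even outside points get pulled onto the fractal after a few iterations):

```python
import random

# Three vertices of an equilateral-ish triangle.
VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]

def chaos_game(n_points, start=(0.2, 0.2)):
    """Each step picks one of three maps at random: move halfway
    toward a randomly chosen vertex. The points it visits fill in
    the Sierpinski triangle."""
    x, y = start
    points = []
    for _ in range(n_points):
        vx, vy = random.choice(VERTICES)
        x, y = (x + vx) / 2, (y + vy) / 2
        points.append((x, y))
    return points

pts = chaos_game(10_000)
```

Scatter-plotting `pts` (e.g. with matplotlib) makes the triangle appear out of the noise after a few thousand points.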



Good one but these theorems are useful to have when thinking about information processing systems and whatever promises the hype artists are making about the latest and greatest iteration of neural networks. There is no way to cheat entropy and basic physics so if it sounds too good to be true then it probably is too good to be true.


If it is entropy and basic physics why are humans immune to the effect?


Humans are not immune to the effect. We invented methodologies to mitigate the effect.

Think about science. I mean hard science, like physics. You cannot say a theory is proven[0] if it is purely derived from existing data. You can only say it when you release your theory and it successfully predicts the results of future experiments.

In other words, you need to do new experiments, gather new information, and effectively "inject" entropy into humanity's scientific consensus.

[0]: Of course, when we say some physical theory is proven, it just means the probability that it's violated in certain conditions is negligible, not that it's a universal truth.


Bitcoin is actually just a bunch of numbers. Whatever value you think you're storing with it is always denominated in fiat which is very ironic.


You could say any currency is just a bunch of numbers, the US dollar isn’t backed by gold anymore and hasn’t been for a long time.

The question is more like: Would you let everyone dictate the value of a currency and leave it decentralized? Or would you rather a government controlled centralized currency.

It’s preference and opinion at the end of the day


The dollar is backed by the fact that you can pay any debt with it and (if you’re a US citizen) have to pay your taxes with it.


I'm not sure people care much about the debt repayment thing? If I rack up a bill at a bar, I can technically force them to accept a sack of pennies as repayment, but most people use digital payments like credit cards. At that point it doesn't really matter if my bank account holds USD, GBP, BTC or any other liquid asset, as long as the payment infrastructure can handle any necessary conversions.


There are no assets for digital currencies so the words you're using to describe it are ontologically nonsensical. It's numbers, binary sequences, in databases. The only thing that makes it all work is your belief and faith that the numbers mean something other than the electricity and infrastructure necessary to maintain the databases. The only real assets in the entire scheme are the computers and power plants with the spinning dynamos necessary for maintaining the illusion of "value".

This is why the masses are always dazed and confused. Your water and food are full of poisons but the numbers in databases are what get the most airtime. Bitcoin is the purest distillation of fiat currencies because there is no longer any actual physical manifestation of it anywhere other than whatever paper key you keep on you as a reminder that there is a database with some numbers which you and those like you collectively believe to be "valuable".


The fact that it is not physical makes Bitcoin more valuable. Why? Because I can store millions of it in my pocket. I can move millions around the world in less than an hour. 24/7. No banks or approvals or intermediaries required. I can keep it safely in my own custody. It is easy and fast to exchange. It is easy to validate its authenticity. The supply is fixed and near impossible to dilute. Etc., etc.

That is why Bitcoin is valuable. It's valuable because it stores value with all the important properties I listed above. Those properties are intrinsic to Bitcoin. Try saying that about any other asset.

If you’re worried about civilization crashing and Bitcoin becoming worthless. Fair enough, you should diversify into guns and toilet paper.


Well, with toilet paper I can wipe my ass. Even a dollar bill will do the trick but I'm not sure what I can do with a number in a database disconnected from the spinning magnets that make it functional.

I'm not here to convince you that bitcoin is worthless because clearly there are enough people who think it is worth more than $100k so you'd be better off convincing those people to offer you services in exchange for bitcoins instead of arguing with a random stranger about the collapse of civilization.


> Well, with toilet paper I can wipe my ass. Even a dollar bill will do the trick

Wait, are you saying dollars are good because you can wipe with them? I think you're in full agreement with the person you're replying to!


My point is that Bitcoin is no different from fiat. In fact, it can't be anything but fiat because it has no physical representation.


Fiat can be minted easily. Bitcoin cannot any faster than the rate that it is mined. Massive difference. And the key reason why the dollar continually depreciates.


Fiat means faith and that's all you have with bitcoin, faith in the algorithms that make the ledger very hard to mutate without spending the required "effort" to "mint" the entries on the ledger. You can believe whatever you want but as I said previously you're better off convincing people to sell you services for your bitcoins instead of arguing with random strangers on the internet about cryptographic hashing functions.


Great for the government, terrible for anyone holding dollars. The government continues diluting the dollar to pay its debt.


All of our assets are denominated with depreciating dollars. It’s nice because our assets always look like they’re going up. Of course a big part of that is just the dollar going down.


Money is a meme, there is no value it can measure other than whatever delusion people collectively agree to call "value".


The US Dollar is quite the meme! If you’re a US citizen and you have an income in any currency whatever, you have to acquire US Dollars to pay taxes with, or else you’ll eventually wind up in jail :)


Money based on violence.


Yes, and…?

In a stateless society, how would the landed deal with squatters? Couples therapy?


Just the top of the cancerous industrial iceberg. Eventually people will figure out that consuming non-renewable resources at unsustainable rates and poisoning the environment with industrial toxic waste all in the name of profits and progress is actually the real existential risk to human survival. The system must change or we'll all continue to suffer the consequences of increasing pollution and dwindling natural resources.


Industrial society is self-terminating.


Well, if you prefer, there was an indigenous American culture that needed to carry water in leaky natural materials and discovered that this worked a lot better if you coated the inside of a waterskin with tar.

They were wiped out by some combination of tar consumption and the rest of their lifestyle.

I would suggest that industrial society is less prone to this, mostly because it's larger-scale. There's always someone doing something that will ultimately prove to be a bad idea. You can't know until you try. What you need is to be able to recover from trying.


This is why the global dumping of plastic nanoparticles is particularly worrying.


The scale will make the inevitable collapse much more spectacular than any previous one in recorded history. I don't have any strong opinions on tar containers but I doubt it's any worse than whatever people are consuming every day on their food because of ubiquitous use of pesticides, herbicides, and insecticides. Add a few more carcinogens from regular industrial pollution and those natives could be considered to be practically living in paradise compared to their modern counterparts.


Considering that I have tar flavoured cheddar in the fridge (not great), and could get tar flavoured candies from the store... I think there are probably a lot worse things...


> Considering that I have tar flavoured cheddar in fridge(not great).

Why did you buy such a thing and also, who makes tar flavoured cheddar (and why)?

Personally, I do enjoy a bit of nettle-wrapped Cornish Yarg and there's quite a few cheeses that use ash (Kidderton Ash and Morbier are lovely), but I wouldn't want tar with my cheese.


It's worth noting that not all substances called "tar" are particularly similar to each other.


Bitumen paint for the inside of concrete water tanks is still a thing. I don't use it for that, but I have used it as a barrier layer on outdoor timber objects, like fence posts, DIY planters and shed floors.

