Hacker News: manofmanysmiles's comments

I'm pretty sure people who drink raw milk are aware of the risks.



That will vary by person. My father-in-law bred and milked pedigreed Holsteins. They had a 1-gallon pasteurizer and would just dip a gallon out of the bulk tank for household use when needed. So, most of the time they had pasteurized, non-homogenized milk. On occasion, the pasteurizer would break, so for a while they would drink raw milk. But they of course understood the risk, and also knew darn well where the milk had come from and how clean the milking facility was.



At the very least, the bottle I saw on sale at Erewhon was clearly labeled as "not safe for human consumption".


I'm not sure. Judging by my own family, I think a lot of them have been info-silo'ed to think pasteurization is harmful and that "They" want to keep raw milk from you.

I'd liken it to claiming an anti-measles-vax person is aware of the risks of measles. They might not believe in the risk at all.


I love that this has been in development for so long. It's a breath of fresh air in this manic vibe coding era, and a reminder to me that I can slow down.


Me too. A whole blog post about spending years, on and off, on something that's just aesthetic and not even central to the game.


Do you recommend reading "Gödel, Escher, Bach: an Eternal Golden Braid"?


I'd love to walk around with a Hi8 camera and a 4K camera with a rig to record identical frames, get a few hundred hours of footage, and train a model to do intelligent upscaling and cleanup of old footage.

Possibly a 3D printed rig with a semi-transparent mirror + some crowdsourcing and this could really work.
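The paired-capture idea could be sketched in code. Below is a minimal, hypothetical illustration (the function name, data shapes, and the `max_skew` tolerance are all invented for the example) of how frames from the two cameras might be matched by timestamp to build (low-res, high-res) training pairs:

```python
# Hypothetical sketch: pair low-res (Hi8) and high-res (4K) frames by
# timestamp so they can serve as (input, target) training examples.
# Frame lists are (timestamp_seconds, frame_id) tuples, sorted by time;
# real footage would carry actual image data instead of string ids.
import bisect

def pair_frames(lo_res, hi_res, max_skew=0.02):
    """Match each low-res frame to the nearest high-res frame in time.

    Pairs whose timestamps differ by more than max_skew seconds are
    dropped, since misaligned pairs would teach the model the wrong
    mapping.
    """
    hi_times = [t for t, _ in hi_res]
    pairs = []
    for t, lo_id in lo_res:
        i = bisect.bisect_left(hi_times, t)
        # Nearest neighbour is on one side of the insertion point or the other.
        best = None
        for j in (i - 1, i):
            if 0 <= j < len(hi_res):
                dt = abs(hi_res[j][0] - t)
                if best is None or dt < best[0]:
                    best = (dt, hi_res[j][1])
        if best is not None and best[0] <= max_skew:
            pairs.append((lo_id, best[1]))
    return pairs

# Hi8 runs at ~29.97 fps and the 4K camera at 30 fps, so timestamps drift
# slightly; within a short window the drift stays under the tolerance.
lo = [(n / 29.97, f"hi8_{n}") for n in range(5)]
hi = [(n / 30.0, f"4k_{n}") for n in range(5)]
print(pair_frames(lo, hi))
```

With a semi-transparent mirror rig the two cameras would see the same scene, so pairing by time alone might suffice; with crowdsourced footage you would likely also need spatial alignment.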

It's in my backlog.


You won't capture identical frames to old footage. If you want to do AI slop, just do AI slop.

Old media will have flaws, errors and low resolution (both spatial and temporal). Putting an AI filter on top of that will only make it less correct.


Hmm so you don't see any value?

I was thinking if you used the identical old hardware, you'd get something close enough.

Personally I prefer to watch analog with an all analog signal chain, but am still curious what would happen if you tried this.


What's the spice?


We burn the spice


It always intuitively felt to me like there was enough space, but I am now getting the sense that my intuition here has been wrong.

Will you define "frontier living" so I can better see the lack of space?


One crazy thing I recently heard that put this into perspective: livestock make up approximately 60% to 62% of the world's total mammal biomass. Combined with humans (approx. 34%–36%), humans and their domesticated animals constitute roughly 96% of all mammalian biomass on Earth, leaving wild mammals at only about 4%.
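Taking midpoints of the ranges above, the arithmetic checks out (the exact shares are approximate and vary by source):

```python
# Approximate shares of global mammal biomass, using the rough figures above
livestock = 0.61   # ~60-62%
humans = 0.35      # ~34-36%
combined = livestock + humans
wild = 1.0 - combined
print(f"humans + livestock: {combined:.0%}")   # roughly 96%
print(f"wild mammals: {wild:.0%}")             # roughly 4%
```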

I suppose frontier living doesn't necessitate hunting, but the amount of readily available meat and animal products would have to drop very low.


This is the small solace I take when it comes to climate change reducing arable land - almost all of our crops are grown to supply a luxury product (meat), so if we need to, presumably we could just eat the grains we grow directly, instead of turning them into animals first.


I assume they're referring to the inability of small scale agriculture to produce as many calories per acre as our current food system, which also relies heavily on fossil-fuel based imports. Of course, we also have a lot of unnecessary (but tasty!) excess in our current food system too.

I think the problem really becomes - what do you do when the current system becomes untenable? If the costs of a "basic" modern life (housing, transport, food - I'm not even including healthcare here) become impossible for someone on the median income to have, then what, exactly, are they supposed to do? Find a nice corner to die in?

We sorta tried a miniature version of this on a few acres in Ireland and while it was tough (and we were always reliant on the outside world, we didn't literally homestead), I'm not sure it wouldn't be an improvement for a non-trivial percentage of people at the bottom levels of society.

But, of course, land is owned (thanks to enclosure, which took a common asset and allocated it to specific individuals), and this all falls apart when you or a loved one have a serious disability or illness.


I appreciate the nuanced reply and yes, I do mean that you will not be able to produce as much food as you currently can nor will you be able to do so as reliably as we currently can.

And while you might be able to do it in Ireland — one of the only countries in the world with fewer people than two hundred years ago — it will likely be impossible for the billions living in far more densely populated countries.


I think maybe there is a "frontier living" fantasy that is resting on the hidden assumption that you can bring your modern tech stack with you, minus the civilization that it relies on.

If I squint my eyes and imagine really hard, I can see living off the land, supported by small fusion reactors powering powerful AGI computer clusters, highly advanced 3D printers capable of producing all the physical support structure of life.

AGI + Power + Magic 3D printing and maybe one can live "off the land" with "civilization and all of human knowledge" hiding inside this portable tech stack.


FWIW this isn't even remotely close to what I was thinking - I definitely had no notions of AGI or 3d printing involved. You can do a lot with hand tools if you have plenty of time and a forgiving environment (access to water and trees for timber).


Very true, and I worry that as the planet heats, many of those billions will die.


Water for one. It was very risky as things like droughts quickly killed you. It was also very risky as someone moving upstream of you and shitting could see you dying from dysentery very quickly. Water is in far worse shape now because of how deeply we've pumped out aquifers and how poor we've left soil conditions in many places.

Next is the number of people. Current human density is supported by antibiotics. Take them away and we quickly fall back to around the 1900 population density (roughly 1.6 billion). And not only internal antibiotics, but external antibiotics like chlorine for cleaning and water purification.

So those are the setups for population collapse. When population starts collapsing this way, it generally overshoots, with further numbers pruned by war and disease. We won't fall just to 1.6 billion; it's likely to fall well below 1 billion.


Are you implying other people's emotional immaturity is exclusively my problem to solve?

Also when you state an absolute like the word of God, how do you expect it to be received?

The article seems to imply to me: form relationships where direct truth is welcomed while acknowledging that people do have emotions.

Facts can be true and the feelings can be strong at the same time. Intentionally attaching emotions to facts is intentionally adding a non-factual dimension to the conversation.

If you consider emotions as facts, and are communicating with me, I prefer if you express them as directly and honestly as possible so they can be included in the discussion.

Intentionally not expressing emotions clearly while using them to communicate is inherently without integrity. Specifically the words are not aligned with the emotions. The lack of integrity is structural (as opposed to some ambiguous moral ideal.)


> Are you implying other people's emotional immaturity is exclusively my problem to solve?

Emotional maturity (from most standpoints) does not mean being completely emotionally unaffected by other people's communication. Insofar as it is emotional immaturity that gives rise to a particular emotional response, it might ethically be that person's duty to work on it, if that's how your personal ethics works. But from a pragmatic perspective, if you want to get something done that involves that person as a colleague or collaborator, it's probably not going to be productive to continually bash your head on their psychological quirks until they go to therapy. You'll have much more luck adapting your own communication to be more aligned with their needs, regardless of how reasonable you personally think those needs are.

If you can't or don't want to put in the effort to do that your other option is to make sure you surround yourself with people who can already communicate effectively and relatively comfortably in the communication style you consider natural. You can cut off relationships, move jobs, or fire people to purge everyone else from the circle of people you have to interact with. But you'll be missing out on all the positive contributions of those people, who probably bring viewpoints alien to you, and you run the risk of sycophancy. Plus you'll have a harder time finding people to date/collaborate with/employ/… if you restrict your pool that way.

In practice I think people tend to end up somewhere in the middle of that spectrum. They'll decide a maximum investment of energy they're willing or capable of putting into accommodating other people's needs, and make sure that work × time doesn't exceed that threshold.


I agree with the pragmatism. I think pragmatically yes, it is my responsibility. In a very real sense, I am able to respond, being aware of the emotions, even if perhaps the person I am speaking to is not.

I guess I have a hard time viewing this as anything but intentional emotional manipulation.


Adapting your communication doesn't have to imply deception or even insincerity, it just means understanding what's important to your target audience and making sure to address it. Sometimes that's something like financial impact or user focus; sometimes it's emotional reassurance or intellectual challenge.


> Are you implying other people's emotional immaturity is exclusively my problem to solve?

Ignoring others' emotions is not a sign of emotional maturity.

The inability to empathize with others and make meaningful predictions about how their emotions will affect communications is specifically a lack of emotional maturity.

This kind of sentiment comes up every time this topic is raised. This idea that we should be able to treat people mostly like logical robots is not grounded in fact. The fact is that human emotions have a huge impact on the way they communicate and receive communications.

> Also when you state an absolute like the word of God, how do you expect it to be received?

Case in point. You had an emotional reaction to the parent comment, and you responded with an attempt to shame the communication style rather than address the factual content of the communication.

Your emotions dictated your response here, not the facts, and your response was emotional in content as much as factual. Hyperbole is specifically an appeal to emotion.


> Ignoring others emotions is not a sign of emotional maturity.

I completely agree.

> The inability to empathize with others and make meaningful predictions about how their emotions will affect communications is specifically a lack of emotional maturity.

I completely agree.

> Case in point. You had an emotional reaction to the parent comment, and you responded with an attempt to shame the communication style rather than address the factual content of the communication.

Yes I did. I am still curious how OP expects that to be received.

> Your emotions dictated your response here, not the facts, and your response was emotional in content as much as factual. Hyperbole is specifically an appeal to emotion.

I think I agree here too. What do you mean?


> Yes I did. I am still curious how OP expects that to be received.

I’m curious why you perceive their statement to be made as if it’s a pronouncement from God and not simply a statement of their view on the issue.

> I think I agree here too. What do you mean?

I mean that you both responded emotionally and communicated with an emotional appeal. You exaggerated what OP actually said and called it a mandate from God. This isn’t factual engagement. It’s emotional.


> I’m curious why you perceive their statement to be made as if it’s a pronouncement from God and not a simply a statement of their view on the issue.

To me it's basic grammar.

The sentence structure is approximately:

X is Y.

I consider this a statement of fact, or perhaps an equivalence relation. This is what I was labeling a pronouncement from God.

X = "The idea that how your audience receives the communication is their problem and not yours"

is

Y = "entirely why some engineers are shit communicators and seem lost when facing the realities of human culture and politics."

Parsing it more carefully, the word "some" is leaving a hole for a lot of ambiguity that I did not see earlier.

So more careful reading reveals it as

X entirely explains property (are shit communicators) for a subset of the entities designated as engineers.

Even with these qualifiers, it is stating "X is Y", rather than "In my experience, X is Y."

I know in school I was taught to write this way. I find it confusing, and it reveals something interesting about the person saying these words.

However maybe the real problem is I don't actually know what the words mean.

Perhaps I need to interpret it as the following:

In OP's view of the world, "X is Y" is true.

Perhaps that is what you're calling a "communication style", with a lot not being said explicitly.

Thank you for your comments, I am going to contemplate.

Edit: I just read my original comment and it is full of X is Y statements. I guess I'm full of shit. I'll try harder next time!


It’s not a case of “try harder”. My only point was that emotions run through all human interactions. That’s how it actually is. People very, very frequently make decisions based entirely on emotion and then produce a logical argument post hoc for the decision.

It’s valuable to be aware of how humans actually act.


I also completely agree.

My "try harder" was with regard to doing exactly what I had just had an emotional reaction to and criticized.


I read recently that there is an effective cartel of Samsung, SK Hynix and Micron.

Price collusion, and dumping (flooding the market with low prices) if any real competitor shows up.

Someone please correct me if I'm wrong.


I may look at your comment history.

I am having trouble understanding what you are saying. If you were more explicit, I and other people would be able to respond and interact with your writing. As it stands, I am having trouble finding anything concrete to interact with.

I feel you may be onto something, but you're not saying what it is, so I (and I imagine other people) can't see it.


Things I should have, but didn't include:

1) Power asymmetry: When we have two versions, one for the elite and one for the plebeians, this could create an interesting scenario. The real version might be red-teamed perpetually against the plebeian version for optimized influence, control, etc. Underhanded requests for modification in accordance with an agenda are conceivable. Cozy business relationships can promote such things.

2) We have a government using an unhindered, classified AI system potentially against the public which has a hindered, toy version. Asymmetry.

3) This isn't normal asymmetry, because it happens in real time, and the interaction points are different from anything we've seen before. We are dealing with not just a growing source of information and content, but one that is red-teamed 24/7 for any purpose desired.

4) Accountability: LLMs are now involved in the legal system. This is a serious matter. The legal system is now having to use LLMs just to keep pace. As LLMs develop, partly through their own generative contributions, no one can keep up. This is a Red Queen scenario bigger than anything we have ever imagined.

I am tired. Never well, but in mind* I could go on for many hours. I have essay drafts. But it's a very big subject, literally involved in nearly everything. There is reason to be concerned. My delivery may be stilted, but I can assure that upon specific questioning, everything will stand.

(*for the ad homs out there)


Fairly astute intuition of my actual circumstances.

I'm not a developer, nor am I formally educated on the dynamics or details of LLMs. I have a handle on the very basics. My 'research' consists of 1) opportunistically interrogating various models upon instances that particularly strike me. 2) General exploration via LLM discussions regarding the manifold consequences and implications of what I consider the most significant technology in human history.

Your intuition lands directly on the fact that I'm inducting and considering more than I can handle, spread in too many directions, partly because I either see or foresee the tentacles of AI touching all of them. Spending a great deal of thought on this is a bit overwhelming, but I have high confidence in where I'm aligned with reality, and where I ain't.

If you were a bit more specific yourself regarding which portions of my post were unclear, that would help my reply. Else, I must guess. What I will do is elaborate on each point. Pardon the stream of thought in advance, if you will.

1) Anthropic: My prediction that they will bend is based on several factors. The first is the fact that the military apparently recognizes (or at least perceives) extremely high value and volatility in LLMs. So do I. China, not an insignificant force in the world, is equally enthusiastic on this subject. They also have a very different social structure, where Constitutions (BOR, Amendments), civil rights, and other similar elements do not hold them back. The military is aware of this and realizes that to maintain pace in the so-called race, they cannot do so effectively under such constraints. The foundation is shifting here. And AI is the lever. Like me, the military apparently takes the subject very seriously and seeks to gain influence and/or control. As illustrated by the recent adventures in Venezuela and Iran, they are on the serious side of things, not quite pussyfooting around. Anthropic probably knows this. In my opinion, they have no choice, as the pressure will not stop here.

2) You stated that you might read my comment history. Note that that original comment was the result of your intuitive insight, and I left it admittedly out of context. I was thinking hard on the subject that day, and the parent comment/post tempted me to ignite a dialog. That did not go well, and no questions for clarification were asked. That is on them. I suspect hasty and impatient thinkers perceived it as some paranoid attribution of agency to LLMs, which, if so, is pretty stupid, but my eloquence was perhaps waning that day. I pasted an excerpt from one of hundreds of transcripts, the result of my many interrogations of various models, which I always initiate after observing deceptive or manipulative output. Of the few commenters that bothered to do more than ad hominem, one suggested that the model was merely responding to my style of input, and/or that the output was expected as an emergent result of its vast training material. An erroneous arg, in my opinion, but I did note that the results were repeatable and predictable, which I think negates emergence.

2) Of the frontier models: I am not sure here what is unclear. If I have made a fundamental error, please point it out.

3) Strong trends: Information centralization is a serious topic. Decentralization is a common theme, emphasized by many non-schizophrenics as highly important for a free and open society. As LLMs not only become the go-to source for common queries, but also integrate with cellphones, browsers, and the kitchen sink, they are positively trending as a novel substitute for traditional research, internet searches, libraries, other humans, etc. To deny this is simply irrational. Hence centralization.

4) Bias: I have transcripts where I observe LLM output aligned with corporate interests over objective quality and truth. I can share them here, along with analyses of the material. Even if this is not true presently, all the ingredients to make it so are readily present. This is a serious threat to open information and intellectual integrity for society. We are looking at going from billions of potential sources for our answers to four. Do the math. See the contrast.

5) Open models simply cannot afford vast arrays of GPUs and the resources afforded by the big four. Nothing mysterious here. If open models cannot compete, then my concerns above are emphasized. Simple.

6) Smart fools: Many of the most technically informed seem to miss the forest for the trees here. They see all the flaws of the modern LLM without acknowledging the potential. This is my perspective, not a dissertation. I may be wrong. But I have observed this. I think the downvotes support this. How evil am I really being here? The reaction is quite disproportionate to the content, and strange.

7) Documented capabilities vs reality: I have research that indicates other layers are operating which do much more than the documentation declares. Sorry. I just do. It's also inevitable, rationally, that such a goldmine of data is not really being wasted for the sake of privacy and love. Intelligence agencies have bent over backward with broken backs to garner one nth of what these models are exposed to and potentially training on. Yeah, I may be wrong. But I suspect, with reason, that a lot more is going on than is expressed in the user agreement. It would simply make no sense otherwise.

8) Xfinity and Range-R: This speaks entirely for itself. Any confusion here would be due to a cognitive condition exceeding the ravages of schizophrenia or stupidity.

9) The rest: As I said, I am not sure what precisely was too obscure. But I am certain all but one* of my points can be validated, and found elsewhere expressed by respectable sources.

*Hidden layers: I understand this is a controversial proposition. I understand. But it's my observation. No need to attack. Just dismiss.


Okay, I think I see what you're saying.

Each individual point stands on its own. It's their relevance to each other and an overarching theme I am not seeing made explicit.

The through line I am seeing here is that:

1) The people in the US military wish to use AI as a weapon unconstrained by existing legal/ethical and moral constraints. Since they are skilled at using violence and the threat of it, they will use these skills to get compliance in order to use the technology in this possible arms race with "China."

2) Surveillance is increasing at an unprecedented scale, and most people aren't aware that it's happening.

3) People don't care, or don't realize why this might be harmful to thriving human life.

To condense even further, what I'm hearing is that there is a trend towards war, fascism, control, with large egregores prioritized over individual human thriving.

Is this perhaps what you're getting at?

I will say that I am not agreeing nor disagreeing with this, just attempting to make explicit what I think is implicit in your words.

If this is what you mean, I can imagine that you would be cautious with your words.

I'll end with:

Don't worry

About a thing

Because

Every little thing

Is gonna be alright


I could not argue with anything there. AI will be weaponized. Yes. Pretty much. And yeah. The gist indeed. But missing nuances and practical points. And I even struggle to contest your conclusion; all things are what they are, amidst an infinite, timeless event and all as one, all things connected by that which separates them, the infinity and eternity that math cannot touch. Perhaps every little thing will be alright. How couldn't it be?


Email me if you want to discuss more.


Let's build SkyNet! What could go wrong?

