Hacker News | thegrim33's comments

I don't understand how accounts such as these exist on HN without @dang and others being completely ideologically complicit/captured. The account has a 9 month history of posting absolutely nothing but political content to HN. And they're allowed to exist and continue posting. By the mods and by the users.

@dang doesn't work - I only saw this by accident. You need to email hn@ycombinator.com if you want reliable delivery.

If you see a post or account that ought to have been moderated but hasn't been, the likeliest explanation is that we didn't see it. We don't come close to seeing everything here—there's far too much. You can help by flagging posts or emailing us at hn@ycombinator.com.

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...


The photos from the current US presidential inauguration will show that there is NO separation between the largest tech companies and politics, government policy, or legal frameworks, with every tool/power available to politicians being used to promote or restrict access to computer hardware and software. Then we can get into how technology is being used by governments to wage war, control populations, manipulate stock markets, choose judges for favorable trial results, etc.

There is no apolitical. You think tech is neutral when one of its titans buys elections? The only one ideologically complicit is you if you choose to close your eyes to that reality.

Nothing is purely apolitical, I agree. At the same time, we don't allow HN accounts to use the site primarily for political battle, regardless of which direction their politics take. This has been the standard for a long time: https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme....

Since your account has been using HN not just primarily for this but, apparently, exclusively for this, I've banned the account.

If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html. But please don't create accounts to break HN's rules with.


Write speed is probably the least important metric for people that are considering something like this. After everything with storage and longevity is taken care of, improving write speeds is a nice to have, but not the important part.

How do you need to supervise this "less" than an LLM that you can feed input to and get output back from? What does it mean that it's "running continuously"? Isn't it just waiting for input from different sources and responding to it?

Like the person you're replying to, I just don't understand. All the descriptions are just random cool-sounding words/phrases strung together, with none of them providing any concrete detail about what it actually is.


I’m sure there are other ways of doing what I’m doing, but openclaw was the first “package it up and have it make sense” project that captured my imagination enough to begin playing with AI beyond simple copy/paste stuff from ChatGPT.

One example from last night: I have openclaw running on a mostly sandboxed NUC on my lab/IoT network at home.

While at dinner, someone mentioned I should change my holiday WLED light pattern from Valentine’s Day to St Patrick’s Day.

I just told openclaw (via a chat channel) the WLED controller hostname and asked it to propose some appropriate themes for the holiday, investigate the API, implement the chosen theme, and set it as the active sundown profile.

I came back home to my lights displaying a well chosen pattern I’d never have come up with outside hours of tinkering, and everything configured appropriately.

Went from a chore/task that would have taken me a couple hours of a weekend or evening to something that took 5 minutes or less.

All it was doing was calling out to Codex for this, but having it act as a gateway/mediator/relay for both the access channel and the tooling/skills/access is the “killer app” part for me.
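For the curious: WLED controllers really do expose a JSON API at `/json/state`. Here's a minimal sketch of the kind of call an agent might end up making; the hostname, colors, and brightness are made-up values, not the commenter's actual setup.

```python
import json
from urllib import request

WLED_HOST = "wled-porch.local"  # hypothetical controller hostname


def build_theme_state(colors, brightness=128):
    """Compose a WLED /json/state payload for a multi-color segment.

    `colors` is a list of (r, g, b) tuples; WLED accepts up to three
    colors per segment (primary/secondary/tertiary).
    """
    return {
        "on": True,
        "bri": brightness,
        "seg": [{"col": [list(c) for c in colors[:3]]}],
    }


def apply_theme(state, host=WLED_HOST):
    """POST the state to the controller's JSON API."""
    req = request.Request(
        f"http://{host}/json/state",
        data=json.dumps(state).encode(),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)


# St Patrick's Day: green and gold
state = build_theme_state([(0, 200, 60), (255, 200, 0)])
```

Saving the result as a preset (so it can be the "sundown profile") is a separate call; WLED's API also accepts keys like `"psave"` for that.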

I also worked with it to come up with a Proxmox VE API skill, and it’s now repeatably able to spin up VMs with my normalized defaults, including brand-new cloud-init images of Linux flavors I’ve never configured on that hypervisor before. A chore I hate doing, so now I can iterate in my lab much faster. It’s also very helpful for spinning up dev environments of various software to mess with on those VMs after creation.
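The commenter's actual skill isn't shown, but on Proxmox VE the usual cloud-init flow is a handful of `qm` CLI calls (`qm create`, `qm importdisk`, `qm set`). A sketch that just composes those invocations; the VM id, image filename, storage name, and defaults are all assumptions for illustration.

```python
def cloudinit_vm_commands(vmid, name, image, storage="local-lvm",
                          memory=2048, cores=2, user="admin"):
    """Return the qm invocations to build a cloud-init VM from a cloud image.

    Mirrors the standard Proxmox VE recipe: create the VM shell, import
    the qcow2 image as a disk, attach it plus a cloud-init drive, then boot.
    """
    disk = f"{storage}:vm-{vmid}-disk-0"
    return [
        ["qm", "create", str(vmid), "--name", name,
         "--memory", str(memory), "--cores", str(cores),
         "--net0", "virtio,bridge=vmbr0"],
        ["qm", "importdisk", str(vmid), image, storage],
        ["qm", "set", str(vmid),
         "--scsi0", disk,
         "--ide2", f"{storage}:cloudinit",   # cloud-init config drive
         "--boot", "order=scsi0",
         "--ciuser", user,
         "--ipconfig0", "ip=dhcp"],
        ["qm", "start", str(vmid)],
    ]


cmds = cloudinit_vm_commands(9001, "dev-deb12",
                             "debian-12-genericcloud-amd64.qcow2")
# On an actual Proxmox host you would run each with subprocess.run(cmd, check=True)
```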

I haven’t really had it be very useful as a typical “personal assistant” both due to lack of time investment and running against its (lack of) security model for giving it access to comms - but as a “junior sysadmin” it’s becoming quite capable.


I don't have one going but I do get the appeal. One example might be that it is prompted behind the scenes every time an email comes in and it sorts it, unsubscribes from spam, other tedious stuff you have to do now that is annoying but necessary. Well that is something running in the background, not necessarily continuously in the sense that it's going every second, but could be invoked at any point in time on an incoming email. That particular use case wouldn't sit well with me with today's LLMs, but if we got to a point where I could trust one to handle this task without screwing up then I'd be on board.

> Isn't it just waiting for input from different sources and responding to it?

Well, yes. "Just" that. Only that this is at a high level a good description of how all humans do anything, so, you know.


Yeah, and if you give another human access to all your private information and accounts, they need lots of supervision, too; history is replete with examples demonstrating this.

It's not just waiting for input, it has a heartbeat.md prompt that runs every X minutes. That gives it a feeling that it's always on and thinking.

That gives _you_ the feeling that it's always on. It still can't model time.
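Whatever openclaw's actual implementation looks like, the heartbeat pattern being described is just a timer loop that re-feeds a fixed prompt to the model every X minutes. A minimal sketch (the prompt text and interval are made up):

```python
import threading
import time

HEARTBEAT_MINUTES = 15  # the "every X minutes" interval (assumed)


def heartbeat(prompt_fn, interval_s, stop):
    """Re-send the heartbeat prompt until asked to stop.

    Event.wait() returns False on timeout (fire a beat) and True once
    stop is set (exit the loop).
    """
    while not stop.wait(interval_s):
        prompt_fn("heartbeat: review pending tasks and timers")


# Demo with a tiny interval and a list standing in for the LLM call
stop = threading.Event()
ticks = []
t = threading.Thread(target=heartbeat, args=(ticks.append, 0.01, stop),
                     daemon=True)
t.start()
time.sleep(0.05)
stop.set()
t.join()
```

Nothing here gives the model a sense of time; it just guarantees it gets poked on a schedule, which is the parent's point.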

"What I find annoying is repetitive stuff that's just typing"

..

"Where I can't trust AI is if it needs to copy paste / duplicate code"

???

AI takes away the "boring", "tedious" parts of coding for you, yet at the same time you don't trust it to even duplicate code from one place to another?


It's so interesting, the number of people with these big AI fears who think that AI is going to replace most knowledge work within a short period of time, singularity, etc., but that same AI that takes over everything isn't going to be smart enough to operate robotics to do plumbing or welding? Those things will be outside the limits of its intelligence?

It's been my belief for over 20 years now that dedicated/instrumented roads for autonomous vehicles are the only way autonomous cars will ever be a thing at mass scale, short of the invention of true AGI (which I still don't think we're close to). I doubt such roads will become a thing within the next 6 years, though.


I think there might be a trial stretch of road somewhere in a few years, although surely not widespread. Such a thing feels inevitable to me, though, if we’re going to have self-driving cars at all.


Isn't a major feature of consensus algorithms for them to be tolerant to failures? Even basic algorithms take error handling into account and shouldn't be taken out by a bit flip in any one component.


Yes. To clarify, my understanding of _this_ particular incident was wrong because it was based on reading the report of a previous incident.

But for the 2008 incident, I read and linked the report, and that was what happened. The ADIRU probably did get an SEU, which should have been mitigated by the design of the ELAC unit. The ELAC failed to mitigate it, so that's the part they probably fixed.
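The mitigation being discussed boils down to redundancy voting: with multiple independent units, a downstream consumer compares their outputs and outvotes any single upset. This is not the actual ELAC logic, just an illustration of the principle with a hypothetical majority voter:

```python
from collections import Counter


def majority_vote(readings):
    """Return the value reported by a majority of redundant units.

    With three units, any single-event upset (a bit flip corrupting one
    unit's output) is outvoted by the two healthy units. If no value
    has a strict majority, the fault must be escalated instead.
    """
    value, count = Counter(readings).most_common(1)[0]
    if count <= len(readings) // 2:
        raise ValueError("no majority: multiple units disagree")
    return value


# Two healthy channels outvote one corrupted by a flipped bit 9
corrupted = 1024 ^ (1 << 9)
result = majority_vote([1024, 1024, corrupted])
```

The failure mode in the report is the interesting part: the bit flip itself was anticipated, but the downstream check that was supposed to discard the bad channel didn't.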


For some reason it took this long to hit me.

If you take as axioms:

1) Countries have major political interest in whether other countries have nuclear reactors

2) Countries are already, at large scale, manipulating discourse across the internet to achieve their political goals

Then of course it follows that any comment thread on a semi-popular or larger site about whether a country should build more nuclear reactors is going to be heavily manipulated by those countries. That's probably where most of the insane people in such threads are coming from.

How are we supposed to survive as a civilization with such corrupted channels of communication?


What is, according to you, the political interest?

Some countries have an interest in selling their gas or oil. It is not clear whether they are for or against other countries going nuclear: on one hand, nuclear would replace part of their market. On the other hand, lobbying for nuclear may impede progress in replacing gas and oil with renewables (one strategy would be to lobby for a nuclear project to start, then lobby so that the project stagnates and never delivers).

Some countries have an interest in seeing nuclear adopted because they have a market for ore extraction or waste processing. Others have an interest in seeing nuclear not adopted because they have a market around other generation technologies.

Finally, some countries may want their neighbors to adopt nuclear: the neighbor pays all the upfront bills and takes all the risk (economic, but also PR, the cost of training experts, ...), and if they succeed, they will export very cheap energy that can fill the gaps the country didn't want to invest in itself.

So it is not clear that there is just one stream of lobbying. The reality is probably that every "side" contains some manipulative discourse from foreign countries.


Does this apply to fossil energy threads as well? Countries have a major political interest in whether other countries use fossil energy, to mitigate the climate catastrophe and ramp down fossil fuel use.


I really, really, wish somebody would actually put together a real reliability report. You know, by actually getting hard data on what repairs different models need, how often different models break down, how long different models last, etc. That's how you should rate reliability.

The Consumer Reports model of just surveying a random collection of people about what they personally think about car reliability is not hard data. They don't collect any data themselves; they just take random people's beliefs as the data. It's also an ouroboros: what they rate as reliable/unreliable one year influences people's beliefs when they're surveyed the next year about what they believe is reliable.


They did it in Germany: https://www.autoblog.com/news/the-bestselling-tesla-model-y-...

But it's based on German-made models


Afaik no, the German report just lists cars that passed or were flagged by TÜV. TÜV fails you for negligence, like not servicing the car every year like a good German VW owner.


It does not require that, though obviously a car which is never serviced is more likely to fail.


TÜV inspection is all about checking whether the car is routinely maintained and in optimal working condition. It fails you for things like:

- rusted rotors (a Tesla owner won't ever notice anything wrong with the brakes)

- worn-out suspension (Tesla owners are used to a harsh ride)


All major communication forums on the internet have been mass manipulated/poisoned by countries across the world for well over a decade now. A huge chunk of all internet speech is inauthentic. In my mind, AI videos really don't degrade the situation much further. The internet as a communication medium has already been completely compromised for a long time.


Citation?


