I have a simple and brittle system to track people, facts, and associations in newspapers, which is basically: "LLM extracts people, places, projects, and structures and saves them as an Obsidian-compatible graph network."
For 2 or 3 newspapers it works; my idea was to use it as grounding to discover relationships between people, companies and jobs.
As for the "everyone's life" part, I have always assumed there would be a graph system to point to "forgotten" documents.
Gemini said my idea was amazing and new in its implementation, even if not in spirit, but I'm assuming it was being sycophantic as usual.
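The output step of a pipeline like this can be sketched in a few lines. This is a minimal illustration, not the poster's actual code, and assumes the LLM has already produced an entity list (all names and fields below are made up): each entity becomes a Markdown note, and the `[[wikilinks]]` between notes are what Obsidian's graph view renders as edges.

```python
from pathlib import Path

def write_obsidian_notes(entities, vault_dir):
    """Write one Markdown note per entity into an Obsidian vault.

    Obsidian builds its graph from [[wikilinks]], so each relation
    becomes a link to the target entity's note.
    """
    vault = Path(vault_dir)
    vault.mkdir(parents=True, exist_ok=True)
    for name, info in entities.items():
        lines = [f"# {name}", "", f"type: {info['type']}", ""]
        for relation, target in info.get("relations", []):
            # [[Target]] resolves to the note named "Target.md"
            lines.append(f"- {relation}: [[{target}]]")
        (vault / f"{name}.md").write_text("\n".join(lines) + "\n")

# Hypothetical extraction result for two entities from one article.
entities = {
    "Jane Doe": {"type": "person",
                 "relations": [("works_at", "Acme Corp")]},
    "Acme Corp": {"type": "company", "relations": []},
}
write_obsidian_notes(entities, "vault")
```

Opening the resulting folder as a vault shows the two notes connected in the graph view; the brittleness the poster mentions would live in the LLM extraction step that produces `entities`, not in this serialization.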
I always find it better to ask LLMs why an idea is bad and to explain why they think so. Sometimes they might hallucinate stuff, but forcing them to find the negatives is better than asking for an opinion, since I'm guessing the labs found early in training that an agreeable LLM is better received than one which is constantly truthful and considers you to be pretty dumb.
> i am guessing they found early in training that an agreeable LLM is better received than one which is constantly truthful and considers you to be pretty dumb
My sense is that this is sort of accurate, but more likely it's a result of two things:
1. LLMs are still next-token predictors, and they are trained on texts written by humans, who mostly collaborate. Staying on topic is more likely than diverging into a new idea.
2. LLMs are trained via RLHF which involves human feedback. Humans probably do prefer agreeable LLMs, which causes reinforcement at this stage.
So yes, kinda. But I'm not sure it's as clear-cut as "the researchers found humans prefer agreeableness and programmed it in."
This has been my sort of big-tent alignment with AI people: if I'm getting good CLI tooling that _actually works_ (or fixes to existing ones that have been busted forever), then I'm pretty happy.
Things that make systems more understandable to the LLMs ... usually make things more understandable for humans as well. Usually.
The biggest issue I've found is that vibed-up tooling tends to be pretty bad at having the right kind of "sense" for what makes good CLI UX. So you still get awkward argument structures or naming. Better than nothing, though.
It never made sense to me why cars and pedestrians need to share the same spaces. Why can't we have more efficient walking routes that are away from cars?
If you have roads shared by pedestrians and cars (and bikes!), you can build denser cities.
I lived real downtown in Tokyo and my street was like "1.5" lanes wide (if cars were coming in both directions one basically needs to pull over and stop). I could just walk in the middle of the street. There was no sidewalk. No street parking of course. Cars would drive down at 15km/h or whatever, and slow to a crawl if people were in the street.
Straight lines are efficient walking routes, and ... well... that might involve just crossing the street directly! Every layer of grade separation gets in the way of that.
End result of all of this is less pavement to maintain, slower drivers (-> safer!), good walking and cycling conditions, etc etc etc.
I've been thinking the same thing lately. It's sorta frustrating that it required bots to force tech companies to make clean simple cli driven development workflows.
It's wild that it took AI to get half the companies on the planet to actually add reasonably priced APIs to their products, so I don't have to puppeteer every damn thing with a flaky harness.
We must live in different worlds. Even for professionals building high quality apps is hard. It's easier with AI, but it's still quite hard. And it was definitely harder without AI.
> Agents will allow human programmers to get what they've been begging for decades now: proper requirements and flexible, logical, tooling.
...and once this goal is finally reached the programmer will breathe a sigh of relief and then promptly be fired since now the machine can do the job as well as they could.
Making someone think they're an accomplice to torture is itself recognized as a form of psychological torture. Telling someone that they're helping to advance science proves nothing, except that people can be deceived, manipulated, and exploited by bad actors.
Milgram decided to repeat his gross ethical violation 30 times(!), with dozens of test subjects each time. Overall, the majority of people actually disobeyed the orders to continue with higher voltages.
I think the only reason it's become so popular is because it makes for a shocking story, with grandiose implications. The specific "agentic state theory" Milgram invented is not backed up by his data, and personally, I find it philosophically dubious and psychologically concerning that he gravitated to it.
The first point is valid, and I can see it in my own life. I'm not properly rich by any means, but I've vastly surpassed any expectations and most of my peers from earlier in life (which is rather easy when you come from poor Eastern Europe, but somehow most folks from back home didn't; they were too deep in their little comfort zones, or in fears of risks that were mostly made up).
It can be reframed as discipline, more or less: a willingness to suffer a bit for later rewards. I can see this as a massive success multiplier in many real-world situations.
Almost every person I went to college with had this viewpoint. There's also something comforting in knowing you and your friends are all doing the same thing. We were all dirt poor in college, trying to support ourselves with crappy part-time jobs: delivering pizza, working in fast food joints, cleaning offices at night. The idea was that we all believed we were working towards something better than our current situation. The suffering somehow made you a better person, more resilient, and made you understand what it was like to really earn something.
All of my close friends from college went on to do successful things: engineers, attorneys, stockbrokers, software engineers, pharmacists. We all eventually got to where we wanted to be, but the suffering is what still binds us together to this day. Talking about some of the houses we lived in that should've been condemned. Having to work 60 hours a week and still do well on that exam on Friday.
The willingness to suffer is eased when you have a shared experience with others around you.
The great thing is you can just focus on the one person who "worked hard" or "self disciplined" or "studied well" and got rich while ignoring all the other people who did the same thing and didn't.
Working blindly hard is rarely rewarded well; working smart is a much better success story. This applies across the whole job market, but also within white-collar jobs. I saw folks around me almost burn out with little to no reward, when it was fairly clear it would end up that way. I didn't at that point, leaned into stuff in other areas of my life instead, and that worked much better.
I only write about myself and my perspective, have nothing to sell here, just sharing experience. No need to be so dismissive. There is always a factor of luck, but much less so if that approach spans across decades and generally works for me.
I don't think experimental psychology ever validated those extremely simplistic conclusions. I'd say, rather, that these simplistic conclusions are a "folk summary" or mythical version of a few experiments, and that they come from already-existing cultural tropes, tropes that were simplified and made more cruel and ruthless by various self-marketing consultants.
A lot of the problem with these "disproven" things is that their scope was overly broad, or that they were abused in the popular media beyond recognition.
The delayed gratification thing in particular is correlation vs. causation. It was really more about trust. Forcing kids to delay gratification is meaningless or counterproductive.
Agreed. But according to Gemini [for what it's worth], the final 1990 Marshmallow study [the first versions were cautious] did indeed jump to the conclusion that there was causation leading to a better later life. The media might have amplified it, but the wrong (or misleading) conclusion was already present in the _scientific_ paper.
If a scientific paper draws a conclusion, that doesn't mean it's a correct, valid, or properly supported conclusion.
You instead look at the claim, the data, and the experimental methodology. The data often supports something far, far less generalizable or significant than the conclusion section of the paper claims.
The thing about experimental science is that you should not draw strong conclusions from one study or one paper. Those should wait until consensus is reached, until there are many independent studies confirming the same thing under various conditions.
NASA operates as a terminal, bloated monopoly that has completely severed its feedback loops with physical reality in favor of preserving a 25-year-old architectural fantasy. The Orion heat shield is essentially a buggy hardware release being pushed into a mission-critical production environment despite the fact that its own internal telemetry is screaming about a catastrophic failure. By choosing to ignore the spalling and the melted structural bolts, the agency is deliberately discarding the engineering equivalent of core dump data to maintain a schedule that satisfies political optics rather than Newtonian physics.
I like Cryptomator's solution: donate to get a pretty banner.
Also, it didn't work -- Mountain Duck is closed source.
Personally, I donate €50 every now and then, whenever my average donation rate drops below a certain value (which varies by project), but it requires tracking in a spreadsheet.
Actually, it's the wrong question. Implementing rebootless updates is the right ask.
You'll have to reboot like once a month still but it's better than how it is now.
I had to reboot my laptop only once since 02-20, and similarly the month before that. The only exception was around mid-January. So this shouldn't be much worse on average, even now.
That's completely on your IT. There was only one single day with patches released: March 10. And there was only one in the month before that: February 10.
My guess is that the shit-ton of only-for-legal-reasons-useful "security" and surveillance programs demands way more restarts. My company laptop and VM are similar.
I get really annoyed at those articles which advocate that developers sacrifice themselves for a better future.
Companies externalize costs. I refuse to be the one, as an individual, burdened with fixing society's ills to my own detriment.
Tell me to get into politics, join an association, whatever. But, as an individual, lose money for morals? No thank you. I may do it, and probably will, but don't expect it of me. In a society with less and less public services, I have no business harming myself and my family by refusing well-paying jobs.
I will externalise those costs as much as possible. I will raise awareness. I will write letters. But don't ask me to leave a well-paying job -- that's someone else's job to fix.
But that's the problem: your logic applies to everyone in an organization (a business, a family, a country, and so on). An organization's actions are not the result of any single actor's decisions, even if the weights aren't equal. The decisions of an organization are made up of the decisions of the collective, the agglomeration of them. And that's why everyone's decisions matter: because you don't know when your actions carry more weight and when they carry less.
We're all in this together. One way or another, your actions affect others; they aren't made in isolation. Conversely, this is true for others too, and I suspect you would rather others treat you well, right? So which feedback loop do you want to contribute to? That's the only question there is.