Hacker News | blackcatsec's comments

People were worried about deepfakes with AI, but instead the propaganda is doing pretty well, and arguably better, when it's not a deepfake but is instead silly, catchy, youthful, and plays up existing beliefs. The invasion is deeply unpopular in the US, and these videos only serve to amp that up.

Deepfakes were never necessary; people have been making incredible propaganda forever through the same few tactics. For instance, presenting footage out of context.

The deepfakes haven't gotten really good yet. Give it a year.

Deepfakes have been pre-subverted by having leaders who can't put coherent sentences together, or be trusted when they say literally anything.

A Trump deepfake will be just as reliable a guide to Trump policy as actually listening to him speak. Maybe even more reliable than the horse's mouth.


You're right and it's not even hyperbole. Big sigh.

I saw a video of some French politician's speech on Reddit; most commenters were praising the guy's English. It was AI-dubbed...

Invasion?

Ground troops are going to be deployed

That would be a precondition, yes.

They've fallen victim to a catastrophically easy scare tactic, unfortunately. "The sun only shines during the day therefore solar is bad!" Dumb, but easy.

In Toronto there is only daylight for 9 hours in winter

Yes surely some days are cloudy

So some days you get 5% capacity factor, and need some other energy source as well

So it harms the economics of the venture

Look at the profitability of companies building utility-scale solar farms: they cost $100 million, and the company hopes to get a 10% return and pay a 3% dividend.

They still have to contend with moving parts for tracking the angle of the sun, fans on inverters, contactors, clearing snow, mowing grass, site drainage, tornadoes, etc., so sometimes it is not as easy as it sounds.

All for 7%? Why shouldn't they just buy the S&P 500 and call it a day?
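Putting the two comments' numbers together (illustrative figures taken from the thread, not from any real project):

```python
# Rough sketch of the economics described above: a $100M solar farm
# targeting a 10% gross return that pays out a 3% dividend retains
# roughly 7% per year for growth and contingencies.
capex = 100_000_000
gross_return = 0.10
dividend_yield = 0.03

retained = round(capex * (gross_return - dividend_yield))
print(retained)  # ~7,000,000 per year at these assumed figures
```

Whether that 7% beats simply holding an index fund depends on leverage, power-purchase agreements, and tax credits, none of which this toy calculation models.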


And in my experience as someone who is actually trying to DO something, this is exactly right.

But to be clear, it's less about night vs day and more about summer vs winter.


^ This.

I had a 20kW array and 18kWh of batteries in Texas and it was GREAT in the summer. It'd start charging by 6am and be charged by 9am, even with simultaneous usage. Then we'd live off solar for the day (even with HVAC), go back on batteries around 9pm, and they'd be out around 4am. No problem.

But during an overcast winter day, the stack wouldn't get power until 8 or 9am, wouldn't make it to 50%, would start discharging by 4 or 5pm, and would be out by 10 or 11pm. It would easily be 8-10 hours where we were wholly dependent on the grid.

Not a problem, just a constraint to acknowledge and plan for.
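The constraint is easy to put numbers on. A back-of-envelope sketch, with a hypothetical 2 kW average overnight load standing in for the real household draw:

```python
# Hours of autonomy = usable battery energy / average overnight load.
battery_kwh = 18.0        # bank size from the comment above
overnight_load_kw = 2.0   # assumed average draw (HVAC cycling included)

summer_hours = battery_kwh / overnight_load_kw
print(summer_hours)  # 9.0 -> roughly 9pm to 6am on a full charge

# Winter case: an overcast day only charges the bank to ~50%.
winter_hours = (battery_kwh * 0.5) / overnight_load_kw
print(winter_hours)  # 4.5 -> out by mid-evening, then grid-dependent
```

The real numbers depend on depth-of-discharge limits and seasonal load, but the shape of the problem (half the charge, same night) is the same.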


To be fair, though, electricity is usually cheaper at night. So discharging solar charged batteries well into the evening is still a net benefit for the grid (and your wallet).

> I often wonder why tech has so many reductionist, materialist, and quite frankly anti-human, thinkers.

I think it comes from a position of arrogance/ego. I'll speak for the US here, since that's what I know best: the average 'techie' skews toward the higher end of the intelligence distribution. That's a very, very broad stroke, and intentionally so, to illustrate my point. Because of this, techie culture has picked up quite a bit of arrogance toward the masses, and it's been trained into us since childhood, whether by adults praising us for being "so smart," or for having "figured out the VCR," or some other random tech problem that almost any human being could solve by simply reading the manual.

What I've found, in the vast majority of technical problems that average people struggle with, is that if they just took a few minutes to read the manual they'd be able to solve a lot of it themselves. In short, I don't believe, as a very strong techie, that I'm "smarter than most," but rather that I've taken the time to dive into a subject area that most other people feel neither the need nor the desire to explore.

There are objectively hard problems in tech, but the number of people solving THOSE problems in the industry is small. And so the tech industry as a whole has spent the last decade or two spinning in circles on increasingly complex systems to keep feeding its own ego about its own intelligence. We're now at a point where, rather than solving the puzzle, most techies are creating incrementally more complex puzzles to solve because they're bored of the puzzles in front of them. "Let me solve that puzzle by making a puzzle solver." "Okay, now let me make a puzzle-solver creation tool to create puzzle solvers to solve the puzzle." And so on. At the end of the day, you're still just solving a puzzle...

But it's this arrogance that really bothers me in the tech-bro culture. And, more importantly, at least in some tech-bro circles, they've realized that their path to an exponential increase in wealth doesn't lie in creating new and novel ways to solve the same puzzles, but in touting AI as the greatest puzzle-solver-creation-tool puzzle solver known to man (and grifting off it for a little bit).


It's funny because the fundamental thing I'm speaking out against is the arrogance of human exceptionalism.

This whole debate about what it means to be intelligent or human just seems like we're making the same mistakes we've made over and over.

Earth as the center of the universe, sun as the center of the universe, man as the only animal with consciousness and intellect, the anthropomorphic nature of the majority of the deities in our religions and the anthropocentric purpose of the universe within those religions...

I think this desire to believe that we are special, that the universe in some way does ultimately revolve around us, is seemingly a deep need in our psyche but any material analysis of our universe shows that it is extremely unlikely that we hold that position.


The need for human exceptionalism doesn't come from the psyche or anything like that; it's just a basic survival skill. Humans believe themselves to be special because that's the only belief that isn't self-destructive.

You can choose to believe humans are not exceptional, in the same way I can choose to cut off all my fingers and eat them. Why would I do that?

If what you say about LLMs is true, that's bad for me. And for you. And for our families. Because it means our intrinsic value of living just went down a lot. I choose not to believe it because I am not suicidal. And, ultimately, I think the people who do believe it can only ever make their lives worse. Probably my life worse too, but maybe if I'm all the way over here I'll avoid the blast radius.


Sure, I agree it's a valuable survival skill. Being fully consumed by the idea that you are the only thing that matters and is of value, or at least you are at the top of the pyramid of what matters and is of value, is how you survive mortal conflict and competition with others be they animals, humans, future AIs, whatever.

That said, an objective assessment of reality reveals that you are not in fact the only thing that matters, or the thing that matters the most, in this universe. There's no way to argue your life is more valuable than the lives of the other humans you're in competition with that isn't ex post facto rationalization.

I agree that LLMs/AI pose many threats to us. I don't think the intrinsic value of living objectively went down, although the perception of it may be in the process of doing so. I think it's objectively always been what it is; we've held delusions about it that were useful to us, and AI now threatens to shatter some of them.

I've thought a few times lately about beneficial delusion - things like blind, baseless, even counterfactual confidence in yourself are genuinely helpful in so many aspects of life from interpersonal relationships, to sales, to success in business and various pursuits.

You may not actually be the smartest and most capable person but by genuinely believing that you are you will trend towards positive outcomes.

Anyway, I think this belief in our central role in the universe/exceptionalism is another example of this, or as you put it just another useful survival skill.

That said, I do like to stay grounded as much as I can in as objective an assessment of reality as I can muster, otherwise I start to feel unmoored and like I'm going insane.


I largely agree with you, but I also see this same type of thinking in people who I know are not arrogant - at least not in the tech-bro-ish way.


Sure, but this is absolutely not how people are viewing the AI lol.


This is way too simplistic a model of what humans provide to the process: imagination, hypothesis, testing, intuition, and proof.

An AI can probably do an 'okay' job at summarizing information for meta studies. But what it can't do is go "Hey that's a weird thing in the result that hints at some other vector for this thing we should look at." Especially if that "thing" has never been analyzed before and there's no LLM-trained data on it.

LLMs will NEVER be able to do that, because it doesn't exist. They're not going to discover and define a new chemical, or a new species of animal. They're not going to be able to describe and analyze a new way of folding proteins and what implications that has UNLESS you're basically training the AI on random protein folds constantly.


I think you are vastly underestimating the emergent behaviours in frontier foundational models and should never say never.

Remember, the basis of these models is unsupervised training, which, at sufficient scale, gives them the ability to detect pattern anomalies out of context.

For example, LLMs have struggled with generalized abstract problem solving, such as "mystery blocks world" that classical AI planners dating back 20+ years or more are better at solving. Well, that's rapidly changing: https://arxiv.org/html/2511.09378v1


No idea how underestimated things are, but marketing terms like "frontier foundational models" don't help foster trust in a domain this hyper-hyped.

That is, even if there are cool things that LLMs now make more affordable, the level of bullshit marketing attached is also very high, which makes it far harder to build a noise filter.


>Hey that's a weird thing in the result that hints at some other vector for this thing we should look at

Kinda funny because that looked _very_ close to what my Opus 4.6 said yesterday when it was debugging compile errors for me. It did proceed to explore the other vector.


> Especially if that "thing" has never been analyzed before and there's no LLM-trained data on it.

This is the crucial part of the comment. LLMs are not able to solve problems that haven't been solved in that exact or a very similar way already, because they are prediction machines trained on existing data. They are very able to spot outliers of the kind humans have found before, though, which is important, and is what you've been seeing.


> Hey that's a weird thing in the result that hints at some other vector for this thing we should look at

This is very common already in AI.

Just look at the internal reasoning of any high thinking model, the trace is full of those chains of thought.


But just as there were never any clips of Will Smith eating spaghetti before AI, AI is able to synthesize existing data into something in between. It might not be able to expand the circle of knowledge, but it definitely can fill in gaps within the circle itself.


> LLMs will NEVER be able to do that, because it doesn't exist.

I mean, TFA literally claims that an AI has solved an open FrontierMath problem, described as "A collection of unsolved mathematics problems that have resisted serious attempts by professional mathematicians. AI solutions would meaningfully advance the state of human mathematical knowledge."

That is, if true, it reasoned out a proof that does not exist in its training data.


It generated a proof that was close enough to something in its training data to be generated.


That may be, and we can debate the level of novelty, but it is novel, because this exact proof didn't exist before, something which many claimed was not possible with AI. In fact, just a few years ago, based on some dabbling in NLP a decade ago, I myself would not have believed any of this was remotely possible within the next 3 - 5 decades at least.

I'm curious though, how many novel Math proofs are not close enough to something in the prior art? My understanding is that all new proofs are compositions and/or extensions of existing proofs, and based on reading pop-sci articles, the big breakthroughs come from combining techniques that are counter-intuitive and/or others did not think of. So roughly how often is the contribution of a proof considered "incremental" vs "significant"?


Well, for one the proof would have to use actual proof techniques.

What really happened here was that the LLM produced a python script that generated examples of hypergraphs that served as proof by example.

And the only thing that has been verified are these examples. The LLM also produced a lot of mathematical text that has not been analyzed.


I see, thanks for the explanation!


Do you know that from reading the proof, or are you just assuming this based on what you think LLMs should be capable of? If the latter, what evidence would be required for you to change your mind?

- Edit: I can't reply, probably because the comment thread isn't allowed to go too deep, but this is a good argument. In my mind the argument isn't that coding is harder than math, but that the problems had resisted solution by human researchers.


1) This is a proof by example.

2) The proof is conducted by writing a Python program that constructs hypergraphs.

3) The consensus was that this was low-hanging fruit ready to be picked, and tactics for this problem were available to the LLM.

So really this is no different from generating any python program. There are also many examples of combinatoric construction in python training sets.

It's still a nice result, but it's not quite the breakthrough it's made out to be. I think that people somehow see math as a "harder" domain, and are therefore attributing more value to this. But this is a quite simple program in the end.


One of the possible outcomes of this journey is that “LLMs can never do X”. Another is that X is easier than we thought.


Or that some quixotic problems nobody cared enough about to actually work on do have some solution.


This is it right here. I've long thought about this one and whether I should bother with an AI agent that can do all of this stuff for me, but the reality is both what you said and that I'm not rich enough.

Do I want the AI Agent to take my bank account and automatically pay some bill every month in full? What if you go a little over that month due to an emergency expense you weren't prepared for? And it's not a matter of "I don't have enough in my bank account for this one time charge", but it's "I don't have enough in my bank account for this charge and 3 others coming at the end of the month." type deal.

Agents aren't going to be very good at that. "Hey I paid $3,000 on your credit card in order to prevent you from incurring interest. Interest is really bad to carry on a credit card and you should minimize that as much as possible." Me: "Yeah but I needed that money for rent this month." Agent: "Oh, yeah! I should have taken that into account! It looks like we can't reverse the charge for the payment."

Yeah, no fucking thank you LOL.


>Do I want the AI Agent to take my bank account and automatically pay some bill every month in full?

Also this supposed use case is called "Autopay" and requires zero AI. A lot of people still don't use it. Even when it includes a discount!


Could you imagine hitting a REST API and like 25% of the bytes are comments? lol


Worse than that - people will start tagging "this value is a Date" via comments, and you'll need to parse ad-hoc tags in the comments to decode the data. People already do tagging in-band, but at least it's in-band and you don't have to write a custom parser.


See also: PostScript. The document structuring extensions being comments always bothered me. I mean surely, surely in a Turing-complete language there is somewhere to fit document structure information. Adobe: nah, we'll jam it in the comments.

https://dn790008.ca.archive.org/0/items/ps-doc-struc-conv-3/...


Not sure it's a fair comparison. The spec says:

"Use of the document structuring conventions... allows PostScript language programs to communicate their document structure and printing requirements to document managers in a way that does not affect the PostScript language page description"

The idea being that those document managers did not themselves have to be PostScript interpreters in order to do useful things with PostScript documents given to them. Much simpler.

For example, a page imposition program, which extracts pages from a document and places them effectively on a much larger sheet, arranged in the way they need to be for printing 8- or 16- or 32-up on a commercial printing press, can operate strictly on the basis of the DSC comments.

To it, each page of PostScript is essentially an opaque blob that it does not need to interpret or understand in the least. It is just a chunk of text between %%BeginPage and %%EndPage comments.

This is tremendously useful. A smaller scale of two-up printing is explicitly mentioned as an example on p. 9 of the spec.
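A minimal sketch of that kind of tool, keyed only on the `%%Page:` and `%%Trailer` structuring comments (a toy illustration, not a full DSC parser):

```python
def split_pages(ps_text: str):
    """Split a PostScript document into (prolog, pages, trailer) using
    only DSC comments; each page stays an opaque, uninterpreted blob."""
    prolog, pages, trailer = [], [], []
    current = None
    in_trailer = False
    for line in ps_text.splitlines(keepends=True):
        if line.startswith("%%Page:"):
            if current is not None:
                pages.append("".join(current))
            current = [line]                      # start a new page blob
        elif line.startswith("%%Trailer"):
            if current is not None:
                pages.append("".join(current))    # close the last page
                current = None
            in_trailer = True
            trailer.append(line)
        elif in_trailer:
            trailer.append(line)
        elif current is None:
            prolog.append(line)                   # everything before page 1
        else:
            current.append(line)
    if current is not None:
        pages.append("".join(current))
    return "".join(prolog), pages, "".join(trailer)
```

An imposition program can then reorder or regroup the returned page blobs and re-emit them between the same prolog and trailer, without ever running a PostScript interpreter.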


Reminds me of how old versions of .NET used to serialize dates as "\/Date(1198908717056)\/".
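For reference, that wrapper is just a Unix-epoch millisecond count with escaped slashes; a throwaway sketch of decoding it (the regex is ad hoc, which is rather the point about in-band tagging):

```python
import re
from datetime import datetime, timezone

# The old ASP.NET JSON date format: milliseconds since the Unix epoch,
# wrapped in "\/Date(...)\/" so consumers can spot it by pattern-matching.
raw = r"\/Date(1198908717056)\/"

ms = int(re.search(r"Date\((\d+)\)", raw).group(1))
dt = datetime.fromtimestamp(ms / 1000, tz=timezone.utc)
print(dt.isoformat())  # a timestamp in late December 2007 (UTC)
```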


> Could you imagine hitting a rest api and like 25% of the bytes are comments? lol

That's pretty much what already happens. Getting a numeric value like "120" by serializing it through JSON takes three bytes. Getting the same value through a less flagrantly wasteful format would take one.

I guess that's more than 25%. In the abstract ASCII integers are about 50% waste. ASCII labels for the values you're transferring are 100% waste; those labels literally are comments.

If you're worried about wasting bandwidth on comments, JSON shouldn't be a format you ever consider, for any purpose.

lol
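The overhead is easy to measure; a quick sketch (the key name is an arbitrary stand-in, not from any real API):

```python
import json
import struct

# "120" as JSON is three ASCII bytes; as a raw unsigned byte it is one.
value = 120
as_json = json.dumps(value).encode()   # b"120"
as_binary = struct.pack("B", value)    # b"\x78"

print(len(as_json), len(as_binary))    # 3 1

# The label is pure metadata: with default separators, 16 bytes on the
# wire carry 1 byte of payload.
record = {"retries": value}
print(len(json.dumps(record).encode()))  # 16
```

So by the time you're sending JSON at all, the marginal cost of comments is the least of your bandwidth worries.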


HTML and JS both have comments, I don't see the problem


And both are poor interchange formats. When things stay in their lane, there is no "problem." When you try to make an interchange format using a language with too many features, or comments that people abuse to add parsable information (e.g. "type information") then there is a BIG problem.


« HTML is a poor interchange format. » - quote of the century -


It caused all kinds of problems, though those tend to be more directly traceable to the "be liberal in what you accept" ethos than to the format per se.


Likewise. I once got pulled over by the police because they insisted that my license plate had been turned in and I was driving without valid plates.

They called other officers, ran the plate, ran the VIN, ran the plate, ran the VIN. I dunno I think we sat there for almost an hour before they told me why they pulled me over and what was up.


While I'll make no judgment on whether or not she is telling the truth - the article itself isn't enough to validate that - I'll comment more on the comments in this thread.

At what point is automated enforcement of law-breaking a good or a bad thing? We have yet to grapple with that as a society, and the short answer is that there's no easy answer. Both for precisely the reason this article calls out (that the overnight location of a car is not a 100% accurate proxy for residency, and fixing an error seems like a mess), but also because people ARE inherently selfish and REALLY do not like the rules applying to them equally.

A great many people in the United States, particularly white people (sorry, I'm going to bring race into this because it's important), enjoy some flexibility in which laws they follow and when. Certainly more flexibility than the average Black American experiences. In fact, this problem is so bad that states like California have had to institute policies deprioritizing stops for things like a license plate light being out, because the profiling is so catastrophically bad that it's completely unfair.

So now, we have an automated system that at least tries to provide some level of fair enforcement. At least for now, things like speed cameras, red light cameras, license plate readers, etc. don't appear to openly consider racial bias in the immediate decision making process on whether the law is enforced or not. (There are other biases, of course, and even indirect bias with regards to where these things are placed, but I'll digress a bit here).

But even aside from the racial divide, the class divide on enforcement is a problem. And the upper classes have generally enjoyed a level of insulation from complying with laws, which just continues to go up the higher you climb (See: Epstein files). But that's on the more extreme end.

At any rate, better enforcement of laws that are now crossing the lower to middle class divide because automation allows us to do so is certainly an interesting social problem.


Is putting a bunch of red light cameras in a black neighborhood to catch and fine red-light runners an anti-black policy because it imposes automatic punishment on black drivers who are running red lights? Or pro-black because it helps secure the safety of black pedestrians who deserve not to have people breaking traffic laws around them? What if it turns out that even though the neighborhood is black the car traffic on that street has a greater percentage of non-black drivers than the neighborhood population? What if it turns out that black people run red lights at a rate much higher than other races everywhere in the country, so no matter where you put up red light cameras it will always catch and fine a disproportionate number of black drivers?

Regardless of whether you approve or disapprove of automatic red light cameras, you can construct an argument that either having them or not having them is the policy that is actually racist against blacks.

More generally, whether automated law enforcement is good or bad depends highly on how good or bad the law is, which people legitimately disagree about; and also how reliable the automatic enforcement is.


To be fair, the first point is a good one. But I'd argue that you should deploy them everywhere in order not to be racist, since we already generally know that red light cameras are revenue-generating devices. Is there data on whether they increase safety? Preferably unbiased data (probably not). Unsure.

Nonetheless, a fair point that deserves analysis. (My vote, to be fair, is ask the community what they want and put it up to a vote. With honest information on safety data versus revenue generation)


What are the boundaries of the community that votes? What if the racial demographics of that community have changed recently, in ways that affect how the vote turns out? What if some people in that community are aware of these voting patterns and explicitly bring up race when engaging in public discussion about the merits or demerits of the red-light-camera-policy, because it's important? What if they try to change the boundaries of the voting district in order to include/exclude more people who they think will vote with/against them on the red-light-camera issue, in ways that highly correlate with race?


I hadn't considered this so eloquently with LLM text output, but you're right. "LLMs make everything sound profound" and "well-written bullshit".

This has severe ramifications for internet communications in general on forums like HN and others, where it seems LLM-written comments are sneaking in pretty much everywhere.

It's also very, very dangerous :/ Because the structure of the writing falsely implies authority and trust where none is warranted.

