
Personally I believe static allocation has pretty huge consequences for theoretical computer science.

It’s the only kind of program that can be actually reasoned about. Also, not exactly Turing complete in the classic sense.

Makes my little finitist heart get warm and fuzzy.


I'm not an academic, but all those ByteArray linked lists have me feeling like this is less "static allocation" and more "I re-implemented a site-specific allocator and all that that implies".

Also it's giving me flashbacks to LwIP, which was a nightmare to debug when it would exhaust its preallocated buffer structures.


This is still a more dependable approach under resource constraints. Fragmentation is eliminated and you can monitor pool usage against the worst case. The only other risk here versus true static allocation is a memory leak, which can be guarded against with suitable modern language design.

LwIP's buffers get passed around across interrupt handler boundaries, in and out of various queues. That's what makes it hard to reason about. The allocation strategy itself is still sound when you can't risk using a heap; a rough sketch of the idea is below.
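
To make that concrete, here is a minimal sketch of such a fixed pool in C. The names (pbuf_t, POOL_SIZE, the watermark counters) are made up for illustration, not taken from LwIP; the point is that all memory is reserved up front, exhaustion is reported in exactly one place, and a low watermark tells you the worst case you have actually hit:

    #include <stddef.h>
    #include <stdint.h>

    #define POOL_SIZE 32

    typedef struct pbuf { struct pbuf *next; uint8_t payload[256]; } pbuf_t;

    static pbuf_t  pool_storage[POOL_SIZE];        /* reserved at build time */
    static pbuf_t *free_list;
    static size_t  pool_free_count;
    static size_t  pool_low_watermark = POOL_SIZE; /* worst case observed */

    void pool_init(void) {
        free_list = NULL;
        pool_free_count = 0;
        for (size_t i = 0; i < POOL_SIZE; i++) {
            pool_storage[i].next = free_list;
            free_list = &pool_storage[i];
            pool_free_count++;
        }
    }

    pbuf_t *pool_alloc(void) {
        if (!free_list) return NULL;      /* exhaustion handled in one place */
        pbuf_t *b = free_list;
        free_list = b->next;
        if (--pool_free_count < pool_low_watermark)
            pool_low_watermark = pool_free_count;
        return b;
    }

    void pool_free(pbuf_t *b) {
        b->next = free_list;
        free_list = b;
        pool_free_count++;
    }

A real version used across interrupt boundaries would also have to protect the free list (disable interrupts or use atomics), which is exactly the part that made LwIP painful to debug.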


Personally, I see dynamic allocation more and more as a premature optimization and a historical wart.

We used to have very little memory, so we developed many tricks to handle it.

Now we have all the memory we need, but the tricks remained. They are now more harmful than helpful.

Interestingly, embedded programming has a reputation for stability and AFAIK game development is also more and more about avoiding dynamic allocation.


Also not a game dev, but my understanding is that there are a lot of in-memory objects whose lifetimes are tied to specific game-time entities, like a frame, an NPC, the units of the octree/BSP corresponding to where the player is, etc.

Under these conditions, you do need a fair bit of dynamism, but the deallocations can generally be in big batches rather than piecemeal, so it's a good fit for slab-type systems.
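
A minimal sketch of that batch-deallocation pattern, assuming a per-frame arena with a capacity chosen up front (the names arena_t and frame_arena are illustrative, not from any particular engine):

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        uint8_t *base;
        size_t   capacity;
        size_t   used;
    } arena_t;

    void *arena_alloc(arena_t *a, size_t size) {
        size = (size + 15) & ~(size_t)15;        /* keep 16-byte alignment */
        if (a->used + size > a->capacity) return NULL;
        void *p = a->base + a->used;
        a->used += size;
        return p;
    }

    void arena_reset(arena_t *a) {               /* frees everything at once */
        a->used = 0;
    }

    static uint8_t frame_memory[1 << 20];
    static arena_t frame_arena = { frame_memory, sizeof frame_memory, 0 };

    void game_frame(void) {
        /* arena_alloc(&frame_arena, ...) freely for per-frame scratch data */
        arena_reset(&frame_arena);               /* one batched deallocation */
    }

The per-NPC or per-level variants are the same idea with a different reset point.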


I think most software is like this if you sit and reason about the domain model long enough. It's just easier to say "fuck it" and allocate each individual object on its own with a lifetime of ???.

Also, it's easier to refactor if you use the typical GC allocation patterns. Because you have a million different lifetimes and nobody actually knows them (except the GC, kind of), it doesn't matter if you dramatically move stuff around. That has pros and cons, I think. It makes it very unclear who is actually using what and why, but it does mean you can change code quickly.


> AFAIK game development is also more and more about avoiding dynamic allocation.

That might have been the case ~30 years ago on platforms like the Gameboy (PC games were already starting to use C++ and higher level frameworks) but certainly not today. Pretty much all modern game engines allocate and deallocate stuff all the time. UE5's core design with its UObject system relies on allocations pretty much everywhere (and even in cases where you do not have to use it, the existing APIs still force allocations anyway) and of course Unity using C# as a gameplay language means you get allocations all over the place too.


Precisely because C# uses a GC, it is common to just allocate everything in a chunk up front so you don't trigger the GC later.

Aka you minimize allocations in gameplay.


This is far from common in practice and it is only applied sporadically. Something like allocating formatted strings for the HUD is IME much more common (and done in UE5/C++ too, so not even a C# forcing GC excuse).

> It’s the only kind of program that can be actually reasoned about.

Theoretically infinite memory isn't really the problem with reasoning about Turing-complete programs. In practice, the inability to guarantee that any program will halt still applies to any system with enough memory to do anything more than serve as an interesting toy.

I mean, I think this should be self-evident: our computers already do have finite memory. Giving a program slightly less memory to work with doesn't really change anything; you're still probably giving that statically-allocated program more memory than entire machines had in the 80s, and it's not like the limitations of computers in the 80s made us any better at reasoning about programs in general.


Yes, but allocations generate an ever-increasing combinatorial space of possible failure modes.

Static allocation requires you to explicitly handle overflows, but by centralizing them you probably don't need as many handlers.

Technically, all of this can be done in a language with dynamic allocation as well. It's just that you can't force the behavior.


Sure, but let's be clear: it's a tradeoff. If every program reserved as much memory at startup as needed to service 100% of its theoretically-anticipated usage, the number of programs we could run in parallel would be drastically reduced. That is to say, static allocation makes OOM conditions dramatically more likely by its very nature, because programs are greedily sitting on unused memory that could be doled out to other processes.

You don't need to go balls to the wall and allocate 100% upfront. The typical split we see is either "allocate all the things" or "allocate every object, even if it's 16 bytes and lives for 100 microseconds".

Most programs have logical splits where you can allocate. A spreadsheet might allocate every page when it's created, or a browser every tab. Or a game every level. We can even go a level deeper if we want. Maybe we allocate every sheet in a spreadsheet, but in 128x128 cell chunks. Like Minecraft.
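
A minimal sketch of that chunked middle ground for the spreadsheet case, assuming 128x128-cell chunks and a toy cell type (all names here are illustrative):

    #include <stdlib.h>

    #define CHUNK_DIM 128
    #define CHUNKS_X  64          /* sheet capped at 8192 x 8192 cells */
    #define CHUNKS_Y  64

    typedef struct { double value; } cell_t;
    typedef struct { cell_t cells[CHUNK_DIM][CHUNK_DIM]; } chunk_t;

    typedef struct {
        chunk_t *chunks[CHUNKS_Y][CHUNKS_X];     /* NULL until first touched */
    } sheet_t;

    /* row and col are assumed to be non-negative and in range */
    cell_t *sheet_cell(sheet_t *s, int row, int col) {
        int cy = row / CHUNK_DIM, cx = col / CHUNK_DIM;
        if (!s->chunks[cy][cx]) {
            /* one allocation covers 16384 cells, not one per cell */
            s->chunks[cy][cx] = calloc(1, sizeof(chunk_t));
            if (!s->chunks[cy][cx]) return NULL;
        }
        return &s->chunks[cy][cx]->cells[row % CHUNK_DIM][col % CHUNK_DIM];
    }

Allocations still happen, but at a handful of predictable, coarse-grained points instead of per object.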


> It’s the only kind of program that can be actually reasoned about.

What do you mean? There are loads of formal reasoning tools that use dynamic allocation, e.g. Lean.


i think you mean "exactly not Turing complete"

Nice correction :)

It’s actually quite tricky though. The allocation still happens, it's just not unlimited, so you could plausibly argue both ways.


I’m confused. How is a program that uses static allocation not Turing complete?

A Turing machine has an unlimited tape. You can’t emulate it with a fixed amount of memory.

It’s mostly a theoretical issue, though, because all real computer systems have limits. It’s just that in languages that assume unlimited memory, the limits aren’t written down. It’s not “part of the language.”


If we get REALLY nitpicky, Zig currently (but not in the future) allows unbounded function recursion, which "theoretically" assumes unlimited stack size, so it's potentially "still technically theoretically Turing complete". For now.
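
For what it's worth, the same nitpick can be shown in plain C (used here just for shape, this is not Zig): no dynamic allocation anywhere, yet each recursive call stashes one more piece of input in a stack frame, so storage is bounded only by the stack, which the abstract model treats as unlimited:

    #include <stdio.h>

    /* Prints a line of input reversed. Each frame holds one character;
       nothing is declared with a fixed total size. */
    static void reverse_input(void) {
        int c = getchar();
        if (c == EOF || c == '\n') return;
        reverse_input();       /* recurse first, print on the way back out */
        putchar(c);
    }

    int main(void) {
        reverse_input();
        putchar('\n');
        return 0;
    }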

What about IO? Just because I have a statically allocated program with a fixed amount of memory doesn’t mean I can’t do IO. My fixed memory can just be a cache / scratchpad and the unlimited tape can work via IO (disk, network, etc).

Yes, good point.

It's not, if it can do IO to network/disk..?

Technically, your computer is not Turing Complete because it does not have access to infinite memory. Technically, once all the input has been given to a program, that program is a finite state automaton.

That "once all the input has been given to the program" is doing a bit of heavy lifting since we have a number of programs where we have either unbounded input, or input modulated by the output itself (e.g., when a human plays a game their inputs are affected by previous outputs, which is the point after all), or other such things. But you can model all programs as their initial contents and all inputs they will ever receive, in principle if not in fact, and then your program is really just a finite state automaton.

Static allocation helps make it more clear, but technically all computers are bounded by their resources anyhow, so it really doesn't change anything. No program is Turing complete.

The reason why we don't think of them this way is severalfold, but probably the most important is that the toolkit you get with finite state automata doesn't apply well in our real universe to real programs. The fact that, mathematically, all programs can in fact be proved to halt or not in finite time by simply running them until they either halt or a full state of the system is repeated is not particularly relevant to beings like us who lack access to the requisite exponential space and time resources necessary to run that algorithm for real. The tools that come from modeling our systems as Turing Complete are much more practically relevant to our lives. There's also the fact that if your program never runs out of RAM, never reaches for more memory and gets told "no", it is indistinguishable from running on a system that has infinite RAM.

Technically, nothing in this universe is Turing Complete. We have an informal habit of referring to things that "would be Turing Complete if extended in some reasonably obvious manner to be infinitely large" as simply being Turing Complete even though they aren't. If you really, really push that definition, the "reasonably obvious manner" can spark disagreements, but generally all those disagreements involve things so exponentially large as to be practically irrelevant anyhow and just be philosophy in the end. For example, you can't just load a modern CPU up with more and more RAM, eventually you would get to the point where there simply isn't enough state in the CPU to address more RAM, not even if you hook together all the registers in the entire CPU and all of its cache and everything else it has... but such an amount of RAM is so inconceivably larger than our universe that it isn't going to mean anything practical in this universe. You then get into non-"obvious" ways you might extend it from there, like indirect referencing through other arbitrarily large values in RAM, but it is already well past the point where it has any real-world meaning.


> It’s the only kind of program that can be actually reasoned about.

No. That is one restriction that allows you to theoretically escape the halting problem, but not the only one. Total functional programming languages for example do it by restricting recursion to a weaker form.

Also, more generally, we can reason about plenty of programs written in entirely Turing complete languages/styles. People keep mistaking the halting problem as saying that we can never successfully do termination analysis on any program. We can, on many practical programs, including ones that do dynamic allocations.

Conversely, there are programs that use only a statically bounded amount of memory for which this analysis is entirely out of reach. For example, you can write one that checks the Collatz conjecture for the first 2^1000 integers that only needs about a page of memory.
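
For illustration, here is roughly what such a program looks like, scaled down from 2^1000 to 64-bit starting values so it fits in plain C (the GCC/Clang __int128 extension is my shortcut for headroom; the full-size version would use a fixed-width bignum and still only need a page or so of state):

    #include <stdint.h>

    /* Follows one Collatz orbit; returns 1 when it reaches 1.
       Never returns if the orbit never gets there. */
    static int collatz_reaches_one(uint64_t start) {
        unsigned __int128 n = start;     /* headroom so 3n+1 can't overflow */
        while (n != 1) {
            if (n % 2 == 0) n /= 2;
            else            n = 3 * n + 1;
        }
        return 1;
    }

    int main(void) {
        /* Constant, statically bounded memory footprint, yet whether the
           2^1000 version of this loop always terminates is exactly the
           kind of question the comment above calls out of reach. */
        for (uint64_t n = 1; n != 0; n++)   /* n runs over 1 .. 2^64 - 1 */
            collatz_reaches_one(n);
        return 0;
    }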


Common misconception IMO.

High marginal taxes and high inheritance taxes do not affect the rich - they eliminate competition for them.

I do agree on antitrust and antimonopoly though.


If they don't affect the rich, why have the rich spent so much time, effort, and money eroding such taxes over decades?


Eroding them is beneficial to other groups of society, not the rich.

It's like with corporations. Corporations love complex legal systems, as they are the only ones with money to deal with them. Simplification actually benefits smaller enterprises.


How are high marginal taxes and high inheritance taxes not simple?

If complexity is the problem then close the loopholes that let people get out of this.

America was not supposed to be a country of monarchs and wealthy dynasties, and high inheritance taxes helped towards that goal.


Because only poor people need income. If you have enough assets, income is optional.


How come they eliminate competition? What if inheritance taxes are progressive?


Yeah, it seems like most people assume there is a reach and scope of taxation that isn't really possible. Wealth can be expatriated, it can be in non-fungible objects (paintings, &c), it can be in goods held in common such that no transfers occur (for example, a house that people live in together and jointly own).

There isn't anywhere an index or lookup table of all legal rights a particular person has to wealth (or, in truth, to "things", since anything can be worth something and contribute to wealth). There are things they may have a right to that they don't even know about.


They can hide wealth (at their own risk), but it prevents them from extracting money from the country:

They cannot own houses, factories, monopolistic contracts or media. It makes it harder to influence politics (in a legal way).

The housing issue is especially important because city space is limited and the demand is very inelastic.


I am not talking about hiding wealth. How do you find all of a person's wealth in a principled way? There isn't a central clearinghouse of this information.

People can own houses, factories, &c, in indirect ways, or in other jurisdictions, and these are all basically legal and make it hard to say what, exactly, people own.

The easiest people to tax are people whose inflows are simple wage income, who own a house and a car in their own country, and don't have a business. In other words, ordinary people. They make up the bulk of the financial activity in a country and the bulk of the tax revenues (most of the time).

It is easy to imagine that the way to capture greater tax revenue from wealthy people is simply to scale this system up -- tax the wealthy people more on their income, their expensive car, &c. However, wealthy people are also wealthy in structurally different ways from ordinary people.


Money is important as a vector for power. It doesn't matter that much whether a person has a bunch of paintings in a Swiss vault when they're an institutional investor directing a substantial sector of the economy. And that industrial power is relatively easy to divest them of, as compared to vault paintings.


That's true, but most of those can be cracked down on simply by saying that any undeclared wealth is forfeit. Also, the great proportion of most rich people's actual wealth is in forms that are easier to trace (e.g., shares of corporations, real estate).


There is no country where a person has to declare all their possessions or they are otherwise forfeit. That is transparently bad policy. Possessions are one important basis of wealth.

This is, I think, another example of people's intuitions about tracking wealth just not being very robust.


The problem does not lie in not declaring wealth.


I'd be curious to know more, this is quite unintuitive


Generally, there are no systems that are 100% bulletproof. This applies to everything. So, the more power you have, the more likely you are to exploit the existing loopholes.

Who is actually affected? Those less powerful. A progressive tax system hits the middle class (the actual middle class, la petite bourgeoisie, not the modern bullshit redefinition of the term) hardest, making it harder for them to get rich and compete with the actually rich.

As a result, the rich protect inheritance with trusts and avoid taxes by not having income (plenty of tricks available with borrowing), while people like doctors, lawyers and small business owners fund the state and hit hard limits on what they can make.

Don't believe me? Check how much of the tax income comes from the top brackets. You may be surprised. Pro tip: the system is very skewed to the top.


If the problem is that the system is very skewed to the top, then isn't the solution to be found in addressing that skew? In closing those particular loopholes?

Shouldn't everyone pay their fair share of taxes? Warren Buffett and others seem to think that they should.


You don’t get it. The tax system is already very skewed to the top, as in the majority of the tax income comes from a few.

The problem is that the top paying those taxes are not the rich people.


Exactly. Income taxes hit those with high income like baseball players and doctors and such.

Super-wealthy can take the time to figure out ways to have no income.

And if you do inheritance taxes and similar things, you get more “universities” and other non-profit “finagling”.

IKEA is an example.


If no system is bulletproof then you're not really arguing against progressive tax, the same way "there will always be murderers" is not an argument against policing.


Honestly, I have been excited about Zig for quite a while; I dabbled a bit a while back and was waiting for it to get closer to 1.0 before actually doing a deep dive... but that moment doesn't seem to come.

I don't mind, it's up to the maintainers how they want to proceed. However, I would greatly appreciate it if Zig news were a bit clearer on what's happening, timelines, etc.

I think it would take relatively little time to do so, but the optics would be so much better.


My biggest gripe with your reasoning is a hidden assumption that everything humans do is easily encodable in what we call AI today.

I don’t think this is the case.


AI and technology is already replacing jobs.

The way this manifests isn’t mass layoffs after an AI is implemented, it’s fewer people being hired at any given scale because you can go further with fewer people.

Companies making billions in revenue with under 10k employees, some under 5k or even under 1k.

This is absorbed by there being more and more opportunities because the cost of starting a new company and getting revenue decreases too as labor productivity increases.

Jobs that would otherwise exist get replaced. Jobs at companies that otherwise wouldn’t exist get created.

And in the long run until it’s just unprofitable to employ humans (when the max their productivity is worth relative to AI falls below a living wage), humans will continue working side by side with AGI as even relatively unproductive workers (compared to AI) will still be net productive.


> AI and technology is already replacing jobs

I don’t think this is true. I think CEOs are replacing people on the assumption that AI will be able to replace their jobs. But I don’t think AIs are able to replace any jobs other than heavily scripted ones like front-line customer support… maybe.

I think AI can automate some tasks with supervision, especially if you’re okay with mediocre results and don’t need to spend a lot of time verifying its work. Stock photography, for example.

But to say AI is replacing jobs, I think you’d need to be specific about what jobs and how AI is replacing them… other than CEOs following the hype, and later backtracking.


> (when the max their productivity is worth relative to AI falls below a living wage), humans will continue working side by side with AGI as even relatively unproductive workers

This assumes that humans will be unwilling to work if their wage is below a living wage. It depends on the social programs of the government, but if there are none, or only very bad ones, people will probably be more desperate and thus more willing to work in even the cheapest jobs.

So in this overabundance of human labor world, the cost of human labor might be much closer to zero than living wage. It all depends on how desperate to find work government policy will make humans.


We can't prove why people are being replaced, and the people who claimed to have replaced people with AI don't have a lot of good outcomes. Now there is some success, but it is often bespoke to that environment. So your reasoning would be sound if the premise were. We need more information.


I've seen AI replacing a lot of jobs already in the regulatory/consultancy business, which makes billions. A lot of people producing paperwork for regulatory and similar purposes have been replaced by language models. My question: should this business really exist at all?


> because you can go further with fewer people

Can you though? From my experience this is just wishful thinking. I have yet to see actual productivity gains from AI that would objectively justify hiring less or laying people off.


This is pretty obvious when you know what to look for.

How many people did it take to build the pyramids? Now how many would it take today?

Look at revenue per head and how it’s trended

Look at how much AUM has flowed into asset management while headcount has flatlined


How about a concrete example. What jobs at Bank of America will humans have?

I cannot imagine a scenario, other than complete model stagnation, that would lead to the current workforce there of 213,000 people still having jobs to do.


Also it’s such a strawman to assume that “absolutely everything” would need to be covered by these systems for office jobs to be basically eliminated. In this context they only need to do enough to make hiring humans pointless at most businesses. Even if, say, you need some strategically important humans for a long time, I don’t see an incoming world where huge job loss doesn’t happen. David filing HR paperwork at Big Corp is not suddenly going to be doing strategy work.

Like it’s a strawman to assume I’m arguing that your nanny or the local firefighters are going to be replaced by an AI soon.

And it also seems like people expect some normative assumption when talking about job loss. I’m not making a normative claim, nor a policy one, just pointing out that it seems stupid not to prepare for or expect this to happen.


It will be encodable in what we call AI tomorrow


Except not literally tomorrow, of course. So you might as well say 1 million years from now...


What can be asserted without proof, can be dismissed without proof.

The proof burden is on AI proponents.


It's more that "thinking" is a vague term that we don't even understand in humans, so for me it's pretty meaningless to claim LLMs think or don't think.

There's this very cliched comment to any AI HN headline which is this:

"LLM's don't REALLY have <vague human behavior we don't really understand>. I know this for sure because I know both how humans work and how gigabytes of LLM weights work."

or its cousin:

"LLMs CAN'T possibly do <vague human behavior we don't really understand> BECAUSE they generate text one character at a time UNLIKE humans who generate text one character a time by typing with their fleshy fingers"


To me, it's about motivation.

Intelligent living beings have natural, evolutionary inputs as motivation underlying every rational thought. A biological reward system in the brain, a desire to avoid pain, hunger, boredom and sadness, seek to satisfy physiological needs, socialize, self-actualize, etc. These are the fundamental forces that drive us, even if the rational processes are capable of suppressing or delaying them to some degree.

In contrast, machine learning models have a loss function or reward system purely constructed by humans to achieve a specific goal. They have no intrinsic motivations, feelings or goals. They are statistical models that approximate some mathematical function provided by humans.


Are any of those required for thinking?


In my view, absolutely yes. Thinking is a means to an end. It's about acting upon these motivations by abstracting, recollecting past experiences, planning, exploring, innovating. Without any motivation, there is nothing novel about the process. It really is just statistical approximation, "learning" at best, but definitely not "thinking".


Again the problem is that what "thinking" is totally vague. To me if I can ask a computer a difficult question it hasn't seen before and it can give a correct answer, it's thinking. I don't need it to have a full and colorful human life to do that.


But it's only able to answer the question because it has been trained on all text in existence written by humans, precisely with the purpose to mimic human language use. It is the humans that produced the training data and then provided feedback in the form of reinforcement that did all the "thinking".

Even if it can extrapolate to some degree (although that's where "hallucinations" tend to become obvious), it could never, for example, invent a game like chess or a social construct like a legal system. Those require motivations like "boredom", "being social", or having a "need for safety".


Humans are also trained on data made by humans.

> it could never, for example, invent a game like chess or a social construct like a legal system. Those require motivations like "boredom", "being social", having a "need for safety".

That's creativity which is a different question from thinking.


> Humans are also trained on data made by humans

Humans invent new data, humans observe things and create new data. That's where all the stuff the LLMs are trained on came from.

> That's creativity which is a different question from thinking

It's not really though. The process is the same or similar enough don't you think?


I disagree. Creativity is coming up with something out of the blue. Thinking is using what you know to come to a logical conclusion. LLMs so far are not very good at the former but getting pretty damn good at the latter.


> Thinking is using what you know to come to a logical conclusion

What LLMs do is using what they have _seen_ to come to a _statistical_ conclusion. Just like a complex statistical weather forecasting model. I have never heard anyone argue that such models would "know" about weather phenomena and reason about the implications to come to a "logical" conclusion.


I think people misunderstand when they see that it's a "statistical model". That just means that out of a range of possible answers, it picks in a humanlike way. If the logical answer is the humanlike thing to say then it will be more likely to sample it.

In the same way a human might produce a range of answers to the same question, so humans are also drawing from a theoretical statistical distribution when you talk to them.

It's just a mathematical way to describe an agent, whether it's an LLM or human.


I dunno man if you can't see how creativity and thinking are inextricably linked I don't know what to tell you

LLMs aren't good at either, imo. They are rote regurgitation machines, or at best they mildly remix the data they have in a way that might be useful

They don't actually have any intelligence or skills to be creative or logical though


They're linked but they're very different. Speaking from personal experience, it's a whole different task to solve an engineering problem that's been assigned to you, where you need to break it down and reason your way to a solution, vs. coming up with something brand new like a song or a piece of art where there's no guidance. It's just a very different use of your brain.


I guess our definition of "thinking" is just very different.

Yes, humans are also capable of learning in a similar fashion and imitating, even extrapolating from a learned function. But I wouldn't call that intelligent, thinking behavior, even if performed by a human.

But no human would ever perform like that, without trying to intuitively understand the motivations of the humans they learned from, and naturally intermingling the performance with their own motivations.


Thinking is better understood than you seem to believe.

We don't just study it in humans. We look at it in trees [0], for example. And whilst trees have distributed systems that ingest data from their surroundings, and use that to make choices, it isn't usually considered to be intelligence.

Organizational complexity is one of the requirements for intelligence, and an LLM does not reach that threshold. They have vast amounts of data, but organizationally, they are still simple - thus "ai slop".

[0] https://www.cell.com/trends/plant-science/abstract/S1360-138...


Who says what degree of complexity is enough? Seems like deferring the problem to some other mystical arbiter.

In my opinion AI slop is slop not because AIs are basic but because the prompt is minimal. A human went and put minimal effort into making something with an AI and put it online, producing slop, because the actual informational content is very low.


> In my opinion AI slop is slop not because AIs are basic but because the prompt is minimal

And you'd be disagreeing with the vast amount of research into AI. [0]

> Moreover, they exhibit a counter-intuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget.

[0] https://machinelearning.apple.com/research/illusion-of-think...


This article doesn't mention "slop" at all.


But it does mention that prompt complexity is not related to the output.

It does say that there is a maximal complexity that LLMs can have - which leads us back to... Intelligence requires organizational complexity that LLMs are not capable of.


This seems backwards to me. There's a fully understood thing (LLMs)[1] and a not-understood thing (brains)[2]. You seem to require a person to be able to fully define (presumably in some mathematical or mechanistic way) any behaviour they might observe in the not-understood thing before you will permit them to point out that the fully understood thing does not appear to exhibit that behaviour. In short you are requiring that people explain brains before you will permit them to observe that LLMs don't appear to be the same sort of thing as them. That seems rather unreasonable to me.

That doesn't mean such claims don't need to made as specific as possible. Just saying something like "humans love but machines don't" isn't terribly compelling. I think mathematics is an area where it seems possible to draw a reasonably intuitively clear line. Personally, I've always considered the ability to independently contribute genuinely novel pure mathematical ideas (i.e. to perform significant independent research in pure maths) to be a likely hallmark of true human-like thinking. This is a high bar and one AI has not yet reached, despite the recent successes on the International Mathematical Olympiad [3] and various other recent claims. It isn't a moved goalpost, either - I've been saying the same thing for more than 20 years. I don't have to, and can't, define what "genuinely novel pure mathematical ideas" means, but we have a human system that recognises, verifies and rewards them so I expect us to know them when they are produced.

By the way, your use of "magical" in your earlier comment, is typical of the way that argument is often presented, and I think it's telling. It's very easy to fall into the fallacy of deducing things from one's own lack of imagination. I've certainly fallen into that trap many times before. It's worth honestly considering whether your reasoning is of the form "I can't imagine there being something other than X, therefore there is nothing other than X".

Personally, I think it's likely that to truly "do maths" requires something qualitatively different to a computer. Those who struggle to imagine anything other than a computer being possible often claim that that view is self-evidently wrong and mock such an imagined device as "magical", but that is not a convincing line of argument. The truth is that the physical Church-Turing thesis is a thesis, not a theorem, and a much shakier one than the original Church-Turing thesis. We have no particularly convincing reason to think such a device is impossible, and certainly no hard proof of it.

[1] Individual behaviours of LLMs are "not understood" in the sense that there is typically not some neat story we can tell about how a particular behaviour arises that contains only the truly relevant information. However, on a more fundamental level LLMs are completely understood and always have been, as they are human inventions that we are able to build from scratch.

[2] Anybody who thinks we understand how brains work isn't worth having this debate with until they read a bit about neuroscience and correct their misunderstanding.

[3] The IMO involves problems in extremely well-trodden areas of mathematics. While the problems are carefully chosen to be novel they are problems to be solved in exam conditions, not mathematical research programs. The performance of the Google and OpenAI models on them, while impressive, is not evidence that they are capable of genuinely novel mathematical thought. What I'm looking for is the crank-the-handle-and-important-new-theorems-come-out machine that people have been trying to build since computers were invented. That isn't here yet, and if and when it arrives it really will turn maths on its head.


LLMs are absolutely not "fully understood". We understand how the math of the architectures work because we designed that. How the hundreds of gigabytes of automatically trained weights work, we have no idea. By that logic we understand how human brains work because we've studied individual neurons.

And here's some more goalpost-shifting. Most humans aren't capable of novel mathematical thought either, but that doesn't mean they can't think.


We don't understand individual neurons either. There is no level on which we understand the brain in the way we very much do understand LLMs. And as much as people like to handwave about how mysterious the weights are we actually perfectly understand both how the weights arise and how they result in the model's outputs. As I mentioned in [1] what we can't do is "explain" individual behaviours with simple stories that omit unnecessary details, but that's just about desiring better (or more convenient/useful) explanations than the utterly complete one we already have.

As for most humans not being mathematicians, it's entirely irrelevant. I gave an example of something that so far LLMs have not shown an ability to do. It's chosen to be something that can be clearly pointed to and for which any change in the status quo should be obvious if/when it happens. Naturally I think that the mechanism humans use to do this is fundamental to other aspects of their behaviour. The fact that only a tiny subset of humans are able to apply it in this particular specialised way changes nothing. I have no idea what you mean by "goalpost-shifting" in this context.


> And as much as people like to handwave about how mysterious the weights are we actually perfectly understand both how the weights arise and how they result in the model's outputs

We understand it on this low level, but through training LLMs converge to something larger than the weights: there is a structure in those weights that emerged and allows them to perform functions, and that part we do not understand. We just observe it as a black box and experiment at the level of "we put this kind of input into the black box and receive this kind of output".


> We actually perfectly understand both how the weights arise and how they result in the model's outputs

If we knew that, we wouldn't need LLMs; we could just hardcode the same logic that is encoded in those neural nets directly and far more efficiently.

But we don't actually know what the weights do beyond very broad strokes.


The proof burden is on AI proponents.

Why? Team "Stochastic Parrot" will just move the goalposts again, as they've done many times before.


On vodka: Sobieski is not a good brand and "Luksusowa" is fine, but it's made of potatoes.

Personally, I strongly prefer vodkas made of wheat; they tend to be smoother and sit better (and the hangover is, anecdotally, less of a problem).

Also, people who think vodka quality doesn't matter clearly never drank a lot of vodka.


Sobieski is my fav as I prefer rye grain vodka. Russian Standard Platinum was my go to for "clean" tasting vodka.


Recommend some good brands, you have your opportunity here.


Absolut (Swedish) is my go to vodka, as even in Poland it's widely available and good quality.

From Polish brands, black Żubrówka, Ostoya and Chopin are good. Normal Żubrówka (the one with grass) is nice as well, but it's not neutral vodka (recommend with apple juice).

And obviously Belvedere is the fancy brand.


Personally I found Krepkaya the best I have ever tasted. Unfortunately very hard to find in my country these days


Finlandia and Absolut


I call bullshit. This is like saying an average person can be the driving hand for legal documents or medical diagnosis.

The whole point is that as a specialist you vouch for what has been created. Yes, your time moves away from writing code to reviewing it, but it still requires competence to figure out whether what the code is doing is exactly what it is supposed to be doing.


Software is a bit unique in that the vouching process is really worth nothing at all. A licensed structural engineer, attorney, or doctor has professional liability for acts of negligence and malfeasance. The last time I checked, most commercial software is expected to have large numbers of defects. There are some costly products I can think of that are barely fit for purpose, and yet somehow the bad actors responsible for them aren’t sued out of existence or prohibited by law from practice.

I think if the industry trend is toward paying developers to verify or certify programs are logically sound and fit for purpose, then users will be getting a lot more value for the cost of developers’ time.


Reliability is worth everything. You never ever want to do work with unreliable people. If someone is convincingly lying without any incentive, you have to check every single thing they do and this is even more difficult than doing the work yourself.


I agree here. The magic word you’re looking for is liability. The world will always need people to hold accountable for when things don’t meet expectations… and while they seem to be pretty good at holding their own in court, I wouldn’t count on OpenAI’s or MSFT’s legal reps sitting in on your behalf when your chatGPT-food-critic startup, coded with co-pilot, tells someone with a shellfish allergy to “try the shrimp”.


You need to block Siri in it and then it works fine again.


This is a very good point.

I have very similar thoughts after working with Cursor for a month and reviewing a lot of “vibe” code. I see the value of LLMs, but I also see what they don’t deliver.

At the same time, I am fully aware of different skill levels, backgrounds and approaches to work in the industry.

I expect two trends: salaries will become much higher, as individual leverage will continue to grow. At the same time, demand for relatively low-skill work will go to zero.


Not challenging, but undoable. All parties capable of it will never do it.

My (quite pessimistic) prediction for Israel is that the moment the US is busy somewhere else, the neighbors will attack en masse.


Neighbours have always attacked en masse: in 1948 they attacked en masse and lost and a Jewish state was established, in 1967 they attacked en masse and lost and Israel gained Judea and Samaria / West bank, in 1973 they attacked en masse and lost. Plus various combinations of Iran, Hamas and Hezbollah attacking since, and losing.


Dear, past performance does not predict future performance. Israel hasn't done anything to build goodwill; on the contrary, they are increasingly negatively viewed, at least in Europe.

Some wars you lose.


Sure, they’ve signed peace accords with egypt, saudi arabia and jordan, the UAE, Bahrain and Morocco.

Europe has always disliked Jews, they’re just more open about it now that europe is increasingly Islamified.

