Hacker News | killingtime74's comments

Is an LLM logic in weights derived from machine learning?

Well, yes. That's literally what it is.

What what is? The article has nothing to do with LLMs. It even explicitly says they don’t use LLMs.

> Is an LLM logic in weights derived from machine learning?

I was just answering this question. LLM logic in weights is fundamentally from machine learning, so yes. Wasn't really saying anything about the article.


Good one… but is a DB query filter AI? I forgot to say, though: it sounds like a really cool thing to do.

Strictly speaking, expert systems are AI as well, as in, an expert comes up with a bunch of if/else rules. So yes, technically speaking, even if they hand-coded the rules rather than acquiring the weights using ML, it could still be called AI.

It is 100% valid to label an algorithm that plays tic-tac-toe as "AI"

Much of the early AI research was spent on developing various algorithms that could play board games.

Didn't even need computers, one early AI was MENACE [1], a set of 304 matchboxes which could learn how to play noughts and crosses.

[1] https://en.wikipedia.org/wiki/Matchbox_Educable_Noughts_and_...
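MENACE's learning rule is simple enough to sketch in a few lines of Python. This is a toy illustration of the bead-reinforcement idea, not the original 304-box layout; the board state, move numbers, and bead counts below are made up for the example:

```python
import random

# Each "matchbox" maps a board state to a bag of beads, one colour per legal move.
# MENACE learns by adjusting bead counts after each game.
boxes = {}

def choose_move(state, legal_moves):
    """Draw a bead at random: moves with more beads are more likely."""
    beads = boxes.setdefault(state, {m: 3 for m in legal_moves})  # start with 3 beads per move
    bag = [m for m, n in beads.items() for _ in range(n)]
    return random.choice(bag)

def reinforce(history, won):
    """After a game, reward or punish every (state, move) pair that was played."""
    for state, move in history:
        if won:
            boxes[state][move] += 3                              # add beads for winning moves
        else:
            boxes[state][move] = max(1, boxes[state][move] - 1)  # remove a bead, but keep at least one

# One fake "game": the learner picks a move, wins, and gets reinforced.
state = "X..|...|..."
move = choose_move(state, legal_moves=[1, 2, 4])
reinforce([(state, move)], won=True)
print(boxes[state][move])  # 6 — the chosen move started with 3 beads and gained 3
```

The matchboxes are the whole "model": no computer needed, just bead bookkeeping after each game.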


Yup this is exactly my point, in the 80s there were plenty of “AI” companies and “fuzzy logic” was the buzzword of the day.

I built the Matchbox for Hexapawn, detailed in National Geographic Kids!

I didn't know what a Jujube was, but I got the idea.


I think this happened with airline pilots and they're experiencing a boom now


Have you heard of paying with PayPal/credit card?

While possibly too sneery for this site: PayPal and a real credit card will have buyer protections. Debit cards, and basically anything else, will not.

I love it. I love having agents write SQL. It's a very efficient use of context, and it doesn't try to reinvent the information-retrieval part of following the context.

Did you find you needed to give agents the schema produced by this, or do they just query it themselves from Postgres?


So most analyses already have a CLI function you can just call with parameters. For those that don't, in my case, the agent just looked at the --help of the commands and was able to perform the queries.
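On the schema question: agents usually don't need to be handed a schema, because every mainstream database can describe itself through queries. A minimal sketch using the standard-library sqlite3 as a stand-in (in Postgres the equivalent is `SELECT column_name, data_type FROM information_schema.columns WHERE table_name = '...'`; the table and columns below are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL, placed_at TEXT)")

# The same trick an agent can use: ask the database to describe itself.
# PRAGMA table_info returns (cid, name, type, notnull, dflt_value, pk) per column.
rows = conn.execute("PRAGMA table_info(orders)").fetchall()
schema = [(name, col_type) for _, name, col_type, *_ in rows]
print(schema)  # [('id', 'INTEGER'), ('total', 'REAL'), ('placed_at', 'TEXT')]
```

So pasting a schema into the prompt mostly saves the agent one round trip, not anything it couldn't discover itself.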

Sorry, but the graphs are completely unreadable. There are four code names, one for each of the lines. Which is JIT and which is CPython?

They are all JIT on different architectures, measured relative to CPython. https://doesjitgobrrr.com/about: blueberry is aarch64 Raspberry Pi, ripley is x86_64 Intel, jones is aarch64 M3 Pro, prometheus is x86_64 AMD.

Thanks

I remember a teardown where weights were taped inside. I think it was a beats headphone.

It was Beats. At first it was found in counterfeit Beats, but later the same was found in genuine Beats. And then guess who bought Beats for their exquisite metal weight technology? That's right, it was Apple.

The weights are an impedance match to your wallet

They bought them for the streaming service that came with it. Not for fake weights in headphones.

Streaming service?


> And then guess who bought Beats for their exquisite metal weight technology? That's right, it was Apple.

It's self-evidently extremely disingenuous to claim that Apple bought Beats for their "exquisite metal weight technology", so I thought I'd double check your claim that there are "metal weights" inside Beats headphones.

All of this appears to stem from two blog posts, written by the same VC.[1] The first time they accidentally tore down counterfeit Beats, and when they managed to repeat the process, they "stuck by [their] claim" that:

> "…these metal parts are there to add a bit of weight and increase perceived quality with a nice look."

The BOM estimate they provide lists the following metal parts:

* Inner cast metal separator

* Springs

* Torx screw

* Self tapping screw

* Cast metal supports

* Stamped metal ear cup

None of these are extraneous weights not serving a purpose. The claim of the author might be better presented as:

"Beats headphones use heavier metal components instead of plastic ones, and I think it's because they add weight."

There are a lot of very good reasons to use materials that dampen unwanted interference like parasitic vibrations. Stiffer materials such as metal parts typically flex less, and have fewer (but usually more pronounced) resonances than plastic parts, which have intrinsic damping but might distort.

A good example of this is that the driver in your headphones is moving. Therefore the housing it is placed in must consider sprung/unsprung mass. Adding metal components increases the mechanical impedance.
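To put a rough number on that: for an idealised rigid mass, mechanical impedance is Z = F/v = jωm, so added mass raises |Z| proportionally at every frequency. A back-of-the-envelope sketch (the masses are invented figures, not measurements of any real headphone):

```python
import math

def impedance_magnitude(mass_kg, freq_hz):
    """|Z| = omega * m for a pure mass element (Z = F/v = j*omega*m)."""
    return 2 * math.pi * freq_hz * mass_kg

freq = 100.0                                  # Hz, an arbitrary test frequency
plastic = impedance_magnitude(0.020, freq)    # 20 g plastic cup (made-up figure)
metal = impedance_magnitude(0.060, freq)      # 60 g cup with metal parts (made-up figure)
print(round(metal / plastic, 1))              # 3.0 — triple the mass, triple the impedance
```

Real housings are springs and dampers too, not pure masses, but the first-order effect of the extra metal is exactly this: the shell moves less in reaction to the driver.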

So:

1. It is entirely possible that your claim about the weights is correct, and Beats chose to use metal components rather than plastic purely to add weight to the product.

2. There are a great many other possible explanations for using metal rather than plastic, and I don't think that you're likely to be privy to them. For example: maybe they had the parts in-chain already and didn't want to have to tie up hardware engineering or supplier quality engineering for a new plastic part.

[1]: https://beneinstein.com/how-it-s-made-series-yup-our-beats-w... (the one where they tear down real Beats)


Thanks for doing the legwork. Any “nehhh apple BAD they make products for IDIOTS!” comment should be treated with skepticism, as usual.

Idk, I'm not sure why they bought Beats aside from marketing hype in the first place...

Beats is a $1B+ a year business. Investment-wise it was a no-brainer.

Cultural cachet of Beats - note how Apple kept the brand.

Jimmy Iovine & Dr Dre showed them how to tap into a new demographic.

It helped Apple get up and running with streaming faster; they needed to compete with Spotify.


The Beats brand was a great entrée to an entire market segment that Apple was trying to better access. I'd say it was a masterful acquisition (and integration).

I completely disagree.

The Beats brand basically disappeared after that, or at the least has become "uncool".

Apple had iTunes already, they very well could've acqui-hired and improved their service themselves.

Apple Music slowly died and is only becoming resurgent now, many years later.


Beats (or I guess Apple under the Beats name) still make H1/H2-based in-ears that are generally well received.

Yep! I own both a pair of AirPods and a pair of Beats. The Beats were designed for a lower price point than the AirPods, without noise cancellation, so I can't offer a head-to-head comparison.

For Apple Music

Exactly. Look at something like the Sony XM5s, which have a defective design that breaks in a light wind. There's a class action against them for the crap they pulled and their refusal to honour the warranty. Not that I'm bitter at them or anything.

Nothing new here then. Back when I used to DJ some 20+ years ago, people would complain that Sony headphones would constantly break on them.

Meanwhile I had Sennheisers and they could take an absolute beating and still work fine, while also being plastic and cheap-looking in comparison to other brands in the same price bracket.


Yeah, I had a pair of MDR-V700 back in the day, and they broke in about 2-3 years max, without any abuse, just randomly.

I gave them to a friend who "quick-fixed" them with a screw at the pivot point, but they lost all their flexibility after that. He didn't mind because he was using them solely for drumming, but I couldn't use them anymore.

That being said, I have had some nasty experiences with Sennheiser's IEMs as well. Had to send two of them in for warranty repair within a year, products that were in the 300-600 euro range back around 2010!


> Cast metal supports

Seems excessive. They should do something like forged carbon to cut weight and have removable gravity enhancement.


Oh is that why my wife’s cheapo crappy Beats earbuds have a special UI for pairing with my iphone…

All genuine Beats as far as I know come with the H1 chips and pair just like AirPods - even my cheap $60 Beats Flex, which I use on planes since I don't have to worry about them falling out; they just fall around my neck.

Beats.

But my favourite hack was a Sennheiser model which had foam inserts to dampen the sound. 555 - foam = 595


If there's a model that's as good as Claude 4.5 (not even 4.6), I would pay tens of thousands to run it locally. To my knowledge there isn't one yet. Benchmarks may say so, but I haven't used one that delivers. I always try the new models that come out on OpenRouter.

If they can buy a house and leave it empty they can buy a car and leave it empty.


What euphemism do you prefer then...


There's a difference between dead (i.e. "unmaintained") and low activity ("not under active development"). From what I can see PyPy is in the latter category (and being in that category does not mean it's going to die soon), so choosing to claim it is unmaintained is notable.


Being three major versions behind CPython is definitely not a great sign for the long-term viability of it.


It's always been about that many versions behind.

There is more churn in those versions than you'd think.


I'd genuinely be curious what fraction of those changes actually requires porting to other Python implementations. The free-threading changes are inherently interpreter-specific, so we can ignore those. A significant change in Python 3.12 is dropping "dead batteries", so that can be ignored as well. From what I can see, the main language changes are typing-based (so could have parser implications), and the subinterpreter support being exposed at the Python level (I don't know whether that makes sense for PyPy either). I think this hints that while certain areas of Python are undergoing larger changes (e.g. typing, free-threading), there is no obvious missing piece that might drive someone to contribute to PyPy.

Also, looking at the alternate (full) interpreters that have been around a while, PyPy is much more active than either Jython or IronPython. RustPython seems more active than PyPy, but it's not clear how complete it is (and it has gone through similar periods of low activity).

Would I personally use PyPy? I'm not planning to, but given how uv is positioning itself, this gives me vibes of YouTube stating it would drop IE 6 at some unspecified time in order to kill IE 6 (see https://benjamintseng.com/2024/02/the-ie6-youtube-conspiracy...).


The problem is the million small paper cuts. The stdlib changes are not all in pure Python; many have implications for compiled modules like _ssl. The interpreter changes, especially compatibility with small interpreter changes that are reflected in the dis module, also require work to figure out.
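The dis point is easy to demonstrate: even a one-line function compiles to bytecode whose instruction names shift between CPython releases (for example, BINARY_ADD became a BINARY_OP in 3.11), and an alternate implementation has to track each shift. A quick sketch:

```python
import dis
import sys

def add(a, b):
    return a + b

# The instruction names below depend on the CPython version running this:
# 3.10 and earlier emit BINARY_ADD, 3.11+ emit BINARY_OP for the same source line.
ops = [ins.opname for ins in dis.Bytecode(add)]
print(sys.version_info[:2], ops)
```

Run this on two interpreter versions and diff the output; every difference is something PyPy must reproduce to stay compatible.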


I'm not sure "major versions" is the most correct term here, but I think your point is spot on


For Python, 0.1 increases are major versions and 1.0 increases are cataclysmic shifts.


I don't know about that. For me, f-strings were the last great quality-of-life improvement that I wouldn't want to live without, and those landed in Python 3.6. Everything after that has not really made much of a difference to me.
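For anyone who hasn't made the jump, the 3.6 feature in question in a couple of lines (a trivial illustration):

```python
name, count = "world", 3
old = "hello %s, %d times" % (name, count)  # pre-3.6 %-formatting
new = f"hello {name}, {count} times"        # f-string, Python 3.6+ (PEP 498)
print(new == old)  # True — same output, but the f-string reads inline
```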


This reads like you think that "major" version bumps should only happen when things make a big difference to you personally. At least that's where you land when you follow the logic of your statement. I think you may overrate the importance of your particular use case, and misunderstand what GP meant by "major".

The gist of what GP meant is that Python does not exactly follow SemVer in their numbering scheme, and they treat the middle number more like what would warrant a major (left-most) number increase in SemVer. For example, things will get deprecated and dropped from the standard library, which is a backwards-incompatible change. Middle-number changes are also when new features are released, and they get their own "what's new" pages. So on the whole, these middle-number changes feel like "major" releases.

That being said, the Python docs themselves [0] call the left-most number the "major" one, so GP is not technically correct, while I'd say they're right for practical, but easier to misunderstand, purposes.

> A is the major version number – it is only incremented for really major changes in the language.

> B is the minor version number – it is incremented for less earth-shattering changes.

> C is the micro version number – it is incremented for each bugfix release.

The docs do not seem to mention you, though. :P

[0]: https://docs.python.org/3/faq/general.html#how-does-the-pyth...


> That being said, the Python docs themselves [0] call the left-most number the "major" one, so GP is not technically correct, while I'd say they're right for practical, but easier to misunderstand, purposes.

That's ultimately the point I was trying to make; my inner pedant can't help but feel the need to push back on people using versioning terminology inconsistently, but in practice I don't think it really made much of a difference in this case.


Oh, you are right, I forgot that "major version" is a technical term and incorrectly read it as "For Python, 0.1 increases make a big difference". My bad!


If you want your code to run, you need a Python interpreter that supports the newest of your dependencies. You may not use features that came after 3.6 (though you obviously do), but even if just one dependency or sub-dependency used a Python-3.10-specific feature, you now need an interpreter at least that new.


That is true, and it is also a huge pet peeve of mine. If more library maintainers showed some restraint in using the newest and hottest features, we'd have much less update churn. But on the other hand, this is what keeps half of us employed, so maybe we should keep at it after all.


That's like saying the last tax that affected you was passed in 2006...


I don't understand. Could you elaborate?


It means there are lots of changes in each “minor” version that the poster is ignoring because they are not personally affected.

Match case and even the walrus operator come to mind.


They are de facto semantic major versions - think of recent-ish additions like f-strings and match-case (3.6 and 3.10): you'd get a syntax error in an older parser. PyPy targeting 3.9, for example, would support f-strings but not match-case.

Or at runtime, you can import things from the standard library which require a minimum 3.x; the .x releases frequently, if not always, add things, or even change an existing API.


> They are de facto semantic major versions - think of recent-ish additions like f-strings and match-case (3.7 and 3.11, I think), you'd get a syntax error in an older parser. PyPy targeting 3.9 for example would would support f-strings but not match-case.

Are you saying that you'd get an error using the new feature on an old version, or that code that used to parse on old versions would no longer work on the newer version? The former is pretty much a textbook example of a minor version update in "traditional" semver; a single new API function is enough to potentially make new code not work on old versions, since any calls to that function will not work on versions predating it. The latter is what would constitute a "breaking change" in semver: old code that used to work must not stop working without a major version bump.

I say "traditional" semver because in practice it seems like there are fairly few cases in which people actually seem to fully agree on what semver means. I've always found the "official" definition[1] to be extremely clear, but from what I can tell most people don't really adhere to it and have their own personal ideas about what it means. I've proposed things in projects that are fully within both the letter and spirit of semver quite a few times over the years only for people to object on the basis that it "isn't semver" because they hadn't fully read through the description before. Typically I'll mention that what I'm suggesting is something that semver allows and bring up the page and show them the clause that specifically allows what I'm saying but clarify that I recognize we still might not want to do it for other reasons, and the other person will end up preferring to stick with their initial instinct independent of what semver actually says. This is totally fine, as semver is just one of many versioning scheme and not some universal optimum, but my point is that it's probably more confusing for people to use the same term to describe fairly inconsistent things.

[1]: https://semver.org/


True - I don't think I really had my head screwed on there. It just 'feels different' because it's language level, the actual syntax, I suppose, but no - you're right.


"Undermaintained" might be the better fit, since it does have life but doesn't appear commercially healthy, nor particularly relevant to other communities.


Underphrased like a pro.

