sillymath's comments | Hacker News

Theoretically you can introduce new words to be more precise. For example, Eskimos have many words for snow; according to (1), that is because their languages use polysynthesis: a base word is attached to many different suffixes that change its meaning. So, while in English we might need a whole sentence to describe a kind of snow, polysynthetic languages such as the Eskimo-Aleut family will have a single long, complex word.

So you may solve the ambiguity problem by introducing a huge number of words, which could help an LLM, but humans need a relatively small vocabulary tailored to everyday use. The trade-off is between reducing ambiguity and not growing the vocabulary too much.

(1) https://readable.com/blog/do-inuits-really-have-50-words-for...


A more common example is the meme about how women and men name colors: women have magenta and salmon, while men call both pink. That doesn't have much to do with linguistics or with eliminating ambiguity.

You say 'pink', you mean 'pink', and if the other person wants to know which pink, they ask?


edit: this was a strawman; colors are a non-issue. But there can be situations where there are two objects, you use a word to refer to one of them, your partner thinks of the other one, and neither of you realizes it was ambiguous, so you don't clarify until it's too late. Like when there is a dead bear and a bear cub: you say 'bring the bear' meaning the cub, and they understand the dead bear.


I think that when a law is introduced its consequences are not clear, so the small print is used to introduce modifications to the rules; the problem is really about adapting a rule to its everyday use.


> What does volume mean in higher dimensions?

I think you need a scalar product to define volume. But here is a fuzzy analogy: take friendship, define three concepts related to friendship, declare them orthogonal, and establish a numeric scale to measure each one; you have just defined a volume that measures the degree of friendship. Unfortunately there is no canonical way to pick those orthogonal features, but they could be obtained by applying a linear model to big data sets of friendship measurements.
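
The "you need a scalar product" part is just the standard Gram-determinant formula (nothing specific to the friendship analogy):

  % volume of the parallelepiped spanned by v_1, ..., v_n in an inner product space
  \[
    \operatorname{vol}(v_1,\dots,v_n) = \sqrt{\det G},
    \qquad G_{ij} = \langle v_i, v_j \rangle ,
  \]
  % which for mutually orthogonal axes of lengths a_1, ..., a_n reduces to
  \[
    \operatorname{vol} = a_1 a_2 \cdots a_n .
  \]

Multiplying three orthogonal friendship scores is exactly the orthogonal case of this formula.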


> Like, you can inscribe a circle inside a square and have it touch all sides without extending outside the square, and you can similarly “inscribe” a sphere inside a cube such that the sphere touches all sides of the cube without extending outside the cube. Does this hold true in higher dimensions?

Of course. The points with exactly one nonzero coordinate $x_i \in \{1,-1\}$ lie both on the sphere and on the faces of the cube, and they are the points of the faces closest to the center of the sphere (the segment from the center to such a point is orthogonal to the hyperplane $x_i=+1$ or $x_i=-1$ that contains the face). Since that minimal distance equals the radius, the sphere touches every face and does not extend outside the cube. Some more math: a well-known result is that the minimum distance from the origin O to an affine subspace of R^n is attained at the point P such that OP is orthogonal to the direction of the subspace. Here the subspace is a hyperplane, so there is exactly one direction orthogonal to it, and exactly one point where the hyperplane meets the line through O in that orthogonal direction; that intersection point is precisely one of the points mentioned above.
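
A quick numeric sanity check of the argument above (a throwaway sketch; the dimension and the sample size are arbitrary):

  import numpy as np

  n = 7
  rng = np.random.default_rng(0)

  # Random points on the unit sphere in R^n: every coordinate satisfies |x_i| <= 1,
  # so each point also lies inside the cube [-1, 1]^n, i.e. the sphere never pokes out.
  points = rng.normal(size=(10_000, n))
  points /= np.linalg.norm(points, axis=1, keepdims=True)
  assert np.all(np.abs(points) <= 1.0 + 1e-12)

  # The touching points +/- e_i lie on the sphere (norm 1) and on the face x_i = +/- 1.
  for i in range(n):
      e = np.zeros(n)
      e[i] = 1.0
      assert np.isclose(np.linalg.norm(e), 1.0)  # on the sphere
      assert abs(e[i]) == 1.0                    # on the face of the cube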


I tried to use the demo on the GitHub page, but it doesn't work. It seems the page has been stale since 2013.


What language doesn't require type annotations to achieve good performance? More specifically, name any programming language that can beat SBCL at speed without using type annotations.


Not having used them, I'd expect the Truffle implementations of Python and Ruby to do well, seeing as Graal handles #2-5 of moonchild's list. From there #1 might fall out. (Apparently the fancy GCs for #6 are only in the Enterprise Edition though?)

I'm working on a better, parallel (but still far from Java's state of the art) GC for SBCL <https://zenodo.org/record/7816398>, which presumably counts as some skin in the game.


Look at this real-time garbage collector for C++: https://github.com/pebal/sgcl


Is there any documentation on the algorithm used?


It uses the mark-and-sweep algorithm with the tri-color marking variation. There is no documentation yet, but I am happy to answer your questions ([email protected]).
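
For readers who haven't met it, here is a minimal, language-agnostic sketch of tri-color mark-and-sweep (illustrative only: the names and structure are made up, and this is not SGCL's actual code):

  class Obj:
      def __init__(self, *children):
          self.children = list(children)   # outgoing references

  def collect(roots, heap):
      # white = not yet reached (candidates for reclamation)
      # gray  = reached, children not scanned yet
      # black = reached, children scanned
      white = set(heap)
      gray = [o for o in roots if o in white]
      for o in gray:
          white.discard(o)
      black = set()
      while gray:
          obj = gray.pop()
          black.add(obj)
          for child in obj.children:
              if child in white:           # first time this object is reached
                  white.discard(child)
                  gray.append(child)
      # sweep: everything still white is unreachable garbage
      return black, white

  # usage: b is reachable from the root a, c is not
  b = Obj()
  a = Obj(b)
  c = Obj()
  live, garbage = collect(roots=[a], heap=[a, b, c])
  assert b in live and c in garbage

A real concurrent collector also needs write barriers to preserve the tri-color invariant while the program keeps mutating the heap; the sketch ignores that entirely.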


Yeah, state-of-the-art GC, but SBCL and ECL run circles around any Java turd in performance, even one AOT-compiled with Graal.

Just compare the Nyxt web browser with any Java monster out there.


Do they? I had to help SBCL with bounds checks (read: disable them) when porting the Java NonBlockingHashMap to Common Lisp. Perhaps it's still too micro a benchmark, and I've indeed made turds with Spring, but the rest of HotSpot would do wonders on non-turd programs. (I'd also expect Graal with JIT to be faster than AOT after warmup, but no experience there.)


Self, JavaScript (V8), Java (HotSpot; generics are latently monomorphised according to hotness; also see 'invokedynamic'), APL (APL\3000)


I don't think you can say that Java "doesn't use type annotations".


Well, technically you do have the var keyword now.

  var x = MyAwesomeClassFactoryBeanTemplate.getBean().getFactoryInstance().createNewMyAwesomeClass(foo, bar);


I am not super familiar with Java but that syntax typically implies type inference, which is not the same as not needing type annotations.


Not only is it type inference, it's limited to locally declared variables.

