tadkar's comments

Here’s my understanding. It is intimidating to write a response to you because you have an exceptionally clear writing style; I hope the more knowledgeable HN crowd will correct any errors of fact or presentation below.

Old-school word embedding models (like Word2Vec) learn a single static vector per word by predicting a word from its surrounding context (or the context from the word). You can embed a whole sentence by taking the average of all the word embeddings in the sentence.
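Roughly, in numpy (the tiny vocabulary and vectors below are made up purely for illustration; a real Word2Vec model would supply them):

  # Sentence embedding by averaging static word vectors.
  import numpy as np

  word_vectors = {
      "fine":    np.array([0.2, -0.1, 0.7, 0.3]),
      "weather": np.array([0.9,  0.4, 0.1, 0.0]),
      "hair":    np.array([0.1,  0.8, 0.2, 0.5]),
  }

  def sentence_embedding(tokens):
      # One static vector per word, averaged into a single sentence vector.
      return np.mean([word_vectors[t] for t in tokens], axis=0)

  print(sentence_embedding(["fine", "weather"]))
  print(sentence_embedding(["fine", "hair"]))  # "fine" contributes the same vector in both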

Because each word gets a single fixed vector, there are many scenarios where this average fails to distinguish between multiple meanings of a word. For example “fine weather” and “fine hair” both contain “fine” but mean very different things.

Transformers produce better embeddings by considering context: they use the rest of the sentence to refine the representation of each word. BERT is a great model for this.

The problem is that if you want to use BERT by itself to compute relevance, you need a lot of compute per query: you have to concatenate the query and the document into one long sequence and run it through BERT for every query-document pair (see Figure 2c in the ColBERT paper [1]).
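To make the cost concrete, here is a rough sketch of that cross-encoder setup using the Hugging Face transformers library (the model name is just one publicly available example, not anything from the paper): every query-document pair needs its own full forward pass.

  from transformers import AutoTokenizer, AutoModelForSequenceClassification
  import torch

  name = "cross-encoder/ms-marco-MiniLM-L-6-v2"  # example cross-encoder checkpoint
  tokenizer = AutoTokenizer.from_pretrained(name)
  model = AutoModelForSequenceClassification.from_pretrained(name)

  query = "which children have fine hair"
  documents = ["Sarah has fine blond hair", "We're having fine weather today"]

  scores = []
  for doc in documents:
      # One full forward pass per query-document pair -- this is the expensive part.
      inputs = tokenizer(query, doc, return_tensors="pt", truncation=True)
      with torch.no_grad():
          scores.append(model(**inputs).logits.squeeze().item())
  print(scores)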

What ColBERT does is exploit the fact that BERT can use context from the entire sentence, via its attention heads, to produce a more nuanced representation of each token in its input. It does this once for every document in its index. So, for example (assuming “fine” were a single token), it would embed the “fine” in “we’re having fine weather today” as a different vector from the “fine” in “Sarah has fine blond hair”. In ColBERT the output embeddings are usually much smaller (128 dimensions in the paper) than BERT’s 768-dimensional hidden states.

Now, if you have a query, you can do the same and produce token-level embeddings for all the tokens in the query.

Once you have these two sets of contextualised embeddings, you can check for the presence of a particular meaning of a word in the document using the dot product. For example, the query “which children have fine hair” matches the document “Sarah has fine blond hair” because the token “fine” is used in a very similar context in both the query and the document, and that should be picked up by the MaxSim operation.
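Here is a minimal numpy sketch of that MaxSim scoring step, assuming you already have the per-token contextual embeddings as matrices (illustrative only, not ColBERT’s actual code):

  import numpy as np

  def maxsim_score(query_embs, doc_embs):
      # query_embs: (num_query_tokens, dim), doc_embs: (num_doc_tokens, dim)
      # Late interaction: for each query token, take its best (maximum) dot
      # product against all document tokens, then sum over query tokens.
      sims = query_embs @ doc_embs.T            # (num_query_tokens, num_doc_tokens)
      return sims.max(axis=1).sum()

  # Toy example with random "contextual" embeddings just to show the shapes.
  rng = np.random.default_rng(0)
  q = rng.normal(size=(5, 128))    # 5 query tokens, 128-dim vectors
  d = rng.normal(size=(12, 128))   # 12 document tokens
  print(maxsim_score(q, d))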

[1] https://arxiv.org/pdf/2004.12832


If this is true, my understanding of vanilla token vector embeddings is wrong. I thought the vector embedding was the geometric coordinates of the token in the latent space with respect to the prior distribution, so adding another dimension to make it a "multivector" doesn't (in my mind) seem like it would add much. What am I missing?


I think the important point is that the first approach to converting complete sentences into an embedding was to average all the embeddings of the tokens in the sentence. What ColBERT does is store the embeddings of all the tokens, and then use dot products to identify the document tokens most relevant to the query. Another comment in this thread says the same thing in a different way. Feels funny to post a Stack Exchange reference, but this is a great answer! [1]

[1] https://stackoverflow.com/questions/57960995/how-are-the-tok...


I have a theory that organizations that grow fast and scale well all have this “cellular model” at their core.

Investment bank trading desks in the pre-2008 era, the partnerships at the big strategy consulting firms, and even today’s “multi-strategy hedge funds” are all really collections of very incentive-aligned businesses. They share the Creo qualities of minting lots of millionaires and of people looking back on their time there as a period of great freedom and achievement.

In all these places, employees are paid according to the revenue they generate, with seemingly no ceiling on what they can take home. It is true that the size of any one cell doesn’t scale beyond a small number of people, but all the organisations I mentioned above scale by having many units, each tackling a small piece of a vast market.

The main lesson I took away from reading “Barbarians at the Gate” is that big companies suffer hugely from the principal-agent problem, where management is mostly out to enrich themselves at the expense of shareholders and (sometimes) employees. This looting is, however, only possible at a company that was established by a founder with a deep vision and passion for the product, and who set up systems and a culture that generate sufficient cash for the professional management to leech off.

What I have not yet read is a systematic study of these “cellular organizations” and of the common features that make them successful. My guess is that the key is that each “unit” or “cell” has measurable economics, which makes it possible to share the economic value over a sustained period of time. A bit like why salespeople get paid a lot.


Agreed on cellular, but life is not a picture, rather a movie… what works at one stage starts attracting people, and eventually you let the wrong people in. A bit like the hype curve. These wrong people start poisoning internal processes and culture, seeking to cash out. And then the model blows up.

The migration of shitheads from Wall Street in the 80s and 90s to the Silicon Valley technobros is, for me, a solid example of this.


Indeed, an organism can only survive long-term if it can resist infection. For this, an organization should be openly hostile to certain forms of conduct, and should have a way to expel people who bring in the wrong values, especially in management.

This is a really hard problem, due to its influence on morale and the danger that bad actors weaponize these very mechanisms.


And don’t forget, people change too, sometimes becoming misaligned over time. Maybe that’s a little like cancer. Org resilience is hard to model.


This is a great blog to get you started: https://easyperf.net/

As with all things, practice is an essential part of improving!

Then there's learning from some real achievements, like the fast inverse square root or the 55 GB/s FizzBuzz example: https://codegolf.stackexchange.com/questions/215216/high-thr...
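For reference, the fast inverse square root trick transliterated into Python looks roughly like this (the original is C operating on 32-bit floats):

  import struct

  def fast_inv_sqrt(x: float) -> float:
      # Reinterpret the 32-bit float's bits as an integer, apply the magic
      # constant trick, then refine with one Newton-Raphson step.
      i = struct.unpack("<I", struct.pack("<f", x))[0]
      i = 0x5F3759DF - (i >> 1)
      y = struct.unpack("<f", struct.pack("<I", i))[0]
      return y * (1.5 - 0.5 * x * y * y)

  print(fast_inv_sqrt(4.0))   # ~0.499..., vs the exact 0.5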


I came here to say exactly the same thing. There are also a few other options: the MIR project from Red Hat [1], libjit [2], GNU lightning [3] and DynASM [4].

[1] https://github.com/vnmakarov/mir
[2] https://www.gnu.org/software/libjit/
[3] https://www.gnu.org/software/lightning/manual/lightning.html
[4] https://corsix.github.io/dynasm-doc/tutorial.html

But in general it seems to be very hard to beat the bang for buck of generating C and compiling that, even with something as simple as tcc.
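As a rough sketch of that route (assuming tcc is on your PATH; any C compiler that can emit a shared object would do):

  import ctypes, os, subprocess, tempfile

  # Generated C code -- in a real JIT this string would be built programmatically.
  c_source = """
  long long sum_to(long long n) {
      long long s = 0;
      for (long long i = 1; i <= n; i++) s += i;
      return s;
  }
  """

  workdir = tempfile.mkdtemp()
  src = os.path.join(workdir, "gen.c")
  lib = os.path.join(workdir, "gen.so")
  with open(src, "w") as f:
      f.write(c_source)

  # tcc compiles almost instantly; swap in "cc -O2 -fPIC -shared" for better codegen.
  subprocess.run(["tcc", "-shared", "-o", lib, src], check=True)

  gen = ctypes.CDLL(lib)
  gen.sum_to.restype = ctypes.c_longlong
  gen.sum_to.argtypes = [ctypes.c_longlong]
  print(gen.sum_to(1_000_000))  # 500000500000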


I suspect that for most Bloom filters the most commonly used hash functions are “good enough”. There’s also some literature suggesting that using just two hash functions and recombining them is plenty; see Kirsch-Mitzenmacher [1] and [2].
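A minimal sketch of the Kirsch-Mitzenmacher idea, deriving all k indices from two base hashes as g_i(x) = h1(x) + i*h2(x) (the particular hashlib digests are just for illustration):

  import hashlib

  class TwoHashBloom:
      def __init__(self, m_bits=1 << 20, k=7):
          self.m, self.k = m_bits, k
          self.bits = bytearray(m_bits // 8)

      def _indices(self, item: bytes):
          # Two independent base hashes; all k probe positions are derived from them.
          h1 = int.from_bytes(hashlib.sha256(item).digest()[:8], "little")
          h2 = int.from_bytes(hashlib.md5(item).digest()[:8], "little") | 1
          return [(h1 + i * h2) % self.m for i in range(self.k)]

      def add(self, item: bytes):
          for idx in self._indices(item):
              self.bits[idx // 8] |= 1 << (idx % 8)

      def __contains__(self, item: bytes):
          return all(self.bits[idx // 8] & (1 << (idx % 8))
                     for idx in self._indices(item))

  bf = TwoHashBloom()
  bf.add(b"hello")
  print(b"hello" in bf, b"world" in bf)  # True False (with high probability)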

[1] https://www.eecs.harvard.edu/%7Emichaelm/postscripts/tr-02-0...
[2] https://stackoverflow.com/questions/70963247/bloom-filters-w...


Depending on the hash function, you may even be able to use "two for one" hashing: split e.g. a single 64-bit hash into two different 32-bit ones and combine those (quick sketch below).

https://arxiv.org/abs/2008.08654
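A quick sketch of that split, using hashlib only as an example source of a 64-bit hash:

  import hashlib

  def two_for_one(item: bytes):
      # Take a single 64-bit hash and split it into two 32-bit hashes,
      # which can then be combined Kirsch-Mitzenmacher style.
      h64 = int.from_bytes(hashlib.blake2b(item, digest_size=8).digest(), "little")
      h1 = h64 & 0xFFFFFFFF          # low 32 bits
      h2 = h64 >> 32                 # high 32 bits
      return h1, h2

  print(two_for_one(b"hello"))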


https://en.wikipedia.org/wiki/Double_hashing is probably the canonical name for this


I think JetBrains has something close to what you’re looking for: https://www.jetbrains.com/mps/


I think this is the paper https://arxiv.org/abs/1204.6079


How does Preflight deal with changes to the selectors used to identify elements? This is the critical piece to solve before systems like this are useful, as identified in the paper discussed here: https://blog.acolyer.org/2016/05/30/why-do-recordreplay-test...


From an employer’s perspective, I’d be super interested in hearing about the experience of applying for this visa too! Does the UK government make it easy to apply? What are the interviews like? Anything your employer did that made the whole process easier for you?


If you're an employer, you'd be looking at the skilled worker visa: https://www.gov.uk/skilled-worker-visa

If you're on this forum, all your potential employees will probably meet the requirements easily.


A HyperLogLog is for counting distinct elements. This filter and Bloom filters are about checking whether an element has been seen before, which is a very different use case.


The cuckoo filter is the one I thought it would be compared to, since I see it mentioned on HN a lot: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

And the title seems to be a reference to it too: "Cuckoo Filter: Practically Better Than Bloom".


The paper has a great figure illustrating the regions of the overhead vs. false-positive trade-off space where each filter type performs best. Cuckoo filters make an appearance there.

