
I can share a similar PhD story (the result being visible here: https://github.com/NX-AI/flashrnn). Back then I didn't find any tutorials that cover anything beyond the basics (which are still important). Once you have understood the basic working principles and architecture of a GPU, I would recommend the following workflow:

1. First create an environment so that you can actually test your kernels against baselines written in a higher-level language (a minimal sketch of such a test harness follows below).

2. If you don't have an urgent project already, try to improve/re-implement solutions to existing problems (MatMul being the first example). Don't get caught up in wanting to implement all size cases. Take an example just to learn a certain functionality, rather than solving the whole problem, if it's just about learning.

3. Write the functionality you want to have in increasing complexity. Write loops first, then parallelize these loops over the grid. Use global memory first, then put things into shared memory and registers. Use plain matrix multiplication first, then use mma (TensorCore) primitives to speed things up.

4. Iterate over the CUDA C Programming Guide. It covers all (most) of the functionality that you want to learn - but it can't just be read and memorized. You learn it when you apply it.

5. It might depend on your use case, but also consider using higher-level abstractions like CUTLASS or ThunderKittens. Also, if your environment is JAX/PyTorch, use Triton first before going down to the CUDA level.
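For step 1, here is a minimal sketch of what such a test harness can look like in PyTorch. The custom kernel binding (`my_extension.my_matmul_kernel`) is just a placeholder for whatever you build, not a real API:

    # Compare a custom kernel against a high-level baseline (step 1).
    # `custom_matmul` stands in for your own CUDA/Triton binding.
    import torch

    def baseline_matmul(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Reference implementation in the high-level framework.
        return a @ b

    def check_kernel(custom_matmul, m=256, n=256, k=256, dtype=torch.float16):
        a = torch.randn(m, k, device="cuda", dtype=dtype)
        b = torch.randn(k, n, device="cuda", dtype=dtype)
        ref = baseline_matmul(a.float(), b.float())   # fp32 reference
        out = custom_matmul(a, b).float()
        # Loose tolerances: fp16/TensorCore accumulation differs from fp32.
        torch.testing.assert_close(out, ref, rtol=2e-2, atol=2e-2)
        # Rough timing with CUDA events (warm up first in real benchmarks).
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record(); custom_matmul(a, b); end.record()
        torch.cuda.synchronize()
        print(f"custom kernel: {start.elapsed_time(end):.3f} ms")

    # Usage (assuming you have built and imported your own extension):
    # check_kernel(my_extension.my_matmul_kernel)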

Overall, it will be some pain for sure. And mastering it, including PTX etc., will take a lot of time.



There is a LOT of effort in the research community currently:

1. Improving the Self-Attention in the Transformer as is, keeping the quadratic complexity, which has some theoretical advantages in principle [1]: the most hyped one is probably DeepSeek's Multi-head Latent Attention [15], which in a sense is still Attention - but also somehow different.

2. Linear RNNs: This line starts from Linear Attention [2] and includes DeltaNet [3], RWKV [4], Retention [5], Gated Linear Attention [6], Mamba [7], Griffin [8], Based [9], xLSTM [10], TTT [11], Gated DeltaNet [12], and Titans [13].

They all have an update like C_{t} = F_{t} C_{t-1} + i_{t} k_{t} v_{t}^T with a cell state C, and an output h_{t} = C_{t}^T q_{t}. There are a few tricks that made these work and turned them into very strong competitors to Transformers. The key here is the combination of a linear associative memory (aka Hopfield Network, aka Fast Weight Programmer, aka State Expansion...) and pushing it along a sequence with gating similar to the original LSTM (input, forget, output gate) - although here the gates depend only on the current input, not on the previous state, to keep things linear. The linearity is needed to make the recurrence sequence-parallelizable; there are efforts now to add non-linearities again, but let's see. Their main benefit and downside, both at once, is that they have a fixed-size state, and therefore linear (vs. the Transformer's quadratic) time complexity.
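To make the shared structure concrete, here is a minimal PyTorch sketch of that recurrence in its sequential (inference-style) form. The papers above differ in how the gates F_t/i_t are parametrized and in how the recurrence is chunked for parallel training; scalar gates per head are assumed here just for simplicity:

    # Generic linear RNN recurrence: C_t = f_t * C_{t-1} + i_t * k_t v_t^T,
    # h_t = C_t^T q_t, run sequentially over time.
    import torch

    def linear_rnn_step(C, q_t, k_t, v_t, f_t, i_t):
        # C: (d_k, d_v) matrix state; q_t, k_t: (d_k,); v_t: (d_v,); f_t, i_t: scalars.
        C = f_t * C + i_t * torch.outer(k_t, v_t)   # gated associative memory update
        h_t = C.T @ q_t                             # read-out with the query
        return C, h_t

    def linear_rnn_sequential(q, k, v, f, i):
        # q, k: (T, d_k); v: (T, d_v); f, i: (T,) gate activations.
        T, d_k = q.shape
        d_v = v.shape[1]
        C = torch.zeros(d_k, d_v)
        outputs = []
        for t in range(T):
            C, h_t = linear_rnn_step(C, q[t], k[t], v[t], f[t], i[t])
            outputs.append(h_t)
        return torch.stack(outputs)                 # (T, d_v)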

At larger scales they have become popular in hybrids with Transformer (Attention) blocks, as pure linear RNNs have problems with long-context tasks [14]. A cool thing is that they can also be distilled from pre-trained Transformers without too much performance drop [16].

3. Along the sequence dimension, most things can be categorized into these two. Attention and linear (associative-memory-enhanced) RNNs rely heavily on matrix multiplications, and anything else would be a waste of FLOPs on current GPUs. The essence is how to store information and how to interact with it; there might still be interesting directions here, as other comments show. Other important topics, which concern the depth/width of the model rather than the sequence dimension, are Mixture of Experts and iteration (RNNs) in depth [17].

Disclaimer: I'm an author of xLSTM, and we recently released a 7B model [18] trained at NXAI, currently the fastest linear RNN at this scale and performance level. Happy to answer more questions on this or on the current state of this field of research.

[1] https://arxiv.org/abs/2008.02217

[2] https://arxiv.org/abs/2006.16236

[3] https://arxiv.org/abs/2102.11174

[4] https://github.com/BlinkDL/RWKV

[5] https://arxiv.org/abs/2307.08621

[6] https://arxiv.org/abs/2312.06635

[7] https://arxiv.org/abs/2312.00752

[8] https://arxiv.org/abs/2402.19427

[9] https://arxiv.org/abs/2402.18668

[10] https://arxiv.org/abs/2405.04517

[11] https://arxiv.org/abs/2407.04620

[12] https://arxiv.org/abs/2412.06464

[13] https://arxiv.org/abs/2501.00663

[14] https://arxiv.org/abs/2406.07887

[15] https://arxiv.org/abs/2405.04434

[16] https://arxiv.org/abs/2410.10254

[17] https://arxiv.org/abs/2502.05171

[18] https://huggingface.co/NX-AI/xLSTM-7b


This was formulated a bit unclearly. It is not possible to parallelize over the sequence dimension during training in the way it is for Transformers. Over the batch dimension you can always parallelize.


Disclaimer: I'm a shared first author of this paper.

As a clarification: when fully optimized and including only the mLSTM, training speed will be on par with FlashAttention-2. For decoding/inference, both are very close to Mamba, as xLSTM is a recurrent architecture. The sLSTM has memory mixing, that is, state-tracking capabilities, for problems that Transformers and State Space Models (and any other sequence-parallelizable architecture) fundamentally cannot solve.


Congrats on the paper, very interesting.

Can you opine on how the model will fare on hardware that is optimized for transformers? There is so much investment in accelerating the transformer arch[1][2]; will xLSTM / sLSTM benefit as well, or will the hardware optimizations give transformers enough of an advantage that it's hard to compete on general purpose hardware?

1. https://www.etched.com/

2. https://www.embedded.com/ai-chip-features-hardware-support-f...


Fascinating work, very promising.

Can you summarise how the model in your paper differs from this implementation of xLSTM?

https://github.com/huggingface/transformers/issues/27011


Thanks! I don't see any implementation there. In any case, we are planning a code release soon.


Can you expand on the "cannot solve fundamentally" part?



So does anything do proper state tracking? And don't point to the OP, since very often purportedly better new architectures end up being basically vaporware (like Mamba or RWKV, which still don't have good-quality pre-trained models yet).


How do you mean vaporware?

Surely whether a big model using a certain system exists is only a matter of the choices of those with sufficient resources to train it. That's a matter of their beliefs, not of actual model performance.


Transformers and SSMs can't do long computations that are inherently sequential.

Unless you give them chain of thought. In which case they do great.


Congratulations on the paper. That's some very interesting work!

But you would want to include the sLSTM as well to get the best performance, right? How does the speed compare in that case? Specifically when scaling up.


Thank you! I can say that it is not really a limiting factor at the scales reported in the paper. So xLSTM[7:1] is pretty much on par with xLSTM[1:0] in speed. We show that it is helpful on toy tasks, and it shows even better sequence extrapolation performance, so yes.


Great work! I'd love to start using the language model variant of your work. Do you know when/if it will be open sourced? I'd start using it today if it were that soon.


> For decoding/inference both are very close to Mamba as xLSTM is a recurrent architecture

Can you explain this statement more if you have time? Are you saying the recurrent architecture of xLSTM enables fast inference on par with Mamba? Or does the xLSTM architecture slow it down so that its inference is as slow as Mamba's?


When you talk about "c" or "scalar memory" in the paper, does that refer to a single unit in the vector usually referred to as c?

So in mLSTM, each unit of the vector c is now a matrix (so a 3d tensor)? And we refer to each matrix as a head?

Having a bit of an issue understanding this fundamental part.


You mainly got it right. Usually one has many scalar 'c' cells that talk to each other via memory mixing. For the sLSTM, you group them into heads, with cells talking only to cells within the same head. The reason we refer to scalar cells here is that they are the fundamental building block. Many of them can be, and usually are, combined, and vector notation is useful in this case.

For the matrix 'C' state there are also heads/cells, in the sense that you have multiple of them, but they don't talk to each other. So yes, you can view that as a 3D tensor. And here, the matrix is the fundamental building block / concept.
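In shapes, roughly (a minimal sketch; names are illustrative, not from any released code):

    import torch

    batch, num_heads, head_dim = 2, 4, 64

    # sLSTM: scalar cells grouped into heads; memory mixing happens within a head.
    c_slstm = torch.zeros(batch, num_heads, head_dim)        # one scalar cell per unit

    # mLSTM: each head holds a (d_k x d_v) matrix state, so the full state per
    # batch element is a 3D tensor; heads do not talk to each other.
    d_k, d_v = head_dim, head_dim
    C_mlstm = torch.zeros(batch, num_heads, d_k, d_v)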


To clarify, is the sLSTM strictly necessary (to achieve better accuracy than those other architectures), or is the mLSTM good enough? The [1/0] model in the paper seemed to do quite well.


For language in general, it seems fine. But there might indeed be specific tasks where it is necessary.


Feel free to add them. :)

