
This is great. I really appreciate visual explanations and the way you build up the motivation. I'm using a few resources to learn linear algebra right now, including "The No Bullshit Guide to Linear Algebra", which has been pretty decent so far. Does anyone have other recommendations? I've found a lot of books to be too dense or academic for what I need. My goal is to develop a practical, working understanding I can apply directly.


I’ve really enjoyed this book:

Introduction to Applied Linear Algebra – Vectors, Matrices, and Least Squares

https://web.stanford.edu/~boyd/vmls/


Started this over the weekend. It's truly excellent. Thanks so much for the recommendation.


Thanks for the suggestion.


Oh boy, I'm also reviewing linear algebra textbooks as we speak, coming in with a similar interest in ML / AI.

I've done math on Khan Academy up to linear algebra, supplemented by other resources and textbooks depending on the topic.

People will recommend 3B1B and Strang (the MIT OCW linear algebra lectures). For me, 3B1B is too "intuitionist" for a first serious pass, and Strang can be wonderful but then goes off on a tangent mid-lecture that I can't follow; still, it's a staple resource that I use alongside others.

LADR4e is also nice, but sadly I can't follow the proofs there (yet). There is also "Linear Algebra Done Wrong", as well as the Hefferon book, both of which get proof-y quite quickly. They seem like they'll be good for a second / third pass at linear algebra.

Side note: for a second or third pass it seems there is such a thing as "abstract linear algebra" as a subject, and the textbooks there don't seem much harder to follow than the "basic" linear algebra ones designated for a second pass.

I've made the most headway with the ROB101 textbook (https://github.com/michiganrobotics/rob101/blob/main/Fall%20...), up through linear dependence / independence, alongside the MIT Strang lectures. ROB101 is nice in that it covers the coding side of things, which I can follow in my head since I'm used to the coding side of ML / AI.

I also have a couple of obscure Eastern European math textbooks for practice assignments.

Most recently I have been reviewing this course / book - https://www.math.ucdavis.edu/~linear/ (which has cool notes at https://www.math.ucdavis.edu/~linear/old) - and getting a lot of mileage out of https://math.berkeley.edu/~arash/54/notes/.


Thank you very much, I'll check out these resources. ROB101 looks really great.

I love the 3B1B videos, but I've noticed my attention tends to drift when watching them. I've learned that I absorb information best through text. For me, videos work well as a supplement, but not as the main way to learn.

Thanks again.


That's quite the list! How does this one compare? Anything you think is missing?


https://www.math.ucdavis.edu/~linear/ (by David Cherney, Tom Denton, Rohit Thomas, and Andrew Waldron) - reminds me of category theory articles, in a good way.


Suggestions for books/articles from a couple of my previous comments:

https://news.ycombinator.com/item?id=45110857

https://news.ycombinator.com/item?id=45088830

The OP's article, though simple, still does not really explain things intuitively. The key is to understand the concept of a vector from multiple perspectives/coordinate systems and to map the operations on vectors to movements/calculations in the coordinate space (i.e. 2D/3D/n-space). Only then do vector spaces, matrices, etc. become intelligible, and only then can we begin to look at physical problems naturally in terms of vectors/vector calculus. (A small change-of-basis sketch follows the book list below.)

The following are helpful here:

1) About Vectors by Banesh Hoffmann.

2) A History of Vector Analysis: The Evolution of the Idea of a Vectorial System by Michael Crowe.
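
As referenced above, a minimal NumPy sketch of the "same vector, different coordinate systems" idea; the vector and the second basis are made up purely for illustration:

    import numpy as np

    # One geometric vector, described in two coordinate systems.
    # Its coordinates in the standard basis e1=(1,0), e2=(0,1):
    v_std = np.array([3.0, 2.0])

    # A second basis b1=(1,1), b2=(-1,1), stored as the columns of B:
    B = np.array([[1.0, -1.0],
                  [1.0,  1.0]])

    # Coordinates of the *same* vector in the B basis: solve B @ v_B = v_std.
    v_B = np.linalg.solve(B, v_std)
    print(v_B)        # [ 2.5 -0.5], i.e. v = 2.5*b1 - 0.5*b2

    # Round trip: converting back recovers the standard coordinates.
    print(B @ v_B)    # [3. 2.]

The vector itself never changes; only its description (the coordinates) depends on the chosen basis, which is exactly the "multiple perspectives" point.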


> My goal is to develop a practical, working understanding I can apply directly.

Apply directly... to what? IMO it is weird to learn theory (like linear algebra) expressly for practical reasons: surely one could just pick up a book on those practical applications and learn the theory along the way? And if, in the process, you end up really needing the theory, then certainly there is no substitute for learning it, no matter how dense it is.

For example, linear algebra is very important to learning quantum mechanics. But if someone wanted to learn linear algebra for this reason they should read quantum mechanics textbooks, not linear algebra textbooks.


You're totally right, I left out the important context. I'm learning linear algebra mainly for applied use in ML/AI. I don't want to skip the theory entirely, but I've found that approaching it from the perspective of how it's actually used in models (embeddings, transformations, optimization, etc.) helps me with motivation and retention.

So I'm looking for resources that bridge the gap, not purely computational "cookbook" type resources but also not proof-heavy textbooks. Ideally something that builds intuition for the structures and operations that show up all over ML.


Strang's "Linear Algebra and Learning from Data" is extremely practical and focused on ML:

https://math.mit.edu/~gs/learningfromdata/

Although if your goal is to learn ML, you should probably focus on that first and foremost; after a while you will see which concepts from linear algebra keep appearing (for example, singular value decomposition, positive definite matrices, etc.) and can work your way back from there.
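
To make "work your way back" concrete, here is a minimal NumPy sketch of one place SVD shows up in ML: low-rank approximation of a weight matrix. The matrix and the rank here are arbitrary stand-ins, chosen just for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(8, 6))    # stand-in for a layer's weight matrix

    # Thin SVD: W = U @ diag(s) @ Vt
    U, s, Vt = np.linalg.svd(W, full_matrices=False)

    # Keeping only the top-k singular values gives the best rank-k
    # approximation (Eckart-Young theorem).
    k = 2
    W_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    # The spectral-norm error equals the first discarded singular value.
    print(np.linalg.norm(W - W_k, 2), s[k])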


Thanks. I have a copy of Strang and have been going through it intermittently. I am primarily focused on ML itself, and that's where I've been spending most of my time; I'm hoping to improve my mathematical maturity along the way.

I hadn't known about Learning from Data. Thank you for the link!


Since you're associating ML with singular value decomposition: do you know if it is possible to factor the matrices of neural networks for fast inverse Jacobian products? If so, then optimizing through a neural network becomes roughly as cheap as doing half a dozen forward passes.
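
For what it's worth, here is what "fast inverse Jacobian products via a factorization" looks like in the plain linear-algebra sense: once you have an SVD, applying the (pseudo-)inverse to a vector costs only two matrix-vector products and a diagonal scaling. The Jacobian below is a made-up dense matrix, not one extracted from an actual network:

    import numpy as np

    rng = np.random.default_rng(1)
    J = rng.normal(size=(5, 5))    # stand-in for a network Jacobian

    # One-time factorization: J = U @ diag(s) @ Vt
    U, s, Vt = np.linalg.svd(J)

    # Each subsequent inverse product J^{-1} @ b is then cheap:
    b = rng.normal(size=5)
    x = Vt.T @ ((U.T @ b) / s)     # no re-factorization needed

    print(np.allclose(J @ x, b))   # True

The open question, as I understand it, is whether such a factorization can be kept up to date cheaply as the network (and hence the Jacobian) changes.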


Not sure I am following; typical neural network training via stochastic gradient descent does not require Jacobian inversion.

Less popular techniques like normalizing flows do need that, but instead of SVD they directly design transformations that are easier to invert.


The idea is that you already have a trained model of the dynamics of a physical process and want to include it inside your quadratic-programming-based optimizer. The standard method is to linearize the problem by materializing the Jacobian, which is then inserted into the QP.

QPs are solved by finding the roots (a.k.a. zeroes) of the KKT conditions, basically finding points where the gradient vanishes. This comes down to solving a linear system of equations Ax = b. Warm-starting QP solvers try to factorize the matrices in the QP formulation through LU decomposition or some other method. This works well if you have a linear model, but not if the model changes, because the factorization becomes obsolete.
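
A minimal sketch of that setup, with a tiny made-up tanh "model" standing in for the trained network (the sizes and QP data are all invented for illustration):

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    # Tiny stand-in "trained model": y = W2 @ tanh(W1 @ x)
    rng = np.random.default_rng(2)
    W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))

    def jacobian(x):
        # d/dx [W2 @ tanh(W1 @ x)] = W2 @ diag(1 - tanh(W1 x)^2) @ W1
        h = np.tanh(W1 @ x)
        return W2 @ np.diag(1.0 - h**2) @ W1

    # Equality-constrained QP:  min 0.5 x'Qx + c'x  s.t.  J x = b
    Q, c, b = np.eye(3), rng.normal(size=3), rng.normal(size=2)

    J = jacobian(np.zeros(3))               # linearize at current point
    K = np.block([[Q, J.T],
                  [J, np.zeros((2, 2))]])   # KKT matrix
    lu = lu_factor(K)                       # factor once...
    sol = lu_solve(lu, np.concatenate([-c, b]))
    x, lam = sol[:3], sol[3:]               # primal solution, multipliers

    # ...but once the linearization point moves, J changes and the
    # factorization of K is stale: it must be recomputed from scratch.

That last comment is exactly the obsolescence problem: the LU factors amortize nicely only while J stays fixed.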


> My goal is to develop a practical, working understanding I can apply directly

Same, and I think ML is a perfect use case for this. I also have a series on that coming.



