Okay, why do you use a hash table instead of a linked list when quick retrieval is more important than in-order traversals?
Now, it's pretty obvious when we're discussing something as simple as this, but this is the fundamental essence of Big O. Certainly, we don't need to calculate it on a daily basis, especially past the general case, but it also doesn't hurt to have common terminology when speaking about an edge case of an algorithm.
And just having a general feel for how quickly an O(n^2) algorithm can spiral out of control versus an O(log n) algorithm is useful. (That is, if you have a small number of elements, it's not going to matter, but it will matter quickly as the number of elements grows.)
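To sketch that "feel for the graph" concretely (these are step counts implied by the formulas, not measurements of any particular algorithm):

```javascript
// Tabulate roughly how many "steps" an O(log n) vs an O(n^2)
// algorithm takes as n grows. The divergence is the whole point:
// at n=10 neither matters; at n=100,000 one is ~17 and the other is 10^10.
for (const n of [10, 1000, 100000]) {
  const logSteps = Math.ceil(Math.log2(n));
  const quadSteps = n * n;
  console.log(`n=${n}: O(log n) ~ ${logSteps} steps, O(n^2) ~ ${quadSteps} steps`);
}
```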
Eh, as a web programmer those certainly aren't a concern for me (and I hope this convo won't devolve into "web programmers aren't real programmers").
For both PHP and JS, there really isn't a difference; you're just given some basic data structures that handle pretty much everything under the sun, and you go from there. You can have an array with numeric keys (list), or you can have an array with string keys (dictionary), and it's only your usage that determines whether you treat it as an iterative structure or as a kind of hash-lookup structure.
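A minimal JS sketch of that point (in PHP both of these would literally be the same `array` type):

```javascript
// The same family of built-in structures serves both roles;
// only how you use them distinguishes "list" from "dictionary".
const list = ['a', 'b', 'c'];       // numeric keys: ordered iteration
const dict = { alice: 1, bob: 2 };  // string keys: hash-style lookup

// Iterative use
let joined = '';
for (const item of list) joined += item;

// Lookup use
const bobScore = dict['bob'];

console.log(joined, bobScore); // "abc" 2
```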
While PHP does have some advanced data structures provided by SPL, and some JS implementations offer typed arrays and such, they're rarely used in the wild for various reasons. I think the main reason, though, is probably that they're not really needed for 99.9% of web apps.
> you're just given some basic data structures that handle pretty much everything under the sun, and you go from there
This only works because the size of your n is small, possibly a few hundred, so it doesn't matter. When you start dealing with millions or billions of records this stuff matters. Quite a lot.
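For instance (a toy benchmark I'm making up to illustrate, not anything from production): a membership check against an Array scans O(n) elements, while a Set is a hash-based lookup, O(1) on average. At a million records the difference is already visible.

```javascript
// Worst-case membership check: linear scan vs hash lookup.
const n = 1000000;
const ids = Array.from({ length: n }, (_, i) => i);
const idSet = new Set(ids);

const target = n - 1; // last element: worst case for the linear scan

console.time('Array.includes (O(n) scan)');
ids.includes(target);
console.timeEnd('Array.includes (O(n) scan)');

console.time('Set.has (O(1) average)');
idSet.has(target);
console.timeEnd('Set.has (O(1) average)');
```

At a few hundred elements the Array scan is fine, which is exactly why the choice rarely matters in typical web-app code.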
So really, it's not the language, it's the size of your data - or the size of n that matters.
Exactly, and how many web apps deal with millions of data points? Not many, as far as the view layer is concerned. Perhaps you'll have millions of rows in your DB, but you typically won't process all of those at once within PHP or JS. At least in my experience, most data processing on that scale happens in your OLAP layer (and thus is fully removed from the jurisdiction of PHP and JS).
Especially given single-page apps, you should never be dealing with millions of objects; with pagination and such, it's usually under 1,000 at a time, more typically 100 or so.
Well, I wouldn't phrase it that way, but it's not a fallacious argument.
If I were to rephrase, I would say, application developers aren't full stack developers.
Modern languages and frameworks hide a lot of complexity, allowing application developers to focus on business problems, which is a good thing.
But if you want to continue to grow as a programmer, and understand the tools you use, or use them to maximum efficiency, understanding things like Big-O analysis is crucial.
I don't often do complex "math" or analysis using Big-O... but understanding the core tenets is crucial, especially as you move from building apps to building frameworks themselves.