Hacker News

> Surely a student who is learning them and thus by definition doesn’t know them would have even greater difficulties

But a student who is learning them will not start by looking at the equation, but at the concept, and that concept will disambiguate between sin() and s·i·n.

The example you use is interesting. Yes, that would remove ambiguity between sin() and sin. But how would that notation evolve? If in most instances it's clear when you have function application and when it's multiplication, people will stop writing it. Same with the multiplication operator.

Not to mention that that notation also introduces ambiguity, because the dot is part of the written language, so you'll have instances where it isn't clear whether the dot means "apply function" or "sentence stop".

Even assuming that the notation stays and doesn't devolve to something that's faster to write and to read, what did we actually achieve? We wouldn't have removed the complexity of learning trigonometric functions. You wouldn't stop a student from doing sin.(a + b) = sin.a + sin.b, for example, or trying to use the same formulas for sin and cos.
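To make the point concrete, a quick numerical check (Python here, with arbitrary angles; any language would do) shows that no notation saves a student from the wrong distribution:

```python
import math

a, b = 0.7, 1.1  # arbitrary angles in radians

# The correct angle-addition identity: sin(a + b) = sin a * cos b + cos a * sin b
lhs = math.sin(a + b)
rhs = math.sin(a) * math.cos(b) + math.cos(a) * math.sin(b)
assert math.isclose(lhs, rhs)

# The tempting but wrong "distribution": sin(a + b) != sin a + sin b
wrong = math.sin(a) + math.sin(b)
assert not math.isclose(lhs, wrong)
```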

My point is that while there are some instances where notation could be improved (and naming too; for example, closed and open sets are confusing because they are not inverse properties), most of the time what's difficult is the concept itself, so fixing notation is like an extra step after having gone up four flights of stairs.



> But a student who is learning them will not start by looking at the equation, but at the concept, and that concept will disambiguate between sin() and s·i·n.

Some symbolic (broadly construed, including drawings, vocalizations, etc.) representation is going to be required to communicate any concept. Why not use one that's minimally ambiguous? I get your point that we can hope the student will not struggle too much with the meaning of sin on the day the sine function is being taught. Even so, concepts once introduced usually reappear elsewhere. We shouldn't be surprised if, after learning both the sine and the invisible multiplication operator, the student is confused as to whether some string is a sequence of multiplications or a function name. This isn't just hypothetical either; I've been in enough math classes to see students of ordinary intelligence struggle with this.

> The example you use is interesting. Yes, that would remove ambiguity between sin() and sin. But how would that notation evolve? If in most instances it's clear when you have function application and when it's multiplication, people will stop writing it. Same with the multiplication operator.

I consider using invisible operators to be generally unwise. I wouldn't consider adding ambiguity to save a key or pen stroke a wise trade-off. I'm aware that many mathematicians do, and all I can say to that is that it baffles me. In humility I'm willing to allow that they know something I don't, so perhaps my bafflement is a personal defect. Even so I don't think I want to repair it. In my own work I appreciate the clarity too much. And since that work is just reasoning about programs I want to write and amounts to personal notes and not something I have any interest in publishing, it doesn't much matter to anyone else what notation I use.

> Not to mention that that notation also introduces ambiguity, because the dot is part of the written language, so you'll have instances where it isn't clear whether the dot means "apply function" or "sentence stop".

LaTeX and other comparable typesetting software adequately solve for this. As for manuscript, there are also ways to indicate whether a portion thereof is a formula or explanatory text. What I really think is important though isn't the choice of the glyph "." but avoiding using the same symbol for two completely unrelated concepts like precedence and function application.

> Even assuming that the notation stays and doesn't devolve to something that's faster to write and to read, what did we actually achieve? We wouldn't have removed the complexity of learning trigonometric functions. You wouldn't stop a student from doing sin.(a + b) = sin.a + sin.b, for example, or trying to use the same formulas for sin and cos.

Reading speed is by chunk and not character count. I challenge the notion that f(x+y) is faster to read than f.(x+y). As for being slower to write, I doubt that any mathematics beyond the most basic arithmetic is constrained by typing speed. I accept that there may be a stronger argument for some kind of shorthand in manuscript, but I still doubt the savings are worth it.

As an aside I find this pleasantly parseable:

  sin.(a + b) = 1/csc.(a + b)
although a student who failed to recognize that application binds more strongly than division might suffer. That mathematics deals with parsed expressions and not substrings is certainly a vital concept.
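The precedence being assumed there can be checked numerically; a small Python sketch (csc defined by hand, values arbitrary), reading "1/csc.(a + b)" as: apply csc first, then take the reciprocal:

```python
import math

def csc(x):
    # Cosecant: csc x = 1 / sin x (undefined where sin x = 0)
    return 1 / math.sin(x)

a, b = 0.3, 0.5  # arbitrary angles in radians

# With application binding more tightly than division,
# 1/csc.(a + b) means 1 / (csc(a + b)), which equals sin(a + b).
assert math.isclose(math.sin(a + b), 1 / csc(a + b))
```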

> My point is that while there are some instances where notation could be improved (and naming too; for example, closed and open sets are confusing because they are not inverse properties), most of the time what's difficult is the concept itself, so fixing notation is like an extra step after having gone up four flights of stairs.

I think a better analogy is that it's like going up a set of stairs where the occasional step is false and drops into a pit. Once one gets used to the pits one can navigate the stairs virtually as well as if they weren't there, but that's hardly an argument in favor of booby-trapping the stairs. It certainly makes things considerably harder for first-time climbers.

Nevertheless, I continue to agree that learning concepts is the more challenging and interesting part of mathematics. I also welcome improvements in clarifying concepts. Sadly, making a complicated concept easier to understand is a much greater challenge than making an ambiguous and muddled syntax unambiguous and clear. My preference is that we pursue both, because they're complementary.


> This isn't just hypothetical either; I've been in enough math classes to see students of ordinary intelligence struggle with this.

I honestly have not seen that. Maybe some minor confusion between sin and asin, but most of the time it's clear what is meant.

> I consider using invisible operators to be generally unwise. I wouldn't consider adding ambiguity to save a key or pen stroke a wise trade-off. I'm aware that many mathematicians do, and all I can say to that is that it baffles me. In humility I'm willing to allow that they know something I don't, so perhaps my bafflement is a personal defect. Even so I don't think I want to repair it. In my own work I appreciate the clarity too much.

I have seen authors omit notation countless times to make things less cumbersome. It's usually preceded by something like "in the following, we omit X for brevity/simplicity". The reason is that symbolic notation exists for density of information and focus. When clarity, details, and specifics are required, mathematicians use text.

> LaTeX and other comparable typesetting software adequately solve for this.

Funnily enough, they also solve for sin() and sin if you use \sin (or \mathrm{sin}).
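For reference, the distinction there is a single macro; a minimal LaTeX fragment (the `\operatorname` line is just to show how operators without a predefined macro get the same treatment):

```latex
% \sin typesets an upright operator name; bare letters italicize as a product.
$\sin(a+b)$               % the sine function applied to a + b
$sin(a+b)$                % reads as the product s \cdot i \cdot n \cdot (a+b)
$\operatorname{csc}(a+b)$ % same upright treatment for an arbitrary operator name
```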

> Reading speed is by chunk and not character count. I challenge the notion that f(x+y) is faster to read than f.(x+y)

The dot is short enough not to change things too much, but compare "xy + yz + zy + xyz" to "x*y + y*z + z*y + x*y*z". And this happens a lot, because often you want symbols only for the things that matter, dropping the redundant ones. For example, if you're doing calculus you'll often write down the arguments of the functions, but in differential equations you'll omit them because they're not really important.
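Programming languages, for what it's worth, resolve this by simply forbidding invisible multiplication; Python's own parser (used here purely as an illustration) treats xy and x*y as unrelated expressions:

```python
import ast

# With no invisible multiplication in the grammar, "xy" can only be one name...
assert isinstance(ast.parse("xy", mode="eval").body, ast.Name)

# ...while "x*y" is unambiguously a multiplication of two names.
tree = ast.parse("x*y", mode="eval").body
assert isinstance(tree, ast.BinOp) and isinstance(tree.op, ast.Mult)
```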

> Nevertheless, I continue to agree that learning concepts is the more challenging and interesting part of mathematics. I also welcome improvements in clarifying concepts. Sadly, making a complicated concept easier to understand is a much greater challenge than making an ambiguous and muddled syntax unambiguous and clear. My preference is that we pursue both, because they're complementary.

Yes, but my point is that while notation can sometimes be improved, the effort-to-gain ratio is usually poor. For starters, notation is not the hardest thing one faces when learning mathematics. Then you have the issue of an improvement in one aspect of notation causing problems elsewhere, because the set of symbols we have is limited (for example, the dot is also used for the dot product in vector spaces). And of course there's the problem of changing notation that is already in print. Sometimes the gains are worth the effort, such as Iverson's ceiling/floor notation (and the Iverson bracket, although I don't think that's as standard). But that's the reasoning mathematicians use for or against notation changes. It's not that difficult notation is enjoyable or that it acts as gatekeeping.
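As a side note on the Iverson bracket: it maps a proposition to 1 or 0, which translates directly into code. A minimal Python sketch (helper name is mine, purely illustrative):

```python
# Iverson bracket [P]: 1 if proposition P holds, else 0.
def iverson(p: bool) -> int:
    return 1 if p else 0

# Example: counting multiples of 3 below 10 as a sum of brackets,
# i.e. sum over n of [n mod 3 = 0].
count = sum(iverson(n % 3 == 0) for n in range(1, 10))
assert count == 3  # namely 3, 6, 9
```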



