
> There are so many _symbols_ used in mathematics. It's all as if they're spells written by wizened greybeard wizards and passed down to apprentices throughout the centuries.

Think of it as jumping into a new codebase you have never worked with before. It is full of strange functions calling each other for unknown reasons. It takes a while to get a good intuition for why.

M/L could be written Field-extension(M, L). That might make it easier to Google what's going on, rather than having to pick up a group theory book. And indeed in this case it would be extra helpful, because M/L could also mean "quotient group" or even "division" depending on the type of the objects M and L. I personally prefer M:L for field extensions.

But Googling individual concepts is hardly a good way to learn group theory. So I don't think it matters too much. For anyone that spends enough time working in the given field, the notation is by far the easiest thing to figure out.

In general it is nice that mathematics - still being largely a handwritten endeavor - has the freedom to introduce new notation where it makes sense. See https://mathoverflow.net/questions/42929/suggestions-for-goo... for cool examples of new notation improving the lives of everyone.



> For anyone that spends enough time working in the given field, the notation is by far the easiest thing to figure out.

I want to emphasize this because the weirdness of math notation comes up often. The difficulty of grasping mathematical concepts far outweighs notational concerns. Getting used to notation in a specific field is an absolute drop in the bucket compared to understanding the theorems and proofs, and often the reason notation is so idiosyncratic and inconsistent is because it's meant to try to reflect human intuition, which is idiosyncratic and inconsistent in comparison to the mathematics from which it is derived.

Once you really get into mathematics, notation is so far down the list of things that make learning mathematics hard that I don't think it's worth the associated cost of trying to change it on a large scale.


I'm not a computer scientist, but as a programmer I enjoy dabbling in computer science papers, and while I often do understand the underlying concepts (sometimes very well, since I've been using them daily for decades), I often find myself unable to follow a paper just because the notation is hopelessly obtuse.

The immediate example for me would be lambda calculus which makes a lot of sense to me when expressed in code (be it lisp, python or Rust) but just looks like vomit in theoretical papers, for instance:

        (λy.M)[x := N] = λy.(M[x := N]), if x ≠ y and y ∉ FV(N)
This looks like a perl one liner from an obfuscation contest.
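
For comparison, here is roughly the same rule spelled out in Python, which is how I'd naturally read it (a rough sketch; the names Var/Lam/App, free_vars and subst are mine, not from any paper):

  from dataclasses import dataclass
  from typing import Union

  @dataclass
  class Var:
      name: str

  @dataclass
  class Lam:
      param: str
      body: "Term"

  @dataclass
  class App:
      fn: "Term"
      arg: "Term"

  Term = Union[Var, Lam, App]

  def free_vars(t: Term) -> set:
      # FV(N): the variables occurring free in a term
      if isinstance(t, Var):
          return {t.name}
      if isinstance(t, Lam):
          return free_vars(t.body) - {t.param}
      return free_vars(t.fn) | free_vars(t.arg)

  def subst(t: Term, x: str, n: Term) -> Term:
      # t[x := n], the substitution rule quoted above
      if isinstance(t, Var):
          return n if t.name == x else t
      if isinstance(t, App):
          return App(subst(t.fn, x, n), subst(t.arg, x, n))
      # t is Lam(y, M)
      if t.param == x:
          return t                          # x is shadowed: nothing to substitute
      if t.param not in free_vars(n):       # the "y not in FV(N)" side condition
          return Lam(t.param, subst(t.body, x, n))
      raise ValueError("substitution would capture a variable; rename y first")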

Why is it that software engineers in a few decades have come to the conclusion that proper variable naming is important and abuse of sigils a bad idea, but mathematicians don't feel the same way?


Variable naming is important because variables are used to represent almost anything, you have autocomplete and search so longer variable names are quickly typed, and most lines of code will have one, two or three of these names.

On the other hand, notation in mathematics is used to represent only a limited set of common concepts for the field, you don't have autocomplete and a single line can contain a lot of concepts. For example, compare two expressions of Stokes' theorem (given HN typesetting limitations):

    ∫_A dω = ∫_∂A ω
versus

   integral(A, differential(ω)) = integral(boundary(A), ω)
While the second one is easier to understand at first glance, the problem with that equation is not the symbols but the concepts behind them. And to understand the concepts you're going to go over similar equations time and time again, and at that point the extra letters and extra space are going to complicate both writing and reading the equations. Of course, there are always people who overuse and underuse notation, but if mathematics relies heavily on notation it's for a reason: it's useful.

Edit: Also, the second one is only "easy" if you're familiar with function calls in programming. One could argue that to go full notation-less you should only write in proper sentences, and that would make it even more complex.


> One could argue that to go full notation-less you should only write in proper sentences, and that would make it even more complex.

Reading Newton's Principia is humbling. He does that for the whole book; it takes him pages to explain concepts we now learn before university and that can be condensed into a couple of equations. Finding words to describe the maths he uses is difficult; you can almost feel the insane amount of work that goes into just translating the concepts into sentences. It's a wonderful example, and of course a masterpiece. But reading it is also very difficult because of all the noise around the concepts. Words are fine for giving an intuitive understanding, but at some point you need equations.

Same for Maxwell's equations, for example. You'd spend less time learning basic calculus and algebra than trying to understand what he meant.

Or 19th-century chemists, who used fancy and complicated names for compounds that are described much more clearly and succinctly with a chemical formula. Yes, you need to learn the formalism, but it pays off.

Some counterexamples are Einstein and Feynman, who could communicate complex concepts with words in an understandable way. Though even they needed some abstract notation at some point (lots of it, actually).


I can search the words. I can ask people what the words mean. If I don't even know what the symbols are called, then how do I begin?

I had this problem when I started higher level maths and the Greek alphabet was used. I couldn’t ask what a symbol meant when I couldn’t say what the symbol was. I couldn’t write down a symbol whose name was read out when I didn’t know what it should look like.

I tried learning the Greek alphabet but flash cards and spaced repetition didn’t work for my ADHD brain and I just had to stop doing maths, which was a shame, because up to that point I was good at it.


> I can search the words. I can ask people what the words mean. If I don't even know what the symbols are called, then how do I begin?

Usually by looking at the words that surround the equation. Most math texts will introduce notation in words, and it's common to state something in both words and notation (e.g. 'let the field extension L/k...'). I have studied math for five years and I don't think I ever had the problem of not knowing how to search for something. The only problem I had with notation was finding the LaTeX code for a specific symbol; other than that, the bigger problem was the concepts.

There are also glossaries at the end of books and in some specialized pages too (https://en.wikipedia.org/wiki/Glossary_of_mathematical_symbo...)

> I couldn’t ask what a symbol meant when I couldn’t say what the symbol was. I couldn’t write down a symbol whose name was read out when I didn’t know what it should look like.

In my classes these problems were usually solved by asking "what's that wiggly thingy" or "how do you write 'alpha'".

And the Greek alphabet is not used because mathematicians like to annoy people. There are certain customs for how objects are notated: x, y, z for unknowns/variables; f, g, h for functions; α, β for angles; A, B for matrices; G, H for groups... They are used because, even though they might be harder for beginners, they aren't harder than the concepts they're describing and they help a lot in reducing the cognitive load when reading things.


Great to hear you didn’t find these things a problem and that they aren’t really that hard to understand.


Not sure whether you're being sarcastic, but in case you are:

The internet, including this forum, is chock-full of people making useless comments like "well I like it" or "it works for me", just asserting their views or experiences while advancing no argument and offering no discussion.

The post you just replied to is not doing that.

The poster you replied to is taking the time to describe and explain his experiences and their implications. He's contributed a lot more to the conversation than you have, and doesn't deserve sarcasm.


You weren't sure whether I was being sarcastic, yet you /are/ sure that the post I sarcastically responded to was written in the spirit of contribution, and was not itself insincere.


Sincerity doesn't matter when the commenter is correct. I've taken plenty of math classes, and everything he describes is accurate.


There are only 24 letters in the Greek alphabet, and any given piece of math doesn't use more than a few of them at a time.

You're clearly capable of learning at least one alphabet, so why was the second so much harder?


The first one probably took hundreds of hours of practice and a ton of exposure, to be fair.


> The first one probably took hundreds of hours of practice and a ton of exposure, to be fair.

Learning the Greek alphabet shouldn't take more than 10 hours. Not that one needs to; I never did. However, in a given discipline, practitioners tend to be consistent about which Greek letters they use for which concepts. As long as you spend time on a given topic, the Greek symbols become ingrained - just as knowing how to write a for loop in C does if you do it often enough.

And I would hope people studying math spend a lot more than hundreds of hours on it. If someone came and complained about the arcane syntax used in programming languages and hadn't spent, say, 100 hours programming, I don't think you would give much weight to their views.


Actually you have an example of the unexplained oddness there: what does the underscore mean?


I can't put subscript on HN (or at least I don't know how), see https://en.wikipedia.org/wiki/Generalized_Stokes_theorem (first equation) for how it should be typeset.
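
In LaTeX the underscores just mark subscripts; the intended typesetting of the equation above is:

  \int_A \mathrm{d}\omega = \int_{\partial A} \omega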


> (λy.M)[x := N] = λy.(M[x := N]), if x ≠ y and y ∉ FV(N)

As someone who spent a lot of time in mathy subjects, this is very readable to me - even though I don't know lambda calculus. I'd posit that if you have trouble with this, it is merely due to not spending much time in math.

Imagine someone who spent all his time in BASIC and he suddenly reads a Java codebase, and complains about the syntax.

> Why is it that software engineers in a few decades have come to the conclusion that proper variable naming is important and abuse of sigils a bad idea, but mathematicians don't feel the same way?

Because they've been doing it for a few hundred years longer than SW engineers have. I hear this refrain often here on HN. I would love to see someone write a textbook on electromagnetics or quantum mechanics using this verbose notation. The derivation of the harmonic oscillator (without ladder operators) takes a few pages in the concise notation; I shudder to think how lengthy it would be in a more verbose one.


I would recommend two things:

1. Since you seem to be someone who reads papers sometimes, I would encourage you to define your "programming notation" and start proving theorems in that notation. Do it for sufficiently complex proofs: the ones that require a few different lemmas, span several pages, and use some non-trivial algebra. Then, attempt to present your proof to someone in your notation, and judge how well it is understood.

2. You might very well succeed. But if you don't, ask yourself why, out of the many software engineers (or professors in software-engineering-oriented degrees) who have read math/CS papers over the past few decades, no one has attempted to use "programming notation" to write a book or notes to communicate mathematics to others.


WRT 2, plenty of people use more "programming"-style notation. Most cryptography papers these days define algorithms as sequences of steps that look quite a bit like code, just with Greek letters for many of the variable names (but italicized words for function names). E.g. this paper[1], picked from the last week of IACR preprints, shows the style. That's just the first one I clicked; the title looked interesting. Most of the rest use a similar style.

The big reason for single-letter variable names is that historically multiplication can be denoted by concatenation. It makes formulae shorter, so they don't need line breaks. Personally I don't think that's a huge benefit, particularly when there's more than one sort of product possible so you end up needing to explicitly denote multiplication anyway.

[1] https://eprint.iacr.org/2022/106.pdf (Page 7-8)


Not everything in math is suitably expressed (or, perhaps, discussed) in an algorithmic form. Much (most?) of mathematics is operating at a level of abstraction that is far removed from an underlying machine model (in contrast to algorithms, like, well, the cryptographic algorithms you mention).

Take matrix multiplication, imagine seeing this:

  C = [[0] * p for _ in range(m)]   # m x p result
  for k in range(n):
      for i in range(m):
          for j in range(p):
              C[i][j] += A[i][k] * B[k][j]
Now suppose that B is invertible and we actually want to know A in terms of B and C. Oops, we can't figure it out, because we've tied ourselves to this algorithmic expression. Contrast this with a more typical algebraic expression, which describes not a computation but a relation:

  C = AB
  CB^-1 = ABB^-1
  CB^-1 = A
Perform that manipulation with the algorithmic description that helpfully obfuscates the relationship between the parts.

[And this is a small example; algorithmic expressions of algebraic ideas, like plain English expressions of the same, do not scale very well.]
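
As a quick sanity check of the relational form in code (numpy; a throwaway example of mine, not anything from the thread):

  import numpy as np

  rng = np.random.default_rng(0)
  A = rng.random((3, 3))
  B = rng.random((3, 3))               # almost surely invertible
  C = A @ B                            # C = AB

  A_recovered = C @ np.linalg.inv(B)   # C B^-1 = A
  assert np.allclose(A, A_recovered)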


Sure, as a physicist, I have written papers with pseudocode for algorithms, and I think we would do well to express algorithms in this way. That's definitely an improvement that is slowly percolating various fields. But most mathematical theorems and their proofs are not algorithms, and not suitable for expressing this way.

The reason for single-letter variable names is that mathematics is best learned by manipulating ideas by hand, with pen and paper. Many mathematical expressions are long, and if we started using longer names we would end up writing only a couple of statements per page, which would be much more annoying than just using symbols.


> 1. Since you seem to someone who reads papers sometimes, I would encourage you define your "programming notation" and start proving theorems in that notation.

Dijkstra did just that[1]. That same site has hundreds of examples of him proving things in that very programmer friendly notation. Bear in mind that Dijkstra was trained as a “mathematical engineer” for his higher education.

I’m not trying to be controversial, but traditional mathematicians are decades behind the best computing scientists when it comes to crafting formalisms. To be fair the typical practicing programmer is even further behind.

I can only speculate about the appeal of a notation that makes it difficult or impossible to just let symbol manipulation do the heavy lifting. Perhaps mathematicians enjoy the intellectual exercise of holding all those concepts in mind? Perhaps they, like many guilds, appreciate the barriers to outsiders that they feel increase their own prestige? Or perhaps it’s just sheer inertia? I really don’t know.

[1] https://www.cs.utexas.edu/users/EWD/transcriptions/EWD13xx/E...


> I’m not trying to be controversial, but traditional mathematicians are decades behind the best computing scientists when it comes to crafting formalisms.

Could you provide examples? The Dijkstra article doesn't seem like a marked improvement.

> Perhaps mathematicians enjoy the intellectual exercise of holding all those concepts in mind?

Mathematics is not symbol manipulation. I wish it was as easy as that. Mathematics requires you to hold all the concepts in mind while studying and working.

> Perhaps they, like many guilds, appreciate the barriers to outsiders that they feel increase their own prestige?

I am incapable of finding an example where the barrier to outsiders is the notation and not the concepts themselves.


> Could you provide examples? The Dijkstra article doesn't seem like a marked improvement.

As I said, there are hundreds of examples on that site.

> Mathematics is not symbol manipulation. I wish it was as easy as that. Mathematics requires you to hold all the concepts in mind while studying and working.

Of course concepts must be understood, but that doesn't mean they need to be held in mind for the vast majority of the derivation. In many cases for example it's sufficient to know that an operation is associative to perform some manipulation, without needing to fuss over the specifics of the concept that operation captures. Unless of course you insist on a notational convention where the syntax is semantically ambiguous, as is common practice among traditional mathematicians.

> I am incapable of finding an example where the barrier to outsiders is the notation and not the concepts themselves.

What does the following mean? What concepts must be held in mind to understand it?

  sin(s + i + n)

Even the average graduate mathematician is absurdly above the general population average for raw cognitive ability. Because of this, the frankly primitive approach to syntax is less of an impediment to doing interesting work than it would otherwise be. On the other hand if one desires to make mathematics more accessible to persons of ordinary intelligence, then reducing the cognitive load by clearly defining the semantics of a formula using an unambiguous syntax is certainly necessary. Evidently though that's of little importance to the mathematical guild.


> As I said, there are hundreds of examples on that site.

In that article or in the whole site? Because the article just seems like different notation, no great improvements.

> In many cases for example it's sufficient to know that an operation is associative to perform some manipulation

Whenever that's the case, mathematicians have already found a way to use notation to that advantage. See for example the case of groups and rings, where the operations are usually denoted by '+' and '·' (addition and multiplication) even when they might not necessarily be those operations just because it makes operations more intuitive.

> What does the following mean? What concepts must be held in mind to understand it?

Well, you need to know what the sine is and what addition is. If you don't have those concepts it doesn't matter whether I write sin(s + i + n) or 'the sine of s plus i plus n' (not to mention that there's ambiguity there too: does 'of' refer to 's' or to 's plus i plus n'?).

> On the other hand if one desires to make mathematics more accessible to persons of ordinary intelligence, then reducing the cognitive load by clearly defining the semantics of a formula using an unambiguous syntax is certainly necessary. Evidently though that's of little importance to the mathematical guild.

Notation is not the issue when explaining mathematics. The mathematical guild has done a ton of effort in improving math education, and you'll be able to find videos and texts that barely have any notation or equations. Notation exists because once you learn the concepts, it reduces the cognitive load of transmitting those concepts.


> Well, you need to know what the sine is and what addition is. If you don't have those concepts it doesn't matter whether I write sin(s + i + n) or 'the sine of s plus i plus n' (not to mention that there's ambiguity there too: does 'of' refer to 's' or to 's plus i plus n'?).

Incorrect. In the example I gave one merely needed to understand multiplication, addition, and the distributive property. And you, who are observably skilled in the art of mathematics, failed to follow a grade school level formula. So much for ambiguous syntax not being an impediment to understanding even when all concepts involved are comprehended.
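
Spelled out (in LaTeX, since HN can't typeset it), the reading intended there was the product, distributed over the sum:

  s \cdot i \cdot n \cdot (s + i + n) = s^2 i n + s i^2 n + s i n^2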

> Notation is not the issue when explaining mathematics. The mathematical guild has done a ton of effort in improving math education, and you'll be able to find videos and texts that barely have any notation or equations.

It's baffling to me that anyone would attempt to argue that ambiguous syntax is not an issue for explaining mathematics. Please understand that the following unflattering comparison is intended merely for clarity and not to insult. To me it feels like trying to argue with a flat earther that the world is round. The absurdity makes argumentation virtually impossible.

> Notation exists because once you learn the concepts, it reduces the cognitive load of transmitting those concepts.

I agree. In fact that's germane to my original point. The mathematics guild is using primitive notation compared to what computing scientists discovered in the latter half of the 20th century. I'm not saying that notation is bad, or even that the traditional ambiguous syntax isn't an improvement over just writing everything out in some natural language. I'm saying the mathematics guild is stubbornly ignoring notational advances that further reduce cognitive load.

> The mathematical guild has done a ton of effort in improving math education

Then how do we explain the complete lack of measurable progress in the average American secondary school student's mathematical ability? It appears to me that regardless of how much effort has been expended, little has come of it.


> Incorrect. In the example I gave one merely needed to understand multiplication, addition, and the distributive property. And you, who are observably skilled in the art of mathematics, failed to follow a grade school level formula. So much for ambiguous syntax not being an impediment to understanding even when all concepts involved are comprehended.

So which notation would you propose that would allow me to understand the formula without knowing those concepts?

> It's baffling to me that anyone would attempt to argue that ambiguous syntax is not an issue for explaining mathematics. Please understand that the following unflattering comparison is intended merely for clarity and not to insult. To me it feels like trying to argue with a flat earther that the world is round. The absurdity makes argumentation virtually impossible.

I am not saying it's not "an issue", I am saying it's not the issue. Ambiguous notation is a problem, yes, but when our language is ambiguous too, I don't think it's a problem you can fully solve. And it's definitely not one that will be solved by removing notation.

> I'm saying the mathematics guild is stubbornly ignoring notational advances that further reduce cognitive load.

Such as? Because I still haven't seen those advances.

> Then how do we explain the complete lack of measurable progress in the average American secondary school student's mathematical ability? It appears to me that regardless of how much effort has been expended, little has come of it.

I'd guess that has more to do with local policies and resources than with any effort of any mathematician. It seems weird to evaluate the work of a global community based on measurements of schools in a specific country.


First I want to say that I really appreciate your responses. You're obviously writing in good faith and I'm finding this interesting. Thank you for helping me clarify my own understanding.

> So which notation would you propose that would allow me to understand the formula without knowing those concepts?

We agree on the importance of understanding concepts. I showed the difficulty for one who does understand the concepts. Surely a student who is learning them and thus by definition doesn’t know them would have even greater difficulties. I propose that a notational convention that forgoes the invisible multiplication operator and that distinguishes between ordering operations and function application, rather than using parentheses for both, would be considerably clearer; the one Dijkstra adopted, for example. I’m confident that you would have easily understood either of the following:

  sin.(s + i + n)
Or

  s*i*n*(s + i + n)
A student who doesn't understand one or more of the necessary concepts will at least be able to see from the syntax that different concepts are being expressed. I don’t think 1-3 extra glyphs is too high a price to pay for that reduction in ambiguity. There is still ambiguity of course, such as whether the expressions are in R or C; I like type declarations for that reason. By the way, I do agree with you that no syntactic improvement can completely eliminate ambiguity. I just think we should still do our level best to avoid introducing it gratuitously.

Once the student has achieved proficiency with the requisite concepts using a more sensible notation, supposing they have an interest in further study, they will then need to learn the customary notation on account of the great and valuable body of work that uses it. I think that’s acceptable though since learning new syntax will always be a part of learning math and as you say the real challenge is mastering concepts and not syntax.

> It seems weird to evaluate the work of a global community based on measurements of schools in a specific country.

You make a good point here. It would be better to make a global comparison using something like PISA math scores and seeing if didactic innovation has resulted in any improvement in the places it has been implemented. I’m not aware of any such improvement, but the world is a big place so that’s hardly evidence that there isn’t any.


> Surely a student who is learning them and thus by definition doesn’t know them would have even greater difficulties

But a student who is learning them will not start by looking at the equation, but at the concept, and that concept will disambiguate between sin() and s·i·n.

The example you use is interesting. Yes, that would remove the ambiguity between sin() and s·i·n. But how would that notation evolve? If in most instances it's clear when you have function application and when you have multiplication, people will stop writing it. Same with the multiplication operator.

Not to mention that notation also introduces ambiguity, because the dot is part of the written language, so you'll have instances where it isn't clear whether the dot means "apply function" and when it means "sentence stop".

Even assuming that the notation stays and doesn't devolve to something that's faster to write and to read, what did we actually achieve? We wouldn't have removed the complexity of learning trigonometric functions. You wouldn't stop a student from doing sin.(a + b) = sin.a + sin.b, for example, or trying to use the same formulas for sin and cos.

My point is that while there are some instances where notation could be improved (and naming too; for example, closed and open sets are confusing because they are not inverse properties), most of the time what's difficult is the concept itself; notation is like one extra step after having gone up four flights of stairs.


> But a student who is learning them will not start by looking at the equation, but at the concept, and that concept will disambiguate between sin() and s·i·n.

Some symbolic (broadly construed, including drawings, vocalizations, etc) representation is going to be required to communicate any concept. Why not use one that's minimally ambiguous? I get your point that we can hope the student will not struggle too much with the meaning of sin on the day the sine function is being taught. Even so, concepts once introduced usually appear elsewhere. We shouldn't be surprised if after learning both the sine and the invisible multiplication operator, the student might be confused as to whether or not some string is a sequence of multiplications or a function name. This isn't just hypothetical either, I've been in enough math classes to see the students of ordinary intelligence struggle with this.

> The example you use is interesting. Yes, that would remove ambiguity between sin() and sin. But how would that notation evolve? If in most instances it's clear when you have function application and when it's multiplication, people will stop writing it. Same with the multiplication operator.

I consider using invisible operators to be generally unwise. I wouldn't consider adding ambiguity to save a key or pen stroke a wise trade-off. I'm aware that many mathematicians do, and all I can say to that is that it baffles me. In humility I'm willing to allow that they know something I don't, so perhaps my bafflement is a personal defect. Even so I don't think I want to repair it. In my own work I appreciate the clarity too much. And since that work is just reasoning about programs I want to write and amounts to personal notes and not something I have any interest in publishing, it doesn't much matter to anyone else what notation I use.

> Not to mention that notation also introduces ambiguity, because the dot is part of the written language, so you'll have instances where it isn't clear whether the dot means "apply function" and when it means "sentence stop".

LaTeX and other comparable typesetting software adequately solve for this. As for manuscript, there are also ways to indicate whether a portion thereof is a formula or explanatory text. What I really think is important though isn't the choice of the glyph "." but avoiding using the same symbol for two completely unrelated concepts like precedence and function application.

> Even assuming that the notation stays and doesn't devolve to something that's faster to write and to read, what did we actually achieve? We wouldn't have removed the complexity of learning trigonometric functions. You wouldn't stop a student from doing sin.(a + b) = sin.a + sin.b, for example, or trying to use the same formulas for sin and cos.

Reading speed is by chunk and not character count. I challenge the notion that f(x+y) is faster to read than f.(x+y). As for being slower to write, I doubt that any mathematics beyond the most basic arithmetic are constrained by typing speed. I accept that there may be a stronger argument for some kind of shorthand in manuscript, but I still doubt the savings are worth it.

As an aside I find this pleasantly parseable:

  sin.(a + b) = 1/csc.(a + b)
although a student who failed to recognize that application binds more strongly than division might suffer. That mathematics deals with parsed expressions and not substrings is certainly a vital concept.

> My point is that while there are some instances where notation could be improved (and naming too; for example, closed and open sets are confusing because they are not inverse properties), most of the time what's difficult is the concept itself; notation is like one extra step after having gone up four flights of stairs.

I think a better analogy is that it's like going up a set of stairs where the occasional step is false and drops into a pit. Once one gets used to the pits one can navigate the stairs virtually as well as if they weren't there, but that's hardly an argument in favor of booby trapping the stairs. It certainly will make things considerably harder for first time climbers.

Nevertheless, I continue to agree that learning concepts is the more challenging and interesting part of mathematics. I also welcome improvements in clarifying concepts. Sadly, making a complicated concept easier to understand is a much greater challenge than making an ambiguous and muddled syntax unambiguous and clear. My preference is that we pursue both, because they're complementary.


> This isn't just hypothetical either, I've been in enough math classes to see the students of ordinary intelligence struggle with this.

I honestly have not seen that. Maybe some minor confusion between sin and asin, but most of the time it's clear what is meant.

> I consider using invisible operators to be generally unwise. I wouldn't consider adding ambiguity to save a key or pen stroke a wise trade-off. I'm aware that many mathematicians do, and all I can say to that is that it baffles me. In humility I'm willing to allow that they know something I don't, so perhaps my bafflement is a personal defect. Even so I don't think I want to repair it. In my own work I appreciate the clarity too much.

I have seen tons of times authors omitting notation to make it less cumbersome. It's usually preceded by something like "we omit X for brevity/simplicity in the following". The reason is that symbolic notation exists for density of information and focus. When clarity, details and specifics are required, mathematicians use text.

> LaTeX and other comparable typesetting software adequately solve for this.

Funnily enough, they also solve for sin() and sin if you use \sin (or \mathrm{sin}).

> Reading speed is by chunk and not character count. I challenge the notion that f(x+y) is faster to read than f.(x+y)

The dot is short enough not to change things too much, but compare "xy + yz + zy + xyz" to "x*y + y*z + z*y + x*y*z". And this happens a lot, because often you want symbols only for the things that matter and remove the redundant ones. For example, if you're doing calculus you'll often write down the arguments of the functions, but in differential equations you'll omit them because they're not really important.

> Nevertheless, I continue to agree that learning concepts is the more challenging and interesting part of mathematics. I also welcome improvements in clarifying concepts. Sadly, making a complicated concept easier to understand is a much greater challenge than making an ambiguous and muddled syntax unambiguous and clear. My preference is that we pursue both, because they're complementary.

Yes, but my point is that while notation can sometimes be improved, the gains are usually small relative to the effort. For starters, notation is not the hardest thing one faces when learning mathematics. Then, you have the issue of improvements in one aspect of notation causing problems in other aspects, because the set of symbols we have is limited (for example, the dot is used for the dot product in vector spaces too). And of course, there's the problem of changing notation that has already been written. Sometimes the gains are worth the effort, such as Iverson's ceiling/floor notation (and the Iverson bracket, although I don't think it's as standard). But that's the reasoning mathematicians use for/against notation changes. It's not because having difficult notation is enjoyable or because it acts as gatekeeping.


Regarding using some notation for multiplication:

You'll be hard pressed to find any symbol/notation that isn't overloaded with other operators in math. Things like '*' and the 'x' symbol have different meanings in different contexts, and when you're working in those contexts and need multiplication, you run into problems.

Personally, and I know I'm not alone, when I convert physics equations to code, the '*' symbol is one of the ones that make reading code challenging.

For function application, I can see the problem, and I'm sure most have gotten confused at one point or other where they're not sure if it's a function application vs multiplication. But as others have pointed out, the context makes it clearer.

I think one of the key differences between many here and those who favor the notation is that in SW, the wish is that the formalized version (i.e. the code) is readable enough to understand without much prose (i.e. comments). In mathematics, that usually is not the case - in fact, one of my math professors often had to point out to students that it's better if they write part of their answer in prose rather than purely in logic symbols. In that sense, there often is little ambiguity. It really should be clear from the context whether you're dealing with the sine function or multiplying symbols. If it isn't, the problem isn't the notation, but a lack of understanding of the context. Having a guide to the notation will not elucidate much.

And for those who use a lot of math (some engineers, physicists, and of course mathematicians), having to write (any) symbol for multiplication is crazy tedious - even if it is just a dot. There is a reason they opt not to write it.


I don't see anything in that link that is markedly different from math notation used by one math community or the other. Sure, there are difference that are "improvements" in some sense of the word, and I think some of those can be adopted. But nothing major, that will somehow massively reduce the complexity of understanding mathematics by non-experts.


> But nothing major, that will somehow massively reduce the complexity of understanding mathematics by non-experts.

To give just one example, when done properly the use of hinting in the multi-line equational proof format will certainly ease the non-expert’s task of following a proof.


The two-column proof has been a thing in math education for over a century [1]. I have been told by math educators that it is almost universally detested by school students, because of how it restricts their thinking. Nevertheless, it would be useful at the research level to include more of this hinting. But hinting is not a notational issue at all.

[1] https://deepblue.lib.umich.edu/bitstream/handle/2027.42/4265...


The equational and two-column proof formats aren't the same. Perhaps you should read the section on it in Dijkstra's monograph to understand the differences. And to say they are somehow isomorphic is to miss the point entirely. Sure, “x is equal to y” means the same as “x = y”, but even the traditionalists recognize the merits of Recorde's notational device, even if they fail to follow its greater implications.


Heck, I think that Dijkstra invented his own notation for numbers for a presentation/teaching system that he was developing. (I can't find any references to it - I'm desperately hoping that someone else who might have any idea what vague thing I'm alluding to has actually seen it and can link it here)


I can completely understand using shorthand for complicated proofs (defining what the symbols mean at the start). After all, that's what I do in code as well: I often import and alias variables in the local scope, but it's explicit and limited to a very well defined context.

What I disapprove of is using this very terse syntax in definitions, like, say, in Wikipedia articles.

In other words it's like how I have no issue using a variable named "int i;" locally in a function, but I'd consider it a very bad practice if a library exported a global "extern int I;" in their public interface.


Usually mathematics is hard to understand when the subject matter is new. Most of the time there are some common examples that everyone learns about in a subject, and you learn how to notate them along the way. When you write new mathematics you try to stick to the conventions from those examples.

In the common way of learning mathematics, the notation comes automatically, and new mathematics is often understood as a variation on the examples you have learned.

Some of these conventions you probably know, like having i for the varying number in a sum or product (or loop in programming) which goes up to an integer n.

If you called an integer f or a complex number n it would make it much harder to read.


> Why is it that software engineers in a few decades have come to the conclusion that proper variable naming is important and abuse of sigils a bad idea, but mathematicians don't feel the same way?

If you read code in standard libraries, you'll see a lot of either single-letter variables or extremely generic variable names.

Most mathematics is dealing with things at least one level of abstraction higher than a standard library.

In most programs outside of things like standard libraries, a variable usually stands for something concrete and specific. A customer. An order. A specific type of element in a UI. Etc.

In theory papers, a variable usually stands in for something generic and general. An arbitrary program. An arbitrary finite set. Etc. Sometimes even an arbitrary program in a programming language that is not defined in particular but only in general (e.g., "any language with parametric polymorphism", "any language with a specific sort of binding structure between things in these two syntactic categories", etc.).

Again, standard libraries already start using more generic variable names, and most theory papers are dealing with an abstraction level higher than standard libraries.


I'll give some examples in the most strong-man way possible: by choosing the standard library of a famously verbose language (Java). Here are a few examples, with increasing levels of abstraction mapping onto increasing use of single-variable names:

1. A byte is pretty damn concrete. The Java byte implementation [1] uses almost exclusively single-letter names (b,s) or names that are so generic that they might as well be single-letter names (e.g., anotherByte instead of b). When more meaningful names are used, it's because they are public type names (String, int), which is, again, pretty damn concrete.

2. The next level of abstraction is Generics. Here, even Java -- a language whose verbosity is a long-standing joke -- starts using single-letter variable names for both types and values [2] (see the Python sketch after this list).

3. Finally, we dive into things that abstract over generics [3] and start seeing weird sigils in addition to single-letter names. (What does Predicate<? super E> mean?!)
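
To make point 2 concrete outside of Java, here is the same pattern with Python's typing module (a toy example of my own, not taken from any standard library):

  from typing import Dict, Generic, TypeVar

  K = TypeVar("K")   # any key type whatsoever
  V = TypeVar("V")   # any value type whatsoever

  class Cache(Generic[K, V]):
      # At this level of abstraction a longer name adds nothing:
      # K and V stand for *any* type, so "KeyType" would say no more.
      def __init__(self) -> None:
          self._store: Dict[K, V] = {}

      def put(self, key: K, value: V) -> None:
          self._store[key] = value

      def get(self, key: K) -> V:
          return self._store[key]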

And, again, this is a strong-manned example, since Java is famously verbose and I'm choosing some of the most-used and therefore most verbosely documented .java files in the world.

Notice, btw, that natural language documentation increases as the verbosity of names decreases. This is the same in math, where those symbols are small pieces of 20+ page papers full of English prose explaining the meaning of the symbols.

And, again most mathematics is dealing with things at least one level of abstraction higher than anything you find in a standard library. Sometimes several levels of abstraction.

[1] Byte.java

[2] Dequeue.java

[3] Collection.java


People are different. For the kind of person who's likely to get heavily into math, any infelicities of notation are lower-order problems, that's true.

For other people, the function of math in the school system seems to be mainly to sort them by math ability -- and for that, artificial barriers aren't a problem. So there's little demand for better UX design for nonspecialists. Sucks if you think society would be better off if more people appreciated/understood/applied more math.


I have to vehemently disagree. Notation does more to keep mathematics the domain of nerds than anything else. It's unintuitive, uninformative, excessively complicated jargon which exists only to gatekeep and save paper. It's simply backwards to continue to use such an error-prone system in times of modern technology.


> It's unintuitive, uninformative, excessively complicated jargon which exists only to gatekeep and save paper.

As a mathematician, I'd say notation exists because it's easier to read things with good notation than without. I have some advanced mathematics books on my table, and on most of their pages you'll find far more text than notation. Notation is usually explained when it starts being used, usually in the definition. For example, the problem with understanding the notation "Aut(L|k)" is not the notation, because if the author wrote "group of field automorphisms of L fixing k elementwise" I'd be equally lost if I didn't understand those words. And once I understand those words, the notation makes it easier to read and understand things whenever the author is talking about that group, because your brain automatically identifies the concept associated with the notation instead of having to read the words and link them to the concept.
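
Written out in LaTeX, the notation is just shorthand for the definition in that sentence:

  \mathrm{Aut}(L|k) = \{ \sigma : L \to L \mid \sigma \text{ is a field automorphism and } \sigma(a) = a \text{ for all } a \in k \}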


People who don't understand math have this fantasy that they would understand everything if only mathematicians decided to use normal language instead of math symbols. This is just a fantasy; there is no simplification if you stop using math symbols, in fact it is just the opposite. Try to describe some intricate math without symbols and you'll see this.


I do complex math without using greek letters and other untypable characters all the time. It's called programming. The order of operations is explicit, the operations themselves are explicit, and there is very little room for ambiguity. The notation is only for saving space, not because it's the be-all end-all absolute best solution for representing mathematics.


Sorry to tell you, but most programming is just elementary school math and boolean operations. In many cases it doesn't even require high school math. I'm not saying it is easy, but it certainly doesn't require any sophisticated math.


> I do complex math without using greek letters and other untypable characters all the time. It's called programming.

This is simple math.

Start writing formal proofs for your programs. You will then be in the realm of something more complex (and still far lower than the level in a research paper).

Even a proof of the complexity of your algorithm (let alone the correctness) will get ugly without this notation.


My first step to trying to understand an unfamiliar equation or proof is, if I am to have any hope of figuring it out, always breaking it down by term and operation and figuring out WTF each one means in context. Usually on paper. In normal words. Typically I end up turning it into something more "algorithmic" (OK, what happens as the value "passes through" this operation, and then on to this next one...). I'm 1000% useless at understanding equationally- or identity-focused writing, and have to both painstakingly decipher every term and translate it to something more algorithm-like before I can do anything more than stare at the page and drool.

So... yes, I do think it would help if at least some of that were already done for me. As it is it takes so damn long that I don't bother unless I have a good reason.


> Think of it as jumping into a new codebase you have never worked with before. It is full of strange functions calling each other for unknown reasons. It takes a while to get a good intuition for why.

Having done consulting, I frankly disagree. Except for pathological cases, in most of the codebases I've been involved in it's possible to open a file at random and quickly make sense of what happens even without prior exposure. With math papers my experience is that you have to go 3-4 references deep before starting to understand where the notation comes from, which wastes hours and hours. They should provide normalized names that could be searched for in e.g. a Coq-based database of math concepts or something like that.


> With math papers my experience is that you have to go 3-4 references deep before starting to understand where the notation comes from, which wastes hours and hours.

Most of the time you'd need even more hours to understand not the notation but the concept themselves. In this list there isn't that much weird notation compared to weird words. For example, 'The Krull dimension of a noetherian integral domain is finite'. Even without notation, I'd expect a non-math major to have to go read 8-10 wikipedia pages just to understand "Krull dimension".


And in codebases you can always search/grep for function names, so it is easy to find the definition of everything.

In math it is a strange symbol in a PDF that you cannot search for at all.


This is more a limitation of computers and search engines. The move to using more Unicode like in Julia (and hopefully away from PDF to (m)HTML) should make this issue better.


It is also because single-symbol notation is too short to search for.

In code someone would write "lightspeed", and that can be searched. In a paper they write "c", and when you search for it you find thousands of unrelated things.


> Think of it as jumping into a new codebase you have never worked with before. It is full of strange functions calling each other for unknown reasons. It takes a while to get a good intuition for why.

I think about this analogy from time to time, when I get frustrated by maths. Not being a mathematician myself, but on occasion needing to read a research paper, my frustration typically begins with the raw syntax of maths. It's so obviously optimized for the combined limitations of the human brain's limited short-term memory and the human hand's speed of writing/typing. Which probably is the correct optimization for high-level practitioners.

For communication with non-experts it's dreadful, but I think fixable post facto with an auto-notation tool. Think black.py for maths.


Here's the lens I use, hopefully it will help.

A terse equation or statement in mathematics represents what you will understand, the rest of the paper is to get you there.

Wanting to read the main equations and understand them, so as to use that knowledge reading the paper, is backward. If the main equation was ax^2+bx+c=0, you wouldn't need to read the paper, right? It's about the quadratic equation, it's aimed at primary students.

But that's how it works: we keep pounding away at the paper, taking notes, following references, thinking a lot about each of the definitions, and then eventually you look at the main equation and say "oh, OK. I know what this says now".


> Think of it as jumping into a new codebase you have never worked with before. It is full of strange functions calling each other for unknown reasons. It takes a while to get a good intuition for why.

I've basically abandoned all languages/ecosystems that aren't, at the very least, extremely grep-friendly. Static types preferred, so I can get that sweet, sweet auto-refactoring, jump-to-references/definition, docs-on-hover, etc.

If we expected people to read code like they're expected to read math, I'd have taken the huge pay hit and left the profession by now. It'd be a choice between that or a spiral into depression ending in tragedy. I don't know how math-loving folks can stand it.


For textbooks, it's a non-concern because the meaning of each equation is explained in the surrounding text. The amount of new notation (by which I mean symbols such as / for group division or notation for differentiation and integration, rather than variable names) introduced by any given textbook is not that much, and as others have pointed out, by the time you finish reading the associated theorems, examples, and have done some of the exercises (very much underrated part of learning mathematics), if you understand the concept represented by the notation, the notation itself is trivial to remember.

Variables, as opposed to notation, tend to be local in scope, so 1) it isn't vital to have a memorable name and 2) it wouldn't really help to have a 'proper' name, either. As an example, if we're talking about two elements from a group, we usually talk about two elements, a and b, in a group G. Could you call them elements 1 and 2 from an ExampleGroup? Maybe, but there's no clarity gained, since these variables are only scoped to a given section. Additionally, longer equations mean that it takes longer to read through and check steps like algebraic manipulation in a proof, or the application of a lemma or theorem.

What is globally scoped are things like theorems, lemmas, definitions, etc., and those are easy to track down since they are by convention numbered after chapter and section.


>Think of it as jumping into a new codebase you have never worked with before. It is full of strange functions calling each other for unknown reasons. It takes a while to get a good intuition for why.

Software engineers have built careers off of making code more readable, both in terms of the language itself and the codebase it applies to. Mathematicians seem to go out of their way to make their work as obtuse as possible.

Can you imagine if software engineers had stopped at C? If we were all still trying to apply the spiral rule [0] to read everything? That's what reading math papers feels like. It feels like you stopped at C and are unwilling to even consider that there could be a better way to express things.

[0]http://c-faq.com/decl/spiral.anderson.html


> Mathematicians seem to go out of their way to make their work as obtuse as possible.

Mathematicians go out of their way to make their work as understandable as possible. If they wanted to make it as obtuse as possible, you'd just see a picture of chaotic scribbles on a blackboard.

> That's what reading math papers feels like. It feels like you stopped at C and are unwilling to even consider that there could be a better way to express things.

Do you have any suggestions?

Mathematics is a hard subject because it deals with concepts several levels deep in abstraction. That's a complexity you can't escape. You can't expect to read a math paper (aimed at researchers in the same field) and understand everything, the same way it would be unreasonable to ask you to write your code so that someone who isn't familiar with either programming or the concepts your code works with could understand it just by reading it.


If I give you suggestions, you're just going to say, "well that's the way we've always done it and people will be confused if we change now", which is exactly what they said about C until someone decided enough was enough.

Even something as simple as the set of integers being the letter Z requires you to understand German to know why we chose Z (and not I) in the first place. Or d/dx being totally unintuitive for anyone not well-versed in differential calc. I'm quite certain that every student learning calc thinks "Well the d's just cancel out there". Granted, I've heard the intuition behind the notation and I understand why it's tolerated. Still, it seems less helpful than it could be if mathematicians weren't married to tradition.


> If I give you suggestions, you're just going to say, "well that's the way we've always done it and people will be confused if we change now", which is exactly what they said about C until someone decided enough was enough.

Believe me, there are a lot of people trying to find better ways to teach and communicate mathematics. There's inertia, of course, but it wouldn't be the first time people change notation because it's better.

> Even something as simple as the set of integers being the letter Z requires you to understand German to know why we chose Z (and not I) in the first place.

I mean, you'd just move from confusing people who don't speak German to confusing people who don't speak English.

> Or d/dx being totally unintuitive for anyone not well-versed in differential calc.

You are not going to find any notation for a derivative that makes the concept intuitive for people that don't know differential calculus. If anything, it's more intuitive than f'.

> Still, it seems less helpful than it could be if mathematicians weren't married to tradition.

Or maybe the tradition has been built because after decades of mathematics nobody has found anything better.



