There is an old convention in physics - from the time when Germany was world-leading in physics - to write vector-valued variables in Fraktur. Using cursive (old German cursive is weird) seems related, though AFAIU the "vectorness" of the ℘ function is just the two components of a complex number.
One thing I've always struggled with in math is keeping track of symbols I don't know the name of yet.
Googling for "Math squiggle that looks like a cursive P" is not a very elegant or convenient way of learning new symbol names.
I wish every proof or equation came with a little table that gave the English pronunciation and some context for each symbol used.
It would make it a lot easier to look up tutorials & ask questions.
As a first foot-hold I recommend highly https://detexify.kirelabs.org/classify.html
(I think I saw there was a newer one, but I don't remember where)
You draw the symbol and get the TeX symbol name. I tried this one and it does give the right \wp (which in this case is confusing and you'd have to look up more about why it's named that)
But for classic ones, for instance the "upside down A" -> "forall", it's very helpful and spares newcomers to math syntax a lot of grief
Feynman said that his students struggled with the reverse problem: how to know that "harnew", an important quantity in the QM equations that the lecturer keeps talking about, actually stands for hν.
Detexify appears to be a kNN classifier. https://gist.github.com/kirel/149896/3a13825f826ec91e04d4adb...
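The idea behind that kind of classifier is simple enough to sketch in a few lines. Here's a toy version in Python — the feature vectors and labels below are made up for illustration (Detexify's real features come from the drawn strokes, not from 2-D points):

```python
import math
from collections import Counter

def knn_classify(samples, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training samples."""
    # Sort all training samples by Euclidean distance to the query.
    dists = sorted((math.dist(s, query), lbl) for s, lbl in zip(samples, labels))
    # Take the k closest and vote.
    votes = Counter(lbl for _, lbl in dists[:k])
    return votes.most_common(1)[0][0]

# Toy training data: 2-D vectors standing in for stroke features.
samples = [(0.0, 0.0), (0.1, 0.2), (0.9, 1.0), (1.0, 0.8)]
labels = ["\\wp", "\\wp", "\\forall", "\\forall"]

print(knn_classify(samples, labels, (0.05, 0.1)))  # → \wp
```

The nice property for a symbol recognizer is that there's no training step at all: new samples submitted by users just get appended to the dataset.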
Good stuff: worked the first time I tried to draw a ℘.
This is great, thank you! Would be even better if it had a little "click here to hear it said out loud" button.
See also: <https://shapecatcher.com/>
I can relate. Ages ago, before Safe Search and search results tailored to one's history and preferences, I was trying to figure out how to write that big union symbol (∪) in LaTeX and googled for Big Cup LaTeX. I got _very_ different and unexpected results.
Googling guitar-related stuff is how I learned there’s such a thing as c-string women’s underwear & bathing suit bottoms, not just g-strings.
That was, briefly, a real WTF moment.
[edit] oh my god, of course that one didn’t come from searching guitar topics, that makes no sense given the standard tuning. I’m pretty sure I was googling strings in the C language when I hit that one, lol. I did probably accidentally land on “g string” after searching without thinking about what would obviously come up, when looking up guitar topics, and must have combined the two incidents in my memory.
Haha - my favourite WTF Googling moment was when, as a callow youth first setting out in learning Javascript and HTML, I Googled "How to get head"
This reminds me of the time when searching for “c string” would probably result in “The C Programming Language” at number 1.
That’s what I get right now
In the early days of the internet, searching for things like C++ was really challenging because none of the first generation search engines could search for that particular string.
Back in the mid-1990s I extracted a small part of the GNU C++ String library into a small package, which I called "GString".
I had no idea about the garment.
Eons ago, I was exploring ways to run some outdoor overhead wire between my house and the shed.
One method I considered involved using those little self-wedging widgets that squeeze down tighter as the thing being suspended is pulled harder. (These widgets were once commonly used with overhead POTS telephone lines.)
So I asked around and the broad consensus in my area was that one of these widgets is called a "horse cock."
And while everyone who knew what I was talking about could say it with a very straight face, I did not even bother with trying to Google "horse cock" before deciding to go in a different direction with that project.
As somebody who spends a fair amount of time studying math heavy material that uses math that I never studied formally, this stuff is the bane of my existence. It's one thing to see a random Greek letter, where at least I very likely know what the character "is" (eg, "rho" or "psi" or whatever) and can at least pronounce it to myself and make a mental note "go back and see what rho stands for in this equation". But exactly like you say "squiggle that looks like a cursive P" doesn't easily admit a mental placeholder, AND it's hard to look up later to find out exactly what it is. I've really wanted to tear my last hair out over this a few times. And I am pretty sure one recent such occasion involved this exact character, so this really hits home!
And never mind the cognitive load that comes from managing the use of symbols that are the "same symbol" modulo something like the typeface. Trying to read something like
"Little b equals Fraktur Bold Capital B divided by (q times Cursive Capital B) all over Gothic Italic B", blah, blah... then throw in the "weird little squiggle that looks kinda like a 'p' but not quite". It's insane.
The Unicode charts for the mathematical symbol ranges can serve as a visual index: [0]
They open as PDFs and have grids of all the symbols, along with useful metadata such as related and similar symbols, common substitutes, alternate names, etc. You won't find everything in those ranges - some things are elsewhere in Unicode, in their native language or already located elsewhere in an earlier version - but it's a great resource. Even for symbols that are merely letters in some language's alphabet, Unicode sometimes provides a unique codepoint (character) for their use in mathematics.
[0] https://www.unicode.org/charts/
That provides each table as a separate PDF download: mathematics covers ~20 PDFs, each named for its contents. It may be faster to download the entire Unicode standard as one PDF (~140 MB):
I also find it frustrating, but I’ve come to appreciate that it’s a way to at least partially sidestep the hard problem of naming things. There are still idioms and choices to make, but using abstract symbols makes it easier to play with the abstract concepts being presented.
My most-used programming language is Go, but I’ve been writing mainly Swift for the past year or so. While there’s a lot I like about Swift, its verbosity leads me to waste an inordinate amount of time pondering what the correct verbiage ought to be, and I often miss Go’s more terse, often single-character naming convention.
> My most-used programming language is Go, but I’ve been writing mainly Swift for the past year or so. While there’s a lot I like about Swift, its verbosity leads me to waste an inordinate amount of time pondering what the correct verbiage ought to be, and I often miss Go’s more terse, often single-character naming convention.
Huh. I was expecting that comparison to go the other way given Go's notorious verbosity in terms of error handling, generics etc.. Maybe people compensate for verbosity in one area by being more concise in others (though that doesn't explain e.g. APL).
Once there was a lecture at Yale, and Serge Lang, a frequent loud critic of bad notation, was in the audience. There was a function Xi, and soon it was joined by its complex conjugate. Then they were divided. Serge Lang walked out.
That's delightful, but I fear many in the audience here may not quite see why. So, at the risk of explaining too much:
The Greek letter xi is one of those where the capital and lowercase versions are very different. Lowercase is a bit like a curly E. Uppercase is (at least if you're writing it in a hurry) basically three horizontal lines on top of each other.
The operation called complex conjugation can be notated in two ways, but the more common one among mathematicians is to put a horizontal bar above the thing being conjugated.
So the conjugate of Xi is ... four parallel horizontal lines.
And now we divide Xi (three horizontal lines) by Xi-bar (four horizontal lines), getting: eight horizontal lines.
This. Related to that, I'll also never get used to mathematicians' habit of assigning semantic meaning to the font that a letter is drawn in. Thanks to that, we now have R, Bold R, Weirdly Double-Lined R, Fake-Handwritten R, Fraktur R and probably a few more.
All of those you're of course expected to properly distinguish in handwriting.
I'm sure most of them have some sort of canonical name, but I'm usually tempted to read them with different intonations.
(Oh and of course each of those needs a separate Unicode character to preserve the "semantics". Which I imagine is thrilling edgy teenagers in YouTube comments and hackers looking for the next homograph attack)
"Bold R" and "Double-Lined R" (i.e. blackboard bold) are semantically equivalent. As your next paragraph hints toward, the purpose of the second one is to be distinguishable from the regular italic or Roman R in handwriting (or on a typewriter).
"Fake-Handwritten R" is an extra fancy calligraphic version which is not hard to distinguish. The Fraktur R is a pain to write, but you can write an upright "Re" as an alternative.
The basic issue is that using single symbols for variables is very convenient (both more concise and less ambiguous than writing out full or abbreviated words when writing complicated mathematical expressions), but there are infinitely many possible variables and only a small set of symbols.
> only a small set of symbols
I grind hundreds of flashcards every night to learn Japanese and I can assure you that one thing we are not short of is symbols. Chinese characters use ~214 basic symbols (the radicals) which can be stacked and combined to form tens of thousands of characters. There are 350 symbols just for counting different kinds of things.
Yes and no. Generally blackboard bold has come to denote particular number sets while bold usually refers to vectors or matrices. There are a handful of traditionalists¹ who will use *R* for the reals or *Z* or even Z for the integers, but the trend toward blackboard bold is, I think, definitely where things are going.
⸻
1. I would put Donald Knuth in that category, given his choice to not include blackboard bold in his original inventory of characters for Computer Modern, but that might just as much have been a choice based on limitations of the computing systems he was working with at the time (or his needs for typesetting The Art of Computer Programming, which were the primary driver of TeX).
Whether you write bold R, Z, Q, C or blackboard bold for these number sets nobody at all is going to be confused – they appear in both ways all over the place in books and research papers – and if you mix ordinary bold R, Z, Q, C next to the blackboard bold versions of the same upper-case letters in a single document then your friends should tell you to knock it off.
As for "where things are going" – this has been changing extremely gradually over the past 60 years. If the trend accelerates maybe you'll stop seeing both variants in wide use in about another century.
This is what you get if you insist on using single letters for every variable. Why do that? Well, because otherwise a variable name might be confused with a bunch of variables multiplied together, because we don't use multiplication signs. Why not? Well you see, the signs might be confused with the variable x.
> I'll also never get used to mathematicians' habit to assign semantic meaning to the font that a letter is drawn in
You never learned to use capital and lowercase letters differently? Why did you capitalize the 'i' in "I'll"?
If you can get it on your clipboard, you can usually paste it into Google or append to `https://en.wikipedia.org/wiki/` and get the answer.
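If you want to build that Wikipedia URL programmatically (for characters that would otherwise need manual escaping), Python's standard library does the percent-encoding for you — a quick sketch:

```python
from urllib.parse import quote

symbol = "\u2118"  # ℘
# quote() percent-encodes the UTF-8 bytes of the character.
url = "https://en.wikipedia.org/wiki/" + quote(symbol)
print(url)  # https://en.wikipedia.org/wiki/%E2%84%98
```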
The "unicode" program (in Ubuntu's package repository) gives the Unicode entry for any character:
$ unicode ℘
U+2118 SCRIPT CAPITAL P
UTF-8: e2 84 98 UTF-16BE: 2118 Decimal: ℘ Octal: \020430
℘
Category: Sm (Symbol, Math); East Asian width: N (neutral)
Unicode block: 2100..214F; Letterlike Symbols
Bidi: ON (Other Neutrals)
Age: Assigned as of Unicode 1.1.0 (June, 1993)
Or you can ask Wikipedia: https://en.wikipedia.org/wiki/%e2%84%98 (manually URL-escaped for HN)

If you already have a copyable version of the character it also works in the original Google search. Or any other place you can put it. The problem is when you don't have the literal character as text (say, an image, video, or non-digital source) and need to reproduce it to do that lookup in the first place.
That assumes you are in a position of being able to paste the character (and as unicode), which is not always the case.
I have seen math textbooks that have a "table of symbols" or similar, by the table of contents or the index. It's really helpful. Also nice to have - a "map of the book" (I don't know a better name for this) which indicates graphically which sections of the book depend on which other sections. "Dependency graph", maybe?
I wish music books would do this too - I've been teaching myself a little classical guitar, and some of the scores I'm reading have various symbols that have taken me quite a while to figure out. I eventually determined that a bold III means to play in third position in some books, but these things aren't consistent between publishers.
Classical music is relatively standardized. Roman numerals to indicate position are standard practice for all string instruments (I remember learning this as a beginning bass player as a kid). You will sometimes see Roman numerals used as a means of identifying chords relative to the tonic of the current key, but that's uncommon and the notation is a bit different than the position notation in how it's placed on the staff (if it even appears on the staff and not in a harmonic analysis).¹ I've not seen, in any classical harmonization texts, the upper- and lowercase distinction I learned somewhere that uses case to distinguish between major and minor chords, but I may have just paid insufficient attention.
⸻
1. I have a vague notion that it might show up in figured bass once in a great while, but I could be wrong.
I've played lots of piano and wind music and took lessons for that stuff in school, but I've been self-teaching guitar / ukulele / bass / mandolin in a smattering of different styles, which is probably part of my issue. I go through cycles where I'll focus on one instrument and style for a few weeks - in the last year I've dabbled in bluegrass banjolele, Irish fiddle tunes on mandolin, jazz on bass and keys, rock electric ukulele, funk piano, rock organ... And that's not even an exhaustive list.
This all seems right to me. As for your figured bass comment, you could have something like iii^6_4 for an e minor chord with the B in the bass, when the key is C major. But if you were writing it next to the staff you wouldn’t need to write the iii - it’s implied by the bass note and the figures, and classically figured bass writes the minimum it has to, for example just 6 instead of 63 for a first-inversion triad.
> Googling for "Math squiggle that looks like a cursive P" is not a very elegant or convenient way of learning new symbol names.
Asking ChatGPT (or the like) is a good solution nowadays though: “The mathematical symbol that resembles a cursive "p" is likely the Weierstrass p, denoted as ℘. […]”
I asked once and it just gave me “p”. I said “no more fancy like” and it produced the correct result. This is the real power of the model for narrowing search, you can interrogate the results.
At least you can send claude or openai images now of the math symbol.
There have been various modernization efforts in the past, correct? I wonder why nothing has succeeded. Math has always been somehow very tied to keeping credit and personal achievement close to the "source", it feels. Conjectures always coming with names attached, or this symbol named after a whimsical stroke of a pen.
I have this little book that I cannot find right now that is pretty explicitly that: just a little math symbol dictionary. It's small enough to fit in a pocket and was invaluable all through college.
I heard as many names for the "\partial" symbol as I had math professors in university. At least they all wrote them the same.
"del" is the only way
Except that "del" is also (and I think more commonly) the name for the upside-down Delta used in vector calculus for div, grad and curl.
(I usually say "partial dx by dt" or whatever, which seems OK to me.)
This is something I dislike intensely about pro mathematicians. If you need that many symbols, document them properly and work to ensure they're on the flyleaf of every mathematics textbook. You can easily picture a math paper that reads like
[ ], but [ ]; however [ ] and [ ], so [ ]....
with the spaces being filled in by signatures of famous mathematicians.

Yeah, I find that I reason about math in my head with names for symbols - the visual shape is not sufficient for the math part of my brain to manipulate it as a symbol.
Same. If I see a symbol and can't "say" the symbol, I can't (easily) process it. I think over time that gets better and eventually it's possible to start to "pattern match" it just off the visual representation, but at least for me when something is new, I need the "name" of it as something I can say to myself or I hit a ParseException. :-(
Well rarely do you see a symbol with zero context.
You're reading something. "Oh this has something to do with Weierstrass...."
We had a guy in class at high school who was a math prodigy, but hated the Greek alphabet. He always said things like "If we multiply the symbol that I don't know with the square root of the other symbol that I don't know..." He was proof for me that you can be really good at (high school) math without knowing all the symbols' names.
Uh, symbols and math??? Uh, as I read mathematical physics, I get the impression that certain symbols in certain equations are accepted throughout physics as already defined. But in math, we have, e.g.,
"For the set of real numbers R, some positive integer n, and R^n, with both R and R^n with the usual topologies and sigma algebras, we have function f: R^n --> R, Lebesgue measurable and >= 0."
That is, in writing in math, a popular but implicit standard is, each symbol used, even if as common as R and n, is defined before being used.
Sooooo, right there in the math writing, the symbols are defined. Or, right, an author might assume that the reader knows what "the usual topology" or "Lebesgue measurable" are but does not assume the reader knows what the symbols mean. I.e., the issue is not something about helping readers with their knowledge of math but, instead, just being clear about the symbols. Or, can assume the reader DOES know about the real numbers but does NOT assume that R is the set of real numbers -- and correctly so because R might be the set of rational numbers, some group (from abstract algebra), nearly anything.
Again for physics, E = mc^2 and J = ns^2 are not the same, not within the standards, maybe even if say J is energy in Joules, n is mass, and s is the speed of light!!!
The standard is based on the expectation that you should be able take a definition or a theorem out of its context and it should still make sense. The same symbols are often used for different things in different contexts, while concepts (even advanced ones) tend to remain unambiguous.
R is unambiguous if it's in blackboard bold, but otherwise it can mean almost anything. n is likely interpreted as a non-negative integer if left undefined, but you usually need to establish explicitly if it can be 0.
But nowadays you can simply ask a vision model
yep, or even just a text model
ChatGPT o1-preview gave me:
User: what is the Math squiggle that looks like a cursive p?
Assistant: The mathematical symbol you’re referring to is likely the Weierstrass \wp function symbol, which resembles a cursive or script “p”:
I must confess that I have an irrational fondness for the use of weird symbols in math and technical documents, whether it's for a homework assignment in school or a white-paper for work.
My unit tests are literally full of hieroglyphics. My favorite design doc to this day is one where I sprinkled Sumerian cuneiform throughout the text, e.g. 𒀭𒄑𒉋𒂵𒎌 and 𒂗𒆠𒄭 (Gilgamesh and Enkidu) instead of Alice and Bob.
I'm not sure if this is a good idea, especially in code; apart from adding unnecessary confusion for the reader, this will also confuse some monospaced fonts.
Glad I’m not GPs coworker. Time to refactor Gilgamesh.
I've seen emoji used for Alice/Bob/Carol/etc which is a bit more widely supported than old dead scripts.
I await the inevitable mathematical constant unicorn emoji.
I just see boxes, not the cuneiform.
`pacman -S noto-fonts noto-fonts-cjk noto-fonts-emoji`
I left college with a math degree and a profound antipathy for weird cursive symbols. The one that nearly killed me was the Greek "xi". I couldn't pronounce it, and I couldn't write it with any fluency, and in some of the classes I took it was everywhere.
I actually find xi easy to write, whereas zeta is really hard for me. I think the middle loop of the xi provides an anchor for what I'm aiming for, but zeta ends up as a nondescript squiggle. Sometimes I can't even properly picture what zeta looks like in my head. Is it like a 2, a 5, an S, or a Z? Or a cursive C or an italic G? It's all undifferentiated in my head.
I do still remember the day our math professor taught us both symbols. He did it very purposefully, like he knew it was all riding on him, and we'd all be lost if he didn't pass the arcane knowledge down.
I encounter ξ (xi), and also ζ (zeta) a lot. Honestly, when I write them out by hand, I just make a "wild squiggly line" for ξ and a "simplified squiggly line" for ζ.
If I write it out by hand, it's most likely just for my eyes anyway, and I'd type it out on a computer if I'd want others to have a look at it. But even if I gave someone else my handwritten note, I think from context it would be pretty clear what the "squiggly lines" are supposed to be.
ζ is essentially a cursive z. ξ is near enough to a backwards 3.
ξ is literally three horizontal bars underneath each other, in cursive.
Indeed. Try to write Ξ sloppily using connected strokes and you'll end up with something vaguely like ξ.
> ξ is literally three horizontal bars underneath each other, in cursive.
And? So is 3.
Despite only having a CS degree, I was always especially fond of ξ due to its distinctiveness (and also didn’t have trouble writing or pronouncing it), moreso than letters like ν or ι, which are too close to v or i/j visually for my taste.
I think iota is fine because it's missing the dot that an i has, but nu is terrible, yeah. In fact, in some fonts nu is exactly v: https://en.wikipedia.org/wiki/Nu_(letter)
But then a lot of capital greek letters at least are identical to latin letters in those fonts, so I guess you have to choose carefully anyway... and pick the proper font/handwriting if you absolutely have to use nu. (Hopefully you don't.)
>>> import unicodedata
>>> unicodedata.name('℘')
'SCRIPT CAPITAL P'
>>> ord('℘')
8472
https://en.wikipedia.org/wiki/Letterlike_Symbols

Good enough for me.
Notably, this is distinct from 𝒫 ("MATHEMATICAL SCRIPT CAPITAL P").
> Books were printed in Fraktur, where the p looks quite normal, i.e., quite different from a handwritten Sütterlin p, which could explain why it hasn't been replaced in the publication of Amandus Schwarz.
Indeed. 𝔓 ("MATHEMATICAL FRAKTUR CAPITAL P") is also separate (but also, Unicode considers these mathematical symbols to exist separately from "text written in Fraktur script"). So you get separate characters allocated for these symbols, but they're not intended to be suitable for printing in Fraktur - which is supposedly a presentation (i.e. typeface selection) issue.
Personally I'm not convinced that mathematical symbols derived from Latin or Greek (or other) scripts really have any claim to being separate "characters". Surely that's what variation selectors are for?
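The separation is easy to see from Python's `unicodedata` module - the letterlike symbol and its "mathematical alphanumeric" cousins all have distinct codepoints and names:

```python
import unicodedata

# ℘ (Letterlike Symbols block) vs. the Mathematical Alphanumeric Symbols block.
for ch in ["\u2118", "\U0001D4AB", "\U0001D513"]:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")

# U+2118   SCRIPT CAPITAL P
# U+1D4AB  MATHEMATICAL SCRIPT CAPITAL P
# U+1D513  MATHEMATICAL FRAKTUR CAPITAL P
```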
I think the answer by teika kazura on the linked page explains pretty thoroughly why this is not “good enough”. Most importantly:
> In Unicode the letter ℘ is given the codepoint U+2118 in the block "letterlike symbols", named "script capital p". But in fact it's lowercase.
Unicode technical note https://www.unicode.org/notes/tn27/ clarifies:
> Should have been called calligraphic small p or Weierstrass elliptic function symbol, which is what it is used for. It is not a capital "P" at all. A formal name alias correcting this to WEIERSTRASS ELLIPTIC FUNCTION has been defined.
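Since Unicode character names are immutable, the correction lives in a formal name alias rather than a rename. If I'm reading the Python docs right, `unicodedata.lookup()` resolves aliases (since 3.3) while `unicodedata.name()` still returns the original, wrong name:

```python
import unicodedata

# The corrected formal alias resolves to the same character...
ch = unicodedata.lookup("WEIERSTRASS ELLIPTIC FUNCTION")
print(f"U+{ord(ch):04X}")       # U+2118

# ...but the original (immutable) name is what name() reports.
print(unicodedata.name(ch))     # SCRIPT CAPITAL P
```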
I think this one's getting a lot of upvotes from people who meant to click on the link.
Yes, this happened to me
You can always click 'unvote' in the detail line.
In the same Unicode block is "2129 ℩ TURNED GREEK SMALL LETTER IOTA" with explanation "unique element fulfilling a description (logic)".
That seems a ridiculous choice for a symbol — turning one of the most symmetrical letters upside down!
Background: https://philosophy.stackexchange.com/questions/51563/what-is...
It could be motivated by the fact that Russell and Whitehead needed a symbol for printing that the printer had in his type case but could not be confused with anything else. Taking an iota and simply turning it upside down would then be a rather ingenious idea. But that is just my speculation ...
There are loads of examples of this, e.g. ∀ and ∃, but iota is a really poor choice.
https://math.stackexchange.com/questions/1885938/whats-meani... says 'in the article Frege, Peano and Russell on Descriptions: a Comparison, Francisco A. Rodríguez-Consuegra tracks down the source to Peano's Studii di logica matematica (1897) as where the operator first appears'
That page in 'Studii di logica matematica' appears to be https://archive.org/details/peano-studii-di-logica-matematic... .
It also uses an upside-down C and upside-down E.
And the Lambda looks like an upside-down V. The bases of all these upside-down letters do not match the baseline of the text. Obviously there were no special upside-down moveable types available or freshly cast for this book. Peano had to creatively repurpose what types were available.
In the font on my phone it looks confusingly reminiscent of Hebrew vav.
Strangely, the most comfortable I've felt with symbols was when learning quantum computing. At the time, there was no established standard (perhaps it has a standard now), but the symbols were used more intuitively than any other math class I've taken.
My first thought on seeing this title was, "this should totally be the name of a programming language descended from Go"
One thing I like about programming languages is that they usually constrain themselves to strings of ASCII characters, instead of using lots of more or less inscrutable symbols like mathematics does. For example, where a mathematician writes "Σ", a programmer simply writes "sum".
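To make the comparison concrete: the sum a mathematician writes as Σ with bounds and an index, a programmer spells out in words (the choice of summing i² here is just for illustration):

```python
n = 10
# The mathematician's Σ_{i=1}^{n} i², written the programmer's way:
total = sum(i * i for i in range(1, n + 1))
print(total)  # 385
```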
You are holding up code as an example of clarity and scrutability, and because it is mostly restricted to ASCII? Hex code is even simpler - only 16 characters.
> where a mathematician writes "Σ", a programmer simply writes "sum".
Communities develop shorthand and terms of art for things they write a lot. Mathematicians need to write lots of sums; programmers have their own shorthand and terminology.
Hex code doesn't allow you to write words. And "sum" is simply better than "Σ". There is no way to know in advance what the latter means, while for the former understanding of verbal English is enough. Mathematicians basically use an iconographic writing system like Chinese.
We can think of many other strings used by programmers that are not common English, and many strings used by mathematicians that are.
I think the difference is that you are a programmer and not a mathematician (I'm guessing) and are saying, effectively, that what you are subjectively familiar with is objectively more universally understood.
This argument works both ways, apart from it's the mathematicians who are wrong.
I wish they did, because then it would be more consistent and properly documented.
"The letter formerly known as p"