She was a CS PhD and somewhat itinerant professor with a long career who co-wrote a prominent CS paper about computer memory, "Hitting the Memory Wall: Implications of the Obvious"
https://dl.acm.org/doi/10.1145/216585.216588
on her obituary page, you will see a prominent "Memory Wall" link that is NOT a reference to her paper, but a place for sharing your thoughts about her life
you wouldn't believe how many people cite that paper as "Wulf et al." when that's practically more characters than saying "Wulf and McKee"
I notice these things a bit more as she was my PhD thesis advisor
> you wouldn't believe how many people cite that paper as "Wulf et al." when that's practically more characters than saying "Wulf and McKee"
Wulf et al.
Wulf and McKee
35% less isn't usually described as "practically more". It'd be interesting to see someone use the unabbreviated form; I have a hunch they wouldn't know to say "et alia".
How did you arrive at 35% less? The first is 11 characters, the second is 14, and 3/14 is 21%.
That is a good question. As you say, it's 21%. I had the 11 and the 14 correct; I don't remember how I got 35%.
There's only two authors! That's so rude!
It’s also not correct; et al. is conventionally applied to three or more authors (it means “and others,” plural)
No, plural can’t be deduced from how it is written.
"et alia" usually means "and others", but technically in Latin "alia" can be either plural neuter or singular female!
Pardon, you’re right
Why? For all the automatic academic score tracking systems it doesn't matter one bit if it is Wulf et al. or Wulf and McKee.
The automated ones don't care, but it absolutely matters for the informal credit assignment process that actually runs academia.
I really wish we had a better way to "name" papers. Big clinical trials often have an acronym (often hilariously forced: "CXCessoR4"). That takes the emphasis off (one) lead author, but it's impractical to make up one for every research paper.
What "informal credit assignment"? It's automated and it runs entirely on quantitative data.
the one where i think of a particular piece of work, and i know who did it, then tell a student "oh, see if $author's group published anything else about this."
i'm not using software for this if this is off the top of my head, and it's the sort of thing that, at scale, hurts the forgotten author and their students
There’s a cute study demonstrating this effect by comparing career success in economics and psychology.
The author lists for economics papers are traditionally alphabetized, so more of your output will be known by your name if it occurs early in the alphabet. Abbie Ableson gets lots of mentions as "Ableson et al." while Zhang Zhu will almost always be relegated to the "et al". If name recognition matters, you’d expect successful academic economists to be clustered at the beginning of the alphabet, and this appears to be true.
In most psychology journals, the author list is instead ordered by contribution/seniority, and this effect disappears. https://www.aeaweb.org/articles?id=10.1257/08953300677652608...
I see. The informal credit assignment process is something that only runs inside of your head.
Right, academics who delegate their entire intellectual life to GPT will be unaffected.
Right, and everyone else unaware of this made up "informal credit assignment process".
I don’t know that everyone would label it like that, but it’s inarguably true that success in academia comes from your reputation/name recognition.
Metrics are often attempts to formalize this but they’re not how most people actually make decisions: nobody is inviting seminar speakers or choosing collaborators because they have a high h-index. If anything, it goes the other way: name recognition gets you invited to speak or collaborate, which makes more people aware of your work, which boosts metrics.
That is false. The first things everyone (at least everyone in CS; IDK about other fields) looks at are h-indexes, impact factors, number of papers per year, university rankings, and similar metrics. Researchers are most definitely selecting collaborators with a high h-index.
So we're talking about this woman's contribution. And you're talking about how the system is depriving her of recognition.
Do you see the inherent tension in what you're claiming vs the lived experience of everyone in this post (including you!)?
C'mon… We're saying that a certain style of reference gives her less credit than might be due. Not none at all.
One paper doesn’t make a career (she wrote many dozens), it’s not always cited weirdly, and even if it is, some people may remember the coauthors (as they should).
But since you mention lived experience, I'll add that I've certainly been asked if I'm "even aware" of results from co-authored papers where my name was listed second, and I don't think this is a very uncommon experience.
it's about respect, not about academic score tracking systems
et al should never be applied when there are only two authors!!!
...unless the second one is named Alfred and is an informal person
Bruce et al
Yeah tenure is nice but there's just a hint of mystery behind the title "itinerant professor." Like a wizard that just pops up in places to work computer science magic.
I was a PhD student when Sally was a professor at Utah. I get the feeling that a lot of people came together for an interesting project (systems/memory related, I can't even remember the name ATM) and dispersed when the project was at its later stages. I think it's common in our field for many PhDs to work as professors for just a few years and not commit to it as a career.
bit ironic i guess but unintentionally fitting
There are probably so many stories out there of interesting things she did. A few are briefly referenced at her old website here: https://web.archive.org/web/20060116130917/http://www.csl.co...
Her babysitter was Mike Bloomfield!? (the astronaut)
rip. i got a chuckle out of this trivia on her old website:
> Rob Pike didn't really name my favorite editor after me.
My dissertation was on the memory wall, and I never heard of her :/ RIP
Could you (or someone else in the know) give us a brief overview of the current state of the memory wall issue?
High bandwidth memory (HBM) can deliver TB/s of memory bandwidth and has completely shattered the memory wall for individual cores/compute elements. The only way for compute to keep up is going wide and parallel as seen in GPUs.
Despite this, massively increased memory bandwidth does not translate into material performance improvements on non-parallel compute tasks, because few tasks are actually memory-bandwidth bound; most are memory-latency bound instead.
The best-known general solutions for improving memory latency are per-compute-element caches. Unfortunately, these increase the complexity and size of your compute elements, forcing you to reduce their number, yet a large number of compute elements is the only way to saturate HBM memory bandwidth.
To keep up, the best-known techniques are either to batch algorithmically, which lets you go wide using vector/batch instructions, or to go the GPU route with latency-hiding parallelism.
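Here's a toy C sketch of that latency-vs-bandwidth point (my own, purely illustrative; the array size and use of clock() are arbitrary choices): a streaming sum issues independent loads the hardware can pipeline, while a pointer chase can't issue a load until the previous one returns.

    /* Toy demo (illustrative only): streaming sum vs. pointer chase
       over the same array. The stream is bandwidth-bound; the chase
       is latency-bound because each load depends on the last. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1u << 24)  /* 16M words, far larger than any cache */

    int main(void) {
        size_t *next = malloc((size_t)N * sizeof *next);
        if (!next) return 1;

        /* Sattolo's algorithm: a random single-cycle permutation, so
           the chase visits every element and the address sequence is
           unpredictable to the prefetcher. */
        for (size_t i = 0; i < N; i++) next[i] = i;
        srand(42);
        for (size_t i = N - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;
            size_t t = next[i]; next[i] = next[j]; next[j] = t;
        }

        clock_t t0 = clock();
        size_t sum = 0;
        for (size_t i = 0; i < N; i++)   /* independent loads: the      */
            sum += next[i];              /* memory system pipelines them */
        clock_t t1 = clock();

        size_t p = 0;
        for (size_t i = 0; i < N; i++)   /* dependent loads: roughly one */
            p = next[p];                 /* full memory latency apiece   */
        clock_t t2 = clock();

        printf("stream %.2fs, chase %.2fs (sum=%zu p=%zu)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, sum, p);
        free(next);
        return 0;
    }

Both loops touch exactly the same 128 MB, but you'd expect the chase to come out several times slower. That gap is the wall, and batching/latency-hiding are ways of turning the second loop into something shaped like the first.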
Well… The reason there’s such a big mismatch is the memory controller. Something like 80-90% of the energy is spent moving data in and out because of the complex addressing. If you move compute into the RAM and instead shuttle instructions in and out, you might get a huge speed up. The challenge is when an instruction references some data in another bank; that may end up eliminating all the advantage. But I believe people are trying to commercialize this concept.
> If you move compute into the RAM and instead shuttle instructions in and out, you might get a huge speed up.
Isn't that just a per-compute cache/local memory? You're proposing a scaled-up variety of NUMA where every compute core has its local memory and going outside that will cost you more.
Correct, you can think of this like NUMA or a distributed system where you have compute colocated with storage. It’s a special purpose accelerator for very specific problems that have been optimized to take advantage of such an architecture.
It’s also not my proposal. The industry is exploring ways to cut down the energy requirements of AI; 80-90% of the energy consumption is just moving data back and forth across the memory controller. It has to read a row from a bank into a row buffer, access the specific cell being requested, shuttle it over the bus to the compute, and then write the data back to the cells. The current idea is to maybe do the processing on the entire row buffer, but you could imagine scaling that up to do it at the bank level. The challenge is manufacturing complexity since DRAM is manufactured differently, heat from the ALU, etc.
[1] https://semiconductor.samsung.com/news-events/tech-blog/hbm-...
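To make the traffic argument concrete, here's a toy model (everything below is invented for illustration; pim_bank_t is not any vendor's real API): it counts the words crossing the memory bus for a row-sized reduction done the conventional way versus done beside the row buffer, in the spirit of the HBM-PIM designs in [1].

    /* Conceptual sketch only: model a DRAM bank with a row buffer and
       count bus traffic for summing one row, CPU-side vs. in-memory. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define ROW_WORDS 1024  /* one row's worth of 32-bit words */

    typedef struct {
        uint32_t cells[ROW_WORDS];      /* the bank's storage array  */
        uint32_t row_buffer[ROW_WORDS];
        long     bus_words;             /* words moved over the bus  */
    } pim_bank_t;                       /* hypothetical, not a real API */

    /* ACTIVATE: copy a row from the cells into the row buffer. */
    static void bank_activate(pim_bank_t *b) {
        memcpy(b->row_buffer, b->cells, sizeof b->row_buffer);
    }

    /* Conventional path: every word is shuttled across the bus. */
    static uint64_t cpu_sum(pim_bank_t *b) {
        bank_activate(b);
        uint64_t s = 0;
        for (int i = 0; i < ROW_WORDS; i++) {
            s += b->row_buffer[i];
            b->bus_words++;             /* one bus transfer per word */
        }
        return s;
    }

    /* PIM path: one command in, one result out; the reduction runs
       against the row buffer inside the memory device. */
    static uint64_t pim_sum(pim_bank_t *b) {
        bank_activate(b);
        uint64_t s = 0;
        for (int i = 0; i < ROW_WORDS; i++) s += b->row_buffer[i];
        b->bus_words += 2;              /* command + result */
        return s;
    }

    int main(void) {
        pim_bank_t b = { .bus_words = 0 };
        for (int i = 0; i < ROW_WORDS; i++) b.cells[i] = (uint32_t)i;

        uint64_t a = cpu_sum(&b);
        long conventional = b.bus_words;
        b.bus_words = 0;
        uint64_t c = pim_sum(&b);

        printf("sum=%llu/%llu, bus words: conventional=%ld pim=%ld\n",
               (unsigned long long)a, (unsigned long long)c,
               conventional, b.bus_words);
        return 0;
    }

Same arithmetic either way; the difference is 1024 words across the bus versus 2, which is where the claimed energy savings would come from. The hard part mentioned above (an instruction referencing data in another bank) doesn't show up in a single-bank toy like this.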
Oh my knowledge is woefully out of date. But I believe the memory wall is a fact of life for the most part. Like many others, I nibbled around the edges of the constraint at massive cost in increased complexity. Outside of very specific exceptions the cure tends to be worse than the disease.
Damn, three years younger than one of my parents. A real shame.
Call your loved ones :(
I've never heard of that term.
Thanks for your contribution, then.