I feel like normalization would be a nightmare. Consider all the mistranscriptions, OCR errors, and differing name forms across library catalogs (case, parentheticals, etc.).
If we assume there's no reliable way to define a book, maybe locality-sensitive hashing could help find probably-the-same books.
The idea is pretty cool though.
Good point. Normalization is deliberately scoped to 'what a human reads off the title page' rather than reconciling all possible metadata sources. LSH as a complementary fuzzy-matching layer for catalog reconciliation is exactly what the planned resolver at openusbn.org is designed to support: deterministic identifier as the anchor, probabilistic matching as the discovery tool.
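To make the LSH idea concrete, here's a rough sketch of one way that fuzzy layer could work: MinHash signatures over character shingles of a lightly normalized title, banded so that any two records agreeing on at least one band become a candidate "probably the same book" pair. All names, parameters, and the toy catalog below are my own illustration, not anything from openusbn.org.

    import hashlib
    import re
    from collections import defaultdict

    NUM_HASHES = 128  # MinHash signature length
    BANDS = 32        # LSH bands; rows per band = NUM_HASHES // BANDS

    def shingles(title, n=3):
        # Character n-grams over a lightly normalized title, so case,
        # punctuation, and parenthetical noise matter less.
        t = re.sub(r"[^a-z0-9 ]", "", title.lower())
        t = re.sub(r"\s+", " ", t).strip()
        return {t[i:i + n] for i in range(max(len(t) - n + 1, 1))}

    def minhash(shingle_set):
        # For each seeded hash function, keep the minimum hash value over
        # all shingles; similar titles tend to get similar signatures.
        return [
            min(int.from_bytes(
                    hashlib.blake2b(f"{seed}:{s}".encode(), digest_size=8).digest(),
                    "big")
                for s in shingle_set)
            for seed in range(NUM_HASHES)
        ]

    def candidate_pairs(records):
        # records: id -> title. Two records become a candidate pair
        # if their signatures agree on at least one band.
        rows = NUM_HASHES // BANDS
        buckets = defaultdict(list)
        for rid, title in records.items():
            sig = minhash(shingles(title))
            for b in range(BANDS):
                buckets[(b, tuple(sig[b * rows:(b + 1) * rows]))].append(rid)
        pairs = set()
        for ids in buckets.values():
            for i in range(len(ids)):
                for j in range(i + 1, len(ids)):
                    pairs.add(tuple(sorted((ids[i], ids[j]))))
        return pairs

    catalog = {
        "a": "Moby-Dick; or, The Whale",
        "b": "MOBY DICK (The Whale)",   # case/parenthetical/OCR-style noise
        "c": "Pride and Prejudice",
    }
    print(candidate_pairs(catalog))     # most likely {('a', 'b')}

In a resolver like the one described above, the deterministic identifier would stay authoritative; the candidate pairs are just a shortlist for scoring or human review.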
I would believe there is a nontrivial number of books by the same author, published in the same year with the same title, in different formats, in different languages, and/or by different publishers.
Right. This would conflate, e.g., the British and American editions of a book published in both countries in the same year, which frequently differ; the same goes for different editions of a book issued within the same year.