Mon Jan 5, 2026 6:56pm PST
Show HN: Comparing Nietzsche Translations with Sentence Embeddings
I ran 5 English translations of Beyond Good and Evil through sentence embeddings to see if NLP could detect what I felt as a reader: that each translation reads like a different book.

Findings:

- Hollingdale sits at the semantic center: closest to the German original (0.806) and to all the other translators

- Translators have fingerprints: UMAP separates them visually without being told who translated what

- Short aphorisms diverge most: less context means more interpretive freedom

- Nietzsche's pre-1901 spelling ("Werth" vs "Wert") confuses the model, so I built a 95-rule spelling normalizer
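For anyone curious what the normalizer looks like: a minimal sketch of the rule-based approach, assuming simple regex substitutions. The three rules shown are illustrative examples of pre-1901 orthography, not the actual 95-rule set.

```python
import re

# Illustrative subset of pre-1901 German spelling rules (the real
# normalizer has ~95 of these; these three are just examples).
RULES = [
    (re.compile(r"\bWerth"), "Wert"),    # Werth/Werthe/... -> Wert/Werte/...
    (re.compile(r"\bThier"), "Tier"),    # Thier -> Tier
    (re.compile(r"\bgiebt\b"), "gibt"),  # giebt -> gibt
]

def normalize(text: str) -> str:
    """Apply each spelling rule in order to the input text."""
    for pattern, replacement in RULES:
        text = pattern.sub(replacement, text)
    return text

print(normalize("Es giebt keinen Werth an sich."))
```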

Built with MiniLM embeddings, UMAP, and Next.js.

Curious whether this approach could work for other translated philosophical texts, and open to feedback on methodology.
