For me personally, my preference is still for manually marking what
goes into the Gettext pot file, and I believe the syntax I have in
mind for that is suitable.  For example, this is a multi-part
internationalized paragraph from my website:

(p ,@(__ "“Don’t Hang” by default uses the words from \
the ||ic_|/usr/share/dict|| directory, but it can deal with any list of \
expressions in a text file with one expression per line. ||samplelink_|Here|| \
is an example word list file compiled with words from \
||wiktionarylink_|Wiktionary’s list of 1000 basic English words|| which you \
can use if you want simpler words. This sample word list is available under \
the terms of ||ccbysalink_|the CC-BY-SA 3.0 Unported license||, because \
Wiktionary uses this license and the words are taken from there."
         `(("ic_" . ,(lambda (text)
                       `(span (@ (class "inline-code")) ,text)))
           ("samplelink_" . ,(lambda (text)
                               (a-href "sample-word-lists/english-words.txt"
                                       text)))
           ("wiktionarylink_" . ,(lambda (text)
                                   (a-href "https://en.wiktionary.org/wiki/Appendix:1000_basic_English_words"
                                           text)))
           ("ccbysalink_" . ,(lambda (text)
                               (a-href "sample-word-lists/CCBYSA-3.0-UNPORTED.txt"
                                       text))))))

But since there seems to be a demand for automatic extraction without
marking each piece of translatable text, maybe Haunt should offer
that as well.  This would involve deciding which parts of a Scheme
file are SHTML code and which parts are just Scheme.  Maybe that can
mostly be deduced from context, but some false positives may need to
be dealt with.  In SHTML code there is also the question of whether
e.g. URLs should go into the pot file or not.

I will take a look at Haunt and at upstreaming next week.  Currently
my main concern is build system integration: I want to just run
„ninja mywebsite-pot“ to build the pot file.  But GNU Autotools are
complicated, Meson/Ninja depends on Python, and Haunt should probably
integrate with each of them eventually…

Regards,
Florian
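
P.S. In case the ||marker_|text|| syntax is hard to read out of
context, here is a much simplified sketch of how such a `__' helper
could be implemented.  (This is only an illustration of the idea, not
the real implementation; it ignores escaping, missing handlers and
other edge cases.)  It translates the msgid with gettext, splits the
result on the literal "||" delimiters, and hands each marked region
to the matching handler from the alist:

(use-modules (srfi srfi-13))  ; string-contains, string-index

;; Split STR on the literal delimiter "||".
(define (split-on-bars str)
  (let loop ((start 0) (pieces '()))
    (let ((idx (string-contains str "||" start)))
      (if idx
          (loop (+ idx 2) (cons (substring str start idx) pieces))
          (reverse (cons (substring str start) pieces))))))

;; Sketch of `__': translate MSGID, then replace each ||marker_|text||
;; region with the result of the matching handler from HANDLERS.
(define (__ msgid handlers)
  (let loop ((pieces (split-on-bars (gettext msgid)))
             (marked? #f)
             (out '()))
    (if (null? pieces)
        (reverse out)
        (let ((piece (car pieces)))
          (if marked?
              ;; PIECE is "marker_|text": apply the handler to the text.
              (let* ((bar     (string-index piece #\|))
                     (marker  (substring piece 0 bar))
                     (text    (substring piece (+ bar 1)))
                     (handler (assoc-ref handlers marker)))
                (loop (cdr pieces) #f (cons (handler text) out)))
              (loop (cdr pieces) #t (cons piece out)))))))

The plain string pieces and the SHTML fragments returned by the
handlers together form the list that gets spliced into the paragraph
with ,@ above.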
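
For the automatic extraction, a first heuristic could be a tree
walker that skips (@ ...) attribute lists, so that URLs and class
names stay out of the pot file, and collects every other string
literal as a candidate msgid.  Again only a sketch; docstrings and
other plain-Scheme strings would still slip through, which is exactly
the false-positive problem mentioned above:

;; Collect string literals from an s-expression, skipping (@ ...)
;; attribute lists so URLs and class names stay out of the pot file.
(define (collect-translatable expr)
  (cond
   ((string? expr) (list expr))
   ((and (pair? expr) (eq? (car expr) '@)) '())  ; attribute list: skip
   ((pair? expr) (append (collect-translatable (car expr))
                         (collect-translatable (cdr expr))))
   (else '())))

;; Apply it to every top-level form of a page file.
(define (extract-strings file)
  (call-with-input-file file
    (lambda (port)
      (let loop ((expr (read port)) (acc '()))
        (if (eof-object? expr)
            acc
            (loop (read port) (append acc (collect-translatable expr))))))))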
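
As for the build system, maybe the extraction can simply live in a
plain Guile script that any build system invokes, so that a
"mywebsite-pot" target in Make, Ninja or anything else just runs it.
A minimal pot writer could look like this (a real tool would also
deduplicate msgids, emit a pot header, and wrap long lines):

;; Escape a msgid for pot file syntax.
(define (escape-msgid s)
  (string-concatenate
   (map (lambda (c)
          (case c
            ((#\")       "\\\"")
            ((#\\)       "\\\\")
            ((#\newline) "\\n")
            (else        (string c))))
        (string->list s))))

;; Write MSGIDS to PORT as minimal pot entries.
(define (write-pot msgids port)
  (for-each
   (lambda (msgid)
     (format port "msgid \"~a\"\nmsgstr \"\"\n\n" (escape-msgid msgid)))
   msgids))

;; E.g.: (write-pot (extract-strings "pages/index.scm") (current-output-port))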