Hi all,

I was watching
https://fosdem.org/2023/schedule/event/security_where_does_that_code_come_from/
and one concern that came up was that there is no protection or
mitigation against 'guix pull' servers providing machines with old data,
to (for example) stall security updates from reaching a server.
Currently the Savannah sysadmins have the power to delay security
updates for my machine.  I think this should be considered an unwanted
behaviour that warrants some action, either a tooling improvement or
documentation.

There are many ways to improve the situation, even though addressing the
problem completely is difficult (most if not all GNU/Linux distributions
have similar issues).  Some ideas:

* Warn if the repository has not seen a commit for > 7 days, with the
  delay being configurable.  This may be a bad idea: warnings are
  generally not appreciated by users, and security warnings especially
  so.  A rough sketch of such a check is in footnote [1] below.

* Have 'guix pull' show metadata for the last commit it received (e.g.,
  show the output of: git log -1) to give users a way of noticing that
  it is not seeing new data.  Currently only the git commit id is shown,
  which does not convey enough information.

* Adopt a way for repositories to state the validity period of their
  content, so that the 7-day limit is set by the repository itself;
  compare for example:
  https://wiki.debian.org/DebianRepository/Format#Date.2C_Valid-Until
  The idea is that 'guix pull' would fail if the repository hasn't been
  touched after the specified interval ends, causing the user to notice
  and take action.  The maximum interval provided by the repository
  should probably be capped by a locally configured maximum delay during
  which the user is willing to see only old data.  This brings up other
  concerns (what if someone steals an OpenPGP signing key, changes the
  interval to 70000 days, pushes that out to one machine only based on
  its IP address, and then stalls that machine from updating again?),
  but it seems to provide a decent user experience and some good
  protection by default.  OpenPGP key breaches can be mitigated by other
  means, and shouldn't be a strong argument against this improvement for
  stale servers.  A hypothetical sketch of such a declaration is in
  footnote [2] below.

* Have a third-party, or even decentralized, monitoring service where
  each client can compare the commit data it got from 'guix pull' with
  what everyone else is seeing.  This provides global consistency of
  what Guix machines are seeing for the Guix repositories, similar to
  Certificate Transparency.  It protects against targeted stale-data
  attacks only, but that may be sufficient: any non-targeted stale-data
  attack is likely to be noticed by the Guix maintainers.  It would also
  protect against substitution attacks, although I'm not sure whether
  Guix protects against those by other means?  I'm thinking a malicious
  Savannah could send me core-updates instead of master, but call it
  master to my machine, and I would not notice that I got a different
  branch.  Does 'guix authenticate' verify metadata such as the git
  branch in a way where the server cannot fake this data?

There are many other ideas too.  Thoughts?

/Simon
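[1] A rough, illustrative sketch of the staleness warning in plain
Guile.  The function names, the way the checkout directory is passed in,
and the 7-day threshold are all made up for illustration; this is not
how 'guix pull' is structured today.  On a typical setup the checkout
that 'guix pull' maintains lives under ~/.cache/guix/checkouts/.

(use-modules (ice-9 popen)
             (ice-9 rdelim))

;; Committer timestamp (seconds since the epoch) of the newest commit in
;; the git checkout at DIRECTORY, obtained by shelling out to git.
(define (last-commit-timestamp directory)
  (let* ((port (open-input-pipe
                (string-append "git -C " directory " log -1 --format=%ct")))
         (line (read-line port)))
    (close-pipe port)
    (string->number line)))

;; Warn when the newest commit in DIRECTORY is older than THRESHOLD-DAYS
;; days.  THRESHOLD-DAYS would be user-configurable; 7 is just the
;; number used as an example above.
(define (warn-if-stale directory threshold-days)
  (let ((age-days (quotient (- (current-time)
                               (last-commit-timestamp directory))
                            86400)))
    (when (> age-days threshold-days)
      (format (current-error-port)
              "guix pull: warning: ~a: last commit is ~a days old~%"
              directory age-days))))

;; Example use: (warn-if-stale "/path/to/guix/checkout" 7)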
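[2] Purely hypothetical sketch of a repository-provided validity period,
as an extra field in the channel's '.guix-channel' file.  No such field
exists today and the name 'valid-for-days' is invented here; it is only
meant to show the shape of the idea.

(channel
 (version 0)
 ;; Hypothetical field: the channel declares for how long its newest
 ;; commit may be treated as current.  'guix pull' would warn or fail
 ;; once this window has passed, capped by a locally configured maximum
 ;; so that a stolen signing key cannot be used to declare an absurdly
 ;; long window.
 (valid-for-days 14))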