Glenn Morris wrote:

> !       (dir (or (file-name-directory to-name)
> !                default-directory)))
> !   ;; Can't delete or create files in a read-only directory.
> !   (unless (file-writable-p dir)
> !     (signal 'file-error (list "Directory is not writable" dir)))

This seems like a good idea, as deleting a backup file we won't be able
to recreate would be a bad move. However, I guess there are filesystems
out there where a file may be undeletable even though its directory is
writable, so be careful about that assumption, and still be careful
about the exception handling later on.

> +       ;; If we allow for the possibility of something
> +       ;; creating the file between delete and copy
> +       ;; (below), we must also allow for the
> +       ;; possibility of something deleting it between
> +       ;; a file-exists-p check and a delete.
>        (condition-case nil
>            (delete-file to-name)
>          (file-error nil))

You left the possible cause of the loop in place, again relying on
catching an error in the normal course of events when there is no
backup. I can see your point, but I still think this is dangerous. One
reason is given above; the second reason is this: if we keep thinking
about other processes creating or deleting files in the middle of the
operation, we might as well consider other processes changing
permissions as well. So who says the directory will still be writable
once we get here?

Is there some reliable way to distinguish a file-error raised because
the file does not exist from a file-error raised because we cannot
delete it? We can recover from the former, but not from the latter.

> +       ;; FIXME does that every actually happen in practice?
> +       ;; This is a potential infloop, which seems bad...

The more I think about it, the rarer this seems to me. In my last mail
I voted for a fixed maximum loop count, but by now I would even drop
the loops entirely; they are simply not worth the effort, I guess.

Greetings,
Martin von Gagern
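
P.S.  For illustration only, here is a rough sketch of one way the two
file-error cases might be told apart: attempt the delete and, on
failure, check whether the file is still present.  The helper name is
made up, and whether such a check is race-free enough for the backup
code is exactly the open question above.

    ;; Sketch: try to delete FILE, treating "already gone" as success
    ;; and re-signaling any other file-error (e.g. a permission
    ;; problem) so the caller does not loop on an unrecoverable error.
    (defun my-delete-file-if-possible (file)
      (condition-case err
          (progn (delete-file file) t)
        (file-error
         (if (file-exists-p file)
             ;; File still there: the delete genuinely failed.
             (signal (car err) (cdr err))
           ;; File vanished (never existed, or someone else deleted
           ;; it): nothing left to delete, so treat it as success.
           t))))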
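
P.P.S.  And for completeness, the "fixed maximum loop count" variant I
mentioned could look roughly like this; the names and the retry limit
are arbitrary, and this is only a sketch of the idea, not a proposed
patch.

    ;; Sketch: retry the delete-then-copy a bounded number of times,
    ;; then give up with an error instead of inflooping.
    (defun my-copy-with-retries (from to &optional max-tries)
      (let ((tries (or max-tries 3))
            (done nil))
        (while (and (not done) (> tries 0))
          (condition-case nil
              (progn
                (when (file-exists-p to)
                  (delete-file to))
                ;; Fails if something recreated TO in the meantime.
                (copy-file from to nil t)
                (setq done t))
            (file-error (setq tries (1- tries)))))
        (unless done
          (signal 'file-error (list "Cannot create backup file" to)))))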