End to end testing of pre-defined scenarios is a good way to check for
regressions.

Here we introduce testing of some examples available in the
documentation.  That way, we're sure that new users will be able to
run them without problems.

Each scenario is a different test and is run in a different temporary
directory which gets destroyed if the scenario succeeded.
---
 tests/examples.scm | 39 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 38 insertions(+), 1 deletion(-)

diff --git a/tests/examples.scm b/tests/examples.scm
index 6f7a64e..38dac3e 100644
--- a/tests/examples.scm
+++ b/tests/examples.scm
@@ -30,6 +30,8 @@
     (lambda (port)
       (wisp-reader port (user-module-for-file "test.w")))))
 
+(define top-srcdir (getenv "abs_top_srcdir"))
+
 (test-equal "wisp syntax produces the expected S-expression"
   '(process haiku
     (outputs "haiku.txt")
@@ -39,8 +41,43 @@
      (with-output-to-file (unquote outputs)
        (lambda ()
          (display "the library book\noverdue?\nslow falling snow"))))))
-  (call-with-input-file (string-append (getenv "abs_top_srcdir")
+  (call-with-input-file (string-append top-srcdir
                                        "/doc/examples/haiku.w")
     read-wisp))
 
+(define-syntax-rule (with-directory-excursion dir body body* ...)
+  (let ((old-dir (getcwd)))
+    (dynamic-wind
+      (lambda () (chdir dir))
+      (lambda () body body* ...)
+      (lambda () (chdir old-dir)))))
+
+(with-directory-excursion
+    (string-append top-srcdir "/doc/examples")
+
+  (define (process-success? status)
+    (= 0 (or (status:exit-val status)
+             (status:term-sig status))))
+
+  (define scenarios
+    (list "extended-example-workflow.w"))
+
+  (for-each (lambda (example)
+              (test-assert example
+                (let* ((tmp-dir (mkdtemp
+                                 (format #f "gwl-example-~a.XXXXXX" example)))
+                       (abs-example (canonicalize-path example))
+                       (success?
+                        (with-directory-excursion tmp-dir
+                          (process-success?
+                           (system
+                            (format #f "guix workflow run -fc ~a -l all"
+                                    abs-example))))))
+                  (if success?
+                      (system* "rm" "-rf" tmp-dir)
+                      (format (error-output-port)
+                              "Example directory: ~a\n" tmp-dir))
+                  success?)))
+            scenarios))
+
 (test-end "examples")
-- 
2.35.1
Hi Olivier,

thanks for the patch!

> End to end testing of pre-defined scenarios is a good way to check
> for regressions.
>
> Here we introduce testing of some examples available in the
> documentation.  That way, we're sure that new users will be able to
> run them without problems.
>
> Each scenario is a different test and is run in a different temporary
> directory which gets destroyed if the scenario succeeded.

It’s a good idea to run the examples.  However, this requires a fully
functional Guix installation (which we don’t have when building with
Guix, for example), so we should not include them in “make check” but
add a separate target for these tests.  An alternative may be to do
what Guix does for system tests, but I’d be okay with just having a
separate target that is run manually before releases or in CI.

> +(define-syntax-rule (with-directory-excursion dir body body* ...)
> +  (let ((old-dir (getcwd)))
> +    (dynamic-wind
> +      (lambda () (chdir dir))
> +      (lambda () body body* ...)
> +      (lambda () (chdir old-dir)))))
> +
> +(with-directory-excursion
> +    (string-append top-srcdir "/doc/examples")
> +
> +  (define (process-success? status)
> +    (= 0 (or (status:exit-val status)
> +             (status:term-sig status))))

Use ZERO? here.

> +  (define scenarios
> +    (list "extended-example-workflow.w"))

Should these better be discovered automatically via SCANDIR?

> +  (for-each (lambda (example)
> +              (test-assert example
> +                (let* ((tmp-dir (mkdtemp
> +                                 (format #f "gwl-example-~a.XXXXXX" example)))
> +                       (abs-example (canonicalize-path example))
> +                       (success?
> +                        (with-directory-excursion tmp-dir
> +                          (process-success?
> +                           (system
> +                            (format #f "guix workflow run -fc ~a -l all"
> +                                    abs-example))))))

Please don’t use SYSTEM.  How about

  (system* "guix" "workflow" "run" "--force" "--container"
           "--log-events=all" (canonicalize-path example))

?

> +                  (if success?
> +                      (system* "rm" "-rf" tmp-dir)

Why shell out to RM when we have DELETE-FILE and its recursive friend
in Guix?  I’d also rather move clean-up work to a DYNAMIC-WIND unwind
handler.

> +                      (format (error-output-port)
> +                              "Example directory: ~a\n" tmp-dir))

Nitpick: ~% instead of \n.

> +                  success?)))
> +            scenarios))

This FOR-EACH loop combines test definition with test running, which
seems wrong to me.  Maybe SRFI-64 is not the best fit for tests that
only care about whether a shell command was run successfully.  Perhaps
we should do as Guix does and just have a shell script to run these
tests.

-- 
Ricardo
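[Editorial aside: the two suggestions above — SYSTEM* and ZERO? — combine naturally.  A hedged sketch of the resulting helper; the name RUN-EXAMPLE-COMMAND is ours, not from the patch:]

```scheme
;; Sketch only: combines the suggested system* invocation with a
;; zero?-based status check.  system* takes the program and its
;; arguments separately, so no shell is involved and no quoting is
;; needed.
(define (run-example-command example)
  (let ((status (system* "guix" "workflow" "run"
                         "--force" "--container" "--log-events=all"
                         (canonicalize-path example))))
    ;; status:exit-val is #f when the process was killed by a signal,
    ;; in which case status:term-sig gives the (non-zero) signal number.
    (zero? (or (status:exit-val status)
               (status:term-sig status)))))
```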
On Fri, 29 Apr 2022, Ricardo Wurmus <rekado@elephly.net> wrote:

> Hi Olivier,
>
> thanks for the patch!
>
>> End to end testing of pre-defined scenarios is a good way to check
>> for regressions.
>>
>> Here we introduce testing of some examples available in the
>> documentation.  That way, we're sure that new users will be able to
>> run them without problems.
>>
>> Each scenario is a different test and is run in a different
>> temporary directory which gets destroyed if the scenario succeeded.
>
> It’s a good idea to run the examples.  However, this requires a fully
> functional Guix installation (which we don’t have when building with
> Guix, for example), so we should not include them in “make check” but
> add a separate target for these tests.  An alternative may be to do
> what Guix does for system tests, but I’d be okay with just having a
> separate target that is run manually before releases or in CI.
>
>> +(define-syntax-rule (with-directory-excursion dir body body* ...)
>> +  (let ((old-dir (getcwd)))
>> +    (dynamic-wind
>> +      (lambda () (chdir dir))
>> +      (lambda () body body* ...)
>> +      (lambda () (chdir old-dir)))))
>> +
>> +(with-directory-excursion
>> +    (string-append top-srcdir "/doc/examples")
>> +
>> +  (define (process-success? status)
>> +    (= 0 (or (status:exit-val status)
>> +             (status:term-sig status))))
>
> Use ZERO? here.

I learn every day with you.  I did not know about it ^^.

>> +  (define scenarios
>> +    (list "extended-example-workflow.w"))
>
> Should these better be discovered automatically via SCANDIR?

Well, some examples are not full workflows, only processes.  So I
think that yes, if we set a common prefix for full workflows:

  (scandir "." (cut string-prefix? "example-workflow" <>))

However, it's nice -- even more so when debugging -- to be able to
cherry-pick the tests.  This is easier with the list above, by
commenting out tests that are passing.

>> +  (for-each (lambda (example)
>> +              (test-assert example
>> +                (let* ((tmp-dir (mkdtemp
>> +                                 (format #f "gwl-example-~a.XXXXXX" example)))
>> +                       (abs-example (canonicalize-path example))
>> +                       (success?
>> +                        (with-directory-excursion tmp-dir
>> +                          (process-success?
>> +                           (system
>> +                            (format #f "guix workflow run -fc ~a -l all"
>> +                                    abs-example))))))
>
> Please don’t use SYSTEM.  How about
>
>   (system* "guix" "workflow" "run" "--force" "--container"
>            "--log-events=all" (canonicalize-path example))

Yes, I like it better too, thanks.

>> +                  (if success?
>> +                      (system* "rm" "-rf" tmp-dir)
>
> Why shell out to RM when we have DELETE-FILE and its recursive friend
> in Guix?  I’d also rather move clean-up work to a DYNAMIC-WIND unwind
> handler.

I hesitated to include Guix's modules in the tests.  If you're okay
with it, I will use the helpers available in Guix.  I concur that the
cleanup should be in a DYNAMIC-WIND unwind handler.

>> +                      (format (error-output-port)
>> +                              "Example directory: ~a\n" tmp-dir))
>
> Nitpick: ~% instead of \n.

Is there a reason why?  I don't mind; I just never really understood
why Scheme has this special format rule for newline.

>> +                  success?)))
>> +            scenarios))
>
> This FOR-EACH loop combines test definition with test running, which
> seems wrong to me.  Maybe SRFI-64 is not the best fit for tests that
> only care about whether a shell command was run successfully.
> Perhaps we should do as Guix does and just have a shell script to run
> these tests.

What do you find wrong about it?  We could rewrite it as:

  (define (run-example path)
    ...)

  (test-assert (run-example "extended-workflow.w"))
  (test-assert (run-example "..."))

if you like it better.  However, we lose the power of SCANDIR
mentioned above.

Let me know; I will send a v2 with your above recommendations and
answers to my questions.

-- 
Olivier Dion
oldiob.dev
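[Editorial aside: the SCANDIR-based discovery sketched above might read in full as follows; the "example-workflow" prefix is hypothetical, and note that the predicate is STRING-PREFIX?, with a question mark:]

```scheme
(use-modules (ice-9 ftw)      ; scandir
             (srfi srfi-26))  ; cut

;; Discover full-workflow examples by file-name prefix instead of
;; maintaining a hand-written list of scenarios.
(define scenarios
  (scandir "." (cut string-prefix? "example-workflow" <>)))
```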
Olivier Dion <olivier.dion@polymtl.ca> writes:

>>> +  (define scenarios
>>> +    (list "extended-example-workflow.w"))
>>
>> Should these better be discovered automatically via SCANDIR?
>
> Well, some examples are not full workflows, only processes.  So I
> think that yes, if we set a common prefix for full workflows:
>
>   (scandir "." (cut string-prefix? "example-workflow" <>))
>
> However, it's nice -- even more so when debugging -- to be able to
> cherry-pick the tests.  This is easier with the list above, by
> commenting out tests that are passing.

That makes sense.

>> Why shell out to RM when we have DELETE-FILE and its recursive
>> friend in Guix?  I’d also rather move clean-up work to a
>> DYNAMIC-WIND unwind handler.
>
> I hesitated to include Guix's modules in the tests.  If you're okay
> with it, I will use the helpers available in Guix.  I concur that the
> cleanup should be in a DYNAMIC-WIND unwind handler.

Yes, using the helpers from Guix is fine.  The GWL doesn’t make much
sense without Guix anyway :)

>>> +                      (format (error-output-port)
>>> +                              "Example directory: ~a\n" tmp-dir))
>>
>> Nitpick: ~% instead of \n.
>
> Is there a reason why?  I don't mind; I just never really understood
> why Scheme has this special format rule for newline.

I don’t know if there’s a reason for it, but I prefer it for
consistency.

>>> +                  success?)))
>>> +            scenarios))
>>
>> This FOR-EACH loop combines test definition with test running, which
>> seems wrong to me.  Maybe SRFI-64 is not the best fit for tests that
>> only care about whether a shell command was run successfully.
>> Perhaps we should do as Guix does and just have a shell script to
>> run these tests.
>
> What do you find wrong about it?  We could rewrite it as:
>
>   (define (run-example path)
>     ...)
>
>   (test-assert (run-example "extended-workflow.w"))
>   (test-assert (run-example "..."))
>
> if you like it better.  However, we lose the power of SCANDIR
> mentioned above.

Hmm, you’re right about that.

If you could split this up into smaller procedures (one definition of
RUN-EXAMPLE and another that loops TEST-ASSERT and RUN-EXAMPLE on the
test files) I’d already be happy :)

> Let me know; I will send a v2 with your above recommendations and
> answers to my questions.

Thank you!

-- 
Ricardo
End to end testing of pre-defined scenarios is a good way to check for
regressions.

Here we introduce testing of some examples available in the
documentation.  That way, we're sure that new users will be able to
run them without problems.

Each scenario is a different test and is run in a different temporary
directory which gets destroyed if the scenario succeeded.
---
 tests/examples.scm | 51 +++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 48 insertions(+), 3 deletions(-)

diff --git a/tests/examples.scm b/tests/examples.scm
index 6f7a64e..22edd12 100644
--- a/tests/examples.scm
+++ b/tests/examples.scm
@@ -14,12 +14,13 @@
 ;;; along with this program.  If not, see <http://www.gnu.org/licenses/>.
 
 (define-module (test-examples)
+  #:use-module (gwl ui)
   #:use-module (gwl utils)
   #:use-module (gwl errors)
+  #:use-module ((guix build utils)
+                #:select (delete-file-recursively))
   #:use-module (srfi srfi-64))
 
-(test-begin "examples")
-
 (define user-module-for-file
   (@@ (gwl workflows utils) user-module-for-file))
 
@@ -30,6 +31,48 @@
     (lambda (port)
       (wisp-reader port (user-module-for-file "test.w")))))
 
+(define top-srcdir (getenv "abs_top_srcdir"))
+
+(define-syntax-rule (with-temporary-directory prefix body body* ...)
+  (let ((old-dir (getcwd))
+        (tmp-dir (mkdtemp
+                  (format #f "/tmp/~a.XXXXXX" prefix))))
+    (dynamic-wind
+      (lambda ()
+        (log-event 'info (G_ "Changing directory: ~a") tmp-dir)
+        (chdir tmp-dir))
+      (lambda () body body* ...)
+      (lambda ()
+        (log-event 'info (G_ "Changing directory: ~a") old-dir)
+        (chdir old-dir)
+        (delete-file-recursively tmp-dir)))))
+
+(define (process-success? status)
+  (zero? (or (status:exit-val status)
+             (status:term-sig status))))
+
+(define (run-example example)
+
+  (define abs-example
+    (canonicalize-path
+     (string-append top-srcdir "/doc/examples/" example)))
+
+  (with-temporary-directory
+   (string-append "gwl-example-" example)
+   (process-success?
+    (system* "guix"
+             "workflow"
+             "run"
+             "--force"
+             "--container"
+             "--log-events=all"
+             abs-example))))
+
+(define-syntax-rule (test-example example)
+  (test-assert example (run-example example)))
+
+(test-begin "examples")
+
 (test-equal "wisp syntax produces the expected S-expression"
   '(process haiku
     (outputs "haiku.txt")
@@ -39,8 +82,10 @@
      (with-output-to-file (unquote outputs)
        (lambda ()
          (display "the library book\noverdue?\nslow falling snow"))))))
-  (call-with-input-file (string-append (getenv "abs_top_srcdir")
+  (call-with-input-file (string-append top-srcdir
                                        "/doc/examples/haiku.w")
     read-wisp))
 
+(test-example "extended-example-workflow.w")
+
 (test-end "examples")
-- 
2.36.0
Olivier Dion <olivier.dion@polymtl.ca> writes:
> End to end testing of pre-defined scenarios are a good way to check for
> regression.
>
> Here we introduce testing of some examples available in the documentation. That
> way, we're sure that new users should be able to run them without problems.
>
> Each scenario is a different test and is run in a different temporary
> directory which get destroyed if the scenario succeeded.
Thank you for this patch. I applied it. I also added an environment
variable check to allow us to skip these integration tests, e.g. when
running the tests inside of a Guix build container (where we don’t have
access to Guix).
--
Ricardo
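[Editorial aside: the environment-variable check itself is not shown in the thread.  One plausible shape, reusing the TEST-EXAMPLE macro from the v2 patch — the exact variable name and placement are assumptions:]

```scheme
;; Hypothetical sketch: skip each integration test when the variable is
;; set, e.g. inside a Guix build container without access to Guix.
(define-syntax-rule (test-example example)
  (begin
    (when (getenv "GWL_SKIP_INTEGRATION_TESTS")
      (test-skip 1))                    ; skip the next test
    (test-assert example (run-example example))))
```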
On Fri, 03 Jun 2022, Ricardo Wurmus <rekado@elephly.net> wrote:
> Olivier Dion <olivier.dion@polymtl.ca> writes:
>
>> End to end testing of pre-defined scenarios are a good way to check for
>> regression.
>>
>> Here we introduce testing of some examples available in the documentation. That
>> way, we're sure that new users should be able to run them without problems.
>>
>> Each scenario is a different test and is run in a different temporary
>> directory which get destroyed if the scenario succeeded.
>
> Thank you for this patch. I applied it. I also added an environment
> variable check to allow us to skip these integration tests, e.g. when
> running the tests inside of a Guix build container (where we don’t have
> access to Guix).
What about skipping the group instead of running the tests?
--8<---------------cut here---------------start------------->8---
(when (getenv "GWL_SKIP_INTEGRATION_TESTS")
  (test-skip "full-examples"))

(test-group "full-examples"
  (test-example "simple.scm")
  (test-example "simple-wisp.w")
  ;; ...
  )
--8<---------------cut here---------------end--------------->8---
--
Olivier Dion
oldiob.dev
Olivier Dion <olivier.dion@polymtl.ca> writes:
> What about skipping the group instead of running the tests?
>
> (when (getenv "GWL_SKIP_INTEGRATION_TESTS")
>   (test-skip "full-examples"))
>
> (test-group "full-examples"
>   (test-example "simple.scm")
>   (test-example "simple-wisp.w")
>   ;; ...
>   )
That would work, too. I’d apply a patch to that effect :)
--
Ricardo