unofficial mirror of guile-devel@gnu.org 
From: Matthias Koeppe <mkoeppe@mail.Math.Uni-Magdeburg.De>
Cc: guile-devel@gnu.org, haus@lambda.math.uni-magdeburg.de
Subject: [Patch] SRFI-13 string-tokenize is wrong
Date: Tue, 12 Mar 2002 18:35:42 +0100	[thread overview]
Message-ID: <uw5henlwuxd.fsf@lambda.math.uni-magdeburg.de> (raw)


Hi,

the Guile implementation of SRFI-13 `string-tokenize' gets the meaning
of the `token-set' argument backwards: it treats `token-set' as a set
of delimiter characters, whereas the SRFI specifies it as the set of
characters that the tokens themselves consist of.

Quoting the SRFI:

| string-tokenize s [token-set start end] -> list
| 
|     Split the string s into a list of substrings, where each substring
|     is a maximal non-empty contiguous sequence of characters from the
|     character set token-set.
| 
|         * token-set defaults to char-set:graphic (see SRFI 14 for more
|           on character sets and char-set:graphic).
| 
|     [...]    
| 
|     (string-tokenize "Help make programs run, run, RUN!") 
|     => ("Help" "make" "programs" "run," "run," "RUN!")
 
In Guile (1.5 branch):

      (string-tokenize "Help make programs run, run, RUN!") 
      => ("Help" "make" "programs" "run," "run," "RUN!")  ; OK

but:

      (string-tokenize "Help make programs run, run, RUN!" char-set:graphic)
      => (" " " " " " " " " ")  ; WRONG

The corresponding tests in srfi-13.test are also wrong.

I suggest fixing this bug in both the stable and the unstable branch,
so that incorrect uses of `string-tokenize' in user code are avoided.

The attached patch fixes the bug.  It also removes the Guile-specific
extension that allowed a character as the `token-set' argument,
because that extension is inconsistent both with Guile's own procedure
documentation and with the correct behavior of `string-tokenize' when
a character set is passed as `token-set'.

-- 
Matthias Köppe -- http://www.math.uni-magdeburg.de/~mkoeppe


--- srfi-13.c.~1.11.2.4.~	Tue Mar 12 17:03:03 2002
+++ srfi-13.c	Tue Mar 12 18:03:23 2002
@@ -2798,13 +2798,14 @@
 
 
 SCM_DEFINE (scm_string_tokenize, "string-tokenize", 1, 3, 0,
-	    (SCM s, SCM token_char, SCM start, SCM end),
+	    (SCM s, SCM token_set, SCM start, SCM end),
 	    "Split the string @var{s} into a list of substrings, where each\n"
 	    "substring is a maximal non-empty contiguous sequence of\n"
-	    "characters equal to the character @var{token_char}, or\n"
-	    "whitespace, if @var{token_char} is not given.  If\n"
-	    "@var{token_char} is a character set, it is used for finding the\n"
-	    "token borders.")
+	    "characters from the character set @var{token_set}, which\n"
+	    "defaults to an equivalent of @code{char-set:graphic}.\n"
+	    "If @var{start} or @var{end} indices are provided, they restrict\n"
+	    "@code{string-tokenize} to operating on the indicated substring\n"
+	    "of @var{s}.")
 #define FUNC_NAME s_scm_string_tokenize
 {
   char * cstr;
@@ -2814,7 +2815,7 @@
   SCM_VALIDATE_SUBSTRING_SPEC_COPY (1, s, cstr,
 				    3, start, cstart,
 				    4, end, cend);
-  if (SCM_UNBNDP (token_char))
+  if (SCM_UNBNDP (token_set))
     {
       int idx;
 
@@ -2838,7 +2839,7 @@
 	  result = scm_cons (scm_mem2string (cstr + cend, idx - cend), result);
 	}
     }
-  else if (SCM_CHARSETP (token_char))
+  else if (SCM_CHARSETP (token_set))
     {
       int idx;
 
@@ -2846,7 +2847,7 @@
 	{
 	  while (cstart < cend)
 	    {
-	      if (!SCM_CHARSET_GET (token_char, cstr[cend - 1]))
+	      if (SCM_CHARSET_GET (token_set, cstr[cend - 1]))
 		break;
 	      cend--;
 	    }
@@ -2855,41 +2856,14 @@
 	  idx = cend;
 	  while (cstart < cend)
 	    {
-	      if (SCM_CHARSET_GET (token_char, cstr[cend - 1]))
-		break;
-	      cend--;
-	    }
-	  result = scm_cons (scm_mem2string (cstr + cend, idx - cend), result);
-	}
-    }
-  else
-    {
-      int idx;
-      char chr;
-
-      SCM_VALIDATE_CHAR (2, token_char);
-      chr = SCM_CHAR (token_char);
-
-      while (cstart < cend)
-	{
-	  while (cstart < cend)
-	    {
-	      if (cstr[cend - 1] != chr)
-		break;
-	      cend--;
-	    }
-	  if (cstart >= cend)
-	    break;
-	  idx = cend;
-	  while (cstart < cend)
-	    {
-	      if (cstr[cend - 1] == chr)
+	      if (!SCM_CHARSET_GET (token_set, cstr[cend - 1]))
 		break;
 	      cend--;
 	    }
 	  result = scm_cons (scm_mem2string (cstr + cend, idx - cend), result);
 	}
     }
+  else SCM_WRONG_TYPE_ARG (2, token_set);
   return result;
 }
 #undef FUNC_NAME

Thread overview: 6+ messages
2002-03-12 17:35 Matthias Koeppe [this message]
2002-04-24 19:58 ` [Patch] SRFI-13 string-tokenize is wrong Marius Vollmer
2002-04-26  8:27   ` Matthias Koeppe
2002-04-26 18:18     ` Marius Vollmer
2002-04-29  9:21       ` Matthias Koeppe
2002-05-06 18:54         ` Marius Vollmer
