From: Philipp Stephani
> From: Philipp Stephani <p.stephani2@gmail.com>
> Date: Sat, 30 Sep 2017 22:02:55 +0000
> Cc: emacs-devel@gnu.org
>
> Subject: [PATCH] Implement native JSON support using Jansson
Thanks, a few more comments/questions.
> +#if __has_attribute (warn_unused_result)
> +# define ATTRIBUTE_WARN_UNUSED_RESULT __attribute__ ((__warn_unused_result__))
> +#else
> +# define ATTRIBUTE_WARN_UNUSED_RESULT
> +#endif
Hmm... why do we need this attribute?  You use it with 2 static
functions, so this sounds like a left-over from the development stage?

It's not strictly needed (and if you don't like it, I can remove it),
but it helps catch memory leaks.
> +static Lisp_Object
> +internal_catch_all_1 (Lisp_Object (*function) (void *), void *argument)
Can you tell why you needed this (and the similar internal_catch_all)?
Is that only because the callbacks could signal an error, or is there
another reason?  If the former, I'd prefer to simplify the code and
its maintenance by treating the error condition in a less drastic
manner, and avoiding the call to xsignal.

The callbacks (especially insert and before-/after-change-hook) can
exit nonlocally, but these nonlocal exits may not escape the Jansson
callback.  Therefore all nonlocal exits must be caught here.
And btw, how can size be greater than SIZE_MAX in this case?  This is
a valid Lisp object, isn't it?  (There are more such tests in the
patch, e.g. in lisp_to_json, and I think they, too, are redundant.)
> +      *json = json_check (json_array ());
> +      ptrdiff_t count = SPECPDL_INDEX ();
> +      record_unwind_protect_ptr (json_release_object, json);
> +      for (ptrdiff_t i = 0; i < size; ++i)
> +        {
> +          int status
> +            = json_array_append_new (*json, lisp_to_json (AREF (lisp, i)));
> +          if (status == -1)
> +            json_out_of_memory ();
> +          eassert (status == 0);
> +        }
> +      eassert (json_array_size (*json) == size);
> +      clear_unwind_protect (count);
> +      return unbind_to (count, Qnil);
This, too, sounds more complex than it should: you record
unwind-protect just so lisp_to_json's subroutines could signal an
error due to insufficient memory, right?  Why can't we have the
out-of-memory check only inside this loop, which you already do, and
avoid the checks on lower levels (which undoubtedly cost us extra
cycles)?  What do those extra checks in json_check buy us?  The errors
they signal are no more informative than the one in the loop, AFAICT.

I don't understand what you mean.  We need to check the return values
of all functions if we want to use them later.
> +static Lisp_Object
> +json_insert (void *data)
> +{
> +  const struct json_buffer_and_size *buffer_and_size = data;
> +  if (buffer_and_size->size > PTRDIFF_MAX)
> +    xsignal1 (Qoverflow_error, build_string ("buffer too large"));
> +  insert (buffer_and_size->buffer, buffer_and_size->size);
I don't think we need this test here, as 'insert' already has the
equivalent test in one of its subroutines.

It can't, because it takes the byte length as ptrdiff_t.  We need to
check beforehand whether the size is actually in the valid range of
ptrdiff_t.
> +    case JSON_INTEGER:
> +      {
> +        json_int_t value = json_integer_value (json);
> +        if (FIXNUM_OVERFLOW_P (value))
> +          xsignal1 (Qoverflow_error,
> +                    build_string ("JSON integer is too large"));
> +        return make_number (value);
This overflow test is also redundant, as make_number already does it.

It can't, because json_int_t can be larger than EMACS_INT.  Also,
make_number doesn't contain any checks.
> +    case JSON_STRING:
> +      {
> +        size_t size = json_string_length (json);
> +        if (FIXNUM_OVERFLOW_P (size))
> +          xsignal1 (Qoverflow_error, build_string ("JSON string is too long"));
> +        return json_make_string (json_string_value (json), size);
Once again, the overflow test is redundant, as make_specified_string
(called by json_make_string) already includes an equivalent test.

And once again, we need to check at least whether the size fits into
ptrdiff_t.
> +    case JSON_ARRAY:
> +      {
> +        if (++lisp_eval_depth > max_lisp_eval_depth)
> +          xsignal0 (Qjson_object_too_deep);
> +        size_t size = json_array_size (json);
> +        if (FIXNUM_OVERFLOW_P (size))
> +          xsignal1 (Qoverflow_error, build_string ("JSON array is too long"));
> +        Lisp_Object result = Fmake_vector (make_natnum (size), Qunbound);
Likewise here: Fmake_vector makes sure the size is not larger than
allowed.

Same as above: it can't.
> +    case JSON_OBJECT:
> +      {
> +        if (++lisp_eval_depth > max_lisp_eval_depth)
> +          xsignal0 (Qjson_object_too_deep);
> +        size_t size = json_object_size (json);
> +        if (FIXNUM_OVERFLOW_P (size))
> +          xsignal1 (Qoverflow_error,
> +                    build_string ("JSON object has too many elements"));
> +        Lisp_Object result = CALLN (Fmake_hash_table, QCtest, Qequal,
> +                                    QCsize, make_natnum (size));
Likewise here: make_natnum does the equivalent test.

It doesn't and can't.
> +    /* Adjust point by how much we just read.  Do this here because
> +       tokener->char_offset becomes incorrect below.  */
> +    bool overflow = INT_ADD_WRAPV (point, error.position, &point);
> +    eassert (!overflow);
> +    eassert (point <= ZV_BYTE);
> +    SET_PT_BOTH (BYTE_TO_CHAR (point), point);
It's better to use SET_PT here, I think.

That's not possible because we don't have the character offset.  (And
I think using SET_PT (BYTE_TO_CHAR (point)) would just require
needlessly recalculating point.)
> +  define_error (Qjson_out_of_memory, "no free memory for creating JSON object",

I'd prefer "not enough memory for creating JSON object".

Done.