On 22.04.2019 18:36, Eli Zaretskii wrote:

> Let's start with just ASCII strings, and then consider moving to valid
> UTF-8 sequences. I take it you can easily write a loop that ensures a
> string is pure ASCII?

All right. Does the attached json_encode_string_ascii_test.diff look good to you? In terms of correctness/safety, I mean.

> No, I meant a test of performance. If we begin by testing for plain
> ASCII strings, then non-ASCII strings will take longer to convert.
> The existing tests are too short to support measurement of the effect;
> we need a larger JSON object with many non-ASCII strings.

Makes sense.

> Suit yourself, but I don't like investing hours in code just to hear
> "your best is not good enough" from those who triggered the changes to
> begin with.

I have code that parses JSON as well. I mentioned that before.

> I don't want to make changes that affect decoding everywhere, because
> having raw bytes in other cases is a more frequent phenomenon. Let's
> just optimize JSON parsing, OK?
>
> Should be 'true', right?

Erm, right. >_< Probably hit 'undo' one too many times. I've attached this patch as json_make_string_no_validation.diff.
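
For the record, the pure-ASCII test Eli asked about amounts to a loop of the following shape. This is only a standalone sketch of the fast path, not the code in json_encode_string_ascii_test.diff; the helper name and the test main are made up for illustration.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Return true if the first LEN bytes of S are all ASCII, i.e. have the
   high bit clear.  */
static bool
string_is_ascii (const unsigned char *s, size_t len)
{
  for (size_t i = 0; i < len; i++)
    if (s[i] & 0x80)
      return false;
  return true;
}

int
main (void)
{
  const char *plain = "hello, world";
  const char *accented = "h\xc3\xa9llo";   /* "héllo" in UTF-8 */

  printf ("%d\n", string_is_ascii ((const unsigned char *) plain, 12));    /* 1 */
  printf ("%d\n", string_is_ascii ((const unsigned char *) accented, 6));  /* 0 */
  return 0;
}

Every ASCII byte is also a valid one-byte UTF-8 sequence, so a string that passes this check can skip the more expensive UTF-8 validation/conversion step; only strings that fail it would take the slower path, which is why the benchmark with many non-ASCII strings matters.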