From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dmitry Gutov
Newsgroups: gmane.emacs.devel
Subject: Re: Improvement proposals for `completing-read'
Date: Sat, 10 Apr 2021 05:21:22 +0300
Message-ID: <01ffe85f-6bdb-39a5-b20a-e3c60bea3e2e@yandex.ru>
References: <09b67fc5-f8fd-c48a-8b0b-ad47c88761f1@yandex.ru>
 <292a9f63-5a41-7b32-66f2-67d06f138a09@yandex.ru>
 <7d03e917-6a61-23b3-e735-a8e43c3fb65f@daniel-mendler.de>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
To: Daniel Mendler, emacs-devel@gnu.org
In-Reply-To: <7d03e917-6a61-23b3-e735-a8e43c3fb65f@daniel-mendler.de>
On 09.04.2021 00:30, Daniel Mendler wrote:
> On 4/8/21 10:44 PM, Dmitry Gutov wrote:
>> I was thinking more about interactions over the network, with HTTP
>> requests sent and received asynchronously. Mainly the cases where one
>> uses the LSP protocol or similar.
>
> Yes, this is all possible with async completion tables in Consult.
> There is a consult-spotify package which queries some web API, and
> there is also a consult-lsp in the works which accesses the LSP API
> (https://github.com/minad/consult/issues/263).

Very good.

>>> You may want to take a look at my Consult package, specifically the
>>> async functionality. I believe that this functionality can easily be
>>> provided on top of the current infrastructure, and actually in a
>>> nice way.
>>
>> You can check out Company's asynchronous convention for backends:
>>
>> https://github.com/company-mode/company-mode/blob/f3aacd77d0135c09227400fef45c54b717d33f2e/company.el#L456-L467
>>
>> It's a very simple lambda-based future-like value.
>> It can be updated to use a named type, and extended with other
>> features. I think it's a clean and simple base to build on, though.
>
> Yes, this looks very simple. I actually prefer the functional style,
> in contrast to the named types you have in Company. So how is this
> used? When completing, the fetcher is called, and as soon as it
> returns via the callback the results are displayed?

Pretty much. Provided the user has not typed the next character before
that, of course.

> But chunking is not possible and probably also not desired? See below
> for my response regarding chunking in Consult.

It would be easy to extend to allow chunking, say, with an additional
argument called MORE-TO-COME, just like the UPDATE-FUNCTION argument of
the VC action 'dir-status-files. It just hasn't seemed very useful for
code completion so far: it's not clear what to do with it in the UI,
and no backends are interested in it currently (the new version of the
LSP protocol has added a feature like that, but apparently it's better
suited to other kinds of queries).

>>> In Consult I am using closures which hold the asynchronously
>>> acquired data. The closure function must accept a single argument:
>>> it can either be a string (the new input) or a list of newly
>>> obtained candidates.
>>
>> I'm not sure where to look, sorry.
>
> Take a look at `consult--async-sink` at
> https://github.com/minad/consult/blob/3121b34e207222b2db6ac96a655d68c0edf1a449/consult.el#L1264-L1297.

Got it, thank you.

> These `consult--async-*` functions can be chained together to produce
> an async pipeline. The goal here was to have reusable functions which
> I can glue together to create different async backends. See for
> example the pipeline for asynchronous commands:
> https://github.com/minad/consult/blob/3121b34e207222b2db6ac96a655d68c0edf1a449/consult.el#L1505-L1513.

I also like that idea. How are the backtraces in case of error?
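To make the fetcher/callback convention and the MORE-TO-COME idea above concrete, here is a rough sketch (the `my-` names and the request helper are made up for illustration; the authoritative description of the current convention is in the company.el link above):

```elisp
;; Sketch of a Company-style async value, extended with chunking.
;; Instead of a list of candidates, the backend returns a cons of
;; :async and a fetcher.  The frontend calls the fetcher with a
;; CALLBACK; the hypothetical MORE-TO-COME extension would let the
;; fetcher deliver results in several chunks.
(defun my-backend-candidates (prefix)
  (cons :async
        (lambda (callback)
          ;; `my-send-request' stands in for whatever async transport
          ;; the backend uses (LSP, HTTP, a subprocess...).
          (my-send-request
           prefix
           (lambda (chunk done)
             ;; Deliver CHUNK now; a non-nil MORE-TO-COME argument
             ;; (i.e. not DONE) tells the UI more chunks will follow.
             (funcall callback chunk (not done)))))))
```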
Whether they can be made readable enough was one of the sticking points
in the (quite heated) discussion of bug#41531.

>> I'm not 100% clear, but it sounds like chunked iteration, which is a
>> good feature to have, though perhaps not applicable to every UI
>> (blinking with new results in a completion popup might not be
>> user-friendly).
>
> Indeed, the UI receives the candidates as soon as they come in. One
> has to ensure that this does not freeze the UI, using some timer.
> Then there is also throttling when the user changes the input. It
> works very well for the `consult-grep` style commands. You may want
> to try those. They are similar to `counsel-grep` but additionally
> allow filtering using the Emacs completion styles. Take a look here,
> in case you are interested:
> https://github.com/minad/consult#asynchronous-search. Note that you
> don't have to use chunking. You can use a single chunk if the whole
> async result arrives in bulk.

Yeah, I figured those commands would be the main beneficiaries. I
haven't seriously tried Consult yet, but IIRC Helm touted that kind of
search as one of its selling points back in the day.

>>> Now a single problem remains: if new data is incoming, the async
>>> data source must somehow inform the completion UI that new
>>> candidates are available, for example via
>>> icomplete-exhibit/selectrum-exhibit and so on. It would be good to
>>> have a common "completion-refresh" entry point for that. In Consult
>>> I have to write a tiny bit of integration code for each supported
>>> completion system.
>>
>> See my link, perhaps.
>>
>> Or, in general, a Future/Promise API has a way to subscribe to the
>> value's update(s) (and the completion frontend can do that).
>>
>> Having to use a global variable seems pretty inelegant in comparison.
>
> It is not a global variable but a function.
That function would have to work with (and notify) different frontends,
which probably requires a hook variable of some sort where they
register themselves. And that is a global variable. And/or some
tracking would need to be added of which frontend to send the results
to.

> But for sure, one could also design the async future such that it
> receives a callback argument which should be called when new
> candidates arrive. The way I wrote it in Consult is that
> `consult--async-sink` handles a 'refresh action, which then informs
> the completion UI.

Yup, that sounds better. I would probably say that a UI itself should
know best when to refresh or not, but I'm guessing you have good
counter-examples.

>> No hurry at all. Sometimes, though, a big feature like that can
>> inform the whole design from the outset.
>
> Yes, sure. When planning a big overhaul you are certainly right. But
> currently I am more focused on fixing a few smaller pain points in
> the API, like retaining text properties and so on.

Sounds good. I just wanted to add some context for completeness, in
case the work turns in the direction of the "next completing-read".
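P.S. For completeness, the callback-receiving future discussed above could be sketched like this (purely illustrative names; Consult's actual mechanism is the 'refresh action handled by `consult--async-sink`):

```elisp
;; Sketch: the async source receives the frontend's REFRESH callback
;; directly, so no global hook of frontends is needed -- whoever
;; initiated the completion passes in its own refresh function.
(defun my-make-async-source (query-fn)
  "Return a future-like value wrapping QUERY-FN.
QUERY-FN takes (INPUT CALLBACK) and calls CALLBACK with candidates
whenever they arrive."
  (lambda (input refresh)
    ;; QUERY-FN runs asynchronously; every time new candidates come
    ;; in, REFRESH is called and the frontend redisplays.
    (funcall query-fn input
             (lambda (new-candidates)
               (funcall refresh new-candidates)))))
```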