On Wed, Mar 22, 2023, 4:10 AM Philip Kaludercic wrote:
> Lynn Winebarger writes:
>
> > On Tue, Mar 21, 2023 at 12:53 PM Philip Kaludercic wrote:
> >> I really, really have no idea what you are getting at.  As in "ok, but
> >> what is your intent in explaining this?".
> >>
> >> Are you trying to propose that Emacs circumvents the SQLite API (that as
> >> far as I see uses strings) by constructing statement objects manually?
> >
> > Not at all.  I don't think I can communicate via email the power of
> > generative programming techniques, and why basing them on simple
> > string concatenation is a bad idea, so I'm going to stop trying.
>
> I get that, and I am not advocating for string concatenation.  Perhaps
> that is what is confusing me?
>
> > I don't think "? ? table values ( 1.0, 'Foo' )" can be supplied with
> > 'insert and 'into as parameters.
>
> Nor do I, but I doubt the necessity.  SQL is a very brittle language,
> and replacing one keyword with another will usually require other
> changes to be made as well.

Exactly the point of a DSL that compiles to a query.  Whether emacsql is
the best DSL or not, I don't know - I really haven't used it.  It has the
distinct advantage of existing and of providing a syntax tree of the
query, which are strong points.  (I've put a rough sketch of what I mean
further down.)

> >> Are we sure that a database is more efficient than a hash-table (which
> >> can already be printed and read)?  Or are we talking about unusually
> >> extreme values, like in your other message where you were loading 2000+
> >> packages?
> >
> > Who determines what is extreme?
>
> Experience and convention?  There is no algorithm to determine this, but
> before 2000 the highest number of Emacs packages I heard someone was
> using was maybe 300-400 (which I also think is an absurd number).

I don't know why it's absurd.  There are ~300 packages in GNU ELPA, ~200
in NonGNU ELPA, and over 5000 in MELPA.  The vast majority are single
files.  My experiments have shown that a substantial part of the pain of
adding packages is simply due to the cost of extending the load path.  I
really question how much of the effort behind these configuration
management systems and specialized configs like doomemacs is prompted by
the inordinate cost of extending the load path just to add a one-file
package.  I can report from experience that most packages can be loaded
simultaneously and work fine, as long as conflicting modes are not in
effect at the same time.

> > Tasks that aren't done today because they are difficult to code
> > efficiently?  Tasks that seem extreme when you write the code in
> > direct style may become much less extreme once a well-crafted
> > table/query facility is available.  I don't think simply *installing*
> > 2000+ packages is all that extreme in itself.  Even loading all those
> > packages, particularly when using redumping, is not particularly
> > extreme in terms of resource consumption on modern desktop hardware.
> >
> > Hash tables only index a single key of a data set.  And they don't
> > address tasks like efficiently joining tables.
> >
> > My personal interests run to using relational programming for problems
> > like abstract interpretation and compiler implementation.
>
> In Elisp?

Eventually, sure - for Elisp itself, anyway.  It's a longer term project
for me, though.
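To make that DSL point a bit more concrete: going by the emacsql README
(again, I haven't actually used it, so treat this as a sketch built on
its example table), the "insert into" case is written as a structured
vector rather than a string, with `db' standing for whatever connection
object the backend gives you:

  ;; Table and column names are symbols; keywords mark the SQL clauses.
  (emacsql db [:create-table people ([name id salary])])
  (emacsql db [:insert :into people :values (["Foo" 1000 1.0])])
  ;; Results come back as lists of rows:
  (emacsql db [:select [name id] :from people :where (> salary 0.5)])

The interesting part is not the surface syntax but that the query reaches
the library as a tree that tooling can inspect, rewrite, or generate,
instead of as concatenated text.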
"Syntactic sugar" should be reserved for syntax that the compiler immediately transforms into a simpler syntactic structure. Otherwise any DSL can be shrugged off as syntactic sugar. I don't know the implementation details, but it's an example of a query DSL integrated into an otherwise imperative/OO paradigm language. Or how does this relate to the point of > dumping an in-memory database. > I just think it's a powerful paradigm that will eventually be utilized in core tasks involved in managing emacs itself, like reasoning about customization variables and relationships between them. Things that aren't done now because doing them without a relational language to take care of the details is painful as error prone. Once those kinds of tables are in use at startup, why wouldn't you want to include them in pdmp file? Particularly if you record pointers to objects as integers in the database - the dumper/loader will be needed to ensure those remain consistent. Lynn