On Tue, Aug 8, 2023 at 1:42 AM Philip Kaludercic <philipk@posteo.net> wrote:
Andrew Hyatt <ahyatt@gmail.com> writes:

> Hi everyone,
>
> I've created a new package called llm, for the purpose of abstracting the
> interface to various large language model providers.  There are many LLM
> packages already, but it would be wasteful for all of them to try to be
> compatible with a range of LLM provider APIs (local LLMs such as Llama 2,
> API providers such as OpenAI and Google Cloud's Vertex).  This package
> attempts to solve this problem by defining generic functions which can then
> be implemented by different LLM providers.  I have started with just two:
> OpenAI and Vertex.  Llama 2 would be a next choice, but I don't yet have
> it working on my system.  In addition, I'm starting with just two pieces of
> core functionality: chat and embeddings.  Extending to async is probably
> something that I will do next.
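
The generic-function design described above could be sketched roughly as
follows.  This is a minimal illustration using `cl-defgeneric' and
`cl-defmethod'; the struct and function names are hypothetical, not
necessarily the llm package's actual API:

```elisp
;; Illustrative sketch only -- names are hypothetical, not llm's real API.
(require 'cl-lib)

;; Each provider is a struct carrying its connection details.
(cl-defstruct llm-demo-openai key model)

;; A generic function defines the provider-independent interface...
(cl-defgeneric llm-demo-chat (provider prompt)
  "Send PROMPT to PROVIDER and return the response text.")

;; ...and each backend supplies a method implementing it.
(cl-defmethod llm-demo-chat ((provider llm-demo-openai) prompt)
  ;; A real implementation would call the OpenAI chat endpoint here.
  (format "stub response from model %s" (llm-demo-openai-model provider)))
```

Client code then calls `llm-demo-chat' on whatever provider object it was
handed, without knowing which backend it is talking to; adding a new
provider means defining a new struct and a handful of methods.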

Llama was the model that could be executed locally, and the other two
are "real" services, right?

That's correct.

> You can see the code at https://github.com/ahyatt/llm.
>
> I prefer that this is NonGNU, because I suspect people would like to
> contribute interfaces to different LLM, and not all of them will have FSF
> papers.

I cannot estimate how important or not LLMs will be in the future, but it
might be worth having something like this in the core at some point.
Considering that a provider module seems to run around 150-200 lines, and
that new models appear relatively infrequently (at least to my
understanding), I don't know whether the "advantage" of accepting
contributions from people who haven't signed the copyright assignment
carries that much weight, as opposed to the general benefit that all users
would enjoy from having the technology integrated into Emacs itself, in a
way that other packages (and perhaps even the core help system) could
profit from.

That seems reasonable.  I don't have a strong opinion here, so if others want to see this in GNU ELPA instead, I'm happy to do that.

> Your thoughts would be appreciated, thank you!