From: Andrew Hyatt <ahyatt@gmail.com>
To: Philip Kaludercic <philipk@posteo.net>
Cc: Stefan Kangas <stefankangas@gmail.com>,
emacs-devel@gnu.org, jporterbugs@gmail.com, rms@gnu.org
Subject: Re: [NonGNU ELPA] New package: llm
Date: Tue, 19 Sep 2023 14:19:30 -0400
Message-ID: <CAM6wYYJ6y-J-Pgd4A5DYx6TOFkchsiQzfxEm4y549jU6EtCG=A@mail.gmail.com>
In-Reply-To: <87v8c69l5f.fsf@posteo.net>
On Tue, Sep 19, 2023 at 12:34 PM Philip Kaludercic <philipk@posteo.net>
wrote:
> Andrew Hyatt <ahyatt@gmail.com> writes:
>
> > I've submitted the configuration for llm and set up the branch from my
> > repository last Friday. However, I'm still not seeing this package being
> > reflected in GNU ELPA's package archive. I followed the instructions, but
> > perhaps there's some step that I've missed, or it is only periodically
> > rebuilt?
>
> Did you try to run make build/llm? I get this error:
I did build it, and it seemed to work. I'm not sure what I'm doing
differently, but I appreciate the patch, which I'll apply later today. Thank
you for your help.
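As a side note for anyone hitting the same failure: the search-failed error
comes from elpa-admin.el looking for the literal ";;; llm.el ends here"
footer line. A quick pre-submission check can be sketched outside Emacs with
grep; this is only a stand-in for the real check in elpa-admin.el, and the
file path is illustrative:

```shell
# Create a minimal stand-in file that ends with the conventional footer:
cat > /tmp/llm-sample.el <<'EOF'
(provide 'llm)
;;; llm.el ends here
EOF

# elpa-admin.el searches for this exact line; flag its absence early:
if grep -qx ';;; llm.el ends here' /tmp/llm-sample.el; then
  echo ok        # → ok
else
  echo 'missing footer'
fi
```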
>
>
> --8<---------------cut here---------------start------------->8---
> $ make build/llm
> emacs --batch -l /home/philip/.../elpa/admin/elpa-admin.el \
> -f elpaa-batch-make-one-package llm
> Cloning branch llm:
> Preparing worktree (new branch 'externals/llm')
> branch 'externals/llm' set up to track 'origin/externals/llm'.
> HEAD is now at 39ae6fc794 Assign copyright to FSF, in preparation of inclusion to GNU ELPA
>
> Debugger entered--Lisp error: (search-failed ";;; llm.el ends here")
> ...
> --8<---------------cut here---------------end--------------->8---
>
> In other words, the footer is missing. I have prepared a patch that
> addresses that and a few other checkdoc issues:
>
> diff --git a/llm.el b/llm.el
> index 11b508cb36..08f07b65ca 100644
> --- a/llm.el
> +++ b/llm.el
> @@ -23,9 +23,9 @@
>
>  ;;; Commentary:
>  ;; This file defines a generic interface for LLMs (large language models), and
> -;; functionality they can provide. Not all LLMs will support all of these, but
> +;; functionality they can provide.  Not all LLMs will support all of these, but
>  ;; programs that want to integrate with LLMs can code against the interface, and
> -;; users can then choose the LLMs they want to use. It's advisable to have the
> +;; users can then choose the LLMs they want to use.  It's advisable to have the
>  ;; possibility of using multiple LLMs when that make sense for different
>  ;; functionality.
>  ;;
> @@ -50,7 +50,7 @@
>
>  (defun llm--warn-on-nonfree (name tos)
>    "Issue a warning if `llm-warn-on-nonfree' is non-nil.
> -NAME is the human readable name of the LLM (e.g 'Open AI').
> +NAME is the human readable name of the LLM (e.g \"Open AI\").
>
>  TOS is the URL of the terms of service for the LLM.
>
> @@ -72,7 +72,7 @@ EXAMPLES is a list of conses, where the car is an example
>  inputs, and cdr is the corresponding example outputs. This is optional.
>
>  INTERACTIONS is a list message sent by either the llm or the
> -user. It is a list of `llm-chat-prompt-interaction' objects. This
> +user.  It is a list of `llm-chat-prompt-interaction' objects.  This
>  is required.
>
>  TEMPERATURE is a floating point number with a minimum of 0, and
> @@ -80,8 +80,7 @@ maximum of 1, which controls how predictable the result is, with
>  0 being the most predicatable, and 1 being the most creative.
>  This is not required.
>
> -MAX-TOKENS is the maximum number of tokens to generate. This is optional.
> -"
> +MAX-TOKENS is the maximum number of tokens to generate.  This is optional."
>    context examples interactions temperature max-tokens)
>
>  (cl-defstruct llm-chat-prompt-interaction
> @@ -102,19 +101,20 @@ This should be a cons of the name of the LLM, and the URL of the
>  terms of service.
>
>  If the LLM is free and has no restrictions on use, this should
> -return nil. Since this function already returns nil, there is no
> +return nil.  Since this function already returns nil, there is no
>  need to override it."
>    (ignore provider)
>    nil)
>
>  (cl-defgeneric llm-chat (provider prompt)
>    "Return a response to PROMPT from PROVIDER.
> -PROMPT is a `llm-chat-prompt'. The response is a string."
> +PROMPT is a `llm-chat-prompt'.  The response is a string."
>    (ignore provider prompt)
>    (signal 'not-implemented nil))
>
>  (cl-defmethod llm-chat ((_ (eql nil)) _)
> -  (error "LLM provider was nil. Please set the provider in the application you are using."))
> +  "Catch trivial configuration mistake."
> +  (error "LLM provider was nil. Please set the provider in the application you are using"))
>
>  (cl-defmethod llm-chat :before (provider _)
>    "Issue a warning if the LLM is non-free."
> @@ -130,7 +130,8 @@ ERROR-CALLBACK receives the error response."
>    (signal 'not-implemented nil))
>
>  (cl-defmethod llm-chat-async ((_ (eql nil)) _ _ _)
> -  (error "LLM provider was nil. Please set the provider in the application you are using."))
> +  "Catch trivial configuration mistake."
> +  (error "LLM provider was nil. Please set the provider in the application you are using"))
>
>  (cl-defmethod llm-chat-async :before (provider _ _ _)
>    "Issue a warning if the LLM is non-free."
> @@ -143,7 +144,8 @@ ERROR-CALLBACK receives the error response."
>    (signal 'not-implemented nil))
>
>  (cl-defmethod llm-embedding ((_ (eql nil)) _)
> -  (error "LLM provider was nil. Please set the provider in the application you are using."))
> +  "Catch trivial configuration mistake."
> +  (error "LLM provider was nil. Please set the provider in the application you are using"))
>
>  (cl-defmethod llm-embedding :before (provider _)
>    "Issue a warning if the LLM is non-free."
> @@ -159,7 +161,8 @@ error signal and a string message."
>    (signal 'not-implemented nil))
>
>  (cl-defmethod llm-embedding-async ((_ (eql nil)) _ _ _)
> -  (error "LLM provider was nil. Please set the provider in the application you are using."))
> +  "Catch trivial configuration mistake."
> +  (error "LLM provider was nil. Please set the provider in the application you are using"))
>
>  (cl-defmethod llm-embedding-async :before (provider _ _ _)
>    "Issue a warning if the LLM is non-free."
> @@ -169,7 +172,7 @@ error signal and a string message."
>  (cl-defgeneric llm-count-tokens (provider string)
>    "Return the number of tokens in STRING from PROVIDER.
> This may be an estimate if the LLM does not provide an exact
> -count. Different providers might tokenize things in different
> +count.  Different providers might tokenize things in different
>  ways."
>    (ignore provider)
>    (with-temp-buffer
> @@ -199,3 +202,4 @@ This should only be used for logging or debugging."
>    "")))
>
>  (provide 'llm)
> +;;; llm.el ends here
>
>
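Most of the hunks above simply apply the Elisp docstring convention of two
spaces after a sentence-ending period, which checkdoc enforces. A crude way
to spot single-spaced sentence ends before checkdoc does can be sketched
with grep; this is only a heuristic and will flag false positives such as
abbreviations:

```shell
# Sample docstring text with a single-spaced sentence boundary:
cat > /tmp/docstring-sample.txt <<'EOF'
PROMPT is a `llm-chat-prompt'. The response is a string.
EOF

# A period followed by exactly one space and a capital letter suggests a
# sentence end that checkdoc would want double-spaced; -c counts matching lines:
grep -cE '\. [A-Z]' /tmp/docstring-sample.txt    # → 1
```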
> >
> > On Tue, Sep 12, 2023 at 11:05 AM Stefan Kangas <stefankangas@gmail.com>
> > wrote:
> >
> >> Andrew Hyatt <ahyatt@gmail.com> writes:
> >>
> >> > Another question is whether this should be one package or many. The
> >> > many-package option would have the llm and llm-fake package in the
> >> > main llm package, with a package for all llm clients, such as
> >> > llm-openai and llm-vertex (which are the two options I have now). If
> >> > someone has an opinion on this, please let me know.
> >>
> >> It's easier for users if it's just one package.
> >>
>