all messages for Emacs-related lists mirrored at yhetil.org
From: Ihor Radchenko <yantar92@posteo.net>
To: chad <yandros@gmail.com>
Cc: emacs-tangents@gnu.org, Jim Porter <jporterbugs@gmail.com>,
	ahyatt@gmail.com, rms@gnu.org
Subject: Re: [NonGNU ELPA] New package: llm
Date: Fri, 01 Sep 2023 09:53:22 +0000
Message-ID: <87pm32rzhp.fsf@localhost>
In-Reply-To: <CAO2hHWZ9f2fwr9NbtABFPuUv+Qsk6dmzhavwKV3BJ4P-YNEsOw@mail.gmail.com>

chad <yandros@gmail.com> writes:

> For large AI models specifically: there are many users for whom it is not
> practical to _actually_ recreate the model from scratch everywhere they
> might want to use it. It is important for computing freedom that such
> recreations be *possible*, but it would be very limiting to insist that
> everyone who wants to use such services actually do so, much as we do
> not insist that every potential Emacs user compile their own. In this
> case there is the extra wrinkle that recreating the currently most
> interesting large language models requires both _gigantic_ amounts of
> resources and a fairly large amount of not-directly-reproducible
> randomness. It might be worth further consideration.

Let me refer to another message by RMS:

    >>   > While I certainly appreciate the effort people are making to produce 
    >>   > LLMs that are more open than OpenAI (a low bar), I'm not sure if 
    >>   > providing several gigabytes of model weights in binary format is really 
    >>   > providing the *source*. It's true that you can still edit these models 
    >>   > in a sense by fine-tuning them, but you could say the same thing about a 
    >>   > project that only provided the generated output from GNU Bison, instead 
    >>   > of the original input to Bison.
    >> 
    >> I don't think that is valid.
    >> Bison processing is very different from training a neural net.
    >> Incremental retraining of a trained neural net
    >> is the same kind of processing as the original training -- except
    >> that you use other data and it produces a neural net
    >> that is trained differently.
    >> 
    >> My conclusion is that the trained neural net is effectively a kind of
    >> source code.  So we don't need to demand the "original training data"
    >> as part of a package's source code.  That data does not have to be
    >> free, published, or available.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>


