From: Christopher Howard <christopher@librehacker.com>
To: Jean Louis <bugs@gnu.support>
Cc: Basile Starynkevitch <basile@starynkevitch.net>,  emacs-tangents@gnu.org
Subject: Re: Including AI into Emacs
Date: Tue, 10 Dec 2024 09:14:48 -0900
Message-ID: <87frmvmmlj.fsf@librehacker.com>
In-Reply-To: <Z1h5NTEWtEfkH1Av@lco2> (Jean Louis's message of "Tue, 10 Dec 2024 20:24:05 +0300")

Jean Louis <bugs@gnu.support> writes:

> Integration, if that is the right word, is enhancing the human workflow to minimize effort and provide optimum results. That is what I mean.

That is not integration; that is optimization or efficiency. Integration may lead to better optimization or efficiency, but it might have the opposite effect.

>
> Programmers are not necessarily scientists, and so they think in terms
> of typing. But it is possible to control light with brainwaves, with a
> special hat, or to type on a computer with eyeball movements.
>

None of those interfaces has any appeal to me at all. Well, okay, controlling light with brainwaves sounds interesting, at least. But even so, I don't see how the input interface has anything to do with whether or not LLMs (or other AI approaches) should be integrated into our workflow, unless an input interface is so compute-intensive that it requires some kind of cluster-based neural network just to work at all.

> Makers of LLMs now provide "trained" models that can type text and
> translate text more accurately than common translators.
>

This sounds like an argument for using LLMs to do language translation, which I suppose must be acknowledged. Regarding prose: I've read the mind-numbing, generic prose that LLMs are now spitting out all over the Internet, and I hope that goes away. The generated artwork is also terrible; it has been showing up on some of the cheap furnishing products we buy from China.

>> For activity (3), even if I can do it without the help of a remote
>> compute cluster, it is going to require a large model database, plus
>> intense computing resources, like a separate computer or an expensive
>> GPU requiring proprietary drivers.
>
> Here is an example that works without a GPU:
> https://github.com/Mozilla-Ocho/llamafile/
>
> and other examples on the same page.
>

I don't see how a llama-driven chat interface or an image generator is going to be useful to me, or worth the computing costs. But I suppose if something like that could be specialized to have expert knowledge of the libraries on my computer or my workflow, it might be worth playing around with.
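To make that concrete, here is a rough sketch of the kind of specialization I mean, assuming a llamafile is already running in its local server mode and exposing its usual OpenAI-compatible chat endpoint on localhost:8080; the library and function names in the prompt are made up for illustration:

import json
import urllib.request

def ask_local_model(question, context):
    # Build an OpenAI-style chat request for the local llamafile server.
    payload = {
        "model": "local-model",  # placeholder; the local server serves one model
        "messages": [
            {"role": "system",
             "content": "Answer using only this documentation:\n" + context},
            {"role": "user", "content": question},
        ],
    }
    req = urllib.request.Request(
        "http://localhost:8080/v1/chat/completions",  # assumed default port
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    return reply["choices"][0]["message"]["content"]

# Hypothetical usage: paste in documentation for a library on this machine.
docs = "mylib.frobnicate(x) -- placeholder for real local documentation"
print(ask_local_model("How do I call frobnicate?", docs))

The "expert knowledge" there is nothing fancier than local documentation pasted into the prompt, so the question remains whether the answers are worth the cycles.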

> Just as usual, you have got the computing cost, electricity, and the
> cost of wear on the computer.

My understanding was that, for LLMs, the difference in those costs is a matter of orders of magnitude. That is what I hear others saying, at least.

Regarding inference engines, I recall that with Prolog there is a lot of backtracking going on, so the essence of writing a workably efficient program was (1) coming up with intelligent rules, and (2) figuring out when to cut off the backtracking. I have an old Prolog book on my bookshelf, but I haven't played around with Prolog at all for years.
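To illustrate the intuition in Python rather than Prolog (the rules here are made up, and this is only the cost picture, not real Prolog semantics): a naive search keeps backtracking through every candidate, while the "cut" commits to the first success and stops exploring alternatives.

def satisfies_all(candidate, rules):
    # A candidate "succeeds" only if every rule accepts it.
    return all(rule(candidate) for rule in rules)

def all_solutions(candidates, rules):
    # Naive backtracking: keep exploring even after a success.
    return [c for c in candidates if satisfies_all(c, rules)]

def first_solution(candidates, rules):
    # The "cut" intuition: commit to the first success and stop searching.
    for c in candidates:
        if satisfies_all(c, rules):
            return c
    return None

rules = [lambda n: n % 3 == 0, lambda n: n > 10]
print(all_solutions(range(100), rules))   # checks all 100 candidates
print(first_solution(range(100), rules))  # stops as soon as it hits 12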

-- 
Christopher Howard

---
via emacs-tangents mailing list (https://lists.gnu.org/mailman/listinfo/emacs-tangents)
