From: Christopher Howard <christopher@librehacker.com>
To: Jean Louis <bugs@gnu.support>
Cc: Basile Starynkevitch <basile@starynkevitch.net>,  emacs-tangents@gnu.org
Subject: Re: Including AI into Emacs
Date: Tue, 10 Dec 2024 09:14:48 -0900	[thread overview]
Message-ID: <87frmvmmlj.fsf@librehacker.com> (raw)
In-Reply-To: <Z1h5NTEWtEfkH1Av@lco2> (Jean Louis's message of "Tue, 10 Dec 2024 20:24:05 +0300")

Jean Louis <bugs@gnu.support> writes:

> Integration, if that is the right word, is enhancing the human workflow to minimize effort and provide optimum results. That is what I mean.

That is not integration; that is optimization, or efficiency. Integration may lead to better optimization or efficiency, but it might have the opposite effect.

>
> Programmers are not necessarily scientists, and so they think in terms
> of typing. But it is possible to control light with brainwaves, with
> special hat, or typing on computer with the eyeball movements.
>

None of those interfaces has any appeal to me at all. Well, okay, controlling light with brainwaves sounds interesting, at least. But even so, I don't see how the input interface has anything to do with whether or not LLMs (or other AI approaches) should be integrated into our workflow, unless an input interface is so compute-intensive that it requires some kind of cluster-based neural network just to work at all.

> Makers of LLMs now provide "trained" models that
> can type text, and translate text more accurately than common translators.
>

This sounds like an argument for using LLMs to do language translation, which I suppose must be acknowledged. Regarding prose: I've read the mind-numbing, generic prose now being spit out onto the Internet by LLMs, and I hope that goes away. The generated artwork is also terrible; it has been showing up on some of the cheap furnishing products we buy from China.

>> For activity (3), even if I can do it without the help of a remote
>> compute cluster, it is going to require a large model database, plus
>> intense computing resources, like a separate computer, or an expensive
>> GPU requiring proprietary drivers.
>
> Here is example that works without GPU:
> https://github.com/Mozilla-Ocho/llamafile/
>
> and other examples on same page. 
>

I don't see how a llama-driven chat interface or an image generator is going to be useful to me, or worth the computing costs. But I suppose if something like that could be specialized to have expert knowledge of the libraries on my computer, or of my workflow, it might be worth playing around with.

> Just as usual, you have got the computing cost, electricity and
> computer wearing cost.

My understanding was that, for LLMs, those costs differ by orders of magnitude. That is what I hear others saying, at least.

Regarding inference engines, I recall that with Prolog there is a lot of backtracking going on, so the essence of writing a workably efficient program was (1) coming up with intelligent rules, and (2) figuring out when to cut off the backtracking. I have an old Prolog book on my bookshelf, but I haven't played around with Prolog at all for years.
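A rough sketch of that idea, in Python rather than Prolog, with a made-up toy fact base; the early `return` plays a role loosely analogous to Prolog's cut, committing to the first solution and abandoning the remaining choice points:

```python
# Toy backtracking search, in the spirit of the Prolog query
#   grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
# Hypothetical fact base of parent(X, Y) pairs:
PARENT = [("tom", "bob"), ("bob", "ann"), ("bob", "pat")]

def grandparents(x):
    """Enumerate every Z with grandparent(x, Z), backtracking over PARENT."""
    results = []
    for (p1, y) in PARENT:           # try each parent(x, Y) fact...
        if p1 != x:
            continue                 # ...mismatch: backtrack to the next fact
        for (p2, z) in PARENT:       # then try each parent(Y, Z) fact
            if p2 == y:
                results.append(z)    # a solution; keep searching for more
    return results

def first_grandparent(x):
    """Like grandparents(), but 'cut' once the first solution is found."""
    for (p1, y) in PARENT:
        if p1 != x:
            continue
        for (p2, z) in PARENT:
            if p2 == y:
                return z             # the "cut": commit, stop backtracking

print(grandparents("tom"))          # ['ann', 'pat']
print(first_grandparent("tom"))     # ann
```

The efficiency concern is visible even here: without the early return, the search exhaustively enumerates all choice points whether or not more answers are wanted.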

-- 
Christopher Howard

---
via emacs-tangents mailing list (https://lists.gnu.org/mailman/listinfo/emacs-tangents)
