* Re: Including AI into Emacs
From: Jean Louis @ 2024-12-10 15:04 UTC (permalink / raw)
To: Basile Starynkevitch; +Cc: Christopher Howard, emacs-tangents
* Basile Starynkevitch <basile@starynkevitch.net> [2024-12-10 13:46]:
> Without needing a remote supercomputer, you could run
> https://clipsrules.net/ on your Linux desktop (supplying it a rules
> source file), or extend https://github.com/RefPerSys/RefPerSys (a
> GPLv3+ work-in-progress inference engine) to run it locally and
> suggest some improvement (or contextual autocompletion) to some EMACS
> edited source file.
I asked you how; do you have an example of how you use it? I have
CLIPS installed.
I need an example.
I have been reading the CLIPS user PDF, and my impression is that it
is just as complicated as writing rules in Emacs Lisp. I have to see
some benefit. I have a large database, and I could generate various
rules and let the computer figure things out.
For example, I would like the computer to find certain keywords and,
when those keywords are found in the quoted e-mail message, to prepare
the answer or to find the proper snippet to be used as the answer.
I would need just 5 minutes to write rules in Emacs Lisp to find the
proper snippet as an answer to particular clients, as clients have
patterns in how they ask questions.
Or spam handling; here is some insight:
(defparameter *spam-sets*
  '(("gratitude" "website" "magnificent" "investigation")
    ("identifying" "initiating" "developing" "partnership")
    ("export" "social" "media" "marketing")
    ("Google" "Ads")
    ("Bing" "Ads")
    ("Facebook" "Ads")
    ("Google" "Bing" "Facebook")
    ("PPC")
    ("web" "design" "professional")
    ("fortnite" "bucks")))
Then it is handled like this:

(dolist (set *spam-sets*)
  (when (subsetp set text-list :test #'equalp)
    (setq spam t)))
Right now I have only a vague idea that it would be more tiresome to
do this in CLIPS than in Common Lisp.
Give me examples, please.
> ChatGPT is certainly not the only possible open source symbolic AI
> software, and they don't require a supercomputer or datacenter. For
> example GNU prolog is also an open source AI software.
Yes, sure, I agree:
ALL COMPUTER PROGRAMS EMBODY ASPECTS OF ARTIFICIAL INTELLIGENCE:
https://gnu.support/articles/ALL-COMPUTER-PROGRAMS-EMBODY-ASPECTS-OF-ARTIFICIAL-INTELLIGENCE-92631.html
All software is artificial intelligence.
And LLMs, or Large Language Models, run on local computers; as I said,
soon I will run one that way. It works now, just terribly slowly.
> As a concrete example GNU chess is some open source AI program and
> you don't need a datacenter to run it.
Sure. But every game, every piece of software, and Emacs itself is
artificial intelligence. It is the extended mind. But now the term AI
is used in marketing to make it more accessible to common people.
> Very probably, both CLIPSRULES and RefPerSys could be extended (in a
> few months of work) for simple tasks like English grammar checking or
> English spellchecking.
I need examples. How do you use it?
--
Jean Louis
---
via emacs-tangents mailing list (https://lists.gnu.org/mailman/listinfo/emacs-tangents)
* Re: Including AI into Emacs
From: Christopher Howard @ 2024-12-10 17:01 UTC (permalink / raw)
To: Jean Louis; +Cc: Basile Starynkevitch, emacs-tangents
Jean Louis <bugs@gnu.support> writes:
> Sure. But every game, and every software and Emacs itself is
> artificial intelligence. It is extended mind. But now the term AI is
> used in marketing to make it easier accessible to common people.
It seems to me that some important distinctions are being blurred throughout this thread. I am seeing the term AI used to refer to three things:
(1) generally, any kind of computation or problem solving that involves computer programming;
(2) computation that involves inferences and rules (e.g., a Prolog program);
(3) using LLMs, i.e., "the use of large neural networks for language modeling" (Wikipedia's definition).
Activities (1) and (2) are things that I can do on my own computer, maybe even without having to leave Elisp or the single running Emacs thread. For activity (3), even if I can do it without the help of a remote compute cluster, it is going to require a large model database, plus intense computing resources, like a separate computer or an expensive GPU requiring proprietary drivers.
I'm open-minded about integrations of (3), if they can be done cost-effectively, if they are truly useful, and if I don't have to give up my computing freedoms, but that has to be proven to me. And I don't want that approach confused with (1) and (2).
--
Christopher Howard
* Re: Including AI into Emacs
From: Jean Louis @ 2024-12-10 17:24 UTC (permalink / raw)
To: Christopher Howard; +Cc: Basile Starynkevitch, emacs-tangents
* Christopher Howard <christopher@librehacker.com> [2024-12-10 20:02]:
> Jean Louis <bugs@gnu.support> writes:
>
> > Sure. But every game, and every software and Emacs itself is
> > artificial intelligence. It is extended mind. But now the term AI is
> > used in marketing to make it easier accessible to common people.
> It seems to me that some important distinctions are being blurred
> throughout this thread. I am seeing the term AI used to refer to three
> things:
> (1) generally, any kind of computation or problem solving that involves computer programming;
> (2) computation that involves inferences and rules (e.g., a Prolog program);
> (3) using LLMs, i.e., "the use of large neural networks for language modeling" (Wikipedia's definition).
You are right; I personally cannot reserve the term AI for LLMs alone just because that usage is getting popular.
As Basile Starynkevitch explained, there are systems like CLIPS,
RefPerSys, Prolog, etc.; there are many ways a computer can exhibit
AI. LLMs are not the only AI; treating them as such degrades all of
the previous work on which LLMs were built.
To me, an LLM represents an enhanced workflow: a computer taught to
recognize needs and provide results.
We have been recognizing those human needs all over the place, as in
Emacs or any other software. The user moves the arrow and tries to
shoot the spaceship, but the spaceship can see him, react, and fight
back. Every game is a type of artificial intelligence.
> Activities (1) and (2) are things that I can do on my own computer, maybe even without having to leave Elisp or the running, single Emacs thread.
That is right, and we all already use many such things.
But we do not integrate enough!
Integration, if that is the right word, means enhancing the human workflow to minimize effort and provide optimum results. That is what I mean.
Programmers are not necessarily scientists, and so they think in terms
of typing. But it is possible to control lights with brainwaves, with
a special hat, or to type on a computer with eyeball movements.
Makers of LLMs have now provided "trained" models that can type text
and translate text more accurately than common translators.
> For activity (3), even I can do it without the help of remote
> compute cluster, it is going to require a large model database, plus
> intense computing resources, like a separate computer, or an expensive
> GPU requiring proprietary drivers.
Here is an example that works without a GPU:
https://github.com/Mozilla-Ocho/llamafile/
and other examples on the same page.
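Once a llamafile server is running locally, it can even be called from
Emacs Lisp. A sketch (it assumes the llama.cpp-style /completion
endpoint on localhost:8080; check the port and API of your actual
llamafile build):

```emacs-lisp
(require 'url)
(require 'json)

(defun my/llamafile-complete (prompt)
  "Send PROMPT to a local llamafile server and return its completion.
Assumes a llama.cpp-style /completion endpoint on localhost:8080."
  (let* ((url-request-method "POST")
         (url-request-extra-headers
          '(("Content-Type" . "application/json")))
         (url-request-data
          (encode-coding-string
           (json-encode `((prompt . ,prompt) (n_predict . 128)))
           'utf-8))
         (buf (url-retrieve-synchronously
               "http://localhost:8080/completion")))
    (when buf
      (with-current-buffer buf
        ;; Skip the HTTP response headers, then parse the JSON body.
        (goto-char (point-min))
        (re-search-forward "^\r?\n" nil t)
        (prog1 (alist-get 'content (json-read))
          (kill-buffer))))))
```

Then something like (my/llamafile-complete "Summarize this paragraph:
...") returns the model's text, all without leaving Emacs.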
> I'm open minded to integrations of (3), if they can be done
> cost-effectively, if they are truly useful, and if I don't have to
> give up my computing freedoms, but that has to be proven to me. And I
> don't want that approach confused with (1) and (2).
Just as usual, you have the computing costs: electricity and hardware
wear.
It seems that those files are free software (Apache 2.0 License), but
I did not inspect everything, and some models may be free, some not;
choose what is free.
To avoid confusion, we shall simply say LLMs when that is the
subject.
--
Jean Louis
* Re: Including AI into Emacs
From: Christopher Howard @ 2024-12-10 18:14 UTC (permalink / raw)
To: Jean Louis; +Cc: Basile Starynkevitch, emacs-tangents
Jean Louis <bugs@gnu.support> writes:
> Integration, if that is the right work, is enhancing the human workflow to minimize efforts and provide optimum results. That is what I mean.
That is not integration; that is optimization or efficiency. Integration may lead to better optimization or efficiency, but it might have the opposite effect.
>
> Programmers are not necessarily scientists, and so they think in terms
> of typing. But it is possible to control light with brainwaves, with
> special hat, or typing on computer with the eyeball movements.
>
None of those interfaces has any appeal to me at all. Well, okay, controlling lights with brainwaves sounds interesting, at least. But even so, I don't see how the input interface has anything to do with whether or not LLMs (or other AI approaches) should be integrated into our workflow. Unless, that is, an input interface is so compute-intensive that it requires some kind of cluster-based neural network just to work at all.
> Makers of LLMs have now provided "trained" models that can type text
> and translate text more accurately than common translators.
>
This sounds like an argument for using LLMs to do language translation, which I suppose must be acknowledged. Regarding prose: I've read the mind-numbing, generic prose now being spit out onto the Internet by LLMs, and I hope that goes away. The generated artwork is also terrible; it has been showing up on some of the cheap furnishing products we buy from China.
>> For activity (3), even I can do it without the help of remote
>> compute cluster, it is going to require a large model database, plus
>> intense computing resources, like a separate computer, or an expensive
>> GPU requiring proprietary drivers.
>
> Here is example that works without GPU:
> https://github.com/Mozilla-Ocho/llamafile/
>
> and other examples on same page.
>
I don't see how a llama-driven chat interface or an image generator is going to be useful to me, or worth the computing costs. But I suppose if something like that could be specialized to have expert knowledge of the libraries on my computer or of my workflow, it might be worth playing around with.
> Just as usual, you have got the computing cost, electricity and
> computer wearing cost.
My understanding was that, for LLMs, the difference involves orders of magnitude. That is what I hear others saying, at least.
Regarding inference engines, I recall that with Prolog there is a lot of backtracking going on, so the essence of writing a workably efficient program was (1) coming up with intelligent rules, and (2) figuring out when to cut off the backtracking. I have an old Prolog book on my bookshelf, but I haven't played around with Prolog at all for years.
--
Christopher Howard
* Re: Including AI into Emacs
From: Jean Louis @ 2024-12-11 0:11 UTC (permalink / raw)
To: Christopher Howard; +Cc: Basile Starynkevitch, emacs-tangents
* Christopher Howard <christopher@librehacker.com> [2024-12-10 21:15]:
> Jean Louis <bugs@gnu.support> writes:
>
> > Integration, if that is the right work, is enhancing the human
> > workflow to minimize efforts and provide optimum results. That is
> > what I mean.
>
> That is not integration, that is optimization or
> efficiency. Integration may lead to better optimization or
> efficiency but it might have the opposite effect.
Sure, optimization. Though I didn't express myself well enough: I mean
connecting human methods of interaction with computer methods.
Integrating means making something part of the whole. It is, of
course, optimization as well.
Examples of integration:
- a program monitoring and making statistics of events in the house,
  acting upon events logically: if a human turned on the lights at a
  specific time, the lights will be turned on automatically in the
  future; learning patterns; acting upon triggers; sensors detecting
  events such as movement; cleaning the toilet, mopping the floor when
  nobody is at home, feeding pets, cutting the grass, recognizing
  strangers at the gate;
- learning patterns of communication, nicely rejecting patterns that
  are not relevant and accepting those that are (higher-level spam
  detection); answering common questions;
- understanding the agenda and the plan, automatically reviewing what
  was done and what was not, making a new agenda and plan based on the
  previous one, printing it early in the morning in a few copies,
  making it ready for the human.
> > Programmers are not necessarily scientists, and so they think in terms
> > of typing. But it is possible to control light with brainwaves, with
> > special hat, or typing on computer with the eyeball movements.
>
> None of those interface have any appeal to me at all. Well, okay,
> controlling light with brainwaves sounds interesting, at least. But
> even so I don't see how the input interface has anything to do with
> whether or not LLMs (or other AI approaches) should be integrated into
> our workflow. Unless an input interface is so compute intensive that
> it requires some kind of cluster-based neural network just to work
> at all.
We are already integrating; it just moves slowly. The new LLM
revolution is making it possible for the common man to create things
more easily, in a much easier way than programming. It is higher-level
programming, the goal we wanted to achieve back then.
Look here what people are doing: https://websim.ai/
Computer development now moves inevitably in the direction of complex
integrations.
> Makers of LLMs have now provided "trained" models that can type text
> and translate text more accurately than common translators.
>
> This sounds like an argument for using LLMs to do language
> translation, which I suppose must be acknowledged. Regarding prose:
> I've read the mind-numbing, generic prose output on the Internet
> that is now being spit out by LLMs, and I hope that goes away. The
> artwork generated is also terrible, which has been showing up on
> some of the cheap furnishing products we buy from China.
Yes, it works very well for translations.
- **Spanish:** Sí, funciona muy bien para traducciones.
- **French:** Oui, cela fonctionne très bien pour les traductions.
- **German:** Ja, es funktioniert sehr gut für Übersetzungen.
- **Chinese (Simplified):** 是的,它在翻译方面效果非常好。
- **Arabic:** نعم، إنه يعمل بشكل جيد جدًا للترجمات.
- **Russian:** Да, это очень хорошо работает для переводов.
- **Japanese:** はい、翻訳にとてもよく機能します。
- **Portuguese:** Sim, funciona muito bem para traduções.
- **Hindi:** हां, यह अनुवाद के लिए बहुत अच्छा काम करता है।
- **Italian:** Sì, funziona molto bene per le traduzioni.
Imagine. And I just said: translate into a bullet list in 10 different
languages.
You can also say: make an Emacs Lisp structure out of it:
Sure! Here is the translation of "Yes, it works very well for translations." in 10 different languages, formatted as an Emacs Lisp structure:
```emacs-lisp
'(("Spanish" . "Sí, funciona muy bien para traducciones.")
("French" . "Oui, cela fonctionne très bien pour les traductions.")
("German" . "Ja, es funktioniert sehr gut für Übersetzungen.")
("Chinese (Simplified)" . "是的,它在翻译方面效果非常好。")
("Arabic" . "نعم، إنه يعمل بشكل جيد جدًا للترجمات.")
("Russian" . "Да, это очень хорошо работает для переводов.")
("Japanese" . "はい、翻訳にとてもよく機能します。")
("Portuguese" . "Sim, funciona muito bem para traduções.")
("Hindi" . "हां, यह अनुवाद के लिए बहुत अच्छा काम करता है।")
("Italian" . "Sì, funziona molto bene per le traduzioni."))
```
Isn't that great?
I had been wasting hours and hours looking into various structures and
writing new Lisp structures to import data from other programs; now I
just insert sample data and get the Emacs Lisp program ready, often
almost ready for production.
> >> For activity (3), even I can do it without the help of remote
> >> compute cluster, it is going to require a large model database, plus
> >> intense computing resources, like a separate computer, or an expensive
> >> GPU requiring proprietary drivers.
> >
> > Here is example that works without GPU:
> > https://github.com/Mozilla-Ocho/llamafile/
> >
> > and other examples on same page.
>
> I don't see how a llama driven chat interface or an image generator
> is going to be useful to me, or worth the computing costs. But I
> suppose if something like that could be specialized to have expert
> knowledge of the libraries on my computer or my work flow, it might
> be worth playing around with.
- Website Revision System, sales and marketing:
  - An Open Graph image related to a page can be generated
    automatically, which is very useful in Internet marketing;
  - Correct titles, make them more appealing, describe the article,
    generate slugs;
  - By using a list of links as memory, automatically link words in
    the article to relevant links;
  - Answer customers' questions, point to articles and products,
    provide support;
- Family:
  - Generate daily routines for children;
  - Generate planning, fun, entertainment;
- Programming:
  - Create templates; improve CSS and programming code;
  - Quickly find answers and debug; development becomes rapid.
There is an infinite list of uses.
> > Just as usual, you have got the computing cost, electricity and
> > computer wearing cost.
>
> My understanding was, for LLMs, the difference involves orders of
> magnitude. That is what I hear others saying, at least.
As I said, there are LLMs that work on a computer without a GPU:
GitHub - Mozilla-Ocho/llamafile: Distribute and run LLMs with a single file:
https://github.com/Mozilla-Ocho/llamafile
It works on my i5 CPU, though slowly. We will see soon, when I install
an Nvidia GPU, how it will work. I am very satisfied with the results.
It can describe pictures on my computer, which is a fantastic
feature! 📸 I can already envision indexing all my images. It's not
just about my personal life; I also have numerous pictures related to
courses and teaching others.
> Regarding inference engines, I recall with Prolog there is a lot of
> backtracking going on, so the essence of figuring out a workably
> efficient program was (1) coming up with intelligent rules, and (2)
> figuring out when to cut off the backtracking. I have an old Prolog
> book on my bookshelf, but I haven't played around with Prolog at
> all for years.
SWI-Prolog:
https://www.swi-prolog.org/
--
Jean Louis