all messages for Emacs-related lists mirrored at yhetil.org
* Enhancing ELisp for AI Work
       [not found] <7290780.2375960.1734348492938.ref@mail.yahoo.com>
@ 2024-12-16 11:28 ` Andrew Goh via Users list for the GNU Emacs text editor
  2024-12-16 13:39   ` Jean Louis
  2024-12-16 14:55   ` Tomáš Petit
  0 siblings, 2 replies; 19+ messages in thread
From: Andrew Goh via Users list for the GNU Emacs text editor @ 2024-12-16 11:28 UTC (permalink / raw)
  To: help-gnu-emacs@gnu.org

Dear Emacs Team,
As a long-time Emacs user and enthusiast, I would like to recommend that the team consider enhancing ELisp to make it more suitable for artificial intelligence (AI) work.
Elisp has been an incredibly powerful and flexible language for Emacs extension development, but its capabilities can be further expanded to support AI applications.
Some potential areas for enhancement include:
1.  Performance improvements through Just-In-Time (JIT) compilation or native code generation.
2.  Introduction of native numerical arrays and linear algebra libraries
3.  Development of machine learning and AI libraries, including neural networks, decision trees, and clustering algorithms
4.  Improved interoperability with other languages through a foreign function interface (FFI)
5.  Enhanced documentation and community resources focused on AI development in ELisp
By addressing these areas, ELisp can become a more comprehensive and efficient platform for AI development, attracting a wider range of users and developers.
Thank you for considering this recommendation.  I look forward to seeing the future developments in ELisp.
Best Regards,
Andrew Goh S M
With Help from Meta AI


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-16 11:28 ` Enhancing ELisp for AI Work Andrew Goh via Users list for the GNU Emacs text editor
@ 2024-12-16 13:39   ` Jean Louis
  2024-12-16 14:55   ` Tomáš Petit
  1 sibling, 0 replies; 19+ messages in thread
From: Jean Louis @ 2024-12-16 13:39 UTC (permalink / raw)
  To: Andrew Goh; +Cc: help-gnu-emacs@gnu.org

* Andrew Goh via Users list for the GNU Emacs text editor <help-gnu-emacs@gnu.org> [2024-12-16 14:30]:

> As a long-time Emacs user and enthusiast, I would like to recommend
> that the team consider enhancing ELisp to make it more suitable for
> artificial intelligence (AI) work.

That is so true. Though, if you are thinking of LLMs, then I disagree
with calling Large Language Models alone AI; it is better we specify
well what we mean by it. The word AI has now become a popular keyword
for common people who interact with computers and get some tasks done
by using Natural Language Processing.

ALL COMPUTER PROGRAMS EMBODY ASPECTS OF ARTIFICIAL INTELLIGENCE!

Isn't that the main reason why we are programming?

> Elisp has been an incredibly powerful and flexible language for
> Emacs extension development, but its capabilities can be further
> expanded to support AI applications.

Oh, absolutely yes.

> Some potential areas for enhancement include:

> 1. Performance improvements through Just-In-Time (JIT) compilation
> or native code generation.

Hmm, I have no idea about JIT within Emacs or whether it would speed
things up at all, but Emacs does have "native compilation" now, though
I am unsure how it works. It makes things a bit faster, I guess. Maybe
that is what you mean.

In fact, before generating questions with the LLM, maybe you should
cross-check with your own skills whether the feature you are asking
about is already implemented in Emacs.

> 2. Introduction of native numerical arrays and linear algebra
> libraries

Personally, I have no idea about that. What I know is that mathematics
works well within Emacs.

> 3. Development of machine learning and AI libraries, including
> neural networks, decision trees, and clustering algorithms

I guess there is currently nothing within Emacs for that, but we can
always 🚀 call external functions and speed up the overall development
cycle by working through the portal of GNU Emacs. 💻🔍

> 4. Improved interoperability with other languages through a foreign
> function interface (FFI)

Not sure about that, but Emacs now has dynamic modules, so almost
anything can be hooked into it.

LLM output is there to provide guidelines, not to be smarter than
you, and it especially can't outsmart the people on the mailing list.

The Emacs FFI module is easy to find:
https://github.com/tromey/emacs-ffi
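For a taste, here is a minimal sketch in the style of that module's
README; define-ffi-library and define-ffi-function are that module's
macros, and the exact calling conventions should be verified against
its documentation:

;; Sketch only: requires the dynamic module from tromey/emacs-ffi.
;; Macro names and argument order follow that project's README.
(require 'ffi)

(define-ffi-library libm "libm")        ; open the C math library
(define-ffi-function my-cos "cos"       ; bind cos(3) as `my-cos'
  :double (:double) libm)

(my-cos 0.0)                            ; => 1.0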

Looks like your LLM has been playing chess with your brain and winning
every game!

It took only a second or a few to find that an Emacs FFI already exists.

> 5. Enhanced documentation and community resources focused on AI
> development in ELisp

People have been developing AI since the inception of computers, and
also of Emacs and the GNU Operating System; as you know, without GNU
there would be no Linux, no Ruby, no Python, and so on. It is all a
big chicken with many eggs now.

> By addressing these areas, ELisp can become a more comprehensive and
> efficient platform for AI development, attracting a wider range of
> users and developers.

I think it is an excellent editing platform already, and these areas
are very much being addressed.

You see, it does matter how you write: to say "by addressing these
areas" while many are already addressed may appear wrong and
invalidating. As I said, the LLM response outsmarted you 😎

> Andrew Goh S MWith Help from Meta AI

Next time try with your own built-in I.

-- 
Jean Louis



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-16 11:28 ` Enhancing ELisp for AI Work Andrew Goh via Users list for the GNU Emacs text editor
  2024-12-16 13:39   ` Jean Louis
@ 2024-12-16 14:55   ` Tomáš Petit
  2024-12-16 16:26     ` Jean Louis
  2024-12-16 17:38     ` Jean Louis
  1 sibling, 2 replies; 19+ messages in thread
From: Tomáš Petit @ 2024-12-16 14:55 UTC (permalink / raw)
  To: help-gnu-emacs

Greetings,

wouldn't Common Lisp or some Scheme dialect be better suited for this 
job instead of Emacs Lisp?

Regards,

Tomáš Petit


On 12/16/24 12:28 PM, Andrew Goh via Users list for the GNU Emacs text 
editor wrote:
> Dear Emacs Team,
> As a long-time Emacs user and enthusiast, I would like to recommend that the team consider enhancing ELisp to make it more suitable for artificial intelligence (AI) work.
> Elisp has been an incredibly powerful and flexible language for Emacs extension development, but its capabilities can be further expanded to support AI applications.
> Some potential areas for enhancement include:
> 1.  Performance improvements through Just-In-Time (JIT) compilation or native code generation.
> 2.  Introduction of native numerical arrays and linear algebra libraries
> 3.  Development of machine learning and AI libraries, including neural networks, decision trees, and clustering algorithms
> 4.  Improved interoperability with other languages through a foreign function interface (FFI)
> 5.  Enhanced documentation and community resources focused on AI development in ELisp
> By addressing these areas, ELisp can become a more comprehensive and efficient platform for AI development, attracting a wider range of users and developers.
> Thank you for considering this recommendation.  I look forward to seeing the future developments in ELisp.
> Best Regards,
> Andrew Goh S M
> With Help from Meta AI



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-16 14:55   ` Tomáš Petit
@ 2024-12-16 16:26     ` Jean Louis
  2024-12-16 17:38     ` Jean Louis
  1 sibling, 0 replies; 19+ messages in thread
From: Jean Louis @ 2024-12-16 16:26 UTC (permalink / raw)
  To: Tomáš Petit; +Cc: help-gnu-emacs

* Tomáš Petit <petitthomas34@gmail.com> [2024-12-16 17:57]:
> Greetings,
> 
> wouldn't Common Lisp or some Scheme dialect be better suited for this job
> instead of Emacs Lisp?

Emacs Lisp reaches out to many external environments, so it is a
portal to everything else. Even when I use Common Lisp externally, I
may be invoking it from Emacs Lisp; some people live in Emacs, and
anyway, whatever the language, it can still be edited within Emacs,
run, and tested. It feels somehow similar no matter which language
runs.

In my work I have to work heavily with text, and accessing HTTP
endpoints to reach some of the Large Language Models (LLMs) is not
hard. Emacs Lisp does it.

Take preparing a dataset: I have good tools within Emacs Lisp to find
the data necessary for training an LLM within seconds. It would then
need some preparation with external tools that are ready-made for that
task. But a lot may be done within Emacs.

Here is a simple function:

(defun rcd-llm-response (response-buffer)
  "Parse LLM's RESPONSE-BUFFER and return decoded string."
  (when response-buffer
    (with-current-buffer response-buffer
      ;; Skip HTTP headers
      (goto-char (point-min))
      (when (search-forward "\n\n" nil t)
        (let ((response (decode-coding-string (buffer-substring-no-properties (point) (point-max)) 'utf-8)))
	  (kill-buffer response-buffer)
	  ;; Parse JSON and extract the reply
	  (let* ((json-response (json-parse-string response :object-type 'alist))
		 (choices (alist-get 'choices json-response))
		 (message (alist-get 'message (aref choices 0)))
		 (message (decode-coding-string (alist-get 'content message) 'utf-8)))
	    (string-replace "</s>" "\n" message)))))))

The model Qwen2.5-Coder-32B-Instruct is licensed under Apache 2.0,
which is a free software license.

(defvar rcd-llm-last-json nil
  "Last JSON request payload sent to the LLM endpoint.")

(defun rcd-llm-huggingface (prompt &optional memory rcd-llm-model temperature max-tokens top-p stream)
  "Send PROMPT to Hugging Face API with specified parameters.

Optional MEMORY, RCD-LLM-MODEL, TEMPERATURE, MAX-TOKENS, TOP-P, and STREAM can be used."
  (let* ((rcd-llm-model (or rcd-llm-model "Qwen/Qwen2.5-Coder-32B-Instruct"))
         (temperature (or temperature 0.5))
         (max-tokens (or max-tokens 2048))
         (top-p (or top-p 0.7))
         (stream (if stream t :json-false))
         (url-request-method "POST")
         (url-request-extra-headers
          '(("Content-Type" . "application/json")
            ("Authorization" . "Bearer hf_YOUR-API-KEY")))
         (url-request-data
          (encode-coding-string
	   (setq rcd-llm-last-json
		 (json-encode
		  `((model . ,rcd-llm-model)
		    (messages . [((role . "user") (content . ,prompt))])
		    (temperature . ,temperature)
		    (max_tokens . ,max-tokens)
		    (top_p . ,top-p)
		    (stream . ,stream))))
           'utf-8))
         (buffer (url-retrieve-synchronously
                  "https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions")))
    (rcd-llm-response buffer)))
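A call would then look like the following; the "hf_YOUR-API-KEY"
placeholder above must of course be replaced with a real token, and
the prompt here is just an arbitrary example:

(rcd-llm-huggingface "Write one sentence about GNU Emacs.")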

The whole library then does everything I need to interact with
LLMs. Emacs is for text, an LLM is for text; they must go hand in hand.

But running the LLM's text generation itself in Emacs Lisp is not yet
workable, even though it is surely not impossible; it is just that
nobody has yet tried to create it that way.

-- 
Jean Louis



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-16 14:55   ` Tomáš Petit
  2024-12-16 16:26     ` Jean Louis
@ 2024-12-16 17:38     ` Jean Louis
  2024-12-17  6:24       ` Tomáš Petit
  1 sibling, 1 reply; 19+ messages in thread
From: Jean Louis @ 2024-12-16 17:38 UTC (permalink / raw)
  To: Tomáš Petit; +Cc: help-gnu-emacs

* Tomáš Petit <petitthomas34@gmail.com> [2024-12-16 17:57]:
> Greetings,
> 
> wouldn't Common Lisp or some Scheme dialect be better suited for this job
> instead of Emacs Lisp?

A ChatGPT clone, in 3000 bytes of C, backed by GPT-2 (2023) (carlini.com)
https://nicholas.carlini.com/writing/2023/chat-gpt-2-in-c.html

I just think that such an example could be implemented through Emacs
Lisp and the use of:

tromey/emacs-ffi: FFI for Emacs:
https://github.com/tromey/emacs-ffi

All of that code may be converted, I guess, so that it is
programmed from within Emacs Lisp.

-- 
Jean Louis



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-16 17:38     ` Jean Louis
@ 2024-12-17  6:24       ` Tomáš Petit
  2024-12-17 10:29         ` Jean Louis
  2024-12-17 10:34         ` Jean Louis
  0 siblings, 2 replies; 19+ messages in thread
From: Tomáš Petit @ 2024-12-17  6:24 UTC (permalink / raw)
  To: help-gnu-emacs

Right, that is of course entirely possible. I was thinking more along 
the lines of projects like

https://antik.common-lisp.dev/

or

https://github.com/melisgl/mgl

and generally building the entire machinery natively in Elisp, for which 
I find CL just a better option. But yeah, calling LLMs like that is 
viable as well.


On 12/16/24 6:38 PM, Jean Louis wrote:
> * Tomáš Petit <petitthomas34@gmail.com> [2024-12-16 17:57]:
>> Greetings,
>>
>> wouldn't Common Lisp or some Scheme dialect be better suited for this job
>> instead of Emacs Lisp?
> A ChatGPT clone, in 3000 bytes of C, backed by GPT-2 (2023) (carlini.com)
> https://nicholas.carlini.com/writing/2023/chat-gpt-2-in-c.html
>
> I just think that such an example could be implemented through Emacs
> Lisp and the use of:
>
> tromey/emacs-ffi: FFI for Emacs:
> https://github.com/tromey/emacs-ffi
>
> All of that code may be converted, I guess, so that it is
> programmed from within Emacs Lisp.
>



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-17  6:24       ` Tomáš Petit
@ 2024-12-17 10:29         ` Jean Louis
  2024-12-17 10:34         ` Jean Louis
  1 sibling, 0 replies; 19+ messages in thread
From: Jean Louis @ 2024-12-17 10:29 UTC (permalink / raw)
  To: Tomáš Petit; +Cc: help-gnu-emacs

* Tomáš Petit <petitthomas34@gmail.com> [2024-12-17 09:26]:
> Right, that is of course entirely possible. I was thinking more along the
> lines of projects like
> 
> https://antik.common-lisp.dev/
> 
> or
> 
> https://github.com/melisgl/mgl
> 
> and generally building the entire machinery natively in Elisp, for which I
> find CL just a better option. But yeah, calling LLMs like that is viable as
> well.

After a short review, it seems much is there at the mgl link, CUDA
too, sure! Very nice. I can't get into it quickly. Surely it is
possible to do this with Emacs Lisp, probably with modules for CUDA
access.

-- 
Jean Louis



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-17  6:24       ` Tomáš Petit
  2024-12-17 10:29         ` Jean Louis
@ 2024-12-17 10:34         ` Jean Louis
  2024-12-17 11:40           ` Tomáš Petit
  1 sibling, 1 reply; 19+ messages in thread
From: Jean Louis @ 2024-12-17 10:34 UTC (permalink / raw)
  To: Tomáš Petit; +Cc: help-gnu-emacs

* Tomáš Petit <petitthomas34@gmail.com> [2024-12-17 09:26]:
> Right, that is of course entirely possible. I was thinking more along the
> lines of projects like
> 
> https://antik.common-lisp.dev/
> 
> or
> 
> https://github.com/melisgl/mgl
> 
> and generally building the entire machinery natively in Elisp, for which I
> find CL just a better option. But yeah, calling LLMs like that is viable as
> well.

Attempts in Emacs Lisp:

narendraj9/emlib: Machine Learning in Emacs Lisp
https://github.com/narendraj9/emlib

Building and Training Neural Networks in Emacs Lisp
https://www.scss.tcd.ie/~sulimanm/posts/nn-introduction.html

-- 
Jean Louis



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-17 10:34         ` Jean Louis
@ 2024-12-17 11:40           ` Tomáš Petit
  2024-12-17 21:35             ` Jean Louis
  0 siblings, 1 reply; 19+ messages in thread
From: Tomáš Petit @ 2024-12-17 11:40 UTC (permalink / raw)
  To: help-gnu-emacs

I clearly haven't done my due diligence, because I wasn't aware of those
cool little projects. It certainly looks fun. I am not sure Emacs Lisp
will ever attract enough attention, although I would personally hope for
Lisp (and its derivatives) to have a glorious return.

On 12/17/24 11:34 AM, Jean Louis wrote:
> * Tomáš Petit <petitthomas34@gmail.com> [2024-12-17 09:26]:
>> Right, that is of course entirely possible. I was thinking more along the
>> lines of projects like
>>
>> https://antik.common-lisp.dev/
>>
>> or
>>
>> https://github.com/melisgl/mgl
>>
>> and generally building the entire machinery natively in Elisp, for which I
>> find CL just a better option. But yeah, calling LLMs like that is viable as
>> well.
> Attempts in Emacs Lisp:
>
> narendraj9/emlib: Machine Learning in Emacs Lisp
> https://github.com/narendraj9/emlib
>
> Building and Training Neural Networks in Emacs Lisp
> https://www.scss.tcd.ie/~sulimanm/posts/nn-introduction.html
>



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-17 11:40           ` Tomáš Petit
@ 2024-12-17 21:35             ` Jean Louis
  2024-12-18  5:04               ` tomas
  2024-12-24 10:57               ` Andreas Röhler
  0 siblings, 2 replies; 19+ messages in thread
From: Jean Louis @ 2024-12-17 21:35 UTC (permalink / raw)
  To: Tomáš Petit; +Cc: help-gnu-emacs

* Tomáš Petit <petitthomas34@gmail.com> [2024-12-17 14:42]:
> I clearly haven't done my due diligence because I wasn't aware of those cool
> little projects. It certainly looks fun, not sure if Emacs Lisp will ever
> attract enough attention, although I would personally hope for Lisp (and its
> derivatives) to have a glorious return.

But one thing not to forget: let's not minimize the actual artificial
intelligence programmed over the years in various programming languages.

A probabilistic text generator is a bullshit generator. It cares zero
about the truth; it is a program. Far from real "intelligence".

ChatGPT is bullshit | Ethics and Information Technology
https://link.springer.com/article/10.1007/s10676-024-09775-5

LLMs have tremendous uses; they are useful, and there is no doubt about
it. But calling them "intelligence" while diminishing other types of
software is too much.

-- 
Jean Louis
ALL COMPUTER PROGRAMS EMBODY ASPECTS OF ARTIFICIAL INTELLIGENCE



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-17 21:35             ` Jean Louis
@ 2024-12-18  5:04               ` tomas
  2024-12-24 10:57               ` Andreas Röhler
  1 sibling, 0 replies; 19+ messages in thread
From: tomas @ 2024-12-18  5:04 UTC (permalink / raw)
  To: help-gnu-emacs; +Cc: Tomáš Petit

[-- Attachment #1: Type: text/plain, Size: 276 bytes --]

On Wed, Dec 18, 2024 at 12:35:50AM +0300, Jean Louis wrote:

[...]

> ChatGPT is bullshit | Ethics and Information Technology
> https://link.springer.com/article/10.1007/s10676-024-09775-5

A must-read, together with Harry Frankfurt's "On Bullshit".

Cheers
-- 
t

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 195 bytes --]

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-17 21:35             ` Jean Louis
  2024-12-18  5:04               ` tomas
@ 2024-12-24 10:57               ` Andreas Röhler
  2024-12-24 15:25                 ` Stefan Monnier via Users list for the GNU Emacs text editor
                                   ` (3 more replies)
  1 sibling, 4 replies; 19+ messages in thread
From: Andreas Röhler @ 2024-12-24 10:57 UTC (permalink / raw)
  To: help-gnu-emacs


Am 17.12.24 um 22:35 schrieb Jean Louis:
> ChatGPT is bullshit |

No, it isn't. We have a language problem, because we have something new. 
We must split the notion of intelligence.

People tried to fly like a bird. Were they successful?

Not really.

But no bird is able to reach the moon.

LLMs are able to reason. With the amount of data they will be -- and 
probably already are -- much stronger in reasoning/deduction than humans.

LLMs are creative, constructing new terms from reasoning.

They will be indispensable in science.


All depends on context. With no context, humans too can't answer a 
single question.




^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-24 10:57               ` Andreas Röhler
@ 2024-12-24 15:25                 ` Stefan Monnier via Users list for the GNU Emacs text editor
  2024-12-25 20:20                   ` Andreas Röhler
  2024-12-24 16:22                 ` Christopher Howard
                                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 19+ messages in thread
From: Stefan Monnier via Users list for the GNU Emacs text editor @ 2024-12-24 15:25 UTC (permalink / raw)
  To: help-gnu-emacs

> LLMs are able to reason. With the amount of data they will be -- and
> probably already are -- much stronger in reasoning/deduction than humans.

??

AFAIK no amount of extra data will fix their fundamental inability to
perform any kind of logical reasoning.

That doesn't mean we can't fix them to do that, of course, but it takes
something qualitatively different rather than mere quantity of data.

That's been known for years, and re-publicized recently by some Apple
team.  Can you point at a publication that argues convincingly otherwise?


        Stefan




^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-24 10:57               ` Andreas Röhler
  2024-12-24 15:25                 ` Stefan Monnier via Users list for the GNU Emacs text editor
@ 2024-12-24 16:22                 ` Christopher Howard
  2024-12-26  6:06                   ` Joel Reicher
  2024-12-24 21:27                 ` Jean Louis
  2024-12-24 21:58                 ` Is ChatGPT bullshit? tomas
  3 siblings, 1 reply; 19+ messages in thread
From: Christopher Howard @ 2024-12-24 16:22 UTC (permalink / raw)
  To: Andreas Röhler; +Cc: help-gnu-emacs

Andreas Röhler <andreas.roehler@easy-emacs.de> writes:

> LLMs are able to reason. With the amount of data they will be -- and
> probably already are -- much stronger in reasoning/deduction than
> humans.
>
> LLMs are creative, constructing new terms from reasoning.

The point of the previous article was to demonstrate that LLMs do not reason, or, more particularly, do not attempt to determine truth. They simply try to calculate the next most likely and natural thing you expect to see in a flow of words. Sometimes you get something true out of that; oftentimes you get something that is either false or shallow.

Explain how you go from that to saying that LLMs are doing reasoning and deduction, and are creative.

There are software programs that attempt to do deduction and reasoning by connecting propositions and arguments to determine truth and falsity. But as far as I understand, that is not what LLMs do.

-- 
Christopher Howard



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-24 10:57               ` Andreas Röhler
  2024-12-24 15:25                 ` Stefan Monnier via Users list for the GNU Emacs text editor
  2024-12-24 16:22                 ` Christopher Howard
@ 2024-12-24 21:27                 ` Jean Louis
  2024-12-24 21:58                 ` Is ChatGPT bullshit? tomas
  3 siblings, 0 replies; 19+ messages in thread
From: Jean Louis @ 2024-12-24 21:27 UTC (permalink / raw)
  To: Andreas Röhler; +Cc: help-gnu-emacs

* Andreas Röhler <andreas.roehler@easy-emacs.de> [2024-12-24 13:59]:
> Am 17.12.24 um 22:35 schrieb Jean Louis:
> > ChatGPT is bullshit |

Thanks, Andreas, but have you read it?

The article about bullshit doesn't speak about it in an offensive
manner. It is a reasonable analysis of what it really is.
Let me quote from:
https://link.springer.com/article/10.1007/s10676-024-09775-5

> Because these programs cannot themselves be concerned with truth, and
> because they are designed to produce text that looks truth-apt without
> any actual concern for truth, it seems appropriate to call their
> outputs bullshit.

When something looks truth-apt just because it was statistically
pulled out of a database and presented nicely, does it mean the
program is concerned with truth? Or is there deception, an illusion?

I have just asked my locally running text generator
(QwQ-LCoT-3B-Instruct.Q4_K_M.gguf) and look:

Depending on the context and the intended audience, it might be more
appropriate to use more neutral language to convey the same
message. For example, one could say:

"Because these programs cannot themselves be concerned with truth and
because they are designed to produce text that looks truth-apt without
any actual concern for truth, it seems appropriate to call their
outputs deceptive or untruthful."

But I do think the word "bullshit" better conveys the importance of
understanding these text generators.

They are very deceptive.

A person asked me today if the organization from the 2024 Deadpool
movie that can look into the future exists in reality. People are
easily deceived.

I am a heavy user of LLM text generation, for improving information
meant for consulting, sales and marketing. In general, I am making
sure the text is expressive.

Today there were 270 requests, and I sometimes let the computer run to
provide summaries for 1000+ different texts.

That experience, and searching on the Internet, tells me that I may be
lucky, and I am lucky many times per day, but many times also not. I
get plenty of wrong information.

It requires me to develop a skill to see through the deception and
recognize which pieces of information may be truthful and which are
fake. There are too many models, and I hope you try them en masse so
that you understand what I mean.

I understand responses from an LLM as:

- proposals of truthful information
- well-expressed possibilities of truth
- well-documented proposals

I do not consider them "authentic", for the purposes of my research. I
consider them excerpts which I have to review, analyse and
approve.

There is nothing "intelligent" there; there is absolutely no thinking,
just appearance, a mirror of human behavior. It is deceptive.

> No, it isn't. We have a language problem, because we have something
> new.

I am trying to understand what you mean. I think you are saying there
is some new technology, that we are not properly recognizing it, and
that it will be developed further in the future.

I am sure it is very useful; if it were not useful to me, I would not
have used it 4736 times in the last 12 months.

I am fascinated with it. I can't get rid of it; it is helping me so
much in life. It is a true money maker. I am just thinking how to get
more GPUs, better hardware, how to run it locally. On that side of
fascination I am fully with you.

Not that it can think.

It has no morals, no ethics.

Only an illusion of them.

It doesn't care what it says, as long as the math inside tells it to
give some output.

> We must split the notion of intelligence.
> 
> People tried to fly like a bird. Were they successful?
> 
> Not really.
> 
> But no bird is able to reach the moon.

I think you mean that no LLM is to become like a human. That, for sure,
not; though in some future, with more integration, a machine could
become very independent and look like a living being.

> LLMs are able to reason. With the amount of data they will be -- and
> probably already are -- much stronger in reasoning/deduction than
> humans.

Well, there we are; that is where I can't agree with you.

I can agree with "deduction" or "inference", as a computer with
access to a large database has always been very assistive to humans;
that is the reason why, for example, we use documentation search
features or search engines.

But that LLM is able to reason, that I can't support.

Maybe you wish to say that the LLM gives an illusion of reasoning? I am
not sure what you wish to say.

Deduction is not necessarily reasoning; it is math.

Just from this day alone I could give you many examples proving that an
LLM cannot reason. It is a program that gives probability-based
proposals based on inputs and available data.

I asked today how I can turn on GPU usage in Inkscape, and I got
positive answers, even though such a feature apparently doesn't
exist.

You have to use many models to get the feeling.

Back to quotes:
https://link.springer.com/article/10.1007/s10676-024-09775-5

> We argue that at minimum, the outputs of LLMs like ChatGPT are soft
> bullshit: bullshit–that is, speech or text produced without concern
> for its truth–that is produced without any intent to mislead the
> audience about the utterer’s attitude towards truth. We also
> suggest, more controversially, that ChatGPT may indeed produce hard
> bullshit: if we view it as having intentions (for example, in virtue
> of how it is designed), then the fact that it is designed to give
> the impression of concern for truth qualifies it as attempting to
> mislead the audience about its aims, goals, or agenda.

It is not "AI". Far from it. I don't even agree with the original
term. It is an attempt by people to make AI, but it is not yet any kind
of "intelligence". Artificial it is also not; it is a mirror of human
intelligence, part of nature, arisen from the natural development of
humans. It is not something separate from humans; it is our product,
not something artificial.

I think every Emacs user who ever used M-x doctor should understand
it. It is actually the first exercise for understanding what an LLM is.

> LLMs are creative, constructing new terms from reasoning.

A human is creative. A program is as creative as the human behind it. A
program alone is not creative; it does what a human directed. Turn off
the electricity and show me creativity then!

> They will be indispensable in science.

I agree with that, but they cannot reason.

If a program were capable of reasoning, one might wonder why it
wouldn't wake up and start independently thinking about how to improve
humanity, reducing our efforts and enhancing our lives. Instead, it
merely generates text statistically and dispassionately, completely
devoid of emotional or mindful connection.

> All depends on context. With no context, humans too can't answer a
> single question.

A baby knows where its mother is without thinking, or even opening its eyes.

The quoted "AI" often doesn't know what time it is, or where its
author is.

https://duckduckgo.com/?t=ftsa&q=llm+cannot+reason&ia=web

Back in 1984 we were playing computer games with rockets, fully under
the impression that something was reasoning against us. It was a
program; deceptive, but a program, not a thinker.

-- 
Jean Louis



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Is ChatGPT bullshit?
  2024-12-24 10:57               ` Andreas Röhler
                                   ` (2 preceding siblings ...)
  2024-12-24 21:27                 ` Jean Louis
@ 2024-12-24 21:58                 ` tomas
  3 siblings, 0 replies; 19+ messages in thread
From: tomas @ 2024-12-24 21:58 UTC (permalink / raw)
  To: Andreas Röhler; +Cc: help-gnu-emacs

[-- Attachment #1: Type: text/plain, Size: 952 bytes --]

On Tue, Dec 24, 2024 at 11:57:08AM +0100, Andreas Röhler wrote:
> 
> Am 17.12.24 um 22:35 schrieb Jean Louis:
> > ChatGPT is bullshit |

Most definitely, yes. The already linked Uni Glasgow article [1] makes
a compelling case.

> No, it isn't. We have a language problem, because we have something new.
> We must split the notion of intelligence.
> 
> People tried to fly like a bird. Were they successful?
> 
> Not really.
> 
> But no bird is able to reach the moon.
> 
> LLMs are able to reason. With the amount of data they will be -- and
> probably already are -- much stronger in reasoning/deduction than humans.

No. They can babble /as if/ they were reasoning.

Jeez. I thought we had cleared that up ca. 1966 [2].


Cheers

[1] https://link.springer.com/article/10.1007/s10676-024-09775-5
[2] https://en.wikipedia.org/wiki/ELIZA_effect

-- 
t

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 195 bytes --]

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-24 15:25                 ` Stefan Monnier via Users list for the GNU Emacs text editor
@ 2024-12-25 20:20                   ` Andreas Röhler
  2024-12-26  8:37                     ` Jean Louis
  0 siblings, 1 reply; 19+ messages in thread
From: Andreas Röhler @ 2024-12-25 20:20 UTC (permalink / raw)
  To: help-gnu-emacs

Hi all,

maybe let's forget ChatGPT in its current state, but refer to this phrase:

'''A probabilistic text generator is a bullshit generator. It cares zero
about the truth; it is a program. Far from real "intelligence".'''

IMO that's wrong from the human side as well as from the machine side.

Humans consider themselves intelligent. But how to prove that?

Humans can count. Really? If yes, why is the first element of an index
in famous programming languages designated as zero instead of one?
OTOH machines, asked: '''in French: "inversion accusatoire"'''

The response was:

'''La "inversion accusatoire" en français est appelée "passif" ou 
"inversion de sujet" en grammaire.

En Odoo, l'inversion accusatoire se réfère à l'ordre des mots dans une 
phrase qui est inversé par rapport à la règle générale. En français, la 
règle générale est que le sujet de la phrase doit être mis en premier, 
suivie du verbe et ensuite de l'objet.

Exemple de phrase normale : Le client achète un produit.

Inversion accusatoire : Un produit achète le client.

En Odoo, l'inversion accusatoire est utilisée pour formatter les 
informations dans les rapports ou les vues, par exemple, pour définir 
l'ordre des champs dans une formulaire.

J'espère que cela vous a aidé !'''

Obviously the context assumed by the LLM was false. But no context was delivered.

Inside the false context, the conclusion is quite interesting, because
'acheter' -- buying -- wasn't mentioned at all. In an abstract view,
the reasoning might well make sense. There are other remarks in this
response which indicate the model was able to abstract over the matter.


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-24 16:22                 ` Christopher Howard
@ 2024-12-26  6:06                   ` Joel Reicher
  0 siblings, 0 replies; 19+ messages in thread
From: Joel Reicher @ 2024-12-26  6:06 UTC (permalink / raw)
  To: Christopher Howard; +Cc: Andreas Röhler, help-gnu-emacs

Christopher Howard <christopher@librehacker.com> writes:

> Andreas Röhler <andreas.roehler@easy-emacs.de> writes:
>
>> LLMs are able to reason. With the amount of data they will be 
>> -- and probably already are -- much stronger in
>> reasoning/deduction than humans.
>>
>> LLMs are creative, constructing new terms from reasoning.
>
> The point of the previous article was to demonstrate that LLMs 
> do not reason, or, more particularly, do not attempt to determine
> truth. They simply try to calculate the next most likely
> and natural thing you expect to see in a flow of
> words. Sometimes you get something true out of that; oftentimes
> you get something that is either false or shallow.

I'm really hesitant to contribute to a thread that's probably 
off-topic, but I'd like to suggest that an LLM's output is perhaps 
best thought of as quoted text, so it is neither true nor false.

The quotes are only removed when a reader reads it.

Regards,

        - Joel



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-25 20:20                   ` Andreas Röhler
@ 2024-12-26  8:37                     ` Jean Louis
  0 siblings, 0 replies; 19+ messages in thread
From: Jean Louis @ 2024-12-26  8:37 UTC (permalink / raw)
  To: Andreas Röhler; +Cc: emacs-tangents

Okay, let us move this to the Emacs Tangents mailing list.

In general I find your writings interesting!

I was thinking the same myself: that it is a great thing giving me
accurate results, until the verification! Many things were simply
invented!

Then, after verifying how it works, I could understand it, and I
realized that it has no intelligence.

* Andreas Röhler <andreas.roehler@easy-emacs.de> [2024-12-25 23:22]:
> maybe let's forget ChatGPT in its current state, but refer to this phrase:
> 
> '''A probabilistic text generator is a bullshit generator. It cares zero
> about the truth; it is a program. Far from real "intelligence".'''
> 
> IMO that's wrong from the human as from the machine side.

I think you wanted to say that it is the same for humans, that humans
also don't take care for truth.

And yes! That is the problem most of the time with humans: they mimic
the expressions and "text", the talk and communication, of other
humans; not everybody is thinking! But almost everybody is able to
mimic. And many, many among us do not care about truth.

Though they have the capability to care about the truth. Each of us.

The capability to care about truth, or to discover truth even without
being trained, is maybe one of the traits of natural intelligence.

> Humans consider themselves intelligent. But how to prove that?

No matter the level of intelligence, we can observe in small children
that they will start solving problems. Nobody needs to teach them;
they start observing and come to solutions. Isn't that one of the
major proofs?

> Humans can count. Really? If yes, why is the first element of an
> index in famous programming languages designated as zero instead of
> one?

Alright, though counting may not be the basis of intelligence.  A person
doesn't need to count or know how to write to be intelligent.

We can't say we are all equally intelligent; many among the humans on
this planet are on the level of an animal.

> OTOH machines, asked: '''in French: "inversion accusatoire"'''
> 
> The response was:
> 
> '''La "inversion accusatoire" en français est appelée "passif" ou "inversion
> de sujet" en grammaire.
> 
> En Odoo, l'inversion accusatoire se réfère à l'ordre des mots dans une
> phrase qui est inversé par rapport à la règle générale. En français, la
> règle générale est que le sujet de la phrase doit être mis en premier,
> suivie du verbe et ensuite de l'objet.
> 
> Exemple de phrase normale : Le client achète un produit.
> 
> Inversion accusatoire : Un produit achète le client.
> 
> En Odoo, l'inversion accusatoire est utilisée pour formatter les
> informations dans les rapports ou les vues, par exemple, pour définir
> l'ordre des champs dans une formulaire.
> 
> J'espère que cela vous a aidé !'''

Here is the translation:

> The "inversion accusatoire" in French is called "passive" or "subject inversion" in grammar.
> 
> In Odoo, the inversion accusatoire refers to the word order in a sentence that is reversed compared to the general rule. In French, the general rule is that the subject of the sentence must be placed first, followed by the verb, and then the object.
> 
> Example of a normal sentence: The customer buys a product.
> 
> Accusative inversion: A product buys the customer.
> 
> In Odoo, the accusative inversion is used to format information in reports or views, for example, to define the order of fields in a form.

I know Italian, where it is often used as a rule, and I understand
that type of expression in other languages; it may be used in German
too. And the computer there is 𝙵𝙰𝙱𝚁𝙸𝙲𝙰𝚃𝙸𝙽𝙶 rather than abstracting.

I understand the illusion!

Somehow, since I first started using it (and it is good that I started
a year ago), I got the right impression that it doesn't reason; it
generates text by statistics.

Why do I say it was important to start early? Because by starting early
I could experience what nonsense it gives me!

Fake books, fake references to written books, fake names of actors
who acted in fake movies. You name it!

That is where I realized that the underlying "reasoning" is not there;
it is fabricated.

Fabrication is making things up; it should not be called
hallucination. Professionals are warning the world in their papers
that computers do not hallucinate.

Artificial Hallucinations in ChatGPT: Implications in Scientific Writing | Cureus:
https://www.cureus.com/articles/138667-artificial-hallucinations-in-chatgpt-implications-in-scientific-writing#!/

ChatGPT, Bing and Bard Don’t Hallucinate. They Fabricate - Bloomberg:
https://www.bloomberg.com/news/newsletters/2023-04-03/chatgpt-bing-and-bard-don-t-hallucinate-they-fabricate

Computers may confabulate, fabricate, or compute, but we can't say
hallucinate, as that applies only to a human or a living being.

Computers in fact only compute what their authors wanted.

Anthropomorphize (verb): to attribute human characteristics,
qualities, or behaviors to non-human entities, such as animals,
objects, or ideas. This means giving human-like qualities, emotions,
or intentions to things that are not human.

Anthropomorphizing software in computing contexts can be a way to
express emotions about how tools perform.

"My Emacs disturbed me" -- though Emacs does nothing without human
instructing him. It should be "an example" instead of "It is example". The correct sentence would be:

It is an example of anthropomorphizing.

- Emacs is getting tired, I think I need to restart it to free up some
  memory.

- I taught Emacs a new trick by writing a custom elisp function to
  automate a task.
  
- Emacs is complaining about the formatting of my code.

- Emacs is playing tricks on me, it keeps auto-indenting my code in
  weird ways.

I was always bringing attention to cases where anthropomorphizing was
used in the context of releasing the human operator or programmer from
responsibility.

- can't send money, "network problem" -- that is a common excuse,
  though I know someone is always responsible; it wasn't rain falling
  down uncontrollably, it was operators lacking skills;

- computer did it! -- when operators and truly responsible people use
  anthropomorphizing to get rid of the causative responsibility;

Often people do it unconsciously. It is interesting, but it is
important for us who think to recognize the fact that people
utilize anthropomorphizing in their daily lives.

To say that Large Language Models (LLMs) "hallucinate" is yet another
public relations stunt that wishes to say computers are alive.

Could "Open"AI be truthful to the public and not anthropomorphize
their products? Yes, they could. But they do not want to.

Why? Because they want to release themselves from responsibility, from
the frustration they have caused, from the misunderstandings they have
generated.

They could say that the computer produces nonsense because it doesn't
have any intelligence; it is just a mathematical program computing and
spitting text out without care for the truth.

But how can "Open"AI say that? It is against their promotional
strategy; their company name contains "AI", and they built their product
by deceiving people into believing there is some kind of "intelligence"
there. Probably there has never been any intelligence in a computer so
far; it is all a matter of sales and marketing. How else to make money?

It is good that the scientists writing those papers are not financed by
those large corporations; otherwise we would get true confusion.

> the context assumed by the LLM was false. But no context was delivered.
> 
> Inside the false context, the conclusion is quite
> interesting. Because ‘achetere’ -- buying -- wasn't mentioned at
> all. In an abstract view, the reasoning might well have sense. There
> are other remarks in this response, which indicate the model was
> able to abstract over the matter.

I do not think there was any conclusion.

Just writing "conclusion" does not make it a conclusion. Computer
software like an LLM is eager to pretend; it was programmed to write
what is statistically written by people and what the software got fed
from datasets.

But it can't learn. It is a machine that calculates. It can accept
data, store it, process it by program, and give results. But it cannot
learn.

We can only anthropomorphize it and say "it learned", "it got
trained", as we do not have the right words.

Can LLM software do inference? I don't think so!

It can compute and give a similarity to inference, though never true
inference. After all, we have been doing the same with computers for
many years.

It is just another anthropomorphized term. But we have to use it; it
is in a different context. Though it is not a person who has the true
capability to "infer". It is not capable of it.

A computer cannot draw conclusions. (+ 2 2) ➜ 4 -- do you think that
Emacs here concluded that 2 plus 2 is four? It didn't.

We are anthropomorphizing when we say the computer drew a conclusion.

In fact it was electronic switching of bits and bytes.

Just take an abacus, for example: by moving the beads on the wooden
abacus, the operator will get results, but did the abacus draw
conclusions about the result? Or was it just a tool?

By programming a traffic light, when it shows green, did the traffic
light conclude that it should show green to people? It is a tool; it is
programmed to do that. The programmer concluded that it should work in
a specific manner, not the traffic light.

An air conditioner turns on and off based on the temperature in
the room. But did it conclude that it must turn on? That it
must turn off?

Of course, we fall into the deception.

Though the drawing of conclusions is not there!

I strongly suggest everybody install:

ggerganov/llama.cpp: LLM inference in C/C++
https://github.com/ggerganov/llama.cpp

then install one of the low-end models, like
QwQ-LCoT-3B-Instruct.Q4_K_M.gguf;

it will work with 16 GB of RAM.

Then start interacting. You will see what I mean.
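For a flavor of how that local setup can then be driven from Emacs
Lisp, here is a hedged sketch; the program name llama-cli and the
-m/-p/-n flags follow llama.cpp's command-line conventions, so verify
them against your build:

;; Sketch: call a local llama.cpp model from Emacs.  The program name
;; and flags (-m model, -p prompt, -n max new tokens) are assumptions
;; based on llama.cpp's CLI; adjust to your installation.
(defun my-llama-local (prompt)
  "Send PROMPT to a local llama.cpp model and return the generated text."
  (with-temp-buffer
    (call-process "llama-cli" nil t nil
                  "-m" (expand-file-name "~/models/QwQ-LCoT-3B-Instruct.Q4_K_M.gguf")
                  "-p" prompt
                  "-n" "256")
    (buffer-string)))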

You will see that the model will start talking. For example, if there
was a conversation with a child (the model acting as a child;
anthropomorphizing is deceptive, a model cannot "act") -- then in that
conversation the pretend child may start playing in the mud, though
after a while,
one can see mud, mud, mud, mud, mud, mud, mud, mud, mud, mud, mud,
mud, mud, mud, mud, mud, mud, mud, mud, mud, mud, mud, mud, mud, mud,
mud, mud, mud, mud, mud, mud, mud, mud, mud, mud, mud, mud, mud, mud,
mud, mud, mud, mud, mud.

It is unstoppable.

That is a clear sign that there is no intelligence, as the model does
not care about the truth. It computes something; it gives something out.

What is it? It is completely irrelevant to the environment or the
situation at hand.

Is there any animal that does activities completely irrelevant to its
life situation at hand? Maybe when we don't understand it; but just
observe: fish, dog, cat, cow, they all do whatever they do for their
survival. Their activities are pretty much aligned with it.

A computer does not have life, so it does nothing. It is a tool. People
do something with a computer; the computer itself does nothing. It has
no inner intention to survive. That is why it cannot recognize "mud,
mud, mud", but it can give a pretense of how people talk based on
information loaded into it.

-- 
Jean Louis

---
via emacs-tangents mailing list (https://lists.gnu.org/mailman/listinfo/emacs-tangents)

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2024-12-26  8:37 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
     [not found] <7290780.2375960.1734348492938.ref@mail.yahoo.com>
2024-12-16 11:28 ` Enhancing ELisp for AI Work Andrew Goh via Users list for the GNU Emacs text editor
2024-12-16 13:39   ` Jean Louis
2024-12-16 14:55   ` Tomáš Petit
2024-12-16 16:26     ` Jean Louis
2024-12-16 17:38     ` Jean Louis
2024-12-17  6:24       ` Tomáš Petit
2024-12-17 10:29         ` Jean Louis
2024-12-17 10:34         ` Jean Louis
2024-12-17 11:40           ` Tomáš Petit
2024-12-17 21:35             ` Jean Louis
2024-12-18  5:04               ` tomas
2024-12-24 10:57               ` Andreas Röhler
2024-12-24 15:25                 ` Stefan Monnier via Users list for the GNU Emacs text editor
2024-12-25 20:20                   ` Andreas Röhler
2024-12-26  8:37                     ` Jean Louis
2024-12-24 16:22                 ` Christopher Howard
2024-12-26  6:06                   ` Joel Reicher
2024-12-24 21:27                 ` Jean Louis
2024-12-24 21:58                 ` Is ChatGPT bullshit? tomas

Code repositories for project(s) associated with this external index

	https://git.savannah.gnu.org/cgit/emacs.git
	https://git.savannah.gnu.org/cgit/emacs/org-mode.git

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.