unofficial mirror of help-gnu-emacs@gnu.org
* Enhancing ELisp for AI Work
       [not found] <7290780.2375960.1734348492938.ref@mail.yahoo.com>
@ 2024-12-16 11:28 ` Andrew Goh via Users list for the GNU Emacs text editor
  2024-12-16 13:39   ` Jean Louis
  2024-12-16 14:55   ` Tomáš Petit
  0 siblings, 2 replies; 18+ messages in thread
From: Andrew Goh via Users list for the GNU Emacs text editor @ 2024-12-16 11:28 UTC (permalink / raw)
  To: help-gnu-emacs@gnu.org

Dear Emacs Team,
As a long-time Emacs user and enthusiast, I would like to recommend that the team consider enhancing ELisp to make it more suitable for artificial intelligence (AI) work.
Elisp has been an incredibly powerful and flexible language for Emacs extension development, but its capabilities can be further expanded to support AI applications.
Some potential areas for enhancement include:
1.  Performance improvements through Just-In-Time (JIT) compilation or native code generation.
2.  Introduction of native numerical arrays and linear algebra libraries.
3.  Development of machine learning and AI libraries, including neural networks, decision trees, and clustering algorithms.
4.  Improved interoperability with other languages through a foreign function interface (FFI).
5.  Enhanced documentation and community resources focused on AI development in ELisp.
By addressing these areas, ELisp can become a more comprehensive and efficient platform for AI development, attracting a wider range of users and developers.
Thank you for considering this recommendation.  I look forward to seeing the future developments in ELisp.
Best Regards,
Andrew Goh S M
With Help from Meta AI


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-16 11:28 ` Enhancing ELisp for AI Work Andrew Goh via Users list for the GNU Emacs text editor
@ 2024-12-16 13:39   ` Jean Louis
  2024-12-16 14:55   ` Tomáš Petit
  1 sibling, 0 replies; 18+ messages in thread
From: Jean Louis @ 2024-12-16 13:39 UTC (permalink / raw)
  To: Andrew Goh; +Cc: help-gnu-emacs@gnu.org

* Andrew Goh via Users list for the GNU Emacs text editor <help-gnu-emacs@gnu.org> [2024-12-16 14:30]:

> As a long-time Emacs user and enthusiast, I would like to recommend
> that the team consider enhancing ELisp to make it more suitable for
> artificial intelligence (AI) work.

That is so true. Though, if you are thinking of LLMs, then I disagree
with calling Large Language Models AI by themselves; it is better that
we specify clearly what we mean by it. The word AI has now become a
popular keyword for ordinary people to interact with a computer and
get some tasks done by using Natural Language Processing.

ALL COMPUTER PROGRAMS EMBODY ASPECTS OF ARTIFICIAL INTELLIGENCE!

Isn't that the main reason why we are programming?

> Elisp has been an incredibly powerful and flexible language for
> Emacs extension development, but its capabilities can be further
> expanded to support AI applications.

Oh, absolutely yes.

> Some potential areas for enhancement include:

> 1. Performance improvements through Just-In-Time (JIT) compilation
> or native code generation.

Hmm, I have no idea about JIT within Emacs and whether it would speed
it up at all, but Emacs does have "native compilation" now, though I am
unsure how it works. It makes things a bit faster, I guess. Maybe that
is what you mean.
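
For example, you can check this from a running Emacs directly; a
minimal sketch, assuming Emacs 28 or later built with native
compilation (the function name below is just a toy for illustration):

;; Check whether this Emacs can native-compile at all.
(native-comp-available-p)                ; => t on a native-comp build

;; A toy function, then an explicit native compilation of it.
(defun my-slow-sum (n)
  "Sum the integers from 1 to N."
  (let ((acc 0))
    (dotimes (i n acc)
      (setq acc (+ acc i 1)))))

(native-compile 'my-slow-sum)            ; returns the compiled function
(benchmark-run 1 (my-slow-sum 1000000))  ; compare timings before/after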

In fact, before generating questions with the LLM, maybe you should
cross-check with your own skills whether the feature you are asking
about is already implemented in Emacs.

> 2. Introduction of native numerical arrays and linear algebra
> libraries

Personally, I have no idea about that. What I know is that mathematics
works well within Emacs.

> 3. Development of machine learning and AI libraries, including
> neural networks, decision trees, and clustering algorithms

I guess now there is nothing within Emacs for that, but we can always
🚀 call external functions and speed up the overall development cycle
by working through the portal of GNU Emacs. 💻🔍

> 4. Improved interoperability with other languages through a foreign
> function interface (FFI)

Not sure about that, but Emacs now has modules, so it is possible to
hook anything into it.

LLM information is there to provide guidelines, not to be smarter than
you, and it especially can't outsmart the people on the mailing list.

Here is an easily found Emacs FFI module:
https://github.com/tromey/emacs-ffi

Looks like your LLM has been playing chess with your brain and winning
every game!

Within a second or a few, the information was there that an Emacs FFI already exists.

> 5. Enhanced documentation and community resources focused on AI
> development in ELisp

People have been developing AI since the inception of computers, and
also of Emacs and the GNU Operating System; and as you know, without
GNU there would be no Linux, there would be no Ruby, Python, etc., and
so on. It is all a big chicken and many eggs now.

> By addressing these areas, ELisp can become a more comprehensive and
> efficient platform for AI development, attracting a wider range of
> users and developers.

I think it is an excellent editing platform already, and these areas
are very much being addressed.

You see, it does matter how you write: saying "by addressing these
areas" while many of them are already addressed may appear wrong and
invalidating. As I said, the LLM response outsmarted you 😎

> Andrew Goh S M
> With Help from Meta AI

Next time try with your own built-in I.

-- 
Jean Louis



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-16 11:28 ` Enhancing ELisp for AI Work Andrew Goh via Users list for the GNU Emacs text editor
  2024-12-16 13:39   ` Jean Louis
@ 2024-12-16 14:55   ` Tomáš Petit
  2024-12-16 16:26     ` Jean Louis
  2024-12-16 17:38     ` Jean Louis
  1 sibling, 2 replies; 18+ messages in thread
From: Tomáš Petit @ 2024-12-16 14:55 UTC (permalink / raw)
  To: help-gnu-emacs

Greetings,

wouldn't Common Lisp or some Scheme dialect be better suited for this 
job instead of Emacs Lisp?

Regards,

Tomáš Petit


On 12/16/24 12:28 PM, Andrew Goh via Users list for the GNU Emacs text 
editor wrote:
> Dear Emacs Team,
> As a long-time Emacs user and enthusiast, I would like to recommend that the team consider enhancing ELisp to make it more suitable for artificial intelligence (AI) work.
> Elisp has been an incredibly powerful and flexible language for Emacs extension development, but its capabilities can be further expanded to support AI applications.
> Some potential areas for enhancement include:
> 1.  Performance improvements through Just-In-Time (JIT) compilation or native code generation.
> 2.  Introduction of native numerical arrays and linear algebra libraries.
> 3.  Development of machine learning and AI libraries, including neural networks, decision trees, and clustering algorithms.
> 4.  Improved interoperability with other languages through a foreign function interface (FFI).
> 5.  Enhanced documentation and community resources focused on AI development in ELisp.
> By addressing these areas, ELisp can become a more comprehensive and efficient platform for AI development, attracting a wider range of users and developers.
> Thank you for considering this recommendation.  I look forward to seeing the future developments in ELisp.
> Best Regards,
> Andrew Goh S M
> With Help from Meta AI



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-16 14:55   ` Tomáš Petit
@ 2024-12-16 16:26     ` Jean Louis
  2024-12-16 17:38     ` Jean Louis
  1 sibling, 0 replies; 18+ messages in thread
From: Jean Louis @ 2024-12-16 16:26 UTC (permalink / raw)
  To: Tomáš Petit; +Cc: help-gnu-emacs

* Tomáš Petit <petitthomas34@gmail.com> [2024-12-16 17:57]:
> Greetings,
> 
> wouldn't Common Lisp or some Scheme dialect be better suited for this job
> instead of Emacs Lisp?

Emacs Lisp reaches out to many external environments, so it is a portal
to everything else. Even when I use Common Lisp externally, I may be
invoking it from Emacs Lisp; some people live in Emacs, and anyway,
whatever the language, it can still be edited within Emacs, run, and
tested, and it somehow feels similar no matter which language runs.

In my work I have to work heavily with text, and accessing HTTP
endpoints to reach some of the Large Language Models (LLMs) is not
hard. Emacs Lisp does it.

Let us say we are preparing a dataset: I have good tools within Emacs
Lisp to find the data necessary for training the LLM within seconds.
Then it would need some preparation with external tools which are
ready-made for that task. But a lot may be done within Emacs.

Here is a simple function:

(defun rcd-llm-response (response-buffer)
  "Parse LLM's RESPONSE-BUFFER and return decoded string."
  (when response-buffer
    (with-current-buffer response-buffer
      ;; Skip HTTP headers
      (goto-char (point-min))
      (when (search-forward "\n\n" nil t)
        (let ((response (decode-coding-string (buffer-substring-no-properties (point) (point-max)) 'utf-8)))
	  (kill-buffer response-buffer)
	  ;; Parse JSON and extract the reply
	  (let* ((json-response (json-parse-string response :object-type 'alist))
		 (choices (alist-get 'choices json-response))
		 (message (alist-get 'message (aref choices 0)))
		 (message (decode-coding-string (alist-get 'content message) 'utf-8)))
	    (string-replace "</s>" "\n" message)))))))

The model Qwen2.5-Coder-32B-Instruct is under the Apache 2.0 license,
which is a free software license.

(defvar rcd-llm-last-json nil
  "Last JSON payload sent to the LLM endpoint, kept for later inspection.")

(defun rcd-llm-huggingface (prompt &optional memory rcd-llm-model temperature max-tokens top-p stream)
  "Send PROMPT to Hugging Face API with specified parameters.

Optional MEMORY, RCD-LLM-MODEL, TEMPERATURE, MAX-TOKENS, TOP-P, and STREAM can be used."
  (let* ((rcd-llm-model (or rcd-llm-model "Qwen/Qwen2.5-Coder-32B-Instruct"))
         (temperature (or temperature 0.5))
         (max-tokens (or max-tokens 2048))
         (top-p (or top-p 0.7))
         (stream (if stream t :json-false))
         (url-request-method "POST")
         (url-request-extra-headers
          '(("Content-Type" . "application/json")
            ("Authorization" . "Bearer hf_YOUR-API-KEY")))
         (url-request-data
          (encode-coding-string
	   (setq rcd-llm-last-json
		 (json-encode
		  `((model . ,rcd-llm-model)
		    (messages . [((role . "user") (content . ,prompt))])
		    (temperature . ,temperature)
		    (max_tokens . ,max-tokens)
		    (top_p . ,top-p)
		    (stream . ,stream))))
           'utf-8))
         (buffer (url-retrieve-synchronously
                  "https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions")))
    (rcd-llm-response buffer)))
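
Called from IELM or the *scratch* buffer, usage then looks like this
(assuming the Bearer token above has been replaced with a real Hugging
Face API key; the prompts are just examples):

(rcd-llm-huggingface "Write a one-line docstring for a function that reverses a list.")

;; Same entry point, but pinning the temperature and token limit explicitly.
(rcd-llm-huggingface "Summarize the GNU Emacs manual in one sentence."
                     nil nil 0.0 256)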

The whole library then does everything I need to interact with
LLMs. Emacs is for text, LLMs are for text; it must go hand in hand.

But running LLM generation itself is not yet workable through Emacs
Lisp, even though it is surely not impossible; it is just that nobody
has yet tried to create it that way.

-- 
Jean Louis



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-16 14:55   ` Tomáš Petit
  2024-12-16 16:26     ` Jean Louis
@ 2024-12-16 17:38     ` Jean Louis
  2024-12-17  6:24       ` Tomáš Petit
  1 sibling, 1 reply; 18+ messages in thread
From: Jean Louis @ 2024-12-16 17:38 UTC (permalink / raw)
  To: Tomáš Petit; +Cc: help-gnu-emacs

* Tomáš Petit <petitthomas34@gmail.com> [2024-12-16 17:57]:
> Greetings,
> 
> wouldn't Common Lisp or some Scheme dialect be better suited for this job
> instead of Emacs Lisp?

A ChatGPT clone, in 3000 bytes of C, backed by GPT-2 (2023) (carlini.com)
https://nicholas.carlini.com/writing/2023/chat-gpt-2-in-c.html

I just think that such an example could be implemented through Emacs
Lisp and the use of:

tromey/emacs-ffi: FFI for Emacs:
https://github.com/tromey/emacs-ffi

All of that code could be converted, I guess, so that it is
programmed from within Emacs Lisp.
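
For illustration, the glue that module enables looks roughly like
this; a minimal sketch assuming the `define-ffi-library' and
`define-ffi-function' macros from its README, so the exact signatures
should be verified against the module's documentation:

;; Assumes the emacs-ffi dynamic module has been built and is on load-path.
(require 'ffi)

;; Bind the C math library and expose one function from it.
(define-ffi-library libm "libm")
(define-ffi-function my-c-cos "cos" :double (:double) libm)

(my-c-cos 0.0)   ; => 1.0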

-- 
Jean Louis



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-16 17:38     ` Jean Louis
@ 2024-12-17  6:24       ` Tomáš Petit
  2024-12-17 10:29         ` Jean Louis
  2024-12-17 10:34         ` Jean Louis
  0 siblings, 2 replies; 18+ messages in thread
From: Tomáš Petit @ 2024-12-17  6:24 UTC (permalink / raw)
  To: help-gnu-emacs

Right, that is of course entirely possible. I was thinking more along 
the lines of projects like

https://antik.common-lisp.dev/

or

https://github.com/melisgl/mgl

and generally building the entire machinery natively in Elisp, for which 
I find CL just a better option. But yeah, calling LLMs like that is 
viable as well.


On 12/16/24 6:38 PM, Jean Louis wrote:
> * Tomáš Petit <petitthomas34@gmail.com> [2024-12-16 17:57]:
>> Greetings,
>>
>> wouldn't Common Lisp or some Scheme dialect be better suited for this job
>> instead of Emacs Lisp?
> A ChatGPT clone, in 3000 bytes of C, backed by GPT-2 (2023) (carlini.com)
> https://nicholas.carlini.com/writing/2023/chat-gpt-2-in-c.html
>
> I just think that such an example could be implemented through Emacs
> Lisp and the use of:
>
> tromey/emacs-ffi: FFI for Emacs:
> https://github.com/tromey/emacs-ffi
>
> All of that code could be converted, I guess, so that it is
> programmed from within Emacs Lisp.
>



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-17  6:24       ` Tomáš Petit
@ 2024-12-17 10:29         ` Jean Louis
  2024-12-17 10:34         ` Jean Louis
  1 sibling, 0 replies; 18+ messages in thread
From: Jean Louis @ 2024-12-17 10:29 UTC (permalink / raw)
  To: Tomáš Petit; +Cc: help-gnu-emacs

* Tomáš Petit <petitthomas34@gmail.com> [2024-12-17 09:26]:
> Right, that is of course entirely possible. I was thinking more along the
> lines of projects like
> 
> https://antik.common-lisp.dev/
> 
> or
> 
> https://github.com/melisgl/mgl
> 
> and generally building the entire machinery natively in Elisp, for which I
> find CL just a better option. But yeah, calling LLMs like that is viable as
> well.

After a short review, it seems much is there at the mgl link, CUDA too,
sure! Very nice. I can't get into it quickly. Surely it is possible to
do it with Emacs Lisp, probably with modules for CUDA access.

-- 
Jean Louis



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-17  6:24       ` Tomáš Petit
  2024-12-17 10:29         ` Jean Louis
@ 2024-12-17 10:34         ` Jean Louis
  2024-12-17 11:40           ` Tomáš Petit
  1 sibling, 1 reply; 18+ messages in thread
From: Jean Louis @ 2024-12-17 10:34 UTC (permalink / raw)
  To: Tomáš Petit; +Cc: help-gnu-emacs

* Tomáš Petit <petitthomas34@gmail.com> [2024-12-17 09:26]:
> Right, that is of course entirely possible. I was thinking more along the
> lines of projects like
> 
> https://antik.common-lisp.dev/
> 
> or
> 
> https://github.com/melisgl/mgl
> 
> and generally building the entire machinery natively in Elisp, for which I
> find CL just a better option. But yeah, calling LLMs like that is viable as
> well.

Attempts in Emacs Lisp:

narendraj9/emlib: Machine Learning in Emacs Lisp
https://github.com/narendraj9/emlib

Building and Training Neural Networks in Emacs Lisp
https://www.scss.tcd.ie/~sulimanm/posts/nn-introduction.html
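
Even without those libraries, the basic building block is small enough
to write directly in Emacs Lisp. A toy, self-contained sketch of a
single neuron, just for illustration (this is plain Lisp, not emlib's
API):

(require 'cl-lib)

(defun toy-sigmoid (x)
  "Logistic activation function."
  (/ 1.0 (+ 1.0 (exp (- x)))))

(defun toy-neuron (inputs weights bias)
  "Fire one neuron: sigmoid of the dot product of INPUTS and WEIGHTS plus BIAS."
  (toy-sigmoid (+ bias (apply #'+ (cl-mapcar #'* inputs weights)))))

;; Two inputs with hand-picked weights.
(toy-neuron '(1.0 0.0) '(2.0 -1.0) -0.5)   ; => ~0.82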

-- 
Jean Louis



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-17 10:34         ` Jean Louis
@ 2024-12-17 11:40           ` Tomáš Petit
  2024-12-17 21:35             ` Jean Louis
  0 siblings, 1 reply; 18+ messages in thread
From: Tomáš Petit @ 2024-12-17 11:40 UTC (permalink / raw)
  To: help-gnu-emacs

I clearly haven't done my due diligence because I wasn't aware of those 
cool little projects. It certainly looks fun; I'm not sure if Emacs Lisp
will ever attract enough attention, although I would personally hope for
Lisp (and its derivatives) to have a glorious return.

On 12/17/24 11:34 AM, Jean Louis wrote:
> * Tomáš Petit <petitthomas34@gmail.com> [2024-12-17 09:26]:
>> Right, that is of course entirely possible. I was thinking more along the
>> lines of projects like
>>
>> https://antik.common-lisp.dev/
>>
>> or
>>
>> https://github.com/melisgl/mgl
>>
>> and generally building the entire machinery natively in Elisp, for which I
>> find CL just a better option. But yeah, calling LLMs like that is viable as
>> well.
> Attempts in Emacs Lisp:
>
> narendraj9/emlib: Machine Learning in Emacs Lisp
> https://github.com/narendraj9/emlib
>
> Building and Training Neural Networks in Emacs Lisp
> https://www.scss.tcd.ie/~sulimanm/posts/nn-introduction.html
>



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-17 11:40           ` Tomáš Petit
@ 2024-12-17 21:35             ` Jean Louis
  2024-12-18  5:04               ` tomas
  2024-12-24 10:57               ` Andreas Röhler
  0 siblings, 2 replies; 18+ messages in thread
From: Jean Louis @ 2024-12-17 21:35 UTC (permalink / raw)
  To: Tomáš Petit; +Cc: help-gnu-emacs

* Tomáš Petit <petitthomas34@gmail.com> [2024-12-17 14:42]:
> I clearly haven't done my due diligence because I wasn't aware of those cool
> little projects. It certainly looks fun, not sure if Emacs Lisp will ever
> attract enough attention, although I would personally hope for Lisp (and its
> derivatives) to have a glorious return.

But one thing not to forget: let's not minimize the actual artificial
intelligence programmed over the years in various programming languages.

A probabilistic text generator is a bullshit generator. It cares zero
about the truth; it is a program. Far from real "intelligence".

ChatGPT is bullshit | Ethics and Information Technology
https://link.springer.com/article/10.1007/s10676-024-09775-5

LLMs have tremendous uses, they are useful, and there is no doubt about
it. But calling them "intelligence" and diminishing other types of
software is too much.

-- 
Jean Louis
ALL COMPUTER PROGRAMS EMBODY ASPECTS OF ARTIFICIAL INTELLIGENCE



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-17 21:35             ` Jean Louis
@ 2024-12-18  5:04               ` tomas
  2024-12-24 10:57               ` Andreas Röhler
  1 sibling, 0 replies; 18+ messages in thread
From: tomas @ 2024-12-18  5:04 UTC (permalink / raw)
  To: help-gnu-emacs; +Cc: Tomáš Petit

[-- Attachment #1: Type: text/plain, Size: 276 bytes --]

On Wed, Dec 18, 2024 at 12:35:50AM +0300, Jean Louis wrote:

[...]

> ChatGPT is bullshit | Ethics and Information Technology
> https://link.springer.com/article/10.1007/s10676-024-09775-5

A must-read, together with Harry Frankfurt's "On Bullshit".

Cheers
-- 
t

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 195 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-17 21:35             ` Jean Louis
  2024-12-18  5:04               ` tomas
@ 2024-12-24 10:57               ` Andreas Röhler
  2024-12-24 15:25                 ` Stefan Monnier via Users list for the GNU Emacs text editor
                                   ` (3 more replies)
  1 sibling, 4 replies; 18+ messages in thread
From: Andreas Röhler @ 2024-12-24 10:57 UTC (permalink / raw)
  To: help-gnu-emacs


Am 17.12.24 um 22:35 schrieb Jean Louis:
> ChatGPT is bullshit |

No, it isn't. We have a language problem, because we have something new.
We must split the notion of intelligence.

People tried to fly like a bird. Were they successful?

Not really.

But no bird is able to reach the moon.

LLMs are able to reason. With the amount of data they will be -- and
probably already are -- much stronger in reasoning/deduction than humans.

LLMs are creative, constructing new terms from reasoning.

They will be indispensable in science.


It all depends on context. With no context, humans too can't answer a
single question.




^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-24 10:57               ` Andreas Röhler
@ 2024-12-24 15:25                 ` Stefan Monnier via Users list for the GNU Emacs text editor
  2024-12-25 20:20                   ` Andreas Röhler
  2024-12-24 16:22                 ` Christopher Howard
                                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 18+ messages in thread
From: Stefan Monnier via Users list for the GNU Emacs text editor @ 2024-12-24 15:25 UTC (permalink / raw)
  To: help-gnu-emacs

> LLMs are able to reason. With the amount of data they will be -- and
> probably already are -- much stronger in reasoning/deduction than humans.

??

AFAIK no amount of extra data will fix their fundamental inability to
perform any kind of logical reasoning.

That doesn't mean we can't fix them to do that, of course, but it takes
something qualitatively different rather than mere quantity of data.

That's been known for years, and re-publicized recently by some Apple
team.  Can you point at a publication that argues convincingly otherwise?


        Stefan




^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-24 10:57               ` Andreas Röhler
  2024-12-24 15:25                 ` Stefan Monnier via Users list for the GNU Emacs text editor
@ 2024-12-24 16:22                 ` Christopher Howard
  2024-12-26  6:06                   ` Joel Reicher
  2024-12-24 21:27                 ` Jean Louis
  2024-12-24 21:58                 ` Is ChatGPT bullshit? tomas
  3 siblings, 1 reply; 18+ messages in thread
From: Christopher Howard @ 2024-12-24 16:22 UTC (permalink / raw)
  To: Andreas Röhler; +Cc: help-gnu-emacs

Andreas Röhler <andreas.roehler@easy-emacs.de> writes:

> LLMs are able to reason. With the amount of data they will be -- and
> probably already are -- much stronger in reasoning/deduction than
> humans.
>
> LLMs are creative, constructing new terms from reasoning.

The point of the previous article was to demonstrate that LLMs do not reason, or more particularly, do not attempt to determine truth. They simply try to calculate the next most likely and natural thing you expect to see in a flow of words. Sometimes you get something true out of that; oftentimes you get something that is either false or shallow.

Explain how you go from that to saying that LLMs are doing reasoning and deduction, and are creative.

There are software programs that attempt to do deduction and reasoning, by connecting propositions and arguments to determine truth and falsity. But as far as I understand, that is not what LLMs do.

-- 
Christopher Howard



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-24 10:57               ` Andreas Röhler
  2024-12-24 15:25                 ` Stefan Monnier via Users list for the GNU Emacs text editor
  2024-12-24 16:22                 ` Christopher Howard
@ 2024-12-24 21:27                 ` Jean Louis
  2024-12-24 21:58                 ` Is ChatGPT bullshit? tomas
  3 siblings, 0 replies; 18+ messages in thread
From: Jean Louis @ 2024-12-24 21:27 UTC (permalink / raw)
  To: Andreas Röhler; +Cc: help-gnu-emacs

* Andreas Röhler <andreas.roehler@easy-emacs.de> [2024-12-24 13:59]:
> Am 17.12.24 um 22:35 schrieb Jean Louis:
> > ChatGPT is bullshit |

Thank you, Andreas, but did you read it?

The article about bullshit doesn't speak about it in an offensive
manner. It is a reasonable analysis of what it really is.

Let me quote from:
https://link.springer.com/article/10.1007/s10676-024-09775-5

> Because these programs cannot themselves be concerned with truth, and
> because they are designed to produce text that looks truth-apt without
> any actual concern for truth, it seems appropriate to call their
> outputs bullshit.

When something looks truth-apt just because it was statistically
pulled out of the database and presented nicely, does it mean the
program is concerned with truth? Or is there deception, illusion?

I have just asked my locally running text generator
(QwQ-LCoT-3B-Instruct.Q4_K_M.gguf) and look:

Depending on the context and the intended audience, it might be more
appropriate to use more neutral language to convey the same
message. For example, one could say:

"Because these programs cannot themselves be concerned with truth and
because they are designed to produce text that looks truth-apt without
any actual concern for truth, it seems appropriate to call their
outputs deceptive or untruthful."

But I do think that the word "bullshit" better conveys the importance
of understanding these text generators.

They are very deceptive.

A person asked me today if the organization from the 2024 Deadpool
movie that can look into the future exists in reality. People are
easily deceived.

I am a heavy user of LLM text generation, as it is for improving
information meant for consulting, sales and marketing. In general, I
am making sure of the text's expressiveness.

Today there were 270 requests, and I sometimes let the computer run to
provide summaries for 1000+ different texts.

That experience, and searching on the Internet, tells me that I may be
lucky, and I am lucky many times per day, but many times also not. I
get quite wrong information.

It requires me to develop a skill to see through the deception and
recognize which pieces of information may be truthful and which are
fake. There are so many models, and I hope you try them en masse so
that you understand what I mean.

I understand responses from an LLM as:

- proposals of possibly truthful information
- well-expressed possibilities of truth
- well-documented proposals

I do not consider them "authentic" for the purposes of my research. I
consider them excerpts which I have to review, analyse and approve.

There is nothing "intelligent" there; there is absolutely no thinking,
just appearance, a mirror of human behavior. It is deceptive.

> No, it isn't. We have a language problem, because we have something
> new.

I am trying to understand what you mean. I think you are saying there
is some new technology, that we are not properly recognizing it, and
that it will be developed further in the future.

I am sure it is very useful; if it were not useful to me, I would not
have used it 4736 times in the last 12 months.

I am fascinated with it. I can't get rid of it; it is helping me so much
in life. It is a true money maker. I am just thinking how to get more
GPU power, better hardware, how to run it locally. On that side of the
fascination I am fully with you.

Not that it can think.

It has no morals, no ethics.

Only the illusion of them.

It doesn't care what it says, as long as the math inside tells it to
give some output.

> We must split the notion of intelligence.
> 
> People tried to fly like a bird. Were they successful?
> 
> Not really.
> 
> But no bird is able to reach the moon.

I think you mean that no LLM is to become like a human. That, for sure,
not; though in some future, with more integration, a machine could
become very independent and look like a living being.

> LLMs are able to reason. With the amount of data they will be -- and
> probably already are -- much stronger in reasoning/deduction than
> humans.

Well there we are, that is where I can't agree with you. 

I can agree with "deduction" or "inference", as a computer which has
access to a large database has always been very assistive to humans;
that is the reason why, for example, we use documentation search
features or search engines.

But that an LLM is able to reason, that I can't support.

Maybe you wish to say that the LLM gives an illusion of reasoning? I am
not sure what you wish to say.

Deduction is not necessarily reasoning; it is math.

Just from this day alone I could give you so many examples proving that
an LLM cannot reason. It is a program that gives probability-based
proposals based on inputs and available data.

I was asking today how I can turn on GPU usage in Inkscape, and I got
positive answers, even though such a feature apparently doesn't exist.

You have to use many models to get the feeling.

Back to quotes:
https://link.springer.com/article/10.1007/s10676-024-09775-5

> We argue that at minimum, the outputs of LLMs like ChatGPT are soft
> bullshit: bullshit–that is, speech or text produced without concern
> for its truth–that is produced without any intent to mislead the
> audience about the utterer’s attitude towards truth. We also
> suggest, more controversially, that ChatGPT may indeed produce hard
> bullshit: if we view it as having intentions (for example, in virtue
> of how it is designed), then the fact that it is designed to give
> the impression of concern for truth qualifies it as attempting to
> mislead the audience about its aims, goals, or agenda.

It is not "AI". Far from it. Even as the original term I don't agree
with it. It is an attempt by people to make AI, but it is not yet any
kind of "intelligence". Artificial it is also not; it is a mirror of
human intelligence, it is part of nature and arose from the natural
development of humans, it is not something separate from humans, it is
our product, not something artificial.

I think every Emacs user who has ever used M-x doctor should understand
it. It is actually the first exercise for understanding what an LLM is.

> LLMs are creative, constructing new terms from reasoning.

A human is creative. A program is only as creative as the human is. A
program alone is not creative. It does what the human directed. Turn
off the electricity and show me creativity then!

> They will be indispensable in science.

I agree with that, but they cannot reason.

If a program were capable of reasoning, one might wonder why it
wouldn't wake up and start independently thinking about how to improve
humanity, reducing our efforts and enhancing our lives. Instead, it
merely generates text statistically and dispassionately, completely
devoid of emotional or mindful connection.

> It all depends on context. With no context, humans too can't answer a
> single question.

A baby knows where the mother is without thinking, or even opening its eyes.

The so-called "AI" often doesn't know what time it is, or where its
author is.

https://duckduckgo.com/?t=ftsa&q=llm+cannot+reason&ia=web

Back in 1984 we were playing computer games, rockets, and were fully
under the impression that there was something reasoning against us. It
was a program, deceptive, but a program, not a thinker.

-- 
Jean Louis



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Is ChatGPT bullshit?
  2024-12-24 10:57               ` Andreas Röhler
                                   ` (2 preceding siblings ...)
  2024-12-24 21:27                 ` Jean Louis
@ 2024-12-24 21:58                 ` tomas
  3 siblings, 0 replies; 18+ messages in thread
From: tomas @ 2024-12-24 21:58 UTC (permalink / raw)
  To: Andreas Röhler; +Cc: help-gnu-emacs

[-- Attachment #1: Type: text/plain, Size: 952 bytes --]

On Tue, Dec 24, 2024 at 11:57:08AM +0100, Andreas Röhler wrote:
> 
> Am 17.12.24 um 22:35 schrieb Jean Louis:
> > ChatGPT is bullshit |

Most definitely, yes. The already linked Uni Glasgow article [1] makes
a compelling case.

> No, it isn't. We have a language problem, because we have something new.
> We must split the notion of intelligence.
> 
> People tried to fly like a bird. Were they successful?
> 
> Not really.
> 
> But no bird is able to reach the moon.
> 
> LLMs are able to reason. With the amount of data they will be -- and
> probably already are -- much stronger in reasoning/deduction than humans.

No. They can babble /as if/ they were reasoning.

Jeez. I thought we had cleared that up ca. 1966 [2].


Cheers

[1] https://link.springer.com/article/10.1007/s10676-024-09775-5?error=cookies_not_supported&code=a86869b1-3f10-4599-aeb1-f482d3a4d2e2#Aff1
[2] https://en.wikipedia.org/wiki/ELIZA_effect

-- 
t

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 195 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-24 15:25                 ` Stefan Monnier via Users list for the GNU Emacs text editor
@ 2024-12-25 20:20                   ` Andreas Röhler
  0 siblings, 0 replies; 18+ messages in thread
From: Andreas Röhler @ 2024-12-25 20:20 UTC (permalink / raw)
  To: help-gnu-emacs

Hi all,

maybe let's forget ChatGPT in its current state, but refer to this phrase:

'''A probabilistic text generator is a bullshit generator. It cares zero
about the truth; it is a program. Far from real "intelligence".'''

IMO that's wrong from the human side as well as from the machine side.

Humans consider themselves intelligent. But how to prove that?

Humans can count. Really? If yes, why is the first element of an index
in famous programming languages designated as zero instead of one?

OTOH machines, asked: '''in French: "inversion accusatoire"'''

Response was:

'''The "inversion accusatoire" in French is called "passif" (passive) or
"inversion de sujet" (subject inversion) in grammar.

In Odoo, the accusatory inversion refers to the word order of a
sentence being inverted with respect to the general rule. In French,
the general rule is that the subject of the sentence must come first,
followed by the verb and then the object.

Example of a normal sentence: Le client achète un produit. (The
customer buys a product.)

Accusatory inversion: Un produit achète le client. (A product buys the
customer.)

In Odoo, the accusatory inversion is used to format the information in
reports or views, for example to define the order of fields in a form.

I hope this helped!'''

Obviously the context assumed by the LLM was false. But none was delivered.

Inside the false context, the conclusion is quite interesting, because
"acheter" -- buying -- wasn't mentioned at all. In an abstract view, the
reasoning might well make sense. There are other remarks in this
response which indicate the model was able to abstract over the matter.


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: Enhancing ELisp for AI Work
  2024-12-24 16:22                 ` Christopher Howard
@ 2024-12-26  6:06                   ` Joel Reicher
  0 siblings, 0 replies; 18+ messages in thread
From: Joel Reicher @ 2024-12-26  6:06 UTC (permalink / raw)
  To: Christopher Howard; +Cc: Andreas Röhler, help-gnu-emacs

Christopher Howard <christopher@librehacker.com> writes:

> Andreas Röhler <andreas.roehler@easy-emacs.de> writes:
>
>> LLMs are able to reason. With the amount of data they will be
>> -- and probably already are -- much stronger in
>> reasoning/deduction than humans.
>>
>> LLMs are creative, constructing new terms from reasoning.
>
> The point of the previous article was to demonstrate that LLMs 
> do not reason, or more particularly, attempt to determine 
> truth. They simply try to calculate what is the next most likely 
> and natural thing you expect to see in a flow of 
> words. Sometimes you get something true out of that, often times 
> you get something that is either false or shallow.

I'm really hesitant to contribute to a thread that's probably 
off-topic, but I'd like to suggest that an LLM's output is perhaps 
best thought of as quoted text, so it is neither true nor false.

The quotes are only removed when a reader reads it.

Regards,

        - Joel



^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2024-12-26  6:06 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <7290780.2375960.1734348492938.ref@mail.yahoo.com>
2024-12-16 11:28 ` Enhancing ELisp for AI Work Andrew Goh via Users list for the GNU Emacs text editor
2024-12-16 13:39   ` Jean Louis
2024-12-16 14:55   ` Tomáš Petit
2024-12-16 16:26     ` Jean Louis
2024-12-16 17:38     ` Jean Louis
2024-12-17  6:24       ` Tomáš Petit
2024-12-17 10:29         ` Jean Louis
2024-12-17 10:34         ` Jean Louis
2024-12-17 11:40           ` Tomáš Petit
2024-12-17 21:35             ` Jean Louis
2024-12-18  5:04               ` tomas
2024-12-24 10:57               ` Andreas Röhler
2024-12-24 15:25                 ` Stefan Monnier via Users list for the GNU Emacs text editor
2024-12-25 20:20                   ` Andreas Röhler
2024-12-24 16:22                 ` Christopher Howard
2024-12-26  6:06                   ` Joel Reicher
2024-12-24 21:27                 ` Jean Louis
2024-12-24 21:58                 ` Is ChatGPT bullshit? tomas

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).