From: Jean Louis <bugs@gnu.support>
To: "Andreas Röhler" <andreas.roehler@easy-emacs.de>
Cc: help-gnu-emacs@gnu.org
Subject: Re: Enhancing ELisp for AI Work
Date: Wed, 25 Dec 2024 00:27:15 +0300
Message-ID: <Z2snM67bZCUI5Szx@lco2>
In-Reply-To: <6c53521e-4aa6-40d1-b4a5-0e00989ad201@easy-emacs.de>

* Andreas Röhler <andreas.roehler@easy-emacs.de> [2024-12-24 13:59]:
> On 17.12.24 at 22:35, Jean Louis wrote:
> > ChatGPT is bullshit |

Thanks, Andreas, but have you read it?

The article about bullshit doesn't speak about it in an offensive
manner. It is a reasonable analysis of what it really is.

Let me quote from:
https://link.springer.com/article/10.1007/s10676-024-09775-5

> Because these programs cannot themselves be concerned with truth, and
> because they are designed to produce text that looks truth-apt without
> any actual concern for truth, it seems appropriate to call their
> outputs bullshit.

When something looks truth-apt just because it was statistically
pulled out of a database and presented nicely, does that mean the
program is concerned with truth? Or is there deception, an illusion?

I have just asked my locally running text generator
(QwQ-LCoT-3B-Instruct.Q4_K_M.gguf), and look:

Depending on the context and the intended audience, it might be more
appropriate to use more neutral language to convey the same
message. For example, one could say:

"Because these programs cannot themselves be concerned with truth and
because they are designed to produce text that looks truth-apt without
any actual concern for truth, it seems appropriate to call their
outputs deceptive or untruthful."

But I do think that the word "bullshit" better conveys the importance
of understanding these text generators.
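
For anyone who wants to reproduce this kind of local query from
within Emacs, here is a minimal sketch. It assumes a llama.cpp server
is already running on localhost port 8080 with the model loaded; the
port, the /completion endpoint, and the helper name my-llm-complete
are my assumptions based on llama.cpp defaults, not anything standard:

;; Minimal sketch: ask a locally running llama.cpp server
;; (e.g. started as: llama-server -m QwQ-LCoT-3B-Instruct.Q4_K_M.gguf)
;; for a completion of PROMPT and return the generated text.
(require 'url)
(require 'json)

(defun my-llm-complete (prompt)
  "Send PROMPT to the local /completion endpoint; return the text."
  (let* ((url-request-method "POST")
         (url-request-extra-headers
          '(("Content-Type" . "application/json")))
         (url-request-data
          (encode-coding-string
           (json-encode `((prompt . ,prompt) (n_predict . 256)))
           'utf-8))
         (buf (url-retrieve-synchronously
               "http://localhost:8080/completion")))
    (with-current-buffer buf
      (goto-char (point-min))
      (re-search-forward "\n\n")      ; skip the HTTP response headers
      (cdr (assq 'content (json-read))))))

;; Example, evaluated in *scratch*:
;; (my-llm-complete "Is the word bullshit appropriate in an article?")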

They are very deceptive.

A person asked me today if the organization from the 2024 Deadpool
movie that can look into the future exists in reality. People are
easily deceived.

I am a heavy user of LLM text generation, as I use it to improve
information meant for consulting, sales, and marketing. In general, I
use it to make sure the text is expressive.

Today alone there were 270 requests, and I sometimes let the computer
run to provide summaries for 1000+ different texts.
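
As an illustration of such a batch run, a loop like the following
would do it, reusing the hypothetical my-llm-complete helper sketched
above; the prompt wording and the .summary file naming are just
examples:

;; Sketch: summarize every .txt file in DIR, writing each summary
;; next to the original as FILE.summary.  Relies on the
;; my-llm-complete helper sketched earlier (hypothetical).
(defun my-llm-summarize-directory (dir)
  (dolist (file (directory-files dir t "\\.txt\\'"))
    (let ((summary (my-llm-complete
                    (concat "Summarize the following text:\n\n"
                            (with-temp-buffer
                              (insert-file-contents file)
                              (buffer-string))))))
      (with-temp-file (concat file ".summary")
        (insert summary)))))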

That experience, and searching on the Internet, tells me that I may
get lucky, and I do get lucky many times per day, but many times I do
not. I get quite wrong information.

It requires me to develop a skill to see through the deception and
recognize which pieces of information may be truthful and which are
fake. There are very many models, and I hope you try them in bulk so
that you understand what I mean.

I understand responses from an LLM as:

- proposals of the truthful information
- well expressed possibilities of truth
- well documented proposals

I do not consider them "authentic" for the purposes of my research. I
consider them excerpts which I have to review, analyse, and approve.

There is nothing "intelligent" there; there is absolutely no thinking,
just appearance, a mirror of human behavior. It is deceptive.

> No, it isn't. We have a language problem, because we have something
> new.

I am trying to understand what you mean. I think you are saying that
there is some new technology, that we are not properly recognizing
it, and that it will be developed further in the future.

I am sure it is very useful; if it were not useful to me, I would not
have used it 4736 times in the last 12 months.

I am fascinated with it. I can't get rid of it; it is helping me so
much in life. It is a true money maker. I am just thinking how to get
more GPU power, better hardware, how to run it locally. On that side
of the fascination I am fully with you.

Not that it can think.

It has no morals, no ethics.

Only illusion of it.

It doesn't care what it says, as long as the math inside tells it
that it should give some output.

> Must split the notion of intelligence.
> 
> People tried to fly like a bird. Were they successful?
> 
> Not really.
> 
> But no bird is able to reach the moon.

I think you mean that no LLM will become like a human. That it surely
will not, though in some future, with more integration, a machine
could become very independent and look like a living being.

> LLMs are able to reason. With the amount of data they will be -- and
> probably already are -- much stronger in reasoning/deduction than
> humans.

Well, there we are: that is where I can't agree with you.

I can agree with "deduction" or "inference", as a computer with
access to a large database has always been very assistive to humans;
that is the reason why, for example, we use documentation search
features or search engines.

But that an LLM is able to reason, that I cannot support.

Maybe you wish to say that an LLM gives the illusion of reasoning? I
am not sure what you wish to say.

Deduction is not necessarily reasoning; it is math.

Just from this day alone I could give you many examples proving that
an LLM cannot reason. It is a program that gives probability-based
proposals based on inputs and available data.
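
To make concrete what "probability-based proposals" means, here is a
toy sketch of the core mechanism: given counts of which words were
observed to follow the current one, the next word is chosen by
weighted random sampling. The counts are invented for illustration,
and real models work over learned weights rather than raw counts, but
the point stands: the choice is arithmetic, not concern for truth.

;; Toy sketch of statistical text generation: choose the next
;; word by weighted random sampling over observed follower counts.
;; The counts in the example are invented for illustration.
(defun my-toy-next-word (followers)
  "FOLLOWERS is an alist of (WORD . COUNT); sample one WORD."
  (let ((r (random (apply #'+ (mapcar #'cdr followers)))))
    (catch 'done
      (dolist (pair followers)
        (setq r (- r (cdr pair)))
        (when (< r 0)
          (throw 'done (car pair)))))))

;; Example: suppose after "GPU" the data contains these continuations:
;; (my-toy-next-word '(("acceleration" . 5) ("usage" . 3) ("driver" . 2)))
;; Whatever comes out is a statistically plausible continuation,
;; whether or not the resulting sentence is true.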

Today I asked how I can turn on GPU usage in Inkscape, and I got
positive answers, even though such a feature apparently doesn't
exist.

You have to use many models to get a feeling for it.

Back to quotes:
https://link.springer.com/article/10.1007/s10676-024-09775-5

> We argue that at minimum, the outputs of LLMs like ChatGPT are soft
> bullshit: bullshit–that is, speech or text produced without concern
> for its truth–that is produced without any intent to mislead the
> audience about the utterer’s attitude towards truth. We also
> suggest, more controversially, that ChatGPT may indeed produce hard
> bullshit: if we view it as having intentions (for example, in virtue
> of how it is designed), then the fact that it is designed to give
> the impression of concern for truth qualifies it as attempting to
> mislead the audience about its aims, goals, or agenda.

It is not "AI". Far from it. I don't even agree with the original
term. It is an attempt by people to make AI, but it is not yet any
kind of "intelligence". Artificial it is also not: it is a mirror of
human intelligence, part of nature, arisen from the natural
development of humans. It is not something separate from humans; it
is our product, not something artificial.

I think every Emacs user who has ever used M-x doctor should
understand this. It is actually the first exercise toward
understanding what an LLM is.

> LLMs are creative, constructing new terms from reasoning.

A human is creative. A program is only as creative as the human
behind it. A program alone is not creative; it does what the human
directed. Turn off the electricity and show me creativity then!

> They will be indispensable in science.

I agree with that, but they cannot reason.

If a program were capable of reasoning, one might wonder why it
wouldn't wake up and start independently thinking about how to improve
humanity, reducing our efforts and enhancing our lives. Instead, it
merely generates text statistically and dispassionately, completely
devoid of emotional or mindful connection.

> All depends on context. With no context, humans too can't answer a
> single question.

A baby knows where its mother is without thinking, or even without
opening its eyes.

The quoted "AI" often doesn't know what the time is, or where its
author is.

https://duckduckgo.com/?t=ftsa&q=llm+cannot+reason&ia=web

Back in 1984 we were playing computer games with rockets, and I was
fully under the impression that there was something reasoning against
me. It was a program: deceptive, but a program, not a thinker.

-- 
Jean Louis


