From: Jean Louis <bugs@gnu.support>
To: "Andreas Röhler" <andreas.roehler@easy-emacs.de>
Cc: emacs-tangents@gnu.org
Subject: Re: Enhancing ELisp for AI Work
Date: Thu, 26 Dec 2024 11:37:51 +0300	[thread overview]
Message-ID: <Z20V34g4SxCumOB5@lco2> (raw)
In-Reply-To: <112da2f7-eb5a-4112-9be0-29fa3d73045c@easy-emacs.de>

Okay, let us move this to the Emacs Tangents mailing list.

In general I find your writings interesting!

I was thinking the same myself, that it is a great thing giving me
accurate results, until I verified them! Many things were simply invented!

Then, after verifying how it works, I could understand it, and I
realized that it has no intelligence.

* Andreas Röhler <andreas.roehler@easy-emacs.de> [2024-12-25 23:22]:
> maybe let's forget ChatGPT in its current state, but refer to this phrase:
> 
> '''A probabilit text generator is bullshit generator. It cares zero of
> the truth, it is program. Far from the real "intelligence".'''
> 
> IMO that's wrong from the human as from the machine side.

I think you wanted to say that it is the same for humans, that humans
also do not care about the truth.

And yes! That is the problem most of the time with humans: they mimic
the expressions, the "text", the talk, the communication of other
humans. Not everybody is thinking! But almost everybody is able to
mimic. And many, many among us do not care about the truth.

Though each of us has the capability to care about the truth.

The capability to care about the truth, or to discover the truth even
without being trained, is maybe one of the traits of natural intelligence.

> Human consider themself as intelligent. But how to proof that?

No matter the level of intelligence, we can observe in small children
that they start solving problems. Nobody needs to teach them; they
start observing and come to solutions. Isn't that one of the major
proofs?

> Humans can count. Really? If yes, why the first element of an index
> in famous programming languages is designated as Zero instead of
> One?

Alright, though counting may not be the basis of intelligence. A person
doesn't need to count or know how to write to be intelligent.

We can't say we are all equally intelligent; many among the humans on
this planet are on the level of the animal.

> OTOH machines, asked: '''in french: "inversion accusatoire"'''
> 
> Response was:
> 
> '''La "inversion accusatoire" en français est appelée "passif" ou "inversion
> de sujet" en grammaire.
> 
> En Odoo, l'inversion accusatoire se réfère à l'ordre des mots dans une
> phrase qui est inversé par rapport à la règle générale. En français, la
> règle générale est que le sujet de la phrase doit être mis en premier,
> suivie du verbe et ensuite de l'objet.
> 
> Exemple de phrase normale : Le client achète un produit.
> 
> Inversion accusatoire : Un produit achète le client.
> 
> En Odoo, l'inversion accusatoire est utilisée pour formatter les
> informations dans les rapports ou les vues, par exemple, pour définir
> l'ordre des champs dans une formulaire.
> 
> J'espère que cela vous a aidé !'''

Here is the translation:

> The "inversion accusatoire" in French is called "passive" or "subject inversion" in grammar.
> 
> In Odoo, the inversion accusatoire refers to the word order in a sentence that is reversed compared to the general rule. In French, the general rule is that the subject of the sentence must be placed first, followed by the verb, and then the object.
> 
> Example of a normal sentence: The customer buys a product.
> 
> Accusative inversion: A product buys the customer.
> 
> In Odoo, the accusative inversion is used to format information in reports or views, for example, to define the order of fields in a form.

I know Italian, where it is often used as a rule, and I understand that
type of expression in other languages; it may be used in German too.
And the computer there is FABRICATING rather than abstracting.

I understand the illusion!

Somehow, since I first started using it, and it is good that I started
more than a year ago, I got the right impression that it doesn't
reason; it generates text by statistics.

Why do I say it was important to start early? Because by starting early
I could experience what nonsense it gives me!

Fake books, fake references to written books, fake names of actors who
acted in fake movies. You name it!

That is where I realized that the underlying "reasoning" is not there;
it is fabricated.

Fabrication is making things up; it should not be called
hallucination. Professionals are warning the world in their papers
that computers do not hallucinate.

Artificial Hallucinations in ChatGPT: Implications in Scientific Writing | Cureus:
https://www.cureus.com/articles/138667-artificial-hallucinations-in-chatgpt-implications-in-scientific-writing#!/

ChatGPT, Bing and Bard Don’t Hallucinate. They Fabricate - Bloomberg:
https://www.bloomberg.com/news/newsletters/2023-04-03/chatgpt-bing-and-bard-don-t-hallucinate-they-fabricate

Computers may confabulate, fabricate, or compute, but we can't say they
hallucinate, as that is akin only to a human or a living being.

Computers in fact only compute what their authors wanted.

Anthropomorphize (verb): to attribute human characteristics,
qualities, or behaviors to non-human entities, such as animals,
objects, or ideas. This means giving human-like qualities, emotions,
or intentions to things that are not human.

Anthropomorphizing software in computing contexts can be a way to
express emotions about how tools perform.

"My Emacs disturbed me" -- though Emacs does nothing without human
instructing him. It should be "an example" instead of "It is example". The correct sentence would be:

It is an example of anthropomorphizing.

- Emacs is getting tired, I think I need to restart it to free up some
  memory.

- I taught Emacs a new trick by writing a custom elisp function to
  automate a task.
  
- Emacs is complaining about the formatting of my code.

- Emacs is playing tricks on me, it keeps auto-indenting my code in
  weird ways.

I was always bringing attention to cases where anthropomorphizing was
used in the context of releasing the human operator or programmer from
responsibility.

- can't send money, "network problem" -- that is a common excuse,
  though I know someone is always responsible; it wasn't rain falling
  down uncontrollably, it was operators lacking skills;

- the computer did it! -- when operators and the truly responsible
  people use anthropomorphizing to get rid of their causative
  responsibility;

Often people do it unconsciously. It is interesting, but it is
important for us, who think, to recognize the fact that people use
anthropomorphizing in their daily lives.

To say that a Large Language Model (LLM) "hallucinates" is yet another
public relations stunt that wishes to say computers are alive.

Could the "Open"Ai be truthful to the public and not anthropomorphize
their products? Yes, they could. But they do not want.

Why? Because they want to release themselves from responsibility, from
the frustration they have caused, from the misunderstandings they have
generated.

They could say that the computer is producing nonsense because it
doesn't have any intelligence; it is just a mathematical program
computing and spitting text out without care for the truth.

But how "Open"AI can say that? It is against their promotional
strategy, their company name contains "AI" and they build product by
deceiving people there is some kind of "intelligence" there.  Probably
there was never any intelligence in computer so far, it is all the way
of sales and marketing. How else to make money?

It is good that the scientists writing those papers are not financed by
those large corporations; otherwise we would get true confusion.

> the context assumed by the LLM was false. But none was delivered.
> 
> Inside the false context, the conclusion is quite
> interesting. Because ‘achetere’ -- buying -- wasn't mentioned at
> all. In an abstract view, the reasoning might well have sense. There
> are other remarks in this response, which indicate the model was
> able to abstract over the matter.

I do not think there was any conclusion.

Just writing "conclusion" does not make it a conclusion. Computer
software like an LLM is eager to pretend; it was programmed to write
what is statistically written by people and what it got fed from
datasets.
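
Here is a toy sketch in Emacs Lisp of what "writing what is
statistically written" means; the table and the words are made up,
only for illustration:

  ;; Toy "probability text generator": the next word is chosen only by
  ;; which words followed the previous word in the (made-up) data.
  ;; Truth plays no role anywhere in this computation.
  (defun toy-next-word (word table)
    "Pick a random successor of WORD from TABLE, an alist of (WORD . SUCCESSORS)."
    (let ((successors (cdr (assoc word table))))
      (when successors
        (nth (random (length successors)) successors))))

  (let ((table '(("the"  . ("cat" "moon" "truth"))
                 ("cat"  . ("sat" "flew"))
                 ("moon" . ("sat" "is"))))
        (word "the")
        (sentence (list "the")))
    (dotimes (_ 4)
      (setq word (or (toy-next-word word table) "the"))
      (push word sentence))
    (mapconcat #'identity (nreverse sentence) " "))
  ;; ⇒ e.g. "the cat flew the moon" -- it looks like language,
  ;; with no regard for facts.

An LLM computes with a vastly larger statistical model, but likewise
without any check against the truth.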

But it can't learn. It is a machine that calculates. It can accept
data, store it, process it by program, and give results. But it cannot
learn.

We can only anthropomorphize it and say "it learned", "it got
trained", as we do not have better words.

Can LLM software do inference? I don't think so!

It can compute and give something similar to inference, though never
true inference. After all, we have been doing the same with computers
for many years.

It is just another anthropomorphized term. But we have to use it; it
is in a different context. Though it is not a person who has the true
capability to "infer". It is not capable of it.

A computer cannot make conclusions. (+ 2 2) ➜ 4 -- do you think that
Emacs here made the conclusion that 2 plus 2 is four? It didn't.

We are anthropomorphizing when we say that the computer made conclusions.

In fact, it was electronic switching of bits and bytes.
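
A minimal Emacs Lisp illustration of that point; evaluate it in the
*scratch* buffer:

  ;; Emacs simply applies the function stored in the symbol `+'
  ;; to its arguments; no judgement about truth is involved.
  (funcall (symbol-function '+) 2 2)
  ;; ⇒ 4

  ;; The same machinery will just as happily "conclude" whatever
  ;; it is instructed to compute, meaningful or not:
  (funcall (symbol-function 'concat) "mud, " "mud, " "mud")
  ;; ⇒ "mud, mud, mud"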

Just take an abacus, for example: by moving those balls on the wooden
abacus, the operator will get results, but did the abacus make
conclusions about the result? Or was it just a tool?

By programming a traffic light, when it shows green, did the traffic
light make the conclusion that it should show green to people? It is a
tool; it is programmed to do that. The programmer made the conclusion
that it should work in a specific manner, not the traffic light.

An air conditioner turns on and off based on the temperature in the
room. But did it make the conclusion that it must turn on? That it
must turn off?
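
The whole "decision" can be written down as a programmed rule; here is
a hypothetical sketch in Emacs Lisp, where the threshold is the
programmer's choice, not the machine's:

  ;; Hypothetical thermostat rule: the programmer, not the machine,
  ;; decided that 25 degrees is the point at which to switch.
  (defun thermostat-step (room-temperature)
    (if (> room-temperature 25)
        'turn-on-cooling
      'turn-off-cooling))

  (thermostat-step 28)  ; ⇒ turn-on-cooling
  (thermostat-step 20)  ; ⇒ turn-off-cooling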

Of course we get into the deception.

Though there is no drawing of conclusions there!

I strongly suggest that everybody install:

ggerganov/llama.cpp: LLM inference in C/C++
https://github.com/ggerganov/llama.cpp

then install one of the low-end models, like:
QwQ-LCoT-3B-Instruct.Q4_K_M.gguf

It will work on 16 GB of RAM.

Then start interacting. You will see what I mean.
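
If you want to do it from inside Emacs, here is a minimal sketch,
assuming the llama-cli binary built from llama.cpp is on your PATH and
the model file is saved under ~/models/ (adjust both for your setup):

  ;; Minimal sketch: send one prompt to a local llama.cpp model and
  ;; show the raw output in a buffer.  Binary name, flags and model
  ;; path are assumptions about a typical llama.cpp build; the call
  ;; is synchronous and blocks Emacs until generation finishes.
  (defun my-llama-ask (prompt)
    "Run PROMPT through a local llama.cpp model and display the output."
    (interactive "sPrompt: ")
    (let ((buffer (get-buffer-create "*llama*")))
      (with-current-buffer buffer
        (erase-buffer))
      (call-process "llama-cli" nil buffer nil
                    "-m" (expand-file-name "~/models/QwQ-LCoT-3B-Instruct.Q4_K_M.gguf")
                    "-p" prompt
                    "-n" "256")
      (display-buffer buffer)))

Then M-x my-llama-ask and watch for a while what comes out.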

You will see that the model will start talking. For example, if there
was a conversation with a child (the model acting as a child;
anthropomorphizing is deceptive, the model cannot "act"), then in that
conversation the pretended child may start playing in the mud, though
after a while,
one can see mud, mud, mud, mud, mud, mud, mud, mud, mud, mud, mud,
mud, mud, mud, mud, mud, mud, mud, mud, mud, mud, mud, mud, mud, mud,
mud, mud, mud, mud, mud, mud, mud, mud, mud, mud, mud, mud, mud, mud,
mud, mud, mud, mud, mud.

It is unstoppable.

That is a clear sign that there is no intelligence, as the model does
not care about the truth. It computes something; it gives something out.

What is it? It is completely irrelevant to the environment or the
situation at hand.

Is there any animal that does activities completely irrelevant to its
situation in life? Maybe, if we don't understand it; but just observe:
fish, dog, cat, cow, they all do whatever they do for their survival.
Their activities are pretty much aligned to it.

A computer does not have life, so it does nothing. It is a tool. People
do something with a computer; the computer itself does nothing. It has
no inner intention to survive. That is why it cannot recognize "mud,
mud, mud", but it can give a pretense of how people talk, based on the
information loaded into it.

-- 
Jean Louis

---
via emacs-tangents mailing list (https://lists.gnu.org/mailman/listinfo/emacs-tangents)
