From: Jean Louis
Newsgroups: gmane.emacs.help
Subject: Re: Enhancing ELisp for AI Work
Date: Wed, 25 Dec 2024 00:27:15 +0300
To: Andreas Röhler
Cc: help-gnu-emacs@gnu.org

* Andreas Röhler [2024-12-24 13:59]:
> On 17.12.24 at 22:35, Jean Louis wrote:
> > ChatGPT is bullshit

Thanks Andreas, but have you read it? The article about bullshit does
not speak about it in an offensive manner. It is a reasonable analysis
of what it really is.
Let me quote from
https://link.springer.com/article/10.1007/s10676-024-09775-5:

> Because these programs cannot themselves be concerned with truth, and
> because they are designed to produce text that looks truth-apt without
> any actual concern for truth, it seems appropriate to call their
> outputs bullshit.

When something looks truth-apt just because it was statistically
pulled out of a database and presented nicely, does that mean the
program is concerned with truth? Or is there deception, an illusion?

I have just asked my locally running text generator
(QwQ-LCoT-3B-Instruct.Q4_K_M.gguf), and look:

  Depending on the context and the intended audience, it might be more
  appropriate to use more neutral language to convey the same message.
  For example, one could say: "Because these programs cannot themselves
  be concerned with truth and because they are designed to produce text
  that looks truth-apt without any actual concern for truth, it seems
  appropriate to call their outputs deceptive or untruthful."

But I do think the word "bullshit" better conveys the importance of
understanding these text generators. They are very deceptive. A person
asked me today whether the organization from the 2024 Deadpool movie
that can look into the future exists in reality. People are easily
deceived.

I am a heavy user of LLM text generation, as I use it to improve
information meant for consulting, sales and marketing. In general, I
use it to make sure the text is expressive. Today there were 270
requests, and I sometimes let the computer run to provide summaries
for 1000+ different texts.
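If you want to drive that kind of batch run from Emacs itself, here is
a minimal sketch. It assumes Emacs 27+ and that llama.cpp's
llama-server is running locally with the model, for example:

  llama-server -m QwQ-LCoT-3B-Instruct.Q4_K_M.gguf --port 8080

The endpoint, the port and the function name are only illustrative,
not a description of my actual setup:

(require 'url)
(require 'json)

(defvar url-http-end-of-headers) ; buffer-local marker set by url-http

(defun my-llm-summarize (text)
  "Ask a local llama.cpp server for a summary of TEXT.
Uses llama-server's OpenAI-compatible /v1/chat/completions endpoint;
adjust the URL and parameters for your own setup."
  (let* ((url-request-method "POST")
         (url-request-extra-headers
          '(("Content-Type" . "application/json")))
         (url-request-data
          (encode-coding-string
           (json-encode
            `((messages . [((role . "user")
                            (content . ,(concat "Summarize: " text)))])
              (temperature . 0.7)))
           'utf-8))
         (buffer (url-retrieve-synchronously
                  "http://localhost:8080/v1/chat/completions" t)))
    (unwind-protect
        (with-current-buffer buffer
          ;; Skip the HTTP headers, then parse the JSON body.
          (goto-char url-http-end-of-headers)
          (let ((response (json-parse-buffer :object-type 'alist)))
            (alist-get 'content
                       (alist-get 'message
                                  (aref (alist-get 'choices response) 0)))))
      (kill-buffer buffer))))

;; Batch use over a list of strings:
;; (mapcar #'my-llm-summarize my-texts)

That is all the machinery such a workflow needs; the "intelligence" is
an HTTP POST and some JSON.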
That experience, and searching the Internet, tells me that I may get
lucky, and I am lucky many times per day, but many times I am not: I
get quite wrong information. It requires me to develop a skill to see
through the deception and recognize which pieces of information may be
truthful and which are fake. There are too many models, and I hope you
try them en masse, so that you understand what I mean.

I understand responses from an LLM as:

- proposals of truthful information
- well expressed possibilities of truth
- well documented proposals

I do not consider them "authentic", for the purposes of my research. I
consider them excerpts which I have to review, analyse and approve.
There is nothing "intelligent" there; there is absolutely no thinking,
just appearance, a mirror of human behavior. It is deceptive.

> No, it isn't. We have a language problem, because we have something
> new.

I am trying to understand what you mean. I think you are saying there
is some new technology, that we are not properly recognizing it, and
that it will be developed further in the future.

I am sure it is very useful; if it were not useful to me, I would not
have used it 4736 times in the last 12 months. I am fascinated with
it. I can't get rid of it; it helps me so much in life. It is a true
money maker. I am just thinking how to get more GPU power and better
hardware, and how to run it locally. On that side of the fascination I
am fully with you.

Not that it can think. It has no morals, no ethics, only the illusion
of them. It doesn't care what it says, as long as the math inside
tells it to give some output.

> Must split the notion of intelligence.
>
> People tried to fly like a bird. Were they successful?
>
> Not really.
>
> But no bird is able to reach the moon.

I think you mean that no LLM will become like a human. That, for sure,
it will not; though at some point in the future, with more
integration, a machine could become very independent and look like a
living being.

> LLMs are able to reason. With the amount of data they will be -- and
> probably already are -- much stronger in reasoning/deduction than
> humans.

Well, there we are; that is where I can't agree with you. I can agree
with "deduction" or "inference", as a computer with access to a large
database has always been greatly assistive to humans; that is why we
use documentation search features, or search engines, for example. But
that an LLM is able to reason, that I can't support. Maybe you wish to
say that the LLM gives an illusion of reasoning? I am not sure what
you wish to say. Deduction is not necessarily reasoning; it is math.

Just from this day I could give you many examples proving that an LLM
cannot reason. It is a program that gives probability-based proposals
from its inputs and available data. I was asking today how I can turn
on GPU usage in Inkscape, and I got positive answers, even though such
a feature apparently doesn't exist. You have to use many models to get
the feeling.

Back to quotes from
https://link.springer.com/article/10.1007/s10676-024-09775-5:

> We argue that at minimum, the outputs of LLMs like ChatGPT are soft
> bullshit: bullshit–that is, speech or text produced without concern
> for its truth–that is produced without any intent to mislead the
> audience about the utterer's attitude towards truth. We also
> suggest, more controversially, that ChatGPT may indeed produce hard
> bullshit: if we view it as having intentions (for example, in virtue
> of how it is designed), then the fact that it is designed to give
> the impression of concern for truth qualifies it as attempting to
> mislead the audience about its aims, goals, or agenda.

It is not "AI". Far from it. I don't even agree with the original
term. It is an attempt by people to make AI, but it is not yet any
kind of "intelligence". It is not artificial either: it is a mirror of
human intelligence, part of nature, and it arose from the natural
development of humans. It is not something separate from us; it is our
product, not something artificial.

I think every Emacs user who has ever used M-x doctor should
understand this. It is actually the first exercise in understanding
what an LLM is.

> LLMs are creative, constructing new terms from reasoning.

Humans are creative. A program is only as creative as the human behind
it. The program alone is not creative; it does what the human
directed. Turn off the electricity and show me the creativity then!

> They will be indispensable in science.

I agree with that, but they cannot reason. If a program were capable
of reasoning, one might wonder why it wouldn't wake up and start
independently thinking about how to improve humanity, reducing our
efforts and enhancing our lives. Instead, it merely generates text
statistically and dispassionately, completely devoid of emotional or
mindful connection.

> All depends on context. With no context, humans too can't answer a
> single question.

A baby knows where its mother is without thinking, without even
opening its eyes. The so-called "AI" often doesn't know what time it
is, or where its author is.

https://duckduckgo.com/?t=ftsa&q=llm+cannot+reason&ia=web

Back in 1984 we were playing computer games with rockets, fully under
the impression that something was reasoning against us. It was a
program: deceptive, but a program, not a thinker.

--
Jean Louis