* Re: Enhancing ELisp for AI Work
[not found] ` <112da2f7-eb5a-4112-9be0-29fa3d73045c@easy-emacs.de>
@ 2024-12-26 8:37 ` Jean Louis
2025-01-04 10:44 ` Andreas Röhler
0 siblings, 1 reply; 3+ messages in thread
From: Jean Louis @ 2024-12-26 8:37 UTC (permalink / raw)
To: Andreas Röhler; +Cc: emacs-tangents
Okay, let us move this to the Emacs Tangents mailing list.
In general I find your writings interesting!
I was thinking the same myself -- that it is a great thing, giving me
accurate results -- until the verification! Many things were simply invented!
Then, after verifying how it works, I could understand it, and I
realized that it has no intelligence.
* Andreas Röhler <andreas.roehler@easy-emacs.de> [2024-12-25 23:22]:
> maybe let's forget ChatGPT in its current state, but refer to this phrase:
>
> '''A probabilit text generator is bullshit generator. It cares zero of
> the truth, it is program. Far from the real "intelligence".'''
>
> IMO that's wrong from the human as from the machine side.
I think you wanted to say that it is the same for humans, that humans
also do not care for the truth.
And yes! That is the problem most of the time with humans: they mimic
the expressions and "text", the talk, the communication of other
humans; not everybody is thinking! But almost everybody is able to
mimic. And many, many among us do not care about the truth.
Though they have the capability to care about the truth. Each of us.
The capability to care about the truth, or to discover the truth even
without being trained, is maybe one of the traits of natural
intelligence.
> Human consider themself as intelligent. But how to proof that?
No matter the level of intelligence, we can observe in small children
that they start resolving problems. Nobody needs to teach them; they
start observing and come to solutions. Isn't that one of the major
proofs?
> Humans can count. Really? If yes, why the first element of an index
> in famous programming languages is designated as Zero instead of
> One?
Alright, though counting may not be the basis of intelligence. A person
doesn't need to count, or know how to write, to be intelligent.
We can't say we are all equally intelligent; many among the humans on
this planet are on the level of an animal.
> OTOH machines, asked: '''in french: "inversion accusatoire"'''
>
> Response was:
>
> '''La "inversion accusatoire" en français est appelée "passif" ou "inversion
> de sujet" en grammaire.
>
> En Odoo, l'inversion accusatoire se réfère à l'ordre des mots dans une
> phrase qui est inversé par rapport à la règle générale. En français, la
> règle générale est que le sujet de la phrase doit être mis en premier,
> suivie du verbe et ensuite de l'objet.
>
> Exemple de phrase normale : Le client achète un produit.
>
> Inversion accusatoire : Un produit achète le client.
>
> En Odoo, l'inversion accusatoire est utilisée pour formatter les
> informations dans les rapports ou les vues, par exemple, pour définir
> l'ordre des champs dans une formulaire.
>
> J'espère que cela vous a aidé !'''
Here is the translation:
> The "inversion accusatoire" in French is called "passive" or "subject inversion" in grammar.
>
> In Odoo, the inversion accusatoire refers to the word order in a sentence that is reversed compared to the general rule. In French, the general rule is that the subject of the sentence must be placed first, followed by the verb, and then the object.
>
> Example of a normal sentence: The customer buys a product.
>
> Accusative inversion: A product buys the customer.
>
> In Odoo, the accusative inversion is used to format information in reports or views, for example, to define the order of fields in a form.
I know Italian, where it is often used, as a rule, and I understand
that type of expression in other languages; it may be used in German
too. And the computer is there FABRICATING rather than abstracting.
I understand the illusion!
Somehow, since I first started using it -- and it is good that I
started more than a year ago -- I got the right impression that it
doesn't reason; it generates text by statistics.
Why do I say it was important to start early? Because by starting early
I could experience what nonsense it gives me!
Fake books, fake references to written books, fake names of actors who
acted in fake movies. You name it!
That is where I realized that the underlying "reasoning" is not there;
it is fabricated.
Fabrication is making things up; it should not be called
hallucination. Professionals are warning the world in their papers
that computers do not hallucinate.
Artificial Hallucinations in ChatGPT: Implications in Scientific Writing | Cureus:
https://www.cureus.com/articles/138667-artificial-hallucinations-in-chatgpt-implications-in-scientific-writing#!/
ChatGPT, Bing and Bard Don’t Hallucinate. They Fabricate - Bloomberg:
https://www.bloomberg.com/news/newsletters/2023-04-03/chatgpt-bing-and-bard-don-t-hallucinate-they-fabricate
Computers may confabulate, or fabricate, or compute, but we can't say
they hallucinate, as that applies only to a human or a living being.
Computers in fact only compute what their authors wanted.
Anthropomorphize (verb): to attribute human characteristics,
qualities, or behaviors to non-human entities, such as animals,
objects, or ideas. This means giving human-like qualities, emotions,
or intentions to things that are not human.
Anthropomorphizing software in computing contexts can be a way to
express emotions about how tools perform.
"My Emacs disturbed me" -- though Emacs does nothing without human
instructing him. It should be "an example" instead of "It is example". The correct sentence would be:
It is an example of anthropomorphizing.
- Emacs is getting tired, I think I need to restart it to free up some
memory.
- I taught Emacs a new trick by writing a custom elisp function to
automate a task.
- Emacs is complaining about the formatting of my code.
- Emacs is playing tricks on me, it keeps auto-indenting my code in
weird ways.
I was always bringing attention to cases where anthropomorphizing was
used in the context of releasing the human operator or programmer from
responsibility:
- can't send money, "network problem" -- that is a common excuse,
though I know someone is always responsible; it wasn't rain falling
down uncontrollably, it was operators lacking skills;
- the computer did it! -- when operators and the truly responsible
people use anthropomorphizing to get rid of the causative
responsibility;
Often people do it unconsciously. It is interesting, but it is
important for us who think to recognize the fact that people use
anthropomorphizing in their daily life.
To say that a Large Language Model (LLM) "hallucinates" is yet another
public relations stunt that wishes to say computers are alive.
Could "Open"AI be truthful to the public and not anthropomorphize
their products? Yes, they could. But they do not want to.
Why? Because they want to release themselves from responsibility, from
the frustration they have caused, from the misunderstandings they have
generated.
They could say that the computer is producing nonsense because it
doesn't have any intelligence; it is just a mathematical program
computing and spitting text out without care for the truth.
But how can "Open"AI say that? It is against their promotional
strategy; their company name contains "AI", and they build their
product by deceiving people that there is some kind of "intelligence"
there. Probably there has never been any intelligence in a computer so
far; it is all a way of sales and marketing. How else to make money?
It is good that the scientists writing those papers are not financed
by those large corporations, otherwise we would get true confusion.
> the context assumed by the LLM was false. But none was delivered.
>
> Inside the false context, the conclusion is quite
> interesting. Because ‘achetere’ -- buying -- wasn't mentioned at
> all. In an abstract view, the reasoning might well have sense. There
> are other remarks in this response, which indicate the model was
> able to abstract over the matter.
I do not think there was any conclusion.
Just writing "conclusion" does not make it a conclusion. Computer
software like an LLM is eager to pretend; it was programmed to write
what is statistically written by people and what that software got fed
from datasets.
But it can't learn. It is a machine that calculates. It can accept
data, store it, process it by program, and give results. But it cannot
learn.
We can only anthropomorphize it and say "it learned", "it got
trained", as we do not have enough of the right words.
Can LLM software do inference? I don't think so!
It can compute and give a similarity to inference, though never true
inference. After all, we have been doing the same with computers for
many years.
It is just another anthropomorphized term. But we have to use it; it
is in a different context. Though it is not a person who has the true
capability to "infer". It is not capable of it.
A computer cannot make conclusions. (+ 2 2) ➜ 4 -- do you think that
Emacs here made the conclusion that 2 plus 2 is four? It didn't.
We are anthropomorphizing when we say the computer made conclusions.
In fact it was electronic switching of bits and bytes.
Just take an abacus, for example: by moving those balls on the wooden
abacus, the operator will get results, but did the abacus make
conclusions about the result? Or was it just a tool?
By programming a traffic light, when it shows green, did the traffic
light make the conclusion that it should show green to people? It is a
tool; it is programmed to do that. The programmer made the conclusions
that it should work in a specific manner, not the traffic light.
Air conditioning -- it turns on and off based on the temperature in
the room. But did it make the conclusion that it must turn on? That it
must turn off?
Of course we fall into the deception.
Though there is no conclusion drawing there!
I strongly suggest everybody install:
ggerganov/llama.cpp: LLM inference in C/C++
https://github.com/ggerganov/llama.cpp
then install one of the low-end models, like:
QwQ-LCoT-3B-Instruct.Q4_K_M.gguf
It will work on 16 GB RAM.
Then start interacting. You will see what I mean.
You will see that the model will start talking; for example, if there
was a conversation with a child (the model acting as a child --
anthropomorphizing is deceptive, a model cannot "act"), then in that
conversation the pretended child may start playing in the mud, and
after a while
one can see mud, mud, mud, mud, mud, mud, mud, mud, mud, mud, mud,
mud, mud, mud, mud, mud, mud, mud, mud, mud, mud, mud, mud, mud, mud,
mud, mud, mud, mud, mud, mud, mud, mud, mud, mud, mud, mud, mud, mud,
mud, mud, mud, mud, mud.
It is unstoppable.
That is a clear sign that there is no intelligence, as the model does
not care about the truth. It computes something, it gives something
out.
What is it? It is completely irrelevant to the environment or the
situation at hand.
Is there any animal that does activities completely irrelevant to its
life situation at hand? Maybe if we don't understand it; but just
observe: fish, dog, cat, cow -- whatever they do, they do for their
survival. Their activities are pretty much aligned with it.
A computer does not have life, so it does nothing. It is a tool. People
do something with a computer; the computer itself does nothing. It has
no inner intention to survive. That is why it cannot recognize "mud,
mud, mud", but it can give a pretense of how people talk, based on the
information loaded into it.
--
Jean Louis
---
via emacs-tangents mailing list (https://lists.gnu.org/mailman/listinfo/emacs-tangents)
* Re: Enhancing ELisp for AI Work
2024-12-26 8:37 ` Enhancing ELisp for AI Work Jean Louis
@ 2025-01-04 10:44 ` Andreas Röhler
2025-01-04 18:09 ` Jean Louis
0 siblings, 1 reply; 3+ messages in thread
From: Andreas Röhler @ 2025-01-04 10:44 UTC (permalink / raw)
To: Jean Louis; +Cc: emacs-tangents
Hi Jean,
tried your code delivered at
https://lists.gnu.org/archive/html/help-gnu-emacs/2024-12/msg00363.html
which works nicely, thanks!
Notably it's much smaller than the stuff seen so far.
Is there a repo for it?
Maybe some tweaks would be of interest for others too.
Cheers,
Andreas
Am 26.12.24 um 09:37 schrieb Jean Louis:
> [...]
---
via emacs-tangents mailing list (https://lists.gnu.org/mailman/listinfo/emacs-tangents)
* Re: Enhancing ELisp for AI Work
2025-01-04 10:44 ` Andreas Röhler
@ 2025-01-04 18:09 ` Jean Louis
0 siblings, 0 replies; 3+ messages in thread
From: Jean Louis @ 2025-01-04 18:09 UTC (permalink / raw)
To: Andreas Röhler; +Cc: emacs-tangents
[-- Attachment #1: Type: text/plain, Size: 9934 bytes --]
* Andreas Röhler <andreas.roehler@easy-emacs.de> [2025-01-04 13:44]:
> Hi Jean,
>
> tried your code delivered at
> https://lists.gnu.org/archive/html/help-gnu-emacs/2024-12/msg00363.html
>
> which works nicely, thanks!
>
> Notably it's much smaller than the stuff seen so far.
>
> Is there a repo for it?
I don't use git.
> Maybe some tweaks would be of interest for others too.
I am glad that it works for you. I am attaching the full library which
I am using actively. Feel free, of course, to modify it as you
wish. Those functions beyond the database-based one were just for my
learning stage. I am using database-based model and API key settings:
20 Qwen/Qwen2.5-Coder-32B-Instruct, HuggingFace, https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions
21 rocket-3b.Q4_K_M.llamafile, llama.cpp, http://192.168.188.140:8080/v1/chat/completions
22 Mistral-Nemo-Base-2407, llama.cpp, https://api-inference.huggingface.co/models/mistralai/Mistral-Nemo-Base-2407
23 mistralai/Mistral-Nemo-Instruct-2407, HuggingFace, https://api-inference.huggingface.co/models/mistralai/Mistral-Nemo-Instruct-2407/v1/chat/completions
24 Phi-3.5-mini-instruct-Q3_K_M.gguf, llama.cpp, http://192.168.188.140:8080/v1/chat/completions
25 mistral-7b-v0.1.Q5_K_M.gguf, llama.cpp, http://127.0.0.1:8080/v1/chat/completions
26 Phi-3.5-mini-instruct-Q3_K_M.gguf, llama.cpp, http://127.0.0.1:8080/v1/chat/completions
27 bling-phi-3.5.gguf, llama.cpp, http://192.168.188.140:8080/v1/chat/completions
28 granite-3.1-2b-instruct-Q5_K.gguf, llama.cpp, http://192.168.188.140:8080/v1/chat/completions
29 Qwen2.5-7B-Instruct_Q3_K_M.gguf, llama.cpp, http://192.168.188.140:8080/v1/chat/completions
30 Qwen2.5-1.5B-Instruct, llama.cpp, http://192.168.188.140:8080/v1/chat/completions
So basically I am editing the settings in the database for each
model. I cannot imagine using Emacs variables for a huge number of
models; my entry looks like the following, and it works well.
ID 30
UUID "09834f52-e601-40e2-8e4e-e6814de72f81"
Date created "2025-01-02 23:07:25.345686+03"
Date modified "2025-01-02 23:13:35.102727+03"
User created "maddox"
User modified "maddox"
Model "Qwen2.5-1.5B-Instruct"
Description nil
Hyperdocument nil
LLM Endpoint "http://192.168.188.140:8080/v1/chat/completions"
User "Jean Louis"
Rank 0
Model's nick "LLM: "
Temperature 0.6
Max tokens 2048
Top-p 0.85
Top-k 30.0
Min-p 0.1
System message "You are helpful assistant."
I am using Emacs functions which serve, in the end, as "AI agents"; a
function can iterate over some entries in the database and provide
descriptions. Here is a practical example:
(defun rcd-db-describe-countries ()
  "Use this function to describe the whole table `countries'."
  (interactive)
  (let* ((id (rcd-sql-first "SELECT countries_id
                               FROM countries
                              WHERE countries_description IS NULL
                           ORDER BY countries_id"
                            rcd-db))
         (country (rcd-db-get-entry "countries" "countries_name" id rcd-db))
         (prompt (format "Describe the country: %s" country))
         (description (rcd-llm prompt)))
    (when description
      (rcd-db-update-entry "countries" "countries_description" id description rcd-db)
      (rcd-message "%s" description))))
Then:
(run-with-timer 10 20 'rcd-db-describe-countries)
or you can run it with an idle timer, as in the sketch below.
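A minimal idle-timer sketch (the 60-second idle delay is an arbitrary
choice of mine):

(run-with-idle-timer 60 t 'rcd-db-describe-countries)

This describes one more country each time Emacs has been idle for 60
seconds, and repeats on every idle period.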
And I get entries like:
Austria is a country located in Central Europe. It has a population of about 9 million people and covers an area of about 83, 879 square kilometers. The capital city is Vienna, which is also its largest city and cultural and economic center. Other major cities include Graz, Linz, and Innsbruck.
Austria is known for its rich history and culture, which is reflected in its architecture, museums, and festivals. It is also famous for its food, especially its cheese and meat dishes.
Austria is a member of the European Union and is part of the Schengen Area, which means that its citizens do not have to hold a passport to travel to other European countries. It is also a member of NATO and is a landlocked country.
Those entries I can later use in a dashboard; for example, when viewing
the profile of a customer, I can click on the country to instantly see
more information about it.
It runs in the background all the time on a low-end Nvidia GTX 1050 Ti
with 4 GB of VRAM, but I would like to get an RTX 3090 with 24 GB of
VRAM soon, somewhere, somehow. And I have 16 GB of system RAM.
I am using fully free software models like Qwen2.5-1.5B; the models listed above work very well.
If you are using it locally, models like Phi-3.5-mini, under the MIT
license from Microsoft (wow!), have the best quality that I know of,
and the fastest is Qwen2.5-1.5B, which I use to generate meaningful
keywords for 1500+ website pages.
Keywords are generated as an Emacs Lisp list:
("screens" "being connected together" "feeding rate" "approximately 5-6 tonnes per hour" "welding" "screws" "gold particles" "sluice" "effectively separate gold particles" "sluice" "retract other materials" "screens" "reusable" "screens" "cost efficiency" "utilize screws instead of welding")
They may be repetitive, but what matters is that the result is pretty
nicely formatted. The prompt is complicated, but it works pretty well
most of the time.
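Since the model returns the keywords as a printed Lisp list, a small
cleanup step is easy. A minimal sketch, assuming the raw response text
is in a variable `llm-response-string' (the name is illustrative only):

(let ((keywords (car (read-from-string llm-response-string))))
  ;; Drop exact duplicates from the list the LLM returned.
  (delete-dups keywords))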
Those which sometimes come out wrong can easily and automatically be
corrected.
Why do that? Well, when I know which important keywords relate to some
website page, I can later use PostgreSQL trigram functions to find
similar keywords on other pages, and relate those pages for linking.
Once related, the pages will have the keywords inside their text and
related pages tied to those keywords.
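A minimal sketch of such a trigram lookup, assuming a hypothetical
`pages' table with `pages_id' and `pages_keywords' columns, the
pg_trgm extension installed in PostgreSQL, and the `rcd-sql-list'
helper shown elsewhere in this message:

(defun my-related-pages (keyword)
  "Return IDs of pages whose keywords are similar to KEYWORD."
  (rcd-sql-list "SELECT pages_id
                   FROM pages
                  WHERE similarity(pages_keywords, $1) > 0.3
               ORDER BY similarity(pages_keywords, $1) DESC"
                rcd-db keyword))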
When I process the website, no matter the markup, I can insert those
links before processing, without my supervision and without special
editing one by one.
For example, this text would get linked over the words "cost
efficiency" to some page www.example.com automatically, without my
attention, on the fly, before Markdown, Asciidoctor, Org Mode or other
markup is converted to HTML:
"The company struggled to achieve cost efficiency while trying to
increase production."
Linked pages contribute to the overall understanding of products and
services on a website by providing additional information and context
for the main content. It helps in guiding clients to the products or
services.
IMHO it is better for programmers to use their own functions to
request LLM responses, as that way you get more freedom, rather than
trying to accommodate yourself to existing, pretty large libraries
like gptel or chatgpt-shell.
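As a minimal, self-contained sketch of such an "own function" --
assuming a local llama.cpp server with an OpenAI-compatible chat
endpoint at http://127.0.0.1:8080, and a model name the server will
accept (adapt both to your setup):

(require 'url)
(require 'json)

(defun my-llm-ask (prompt)
  "Send PROMPT to a local llama.cpp server and return the reply text."
  (let* ((url-request-method "POST")
         (url-request-extra-headers '(("Content-Type" . "application/json")))
         (url-request-data
          (encode-coding-string
           (json-encode
            `((model . "local")
              (messages . [((role . "user") (content . ,prompt))])))
           'utf-8))
         (buffer (url-retrieve-synchronously
                  "http://127.0.0.1:8080/v1/chat/completions")))
    (with-current-buffer buffer
      (goto-char (point-min))
      ;; Skip the HTTP headers, then parse the JSON body.
      (search-forward "\n\n")
      (let* ((json (json-parse-string
                    (decode-coding-string
                     (buffer-substring-no-properties (point) (point-max)) 'utf-8)
                    :object-type 'alist))
             (message (alist-get 'message (aref (alist-get 'choices json) 0))))
        (alist-get 'content message)))))

Everything beyond that -- logging, memory, model selection from a
database -- can then be layered on top, as the attached library does.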
Local models such as Phi-3.5-mini and Qwen2.5-1.5B, among others, are
notably efficient and encompass a vast amount of data. They are
beneficial for education and understanding of information. However,
these models are not intended for accuracy, and users must recognize
that they simply store information rather than perform actual thought
or intelligence. The term "artificial intelligence" is somewhat
misleading as it implies some kind of thinking, but it’s appropriate
as long as one understands "artificial" in the context of
non-intelligent computation. These models generate text through
statistical analysis of tensors without any conscious decision-making,
which differs from true thinking and intelligence. True thinking
relies on an innate "survival" principle that computers lack.
The information produced by an LLM that seems nonsensical to humans
was generated by the same process, and with the same weight, as the
information that seems reasonable to humans. This is deceptive, as
humans are misled by the work of an LLM, even though it merely
replicates human behavior.
When a foreigner learns a few basic Chinese phrases like "hello", "how
are you", "thank you", and "good bye", locals might mistakenly believe
they know Chinese. In reality, this doesn't imply the speaker
understands the language. The receiver of the communication often
interprets the speaker's few words as command of the language.
It is the same with the LLM. It mimics, and the human thinks, "wow, it
can interact with me, it thinks". It is an illusion.
ChatGPT is bullshit | Ethics and Information Technology
https://link.springer.com/article/10.1007/s10676-024-09775-5
In my opinion, the GNU project should open up, adopt some of the fully
free LLM models, and build on them.
--
Jean Louis
[-- Attachment #2: rcd-llm-without-api-keys.el --]
[-- Type: text/plain, Size: 27382 bytes --]
;;; rcd-llm.el --- RCD LLM Functions -*- lexical-binding: t; -*-
;; Copyright (C) 2024 by Jean Louis
;; Author: Jean Louis <bugs@gnu.support>
;; Version: 0.1
;; Package-Requires: (rcd-utilities rcd-pg-basics rcd-cf hyperscope)
;; Keywords: convenience help multimedia text tools
;; URL:
;; This file is not part of GNU Emacs.
;; This program is free software: you can redistribute it and/or
;; modify it under the terms of the GNU General Public License as
;; published by the Free Software Foundation, either version 3 of the
;; License, or (at your option) any later version.
;;
;; This program is distributed in the hope that it will be useful, but
;; WITHOUT ANY WARRANTY; without even the implied warranty of
;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
;; General Public License for more details.
;;
;; You should have received a copy of the GNU General Public License
;; along with this program. If not, see <http://www.gnu.org/licenses/>.
;;; Commentary:
;; RCD LLM Functions
;;; Change Log:
;;; Code:
(require 'rcd-utilities)
(require 'rcd-cf)
(require 'rcd-pg-basics)
(require 'hyperscope)
(require 'rcd-dashboard)
;;; Customization Group
(defgroup rcd-llm nil
"Customization options for RCD AI functionalities."
:prefix "rcd-llm-"
:group 'applications)
;;; Customize Variables
(defcustom rcd-llm-users-llm-function 'rcd-llm-db
"User's AI function.
This variable determines which AI function is used by the RCD AI system.
You can customize it to use a different AI backend as needed."
:type '(choice (const :tag "ChatGPT Shell" rcd-chatgpt-shell)
(const :tag "RCD ChatGPT" rcd-llm-chatgpt)
(const :tag "Groq" rcd-llm-groq)
(const :tag "RCD LLM Database" rcd-llm-db)
(const :tag "Llamafile" rcd-llm-llamafile)
(const :tag "HuggingFace" rcd-llm-huggingface)
(const :tag "Mistral" rcd-llm-mistral))
:group 'rcd-llm)
;;; Variables
(defvar rcd-llm-prompt "LLM Prompt: "
"Defines LLM prompt.")
(defvar rcd-llm-last-response ""
"Defines last LLM response.")
(defvar rcd-llm-use-users-llm-memory nil
"When TRUE, use user's AI memory.")
(defvar rcd-llm-add-to-memory nil
"When TRUE, always add to present AI memory of the user.")
(defvar rcd-llm-speak t
"Toggle speaking by LLM.")
(defalias 'rcd-llm-memory-select 'rcd-db-current-user-llmq-memory-select)
;;; LLM Utilities
(defun rcd-llm-speak-toggle ()
"Toggles the LLM speech output"
(interactive)
(cond (rcd-llm-speak (setq rcd-llm-speak nil))
(t (setq rcd-llm-speak t)))
(rcd-message (format "Variable `rcd-llm-speak': %s" rcd-llm-speak)))
(defun rcd-llm-user-memory ()
"Return current user AI memory."
(let ((memory (rcd-db-get-entry-where "usersdefaults" "usersdefaults_aimemory"
(format "usersdefaults_users = %s" (rcd-db-current-user))
rcd-db)))
(when memory (concat (hyperscope-text-with-title memory)))))
(defun rcd-parse-http-json-string (string)
"Parses a JSON STRING preceded by HTTP headers.
Returns the parsed JSON object."
(let ((json-start-index (string-match "\{" string)))
(when json-start-index
(json-read-from-string (substring string json-start-index)))))
(defun rcd-llm-clipboard-modify ()
"Modify X clipboard by using AI.
User may write in the web browser, marks the text Let us say yo"
(interactive)
(select-frame-set-input-focus (selected-frame))
(with-temp-buffer
(clipboard-yank)
(goto-char (point-min))
(set-mark (point-max))
(rcd-region-string)
(rcd-llm)
(gui--set-last-clipboard-selection (buffer-string))))
(defun rcd-llm-add-to-memory ()
"Toggle variable `rcd-llm-add-to-memory'.
When `rcd-llm-add-to-memory' is TRUE, then LLM responses are added to
users's memory."
(interactive)
(let ((answer (y-or-n-p "Add to user's LLM memory? ")))
(cond (answer (setq rcd-llm-add-to-memory t))
(t (setq rcd-llm-add-to-memory nil)))
(rcd-message (format "Variable `rcd-llm-add-to-memory' is not: %s" rcd-llm-add-to-memory))))
(defun rcd-llm-use-users-memory ()
"Toggle variable `rcd-llm-use-users-memory'.
When `rcd-llm-use-users-memory' is TRUE, then user's LLM memory is used
with each prompt."
(interactive)
(let ((answer (y-or-n-p "Use user's AI memory? ")))
(cond (answer (setq rcd-llm-use-users-llm-memory t))
(t (setq rcd-llm-use-users-llm-memory nil)))))
;;; LLM Logging
(defun rcd-log-llm-list (&optional query)
"Search LLM log and display report.
Optional QUERY string may be used for search."
(interactive)
(let* ((query (or query (rcd-string-nil-if-blank (rcd-ask "Find ChatGPT log: "))))
(query (when query (sql-escape-string query)))
(query-sql (cond (query (concat " AND log_name ~* " query " OR log_description ~* " query))
(t "")))
(sql (format "SELECT log_id, coalesce(get_full_contacts_name(log_people), 'UNKNOWN'),
REPLACE(coalesce(log_name,''), E'\n', ' '),
REPLACE(coalesce(log_description,''), E'\n', ' ')
FROM log
WHERE log_logtypes = 8 %s" query-sql))
(prompt "RCD Notes ChatGPT Log"))
(rcd-db-sql-report prompt sql [("ID" 4 t) ("Contact" 20 t) ("Prompt" 20 t) ("Response" 100 t)] "log" nil nil)))
(defun rcd-log-llm-model ()
  "Return the current user's default LLM model name and its endpoint name."
(let* ((users-model (rcd-db-users-defaults "llmmodels"))
(model (rcd-sql-list "SELECT llmmodels_name, llmendpoints_name
FROM llmendpoints, llmmodels, usersdefaults
WHERE llmendpoints_id = llmmodels_llmendpoints
AND llmmodels_id = usersdefaults_llmmodels
AND usersdefaults_users = $1"
rcd-db 1)))
model))
(defun rcd-log-llm (prompt response)
"Log PROMPT, RESPONSE.
It takes two arguments: PROMPT and RESPONSE. The purpose of this
function is to log a LLM's PROMPT and RESPONSE in a database."
(cond ((and (and prompt response)
(and (stringp prompt) (stringp response)))
(let* ((function (symbol-name rcd-llm-users-llm-function))
(model-name (cond ((not (eq function (symbol-name 'rcd-llm-db)))
(format "Function: %s" function))
(t (car (rcd-log-llm-model)))))
(model-url (cond ((not (eq function (symbol-name 'rcd-llm-db)))
"URL: see function")
(t (cdr (rcd-log-llm-model)))))
(note (format "Model name: %s\nModel URL: %s\n" model-name model-url)))
(rcd-sql-first "INSERT INTO log (log_people, log_name, log_description, log_logtypes, log_note)
VALUES (1, $1, $2, $3, $4)
RETURNING log_id"
rcd-db prompt response 8 note)))
((not prompt) (user-error "rcd-log-llm: PROMPT missing."))
((not response) (user-error "rcd-log-llm: RESPONSE missing."))))
;;; Main LLM Functions
(defun hyperscope-llm-user-new-memory ()
"Generate new AI memory elementary object for user, and keep adding to it."
(interactive)
(let* ((defaults-id (rcd-db-current-user-defaults-id))
(title (rcd-ask-get "New AI memory title: "))
(subtype (rcd-llm-users-memory-subtype))
(description (rcd-ask (format "Describe `%s'" title)))
(set (rcd-repeat-until-something 'hyperscope-select-set "Select set for AI memory: "))
(id (hyperscope-add-generic title nil nil 1 subtype set nil description)))
(rcd-db-update-entry "usersdefaults" "usersdefaults_aimemory" defaults-id id rcd-db)
(setq rcd-llm-add-to-memory t)))
(defun rcd-llm-switch-users-memory ()
"Switch to users AI memory if such exist."
(interactive)
(let ((id (rcd-db-current-user-llm-memory)))
(when id
(hyperscope-isolate id))))
(defun rcd-db-current-user-llm-subtype ()
"Return AI subtype."
(rcd-db-get-entry "usersdefaults" "usersdefaults_aimemorysubtype" (rcd-db-current-user) rcd-db))
(defun rcd-db-current-user-llm-memory ()
"Return AI memory."
(rcd-db-get-entry "usersdefaults" "usersdefaults_aimemory" (rcd-db-current-user) rcd-db))
(defun rcd-llm-users-memory-subtype ()
"Return LLM memory subtype."
(let* ((defaults-id (rcd-db-current-user-defaults-id))
(memory-subtype (or (rcd-db-get-entry
"usersdefaults" "usersdefaults_aimemorysubtype" defaults-id rcd-db)
(let ((subtype (hyperscope-subtype-select "Select AI memory subtype: ")))
(when (and subtype defaults-id)
(rcd-db-update-entry "usersdefaults" "usersdefaults_aimemorysubtype"
defaults-id subtype rcd-db))))))
memory-subtype))
(defun rcd-db-current-user-llm-memory-select ()
"Select AI memory for current user."
(interactive)
(let* ((defaults-id (rcd-db-current-user-defaults-id))
(memory-subtype (rcd-llm-users-memory-subtype))
(memory (hyperscope-select-by-subtype "Select AI memory: " memory-subtype)))
(when (and defaults-id memory memory-subtype)
(when (rcd-db-update-entry "usersdefaults" "usersdefaults_aimemorysubtype"
defaults-id memory-subtype rcd-db)
(rcd-db-update-entry "usersdefaults" "usersdefaults_aimemory" defaults-id memory rcd-db)
(setq rcd-llm-add-to-memory t)))))
(defvar rcd-llm-pop-to-window nil)
(defun rcd-llm (&optional prompt)
"Send PROMPT to default LLM.
With single prefix key `C-u' it will add RESPONSE after the cursor or
PROMPT in the buffer.
With double prefix key `C-u C-u' it will kill RESPONSE in the memory.
With triple prefix key `C-u C-u C-u' it will pop up new buffer with the
PROMPT and RESPONSE.
It will invoke function as customized by user in the variable
`rcd-llm-users-llm-function'.
It determines the selected region of text using the `rcd-region-string'
function and assigns it to the `region` variable.
It processes the PROMPT value based on several conditions:
- If both the region and PROMPT are non-empty, it combines the prompt
and region with appropriate formatting.
- If the PROMPT is non-empty but there is no region, it uses the PROMPT
as is.
- If the PROMPT is empty but there is a region, it uses the region as
the prompt.
- If both the PROMPT and region are empty, it displays a warning
message.
- The processed prompt is then assigned to the `prompt`
variable.
5. It sends the `prompt` and the model name \"gpt-3.5-turbo\" to
the `chatgpt-shell-post-prompt` function to obtain a response
from the ChatGPT model. The response is stored in the `response`
variable.
6. It logs the response using the `rcd-log-llm` function.
7. Based on the conditions, it performs different actions:
- If there is a selected `region` and the function is called
with a PREFIX argument of 4, it inserts the response right
after the region in the buffer.
- If there is a selected `region` and the function is called
with a PREFIX argument of 16, it copies the response to the
kill ring using `rcd-kill-new`.
- If there is a selected `region`, it replaces the selected
region with the response using `rcd-region-string-replace`.
- If there is no selected region, it simply inserts the
response.
8. The function completes execution."
(interactive)
(let* ((region (rcd-region-string))
(rcd-llm-model (map-elt (seq-first chatgpt-shell-models) :version))
(memory (when rcd-llm-use-users-llm-memory (rcd-llm-user-memory)))
(prompt (or prompt (rcd-ask rcd-llm-prompt)))
(prompt (cond ((and region
(not (string-empty-p prompt)))
(cond (t (concat (string-add prompt ":\n\n") region))))
((not (string-empty-p prompt)) prompt)
((and region
(string-empty-p prompt))
region)
(t nil)))
(rcd-message-date nil))
(cond (prompt
(rcd-message "Requesting LLM...")
(let ((response (cond (rcd-llm-model (funcall rcd-llm-users-llm-function prompt memory rcd-llm-model))
(t (error "Could not find a model. Missing model setup?")))))
(cond (response (rcd-log-llm prompt response)
(cond
;; when there is region
((and region
(called-interactively-p)
(eql (car current-prefix-arg) 4))
(goto-char (cdar (region-bounds)))
(setq deactivate-mark t)
(insert "\n\n" response "\n\n"))
;; When region and 2 C-u
((and region
(called-interactively-p)
(eql (car current-prefix-arg) 16))
(setq deactivate-mark t)
(rcd-kill-new response))
;; with 3 x C-u open buffer
((or (and (eql (car current-prefix-arg) 64)
(called-interactively-p))
rcd-llm-pop-to-window)
(rcd-pop-to-report (concat (underline-text (concat "LLM Function: " (upcase (symbol-name rcd-llm-users-llm-function))))
(concat "Prompt: " prompt)
"\n"
(make-string fill-column (string-to-char "="))
"\n\n"
response)
"*LLM Response*")
(switch-to-buffer "*LLM Response*")
(markdown-mode)
(text-scale-adjust 1)
(local-set-key "q" 'kill-buffer-and-window))
;; otherwise if region
((and region (called-interactively-p))
(rcd-region-string-replace response))
;; if called interactively, insert into buffer
((called-interactively-p) (insert response))
;; otherwise just return response
(t response))
(when (and rcd-llm-add-to-memory rcd-llm-use-users-llm-memory)
(hyperscope-add-to-column (rcd-db-current-user-llm-memory) "hyobjects_text" prompt)
(hyperscope-add-to-column (rcd-db-current-user-llm-memory) "hyobjects_text" response))
(setq rcd-llm-last-response response)
(when rcd-llm-speak
(rcd-tts-and-speak "Finished."))
;; (rcd-notify (format "Processing finished:" rcd-llm-users-llm-function)
;; (concat "\n" (rcd-substring-soft rcd-llm-last-response 0 100))
;; nil
;; "/usr/share/icons/gnome/256x256/status/starred.png")
(cond ((called-interactively-p) (rcd-kill-new response))
(t response)))
(t (prog2 (rcd-warning-message "Could not reach AI server") nil)))))
(t (rcd-warning-message "LLM: Empty Prompt")))))
(defun rcd-llm-response (response-buffer)
"Parse LLM's RESPONSE-BUFFER and return decoded string."
(when response-buffer
(with-current-buffer response-buffer
(condition-case err
(progn
;; Skip HTTP headers
(goto-char (point-min))
(when (search-forward "\n\n" nil t)
(let ((response (decode-coding-string (buffer-substring-no-properties (point) (point-max)) 'utf-8)))
(kill-buffer response-buffer)
;; Parse JSON and extract the reply
(let* ((json-response (json-parse-string response :object-type 'alist))
(choices (alist-get 'choices json-response))
(message (alist-get 'message (aref choices 0)))
(message (decode-coding-string (alist-get 'content message) 'utf-8)))
(replace-regexp-in-string (rx (or "</s>" "<|eot_id|>" "<|end|>" "<|endoftext|>"
"|im_end|" "<|end_of_text|>" "<|end_of_role|>")
line-end)
"\n" message)))))
(error (rcd-message "Error in rcd-llm-response: %s" (error-message-string err))
nil)))))
;;; LLM Statistics
(defun rcd-llm-usage-by-day ()
  "Chart LLM usage per day for the last 30 days."
  (interactive)
(cf-chart-bar-quickie "SELECT date_part('day', log_datecreated)::int AS day,
count(log_name)
FROM log
WHERE log_logtypes = 8
AND log_datecreated > (current_date - 30)
GROUP BY date_part('day', log_datecreated)::int
ORDER BY day DESC"
"LLM usage by day"
"Last days"
"Totals"))
(defun rcd-llm-usage-by-week ()
  "Chart LLM usage per week for the last 365 days."
  (interactive)
(rcd-db-chart-by-periods "log" "week" "LLM usage by week" "WEEKS" "REQUESTS"
"WHERE log_logtypes = 8
AND log_datecreated > (current_date - 365)"))
(defun rcd-llm-usage-by-month ()
  "Chart LLM usage per month for the last 365 days."
  (interactive)
(rcd-db-chart-by-periods "log" "month" "LLM usage by month" "MONTHS" "REQUESTS"
"WHERE log_logtypes = 8
AND log_datecreated > (current_date - 365)"))
;;;; Other LLM
;;; ChatGPT
(defun rcd-llm-chatgpt (prompt &optional memory rcd-llm-model)
"Send PROMPT to OpenAI API and return the response.
Optional MEMORY and MODEL may be used."
  (let* ((rcd-llm-model (or rcd-llm-model "gpt-4o-mini-2024-07-18"))
(url-request-method "POST")
(url-request-extra-headers
'(("Content-Type" . "application/json; charset=utf-8")
("Authorization" . "Bearer APIKEY")))
(url-request-data
(encode-coding-string
(json-encode
`(("model" . ,rcd-llm-model)
("messages" . [((role . "user") (content . ,prompt))])))
'utf-8))
(response-buffer (url-retrieve-synchronously "https://api.openai.com/v1/chat/completions")))
(rcd-llm-response response-buffer)))
(defun rcd-chatgpt-shell (prompt memory rcd-llm-model)
"Call function `chatgpt-shell-post'.
PROMPT is LLM's prompt.
MEMORY is string containing user's memory.
MODEL is one of available LLM models by OpenAI."
(chatgpt-shell-post :context (list (cons memory nil) (cons prompt nil) ) :version rcd-llm-model))
;;; Llamafile
(defun rcd-llm-llamafile (prompt &optional memory rcd-llm-model)
"Send PROMPT to Llama file.
Optional MEMORY and MODEL may be used."
  (let* ((rcd-llm-model (or rcd-llm-model "LLaMA_CPP"))
(memory (cond ((and memory rcd-llm-use-users-llm-memory)
(concat "Following is user's memory, until the END-OF-MEMORY-TAG: \n\n" memory "\n\n END-OF-MEMORY-TAG\n\n"))))
(prompt (cond (memory (concat memory "\n\n" prompt))
(t prompt)))
(temperature 0.8)
(max-tokens -1)
(top-p 0.95)
(stream :json-false)
(buffer (let ((url-request-method "POST")
(url-request-extra-headers
'(("Content-Type" . "application/json")
("Authorization" . "Bearer no-key")))
(prompt (encode-coding-string prompt 'utf-8))
(url-request-data
(encode-coding-string
(setq rcd-llm-last-json
(json-encode
`((model . ,rcd-llm-model)
(messages . [ ((role . "system")
(content . "You are a helpful assistant. Answer short."))
((role . "user")
(content . ,prompt))])
(temperature . ,temperature)
(max_tokens . ,max-tokens)
(top_p . ,top-p)
(stream . ,stream))))
'utf-8)))
(url-retrieve-synchronously
;; "http://127.0.0.1:8080/v1/chat/completions"))))
"http://192.168.188.140:8080/v1/chat/completions"))))
(rcd-llm-response buffer)))
;;; Groq
(defun rcd-llm-groq (prompt &optional memory rcd-llm-model)
"Send PROMPT to Groq.
Optional MEMORY and MODEL may be used."
  (let* ((rcd-llm-model (or rcd-llm-model
                            ;; Alternatives: "llama-3.2-1b-preview", "llama-3.3-70b-versatile".
                            "mixtral-8x7b-32768"))
(buffer (let ((url-request-method "POST")
(url-request-extra-headers
'(("Content-Type" . "application/json")
("Authorization" . "Bearer APIKEY")))
(url-request-data
(encode-coding-string
(json-encode
`((model . ,rcd-llm-model)
(messages . [
((role . "user")
(content . ,prompt))
])))
'utf-8)))
(url-retrieve-synchronously
"https://api.groq.com/openai/v1/chat/completions"))))
(rcd-llm-response buffer)))
;;; DB based LLM
(defun rcd-llm-db (prompt &optional memory rcd-llm-model temperature max-tokens top-p top-k min-p stream)
"Send PROMPT to API as decided by the database.
Optional MEMORY, RCD-LLM-MODEL, TEMPERATURE, MAX-TOKENS, TOP-P, TOP-K, MIN-P, and STREAM can be used."
(let ((rcd-llm-model-id (rcd-db-users-defaults "llmmodels")))
(cond ((not rcd-llm-model-id) (rcd-warning-message "Did not find default user's LLM model. Do `M-x rcd-my-defaults' to set it."))
(t (let* ((rcd-llm-model (rcd-db-get-entry "llmmodels" "llmmodels_name" rcd-llm-model-id rcd-db))
(temperature (or temperature (rcd-db-get-entry "llmmodels" "llmmodels_temperature" rcd-llm-model-id rcd-db)))
(max-tokens (or max-tokens (rcd-db-get-entry "llmmodels" "llmmodels_maxtokens" rcd-llm-model-id rcd-db)))
(top-p (or top-p (rcd-db-get-entry "llmmodels" "llmmodels_topp" rcd-llm-model-id rcd-db)))
(min-p (or min-p (rcd-db-get-entry "llmmodels" "llmmodels_minp" rcd-llm-model-id rcd-db)))
(top-k (or top-k (rcd-db-get-entry "llmmodels" "llmmodels_topk" rcd-llm-model-id rcd-db)))
(llm-endpoint-id (rcd-db-get-entry "llmmodels" "llmmodels_llmendpoints" rcd-llm-model-id rcd-db))
(llm-endpoint (rcd-db-get-entry "llmendpoints" "llmendpoints_name" llm-endpoint-id rcd-db))
(llm-provider-id (rcd-db-get-entry "llmendpoints" "llmendpoints_llmproviders" llm-endpoint-id rcd-db))
(api-key (rcd-db-get-entry "llmproviders" "llmproviders_apikey" llm-provider-id rcd-db))
(system-message (or (rcd-db-get-entry "llmmodels" "llmmodels_systemmessage" rcd-llm-model-id rcd-db) "You are helpful assistant."))
(authorization (concat "Bearer " api-key))
(stream (if stream t :json-false))
(url-request-method "POST")
(prompt (encode-coding-string prompt 'utf-8))
(url-request-extra-headers
`(("Content-Type" . "application/json")
("Authorization" . ,authorization)))
(url-request-data
(encode-coding-string
(setq rcd-llm-last-json
(json-encode
`((model . ,rcd-llm-model)
;; `((model . "Qwen2.5-1.5B-Instruct")
(messages . [ ((role . "system")
(content . ,system-message))
((role . "user")
(content . ,prompt))])
(temperature . ,temperature)
(max_tokens . ,max-tokens)
(top_p . ,top-p)
(frequency_penalty . 1.2)
(repeat_penalty . 1.2)
;; (top_k . ,top-k)
;; (min_p . ,min-p)
(stream . ,stream))))
'utf-8))
(buffer (url-retrieve-synchronously llm-endpoint)))
(rcd-llm-response buffer))))))
;;; Hugging Face
(defvar rcd-llm-last-json nil
"Last JSON sent to LLM.")
(defun rcd-llm-huggingface (prompt &optional memory rcd-llm-model temperature max-tokens top-p stream)
"send PROMPT to Hugging Face API with specified parameters.
Optional MEMORY, RCD-LLM-MODEL, TEMPERATURE, MAX-TOKENS, TOP-P, and STREAM can be used."
(let* ((rcd-llm-model (or rcd-llm-model "Qwen/Qwen2.5-Coder-32B-Instruct"))
(temperature (or temperature 0.5))
(max-tokens (or max-tokens 2048))
(top-p (or top-p 0.7))
(stream (if stream t :json-false))
(url-request-method "POST")
(url-request-extra-headers
'(("Content-Type" . "application/json")
("Authorization" . "Bearer APIKEY")))
(url-request-data
(encode-coding-string
(setq rcd-llm-last-json
(json-encode
`((model . ,rcd-llm-model)
(messages . [((role . "user") (content . ,prompt))])
(temperature . ,temperature)
(max_tokens . ,max-tokens)
(top_p . ,top-p)
(stream . ,stream))))
'utf-8))
(buffer (url-retrieve-synchronously
"https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions")))
(rcd-llm-response buffer)))
;;; Mistral
(defun rcd-llm-mistral (prompt &optional memory rcd-llm-model)
"Send PROMPT to Mistral.
Optional MEMORY and MODEL may be used."
  (let* ((rcd-llm-model (or rcd-llm-model
                            ;; Alternative: "mistral-large-latest".
                            "open-mistral-7b"))
(buffer (let ((url-request-method "POST")
(url-request-extra-headers
'(("Content-Type" . "application/json")
("Authorization" . "Bearer APIKEY")))
(url-request-data
(encode-coding-string
(json-encode
`((model . ,rcd-llm-model)
;; (agent_id . "ag:6bf709a1:20250103:helpful-mistral-7b:adb53e32")
(messages . [
((role . "user")
(content . ,prompt))])))
'utf-8)))
(url-retrieve-synchronously
"https://api.mistral.ai/v1/chat/completions"))))
(rcd-llm-response buffer)))
(global-set-key (kbd "C-<f5>") #'rcd-llm)
;;; Modification of other mode maps
(defun rcd-llm-other-window (&optional prompt)
"Return result of `rcd-llm' in other window."
(interactive)
(let ((prompt (or prompt (rcd-ask-get rcd-llm-prompt)))
(current-prefix-arg '(64)))
(rcd-llm prompt)))
(defun rcd-llm-define-word-or-region-other-window (&optional word)
"Define current word or region by using `rcd-llm' in other window."
(interactive)
(let* ((region (rcd-region-string))
(word (or word (current-word)))
(prompt (cond (region (concat "What is meaning of: " region))
(word (concat "Define this word: " word))
(t (rcd-ask-get (concat "No region or word found, " rcd-llm-prompt))))))
(rcd-llm-other-window prompt)))
(with-eval-after-load "wordnut"
(keymap-set wordnut-mode-map "L" #'rcd-llm-define-word-or-region-other-window))
;; RCD LLM Dashboard
(defun rcd-llm-dashboard-header ()
"RCD LLM Dashboard header."
(rcd-dashboard-heading
(concat
(format "⭐ %s ⭐ Dashboard ⭐ " rcd-program-name-full)
(or user-full-name user-login-name user-real-login-name "")
"\n")))
(defvar rcd-llm-prompts-general-information
'("What is the definition of `%s'?"
"Can you provide an overview of %s?"
"How does %s work?"
"What are the benefits of %s?"
"Can you give me some examples of %s?")
"LLM General Information Prompts")
(defvar rcd-llm-prompts-trends-and-analysis
'("What are the current trends in %s?"
"Can you analyze the impact of %s on %s?"
"What are the top %s factors influencing %s?")
"LLM Trends and Analysis Prompts")
;; (rcd-button-insert "What is: " (lambda (_)
;; (let ((current-prefix-arg '(64)))
;; (rcd-llm (concat "What is?" (rcd-ask-get "What is? ") "?"))))))
(defun rcd-llm-dashboard-basics ()
"RCD LLM Basics"
(insert "** Large Language Models\n\n")
(insert "*** LLM Settings\n\n")
(insert "**** ")
(rcd-button-insert "LLM Providers" (lambda (_) (rcd-db-table-edit-by-name "llmproviders")))
(insert "\n**** ")
(rcd-button-insert "LLM Endpoints" (lambda (_) (rcd-db-table-edit-by-name "llmendpoints")))
(insert "\n**** ")
(rcd-button-insert "LLM Models" (lambda (_) (rcd-db-table-edit-by-name "llmmodels")))
(insert "\n\n** LLM Prompts\n\n")
(insert "*** General information\n\n")
(insert "*** Trends and Analysis\n\n")
(insert "*** Data and Statistics\n\n")
(insert "*** Recommendation and Advice\n\n")
(insert "*** Comparison and Evaluation\n\n")
(insert "*** Creative and Open-Ended\n\n"))
(defun rcd-llm-dashboard ()
"RCD Notes Dashboard."
(interactive)
(let ((rcd-dashboard-buffer-name "RCD LLM Dashboard")
(rcd-dashboard-always-refresh
(cond ((equal '(4) current-prefix-arg) t)
(t rcd-dashboard-always-refresh))))
(cond ((equal 0 current-prefix-arg) (help-for-help))
(t (rcd-dashboard '(rcd-llm-dashboard-header
rcd-llm-dashboard-basics)
"RCD LLM Dashboard")
(rcd-speak "Large Language Model Dashboard")))))
(provide 'rcd-llm)
;;; rcd-llm.el ends here
[-- Attachment #3: Type: text/plain, Size: 92 bytes --]
---
via emacs-tangents mailing list (https://lists.gnu.org/mailman/listinfo/emacs-tangents)