* announce: guile_llama_cpp 0.1 release
From: Andy Tai @ 2024-06-03 4:45 UTC
To: guile-user
# guile_llama_cpp
GNU Guile binding for llama.cpp
This is version 0.1, Copyright 2024 Li-Cheng (Andy) Tai, atai@atai.org
Available as https://codeberg.org/atai/guile_llama_cpp/releases/download/0.1/guile_llama_cpp-0.1.tar.gz
Guile_llama_cpp wraps the llama.cpp APIs so that llama.cpp can be
accessed from Guile scripts and programs, in a manner similar to how
llama-cpp-python allows the use of llama.cpp in Python programs.
Currently a simple Guile script is provided to allow a simple "chat"
with an LLM in GGUF format.
## setup and build
guile_llama_cpp is written in GNU Guile and C++ and requires
Swig 4.0 or later, GNU Guile 3.0, and llama.cpp (obviously)
installed on your system.
From source, guile_llama_cpp can be built via the usual GNU conventions:

export LLAMA_CFLAGS="-I<llama_install_dir>/include"
export LLAMA_LIBS="-L<llama_install_dir>/lib -lllama"
./configure --prefix=<install dir>
make
make install
Once llama.cpp provides pkg-config support in the future, the first
two "export" lines can be omitted.
If you are running GNU Guix on your system, you can get a shell with
all needed dependencies set up with
guix shell -D -f guix.scm
and then use the usual
configure && make && make install
commands to build.
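For example, the whole build can also be driven from outside the Guix
shell in one pass; this is just a sketch and assumes the guix.scm file
from the release tarball is in the current directory:

guix shell -D -f guix.scm -- ./configure --prefix=$HOME/.local
guix shell -D -f guix.scm -- make
guix shell -D -f guix.scm -- make install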
## run
To use guile_llama_cpp to chat with an LLM (Large Language Model), you
first need to download an LLM in GGUF format.
See instructions on the web such as
https://stackoverflow.com/questions/67595500/how-to-download-a-model-from-huggingface
As an example, to use a "smaller" LLM, "Phi-3-mini" from Microsoft, we
would first download the model in GGUF format via wget:
wget https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf/resolve/main/Phi-3-mini-4k-instruct-q4.gguf
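Alternatively, if the huggingface_hub Python package and its
huggingface-cli tool are installed, the same file can be fetched with
it; this is just an optional substitute for the wget step above, not
something guile_llama_cpp requires:

huggingface-cli download microsoft/Phi-3-mini-4k-instruct-gguf \
    Phi-3-mini-4k-instruct-q4.gguf --local-dir .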
Then you can chat with it, in the build directory:

./pre-inst-env simple.scm "What are the planets?" Phi-3-mini-4k-instruct-q4.gguf
The general form of a chat with a model is to invoke the script
scripts/simple.scm:

simple.scm prompt_text model_file_path

In the build directory, prepend the command with

./pre-inst-env

as it sets up the needed paths and environment variables for proper
Guile invocation.
Currently, the supported chat is limited; you will see the replies
from the LLM cut off after a sentence or so.
This output length issue will be addressed in future releases.
## roadmap
* support for continuous chat, with long replies
* support for exposing the LLM as a web endpoint, using a web server
  built in Guile, so the LLM can be reached via a web interface and
  remote users can chat with it
* support for embedding LLMs in Guile programs for scenarios like
  LLM-driven software agents
## license
Copyright 2024 Li-Cheng (Andy) Tai
atai@atai.org
This program is licensed under the GNU Lesser General Public License, version 3
or later, as published by the Free Software Foundation. See the license
text in the file COPYING.
guile_llama_cpp is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General
Public License for more details.
Hopefully this program is useful.
* Re: announce: guile_llama_cpp 0.1 release
From: Nala Ginrut @ 2024-06-03 5:39 UTC
To: Andy Tai; +Cc: Guile User
Thanks for the work!
I've tried a half-baked PoC but never had time to finish it.
So glad to see someone could finish it!
Best regards.