Name and Version
b7813
Operating systems
Windows
Which llama.cpp modules do you know to be affected?
No response
Command line
llama-cli.exe --help
llama-cli.exe --version
llama-cli -hf ggml-org/gemma-3-1b-it-GGUF
Problem description & steps to reproduce
I downloaded llama-b7813-bin-win-cpu-x64 from GitHub, unzipped it, and ran the commands below from cmd, but it doesn't seem to work.
It just prints a single line and then exits, without showing any prompt.
C:\Users\jack\Downloads\llama-b7813-bin-win-cpu-x64>llama-cli.exe --help
load_backend: loaded RPC backend from C:\Users\jack\Downloads\llama-b7813-bin-win-cpu-x64\ggml-rpc.dll
No matter which command I enter, it always outputs only this line. For example:
llama-cli.exe --version
load_backend: loaded RPC backend from C:\Users\jack\Downloads\llama-b7813-bin-win-cpu-x64\ggml-rpc.dll
I installed the VC++ Redistributable 2015-2022.
My system is Windows 10, version 1909.
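For completeness, here is the full sequence I ran (a minimal repro sketch; the path is just where I unzipped the release and is otherwise arbitrary):

```
:: Windows 10 1909, cmd.exe, inside the extracted release folder
cd C:\Users\jack\Downloads\llama-b7813-bin-win-cpu-x64

:: each of these prints only the load_backend line and then exits
llama-cli.exe --help
llama-cli.exe --version
llama-cli.exe -hf ggml-org/gemma-3-1b-it-GGUF
```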
First Bad Commit
No response
Relevant log output