Misc. bug: llama cpp always outputs a line of information and then exits #19083

@df56h

Description

Name and Version

b7813

Operating systems

Windows

Which llama.cpp modules do you know to be affected?

No response

Command line

llama-cli.exe --help
llama-cli.exe --version
llama-cli -hf ggml-org/gemma-3-1b-it-GGUF

Problem description & steps to reproduce

I downloaded llama-b7813-bin-win-cpu-x64 from GitHub, unzipped it, and ran it from cmd, but it doesn't work.

Every command prints a single line of log output and then exits, with no further output and no prompt.

C:\Users\jack\Downloads\llama-b7813-bin-win-cpu-x64>llama-cli.exe --help
load_backend: loaded RPC backend from C:\Users\jack\Downloads\llama-b7813-bin-win-cpu-x64\ggml-rpc.dll

No matter what command I enter, it always prints only this one line. For example:

llama-cli.exe --version
load_backend: loaded RPC backend from C:\Users\jack\Downloads\llama-b7813-bin-win-cpu-x64\ggml-rpc.dll

I have installed the Visual C++ Redistributable 2015-2022.

My system is Windows 10, version 1909.
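One diagnostic step worth adding to the report (my own suggestion, not something from the llama.cpp documentation): checking the process exit code immediately after the silent exit shows whether the binary is dying with a loader error rather than exiting cleanly. In cmd:

```
llama-cli.exe --version
echo %ERRORLEVEL%
```

An exit code of `0` means the process finished normally, while a large negative value such as `-1073741515` (NTSTATUS `0xC0000135`, STATUS_DLL_NOT_FOUND) would point to a missing or incompatible DLL, which is a plausible failure mode on older Windows 10 builds like 1909.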

First Bad Commit

No response

Relevant log output
