Actions: Xarbirus/llama.cpp

flake8 Lint

111 workflow runs


#111  server: Add "tokens per second" information in the backend (#10548)
      Commit 64ed209 pushed by Xarbirus to master, December 2, 2024 22:01 (26s)

#110  llama : add comments about KV cache state after error
      Commit 9ef5d08 pushed by ggerganov to ggml_status_to_user, November 13, 2024 17:59 (24s)

#109  llama: updated comments
      Commit 0638c44 pushed by Xarbirus to ggml_status_to_user, November 10, 2024 22:08 (22s)

#108  llama: correct reverting of the entire batch.
      Commit ee599f9 pushed by Xarbirus to ggml_status_to_user, November 9, 2024 18:39 (24s)

#107  metal : reorder write loop in mul mat kernel + style (#10231)
      Commit 6423c65 pushed by Xarbirus to master, November 9, 2024 16:25 (26s)

(Two runs between #107 and #103 are missing titles in this capture: October 23, 2024 19:23, 18m 58s and October 23, 2024 19:20, 22m 16s.)

#103  sync : ggml
      Commit 190a37d pushed by Xarbirus to master, October 23, 2024 19:19 (27s)

#102  llama: correct reverting of the entire batch.
      Commit bbf27cc pushed by Xarbirus to ggml_status_to_user, October 22, 2024 17:57 (18s)

#101  llama: restore a kv_cache in case of failed computation
      Commit 0c05c60 pushed by Xarbirus to ggml_status_to_user, October 21, 2024 08:51 (24s)

#100  llama: reverting kv_cache in case of failed compute
      Commit 059e78c pushed by Xarbirus to ggml_status_to_user, October 14, 2024 13:37 (3m 54s)

#99   llama: reverting kv_cache in case of failed compute
      Commit 5535683 pushed by Xarbirus to ggml_status_to_user, October 14, 2024 13:32 (9m 2s)

#98   server : handle "logprobs" field with false value (#9871)
      Commit a89f75e pushed by Xarbirus to master, October 14, 2024 13:17 (29s)

#97   llama: reverting kv_cache in case of failed compute
      Commit 20510ea pushed by Xarbirus to ggml_status_to_user, September 24, 2024 19:12 (21s)

#96   readme : add programmable prompt engine language CLI (#9599)
      Commit 1d48e98 pushed by Xarbirus to master, September 23, 2024 18:37 (27s)

#93   llama-bench: correct argument parsing error message
      Commit 7e7f8b9 pushed by Xarbirus to llama-bench-arg-parsing, September 17, 2024 18:30 (20s)

#92   llama: correct vocab size for logging
      Commit 93ef595 pushed by Xarbirus to fixed_n_vocab, September 17, 2024 09:23 (21s)

#90   llama: log warning if there's no vocab_size in metadata
      Commit 9704f0e pushed by Xarbirus to fixed_n_vocab, September 16, 2024 17:30 (19s)

#89   llama: updated error output for llama_decode_internal and `llama_en…
      Commit 544b266 pushed by Xarbirus to fixed_n_vocab, September 16, 2024 16:36 (10m 28s)

#88   llama: fixed n_vocab for no_vocab models
      Commit a5e87bf pushed by Xarbirus to fixed_n_vocab, September 16, 2024 16:30 (14m 14s)

#87   ggml : move common CPU backend impl to new header (#9509)
      Commit 23e0d70 pushed by Xarbirus to master, September 16, 2024 16:25 (23s)