Actions: ggerganov/llama.cpp

Workflow: CI

11,642 workflow runs
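For reference, the same run list shown on this page can also be pulled programmatically. The snippet below is only a minimal sketch, assuming the third-party `requests` package and an unauthenticated call to GitHub's REST endpoint for workflow runs (`/repos/{owner}/{repo}/actions/runs`); it is not part of the original page and the field selection simply mirrors the columns visible above.

```python
# Sketch: list recent runs of the "CI" workflow for ggerganov/llama.cpp
# via the GitHub REST API. Unauthenticated requests are rate-limited;
# an Authorization header with a token can be added if needed.
import requests

url = "https://api.github.com/repos/ggerganov/llama.cpp/actions/runs"
resp = requests.get(
    url,
    params={"per_page": 25},
    headers={"Accept": "application/vnd.github+json"},
    timeout=30,
)
resp.raise_for_status()

for run in resp.json()["workflow_runs"]:
    # Keep only runs of the workflow named "CI", mirroring the view above.
    if run["name"] != "CI":
        continue
    print(f'CI #{run["run_number"]}: {run["display_title"]} '
          f'({run["event"]}, {run["status"]}, {run["head_branch"]})')
```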

make : add missing rules for ggml sources
CI #16747: Pull request #10333 synchronize by ggerganov
November 16, 2024 16:04 · In progress · gg/make-add-missing-rules
make : add missing rules for ggml sources
CI #16746: Pull request #10333 synchronize by ggerganov
November 16, 2024 16:00 · 4m 50s · gg/make-add-missing-rules
server: (web UI) Add samplers sequence customization (#10255)
CI #16745: Commit bcdb7a2 pushed by ngxson
November 16, 2024 13:26 · 57m 48s · master
make : add missing rules for ggml sources
CI #16744: Pull request #10333 opened by ggerganov
November 16, 2024 08:59 · 50m 23s · gg/make-add-missing-rules
vulkan: Optimize some mat-vec mul quant shaders (#10296)
CI #16742: Commit 772703c pushed by 0cc4m
November 16, 2024 06:27 · 57m 31s · master
Introduce ramalama-core
CI #16741: Pull request #10291 synchronize by ericcurtin
November 16, 2024 05:45 · 54m 15s · ericcurtin:simple-chat-smart
Introduce ramalama-core
CI #16740: Pull request #10291 synchronize by ericcurtin
November 16, 2024 05:45 · 1m 0s · ericcurtin:simple-chat-smart
Introduce ramalama-core
CI #16739: Pull request #10291 synchronize by ericcurtin
November 16, 2024 04:59 · 45m 54s · ericcurtin:simple-chat-smart
Introduce ramalama-core
CI #16738: Pull request #10291 synchronize by ericcurtin
November 16, 2024 04:58 · 55s · ericcurtin:simple-chat-smart
Introduce ramalama-core
CI #16737: Pull request #10291 synchronize by ericcurtin
November 16, 2024 04:52 · 6m 30s · ericcurtin:simple-chat-smart
Introduce ramalama-core
CI #16736: Pull request #10291 synchronize by ericcurtin
November 16, 2024 04:43 · 9m 36s · ericcurtin:simple-chat-smart
vulkan: change an assertion
CI #16734: Pull request #10328 opened by FirstTimeEZ
November 16, 2024 04:18 · 54m 41s · FirstTimeEZ:vulkan-assertion
Introduce ramalama-core
CI #16732: Pull request #10291 synchronize by ericcurtin
November 16, 2024 04:10 · 33m 23s · ericcurtin:simple-chat-smart
Introduce ramalama-core
CI #16731: Pull request #10291 synchronize by ericcurtin
November 16, 2024 03:43 · 26m 49s · ericcurtin:simple-chat-smart
Introduce ramalama-core
CI #16730: Pull request #10291 synchronize by ericcurtin
November 16, 2024 03:27 · 16m 56s · ericcurtin:simple-chat-smart
Introduce ramalama-core
CI #16729: Pull request #10291 synchronize by ericcurtin
November 16, 2024 03:24 · 3m 13s · ericcurtin:simple-chat-smart
Introduce ramalama-core
CI #16728: Pull request #10291 synchronize by ericcurtin
November 16, 2024 03:15 · 8m 52s · ericcurtin:simple-chat-smart
ggml : optimize Q4_0 into Q4_0_X_Y repack (#10324)
CI #16727: Commit 1e58ee1 pushed by slaren
November 16, 2024 00:53 · 1h 29m 56s · master
llama : save number of parameters and the size in llama_model (#10286)
CI #16726: Commit 89e4caa pushed by slaren
November 16, 2024 00:42 · 1h 17m 29s · master
save number of parameters and the size in llama_model, fixes #10285
CI #16725: Pull request #10286 synchronize by FirstTimeEZ
November 16, 2024 00:33 · 55m 31s · FirstTimeEZ:patch-2
ggml: Optimize Q4_0 into Q4_0_X_Y repack
CI #16723: Pull request #10324 opened by eddnjjn
November 15, 2024 21:56 · 1h 6m 38s · eddnjjn:repack_optimization