Project: llamacpp

Language: C++
Fuzzer count: 6
Lines of code: 77664
Lines covered: 7059
Code coverage: 9.09%
Static reachability: 38.65%
Build status: fuzzers succeeding; code coverage succeeding; Fuzz Introspector succeeding
Fuzz Introspector report: from 2026-01-11
Fuzzer                Code coverage (lines)            Latest report
fuzz_apply_template   1.03% (avg: 1.09%, max: 1.15%)   2026-01-11
fuzz_json_to_grammar  5.61% (avg: 5.58%, max: 5.74%)   2026-01-11
fuzz_load_model       0.56% (avg: 0.59%, max: 0.63%)   2026-01-11
fuzz_structured       0.62% (avg: 0.66%, max: 0.70%)   2026-01-11

Historical Progression

Per Fuzzer Progression

This section shows graphs of the coverage results per fuzz target over the past 30 days. Each graph includes the coverage percentage, the total number of lines, the number of lines covered, and the number of coverage inputs.

Functions of interest to fuzz

This section outlines functions that may be of interest to fuzz. They are selected by ranking functions that have high accumulated cyclomatic complexity but currently exhibit low code coverage. The complexity is calculated from the function itself plus the functions it calls, i.e. the tree of code that the function triggers.


Only a small portion of the introspection information available for this project is shown here. Please consult the Fuzz Introspector report for more information, e.g. the introspection table of all functions in the target project.

Function name, followed by source file, accumulated cyclomatic complexity, and code coverage:

common_init_from_params(common_params&)
    /src/llama.cpp/common/common.cpp, complexity 19805, coverage 0.0%
common_init_result::common_init_result(common_params&)
    /src/llama.cpp/common/common.cpp, complexity 18663, coverage 0.0%
llama_params_fit
    /src/llama.cpp/src/llama.cpp, complexity 17020, coverage 0.0%
llama_params_fit_impl(char const*, llama_model_params*, llama_context_params*, float*, llama_model_tensor_buft_override*, unsigned long*, unsigned int, ggml_log_level)
    /src/llama.cpp/src/llama.cpp, complexity 17002, coverage 0.0%
llama_params_fit_impl(char const*, llama_model_params*, llama_context_params*, float*, llama_model_tensor_buft_override*, unsigned long*, unsigned int, ggml_log_level)::$_2::operator()(char const*, std::__1::vector<llama_params_fit_impl(char const*, llama_model_params*, llama_context_params*, float*, llama_model_tensor_buft_override*, unsigned long*, unsigned int, ggml_log_level)::ngl_t, std::__1::allocator<llama_params_fit_impl(char const*, llama_model_params*, llama_context_params*, float*, llama_model_tensor_buft_override*, unsigned long*, unsigned int, ggml_log_level)::ngl_t>> const&, std::__1::vector<ggml_backend_buffer_type*, std::__1::allocator<ggml_backend_buffer_type*>> const&) const
    /src/llama.cpp/src/llama.cpp, complexity 16516, coverage 0.0%
llama_get_device_memory_data(char const*, llama_model_params const*, llama_context_params const*, std::__1::vector<ggml_backend_device*, std::__1::allocator<ggml_backend_device*>>&, unsigned int&, unsigned int&, unsigned int&, ggml_log_level)
    /src/llama.cpp/src/llama.cpp, complexity 16405, coverage 0.0%
llama_model_load_from_splits
    /src/llama.cpp/src/llama.cpp, complexity 13813, coverage 0.0%
llama_model::load_tensors(llama_model_loader&)
    /src/llama.cpp/src/llama-model.cpp, complexity 9020, coverage 0.0%
ggml_graph_compute_with_ctx
    /src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c, complexity 7474, coverage 0.0%
ggml_backend_cpu_graph_compute(ggml_backend*, ggml_cgraph*)
    /src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp, complexity 7469, coverage 0.0%