Project: llamacpp

Language: C++
OSS-Fuzz project: link
Project repository: link
Build status (fuzzers): succeeding (build log)
Build status (code coverage): succeeding (build log)
Build status (Fuzz Introspector): succeeding (build log)
Fuzzer count: 6
Lines of code: 79257
Lines covered: 7112
Code coverage: 8.97%
Code coverage report: link
Static reachability: 38.54%
Fuzz Introspector report: link (from 2026-02-01)
Fuzzer               | Code coverage (lines)          | Latest report | Comments
fuzz_apply_template  | 1.04% (avg: 1.04%, max: 1.1%)  | 2026-02-01    |
fuzz_json_to_grammar | 5.5% (avg: 5.57%, max: 5.71%)  | 2026-02-01    |
fuzz_load_model      | 0.55% (avg: 0.56%, max: 0.59%) | 2026-02-01    |
fuzz_structured      | 0.61% (avg: 0.62%, max: 0.66%) | 2026-02-01    |

Historical Progression

Per Fuzzer Progression

This section shows graphs of the coverage results for each fuzz target over the past 30 days: the coverage percentage, the total and covered line counts, and the number of coverage inputs.

Functions of interest to fuzz

This section outlines functions that may be of interest to fuzz. They are ranked by accumulated cyclomatic complexity among functions that currently exhibit low code coverage; the complexity is accumulated over the function itself and all functions it calls, i.e. the tree of code that the function triggers.


Only a small portion of the introspection information available for this project is shown here. Please consult the Fuzz Introspector report for more detail, e.g. the introspection table of all functions in the target project, available here.

Function name | Function source file | Accumulated cyclomatic complexity | Code coverage
common_init_from_params(common_params&) | /src/llama.cpp/common/common.cpp | 20098 | 0.0%
common_init_result::common_init_result(common_params&) | /src/llama.cpp/common/common.cpp | 18943 | 0.0%
llama_params_fit | /src/llama.cpp/src/llama.cpp | 17291 | 0.0%
llama_params_fit_impl(char const*, llama_model_params*, llama_context_params*, float*, llama_model_tensor_buft_override*, unsigned long*, unsigned int, ggml_log_level) | /src/llama.cpp/src/llama.cpp | 17273 | 0.0%
llama_params_fit_impl(char const*, llama_model_params*, llama_context_params*, float*, llama_model_tensor_buft_override*, unsigned long*, unsigned int, ggml_log_level)::$_2::operator()(char const*, std::__1::vector<llama_params_fit_impl(char const*, llama_model_params*, llama_context_params*, float*, llama_model_tensor_buft_override*, unsigned long*, unsigned int, ggml_log_level)::ngl_t, std::__1::allocator<llama_params_fit_impl(char const*, llama_model_params*, llama_context_params*, float*, llama_model_tensor_buft_override*, unsigned long*, unsigned int, ggml_log_level)::ngl_t>> const&, std::__1::vector<ggml_backend_buffer_type*, std::__1::allocator<ggml_backend_buffer_type*>> const&) const | /src/llama.cpp/src/llama.cpp | 16783 | 0.0%
llama_get_device_memory_data(char const*, llama_model_params const*, llama_context_params const*, std::__1::vector<ggml_backend_device*, std::__1::allocator<ggml_backend_device*>>&, unsigned int&, unsigned int&, unsigned int&, ggml_log_level) | /src/llama.cpp/src/llama.cpp | 16672 | 0.0%
llama_model_load_from_splits | /src/llama.cpp/src/llama.cpp | 14000 | 0.0%
llama_model::load_tensors(llama_model_loader&) | /src/llama.cpp/src/llama-model.cpp | 9167 | 0.0%
ggml_graph_compute_with_ctx | /src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c | 7505 | 0.0%
ggml_backend_cpu_graph_compute(ggml_backend*, ggml_cgraph*) | /src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp | 7500 | 0.0%
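All of the functions above sit on the model-loading and initialization path, which the existing harnesses barely reach. A new harness targeting that path would follow the usual libFuzzer pattern of spilling the fuzzer's byte buffer into a temporary file, since the loaders take a path. The sketch below is only a skeleton, not the project's actual harness: the llama.cpp calls are shown as comments (llama_model_load_from_file is the single-file counterpart of llama_model_load_from_splits from the table), so the snippet itself stays self-contained.

```cpp
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <unistd.h>

// libFuzzer entry point: called once per generated input.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    if (size == 0) return 0;

    // File-oriented APIs such as the GGUF model loader expect a path,
    // so write the fuzz input to a temporary file first.
    char path[] = "/tmp/fuzz_model_XXXXXX";
    int fd = mkstemp(path);
    if (fd == -1) return 0;
    FILE *f = fdopen(fd, "wb");
    if (!f) { close(fd); remove(path); return 0; }
    fwrite(data, 1, size, f);
    fclose(f);

    // A real harness would now invoke the target, e.g. (hedged sketch):
    //   llama_model_params params = llama_model_default_params();
    //   llama_model * m = llama_model_load_from_file(path, params);
    //   if (m) llama_model_free(m);
    // Omitted here so the skeleton compiles without llama.cpp.

    remove(path);
    return 0;
}
```

Returning 0 in every branch keeps the input in the corpus rotation; only crashes and sanitizer reports are treated as findings.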