Project: llamacpp

Language: C++
OSS-Fuzz project: link
Project repository: link
Build status:
  Fuzzers: succeeding (build log)
  Code coverage: succeeding (build log)
  Fuzz Introspector: succeeding (build log)
Fuzzer count: 6
Lines of code: 75888
Lines covered: 7026
Code coverage: 9.26%
Code coverage report: report link
Static reachability: 38.67%
Fuzz Introspector report: report link (from 2025-12-28)
Fuzzer               | Code coverage (lines)          | Latest report
fuzz_apply_template  | 1.09% (avg: 1.13%, max: 1.15%) | 2025-12-28
fuzz_json_to_grammar | 5.73% (avg: 5.57%, max: 5.74%) | 2025-12-28
fuzz_load_model      | 0.60% (avg: 0.62%, max: 0.63%) | 2025-12-28
fuzz_structured      | 0.66% (avg: 0.69%, max: 0.70%) | 2025-12-28

Historical Progression

Per Fuzzer Progression

This section shows graphs of the coverage results for each fuzz target over the past 30 days: the coverage percentage, the total number of lines, the number of covered lines, and the number of coverage inputs in the corpus.

Functions of interest to fuzz

This section outlines functions that may be of interest to fuzz. Functions are ranked by combining high complexity with low current code coverage. The complexity is accumulated: it counts the function itself plus every function reachable from it, i.e. the tree of code that the function triggers.


Only a minor amount of the introspection information available for this project is shown here. Please consult the Fuzz Introspector report for more information, e.g. the introspection table of all functions in the target project, available here.

Function name | Function source file | Accumulated cyclomatic complexity | Code coverage
common_init_from_params(common_params&) | /src/llama.cpp/common/common.cpp | 19207 | 0.0%
common_init_result::common_init_result(common_params&) | /src/llama.cpp/common/common.cpp | 17731 | 0.0%
llama_params_fit | /src/llama.cpp/src/llama.cpp | 16567 | 0.0%
llama_params_fit_impl(char const*, llama_model_params*, llama_context_params*, float*, llama_model_tensor_buft_override*, unsigned long, unsigned int, ggml_log_level) | /src/llama.cpp/src/llama.cpp | 16549 | 0.0%
llama_params_fit_impl(char const*, llama_model_params*, llama_context_params*, float*, llama_model_tensor_buft_override*, unsigned long, unsigned int, ggml_log_level)::$_2::operator()(char const*, std::__1::vector<llama_params_fit_impl(char const*, llama_model_params*, llama_context_params*, float*, llama_model_tensor_buft_override*, unsigned long, unsigned int, ggml_log_level)::ngl_t, std::__1::allocator<llama_params_fit_impl(char const*, llama_model_params*, llama_context_params*, float*, llama_model_tensor_buft_override*, unsigned long, unsigned int, ggml_log_level)::ngl_t>> const&, std::__1::vector<ggml_backend_buffer_type*, std::__1::allocator<ggml_backend_buffer_type*>> const&) const | /src/llama.cpp/src/llama.cpp | 16111 | 0.0%
llama_get_device_memory_data(char const*, llama_model_params const*, llama_context_params const*, std::__1::vector<ggml_backend_device*, std::__1::allocator<ggml_backend_device*>>&, unsigned int&, unsigned int&, unsigned int&, ggml_log_level) | /src/llama.cpp/src/llama.cpp | 16002 | 0.0%
llama_model_load_from_splits | /src/llama.cpp/src/llama.cpp | 13593 | 0.0%
llama_model::load_tensors(llama_model_loader&) | /src/llama.cpp/src/llama-model.cpp | 8853 | 0.0%
ggml_graph_compute_with_ctx | /src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c | 7474 | 0.0%
ggml_backend_cpu_graph_compute(ggml_backend*, ggml_cgraph*) | /src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp | 7469 | 0.0%
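Functions like the ones listed above are reached from a libFuzzer entry point. The sketch below shows only the standard harness shape; parse_input is a hypothetical placeholder standing in for whichever llama.cpp API a real harness would drive (e.g. model loading or template parsing), so that the example stays self-contained:

```cpp
#include <cstddef>
#include <cstdint>
#include <string>

// Hypothetical stand-in for the real target function; a real llama.cpp
// harness would call into the library here instead.
static bool parse_input(const std::string& text) {
    // Trivial placeholder logic so the sketch compiles on its own.
    return text.size() >= 2 && text.front() == '{' && text.back() == '}';
}

// Standard libFuzzer entry point: the engine calls this repeatedly
// with mutated inputs and watches for crashes and sanitizer reports.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
    // Copy the raw bytes into a string; the input is not NUL-terminated.
    std::string text(reinterpret_cast<const char*>(data), size);
    parse_input(text);  // exercise the target; crashes/UB are the signal
    return 0;           // non-zero return values are reserved by libFuzzer
}
```

Such a harness is built with `clang++ -fsanitize=fuzzer,address`; libFuzzer supplies `main` and the input-mutation loop.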