API
Index: CTBenchmarks.benchmark, CTBenchmarks.benchmark_data, CTBenchmarks.build_payload, CTBenchmarks.filter_models_for_backend, CTBenchmarks.generate_metadata, CTBenchmarks.is_cuda_on, CTBenchmarks.prettymemory, CTBenchmarks.prettytime, CTBenchmarks.print_benchmark_line, CTBenchmarks.run, CTBenchmarks.sanitize_for_json, CTBenchmarks.save_json, CTBenchmarks.set_print_level, CTBenchmarks.solve_and_extract_data, CTBenchmarks.strip_benchmark_value
CTBenchmarks.benchmark — Method

```julia
benchmark(;
    outpath,
    problems,
    solver_models,
    grid_sizes,
    disc_methods,
    tol,
    ipopt_mu_strategy,
    print_trace,
    max_iter,
    max_wall_time,
    grid_size_max_cpu
) -> Nothing
```

Run benchmarks on optimal control problems and save results to a JSON file.
This function performs the following steps:
- Detects CUDA availability and filters out `:exa_gpu` if CUDA is not functional
- Runs benchmarks using `benchmark_data()` to generate a DataFrame of results
- Collects environment metadata (Julia version, OS, machine, timestamp)
- Builds a JSON-friendly payload combining results and metadata
- Saves the payload to `outpath/data.json` as pretty-printed JSON
The JSON file can be loaded and converted back to a DataFrame using:

```julia
using JSON, DataFrames
data = JSON.parsefile("path/to/data.json")
df = DataFrame(data["results"])
```

When run in the GitHub Actions workflow, Project.toml and Manifest.toml are automatically copied to the output directory by the workflow itself. This ensures reproducibility of benchmark results.
This function returns `nothing`. The output path is managed by the calling `main()` function in the benchmark scripts, which returns the `outpath` for the workflow to use.
Arguments

- `outpath`: Path to the directory where results will be saved (or `nothing` to skip saving)
- `problems`: Vector of problem names (Symbols)
- `solver_models`: Vector of Pairs mapping solver => models (e.g., `[:ipopt => [:JuMP, :adnlp], :madnlp => [:exa, :exa_gpu]]`)
- `grid_sizes`: Vector of grid sizes (Int)
- `disc_methods`: Vector of discretization methods (Symbols)
- `tol`: Solver tolerance (Float64)
- `ipopt_mu_strategy`: Mu strategy for Ipopt (String)
- `print_trace`: Bool; whether to print solver output (for debugging)
- `max_iter`: Maximum number of iterations (Int)
- `max_wall_time`: Maximum wall time in seconds (Float64)
- `grid_size_max_cpu`: Maximum grid size for CPU models (Int)
Returns
`nothing`
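A minimal sketch of a call; the problem names, grid sizes, and solver settings below are illustrative choices, not package defaults:

```julia
using CTBenchmarks

CTBenchmarks.benchmark(;
    outpath = "results",   # or `nothing` to skip saving
    problems = [:beam, :chain],
    solver_models = [:ipopt => [:JuMP, :adnlp], :madnlp => [:exa, :exa_gpu]],
    grid_sizes = [100, 500],
    disc_methods = [:trapeze],
    tol = 1e-8,
    ipopt_mu_strategy = "adaptive",
    print_trace = false,
    max_iter = 1000,
    max_wall_time = 300.0,
    grid_size_max_cpu = 1000,
)
```

This writes `results/data.json`, which can be reloaded with the JSON/DataFrames snippet shown above.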
CTBenchmarks.benchmark_data — Method

```julia
benchmark_data(;
    problems,
    solver_models,
    grid_sizes,
    disc_methods,
    tol,
    ipopt_mu_strategy,
    print_trace,
    max_iter,
    max_wall_time,
    grid_size_max_cpu
) -> DataFrame
```

Run benchmarks on optimal control problems and return results as a DataFrame.
For each combination of problem, solver, model, and grid size, this function:
- Sets up and solves the optimization problem
- Captures timing and memory statistics using `@btimed` or `CUDA.@timed`
- Extracts solver statistics (objective value, iterations)
- Stores all data in a DataFrame row
Arguments

- `problems`: Vector of problem names (Symbols)
- `solver_models`: Vector of Pairs mapping solver => models (e.g., `[:ipopt => [:JuMP, :adnlp], :madnlp => [:exa, :exa_gpu]]`)
- `grid_sizes`: Vector of grid sizes (Int)
- `disc_methods`: Vector of discretization methods (Symbols)
- `tol`: Solver tolerance (Float64)
- `ipopt_mu_strategy`: Mu strategy for Ipopt (String)
- `print_trace`: Bool; whether to print solver output (for debugging)
- `max_iter`: Maximum number of iterations (Int)
- `max_wall_time`: Maximum wall time in seconds (Float64)
- `grid_size_max_cpu`: Maximum grid size for CPU models (Int)
Returns

A DataFrame with columns:

- `problem`: Symbol - problem name
- `solver`: Symbol - solver used (`:ipopt` or `:madnlp`)
- `model`: Symbol - model type (`:JuMP`, `:adnlp`, `:exa`, or `:exa_gpu`)
- `disc_method`: Symbol - discretization method
- `grid_size`: Int - number of grid points
- `tol`: Float64 - solver tolerance
- `mu_strategy`: Union{String, Missing} - mu strategy for Ipopt (missing for MadNLP)
- `print_level`: Any - print level for the solver (Int for Ipopt, MadNLP.LogLevels for MadNLP)
- `max_iter`: Int - maximum number of iterations
- `max_wall_time`: Float64 - maximum wall time in seconds
- `benchmark`: NamedTuple - full benchmark object from `@btimed` or `CUDA.@timed`
- `objective`: Union{Float64, Missing} - objective function value (missing if failed)
- `iterations`: Union{Int, Missing} - number of solver iterations (missing if failed)
- `status`: Any - termination status (type depends on solver/model)
- `success`: Bool - whether the solve succeeded
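For example, a small CPU-only run and a look at the key columns (all values are illustrative):

```julia
using CTBenchmarks, DataFrames

df = CTBenchmarks.benchmark_data(;
    problems = [:beam],
    solver_models = [:ipopt => [:adnlp]],
    grid_sizes = [100, 200],
    disc_methods = [:trapeze],
    tol = 1e-8,
    ipopt_mu_strategy = "adaptive",
    print_trace = false,
    max_iter = 1000,
    max_wall_time = 300.0,
    grid_size_max_cpu = 1000,
)

df[!, [:problem, :solver, :model, :grid_size, :objective, :success]]
```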
CTBenchmarks.build_payload — Method

```julia
build_payload(results::DataFrame, meta::Dict) -> Dict
```

Combine benchmark results DataFrame and metadata into a JSON-friendly dictionary. The DataFrame is converted to a vector of dictionaries (one per row) for easy JSON serialization and reconstruction.
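A sketch of the intended pipeline, reusing the `df` returned by `benchmark_data` above (the `"results"` key matches the loading snippet shown for `benchmark`):

```julia
meta = CTBenchmarks.generate_metadata()
payload = CTBenchmarks.build_payload(df, meta)
payload["results"]  # Vector of Dicts, one per DataFrame row
```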
CTBenchmarks.filter_models_for_backend — Method

```julia
filter_models_for_backend(models::Vector{Symbol}, disc_method::Symbol) -> Vector{Symbol}
```

Filter solver models depending on backend availability and discretization support.

- GPU models (ending with `_gpu`) are kept only if CUDA is available.
- JuMP models are kept only when `disc_method == :trapeze`.
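An illustrative sketch; `:midpoint` here stands in for any non-`:trapeze` discretization:

```julia
models = [:JuMP, :adnlp, :exa, :exa_gpu]

# With :trapeze, JuMP is kept; :exa_gpu survives only when CUDA is functional
CTBenchmarks.filter_models_for_backend(models, :trapeze)

# With any other discretization, the JuMP model is dropped as well
CTBenchmarks.filter_models_for_backend(models, :midpoint)
```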
CTBenchmarks.generate_metadata — Method

```julia
generate_metadata() -> Dict{String, String}
```

Return metadata about the current environment:

- `timestamp`: UTC, ISO8601
- `julia_version`
- `os`
- `machine`
- `hostname`
- `pkg_status`: output of Pkg.status() with ANSI colors
- `versioninfo`: output of versioninfo() with ANSI colors
- `pkg_manifest`: output of Pkg.status(mode=PKGMODE_MANIFEST) with ANSI colors
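For instance (the sample values are purely illustrative):

```julia
meta = CTBenchmarks.generate_metadata()
meta["timestamp"]      # e.g. "2025-01-01T12:00:00" (UTC, ISO8601)
meta["julia_version"]  # e.g. "1.11.2"
```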
CTBenchmarks.is_cuda_on — Method

```julia
is_cuda_on() -> Bool
```

Return true if CUDA is functional on this machine.
CTBenchmarks.prettymemory — Method

```julia
prettymemory(bytes::Integer) -> String
```

Format a memory footprint `bytes` into a human-readable string using binary prefixes (bytes, KiB, MiB, GiB) with two decimal places.
CTBenchmarks.prettytime — Method

```julia
prettytime(t::Real) -> String
```

Format a duration `t` expressed in seconds into a human-readable string with three decimal places and adaptive units (ns, μs, ms, s).
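Expected formatting under the rules above; the exact strings assume two decimal places for memory and three for time:

```julia
CTBenchmarks.prettymemory(1536)   # "1.50 KiB"
CTBenchmarks.prettymemory(2^30)   # "1.00 GiB"
CTBenchmarks.prettytime(2.5e-4)   # "250.000 μs"
CTBenchmarks.prettytime(1.25)     # "1.250 s"
```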
CTBenchmarks.print_benchmark_line — Method

```julia
print_benchmark_line(model::Symbol, stats::NamedTuple)
```

Print a formatted, colored line summarizing benchmark statistics for `model`. Handles both CPU benchmarks (from `@btimed`) and GPU benchmarks (from `CUDA.@timed`).

Displays: time, allocations/memory, objective, iterations, and success status.
CTBenchmarks.run — Function

Run the benchmarks for a specific version.
Arguments

- `version::Symbol`: version to run (`:complete` or `:minimal`)
- `outpath::Union{AbstractString, Nothing}`: directory path to save results (`nothing` for no saving)
- `print_trace::Bool`: whether to print the trace of the solver
Returns
`nothing`
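Assuming the three documented arguments are positional and in the listed order (an assumption; check the actual method signature), a minimal call would be:

```julia
CTBenchmarks.run(:minimal, "results", false)
```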
CTBenchmarks.sanitize_for_json — Method

```julia
sanitize_for_json(obj)
```

Recursively replace NaN and Inf values with null for JSON compatibility.
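A sketch of the expected behavior, assuming JSON null is represented as `nothing` on the Julia side:

```julia
CTBenchmarks.sanitize_for_json(Dict("a" => NaN, "b" => [1.0, Inf]))
# expected: Dict("a" => nothing, "b" => [1.0, nothing]); `nothing` serializes as JSON null
```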
CTBenchmarks.save_json — Method

```julia
save_json(payload::Dict, outpath::AbstractString)
```

Save a JSON payload to a file. Creates the parent directory if needed. Uses pretty printing for readability. Sanitizes NaN and Inf values to null for JSON compatibility.
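Continuing the `build_payload` sketch above:

```julia
CTBenchmarks.save_json(payload, joinpath("results", "data.json"))
```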
CTBenchmarks.set_print_level — Method

```julia
set_print_level(solver::Symbol, print_trace::Bool) -> Int
```

Set the print level based on the solver and the `print_trace` flag.
CTBenchmarks.solve_and_extract_data — Method

```julia
solve_and_extract_data(problem, solver, model, grid_size, disc_method,
    tol, mu_strategy, print_level, max_iter, max_wall_time) -> NamedTuple
```

Solve an optimal control problem and extract performance and solver statistics.

This internal helper function handles the solve process and data extraction for the different model types (JuMP, adnlp, exa, exa_gpu).
Arguments

- `problem::Symbol`: problem name (e.g., `:beam`, `:chain`)
- `solver::Symbol`: solver to use (`:ipopt` or `:madnlp`)
- `model::Symbol`: model type (`:JuMP`, `:adnlp`, `:exa`, or `:exa_gpu`)
- `grid_size::Int`: number of grid points
- `disc_method::Symbol`: discretization method
- `tol::Float64`: solver tolerance
- `mu_strategy::Union{String, Missing}`: mu strategy for Ipopt (missing for MadNLP)
- `print_level::Union{Int, MadNLP.LogLevels, Missing}`: print level for solver (Int for Ipopt, MadNLP.LogLevels for MadNLP)
- `max_iter::Int`: maximum number of iterations
- `max_wall_time::Float64`: maximum wall time in seconds
Returns

A NamedTuple with fields:

- `benchmark`: full benchmark object from `@btimed` (CPU) or `CUDA.@timed` (GPU)
- `objective::Union{Float64, Missing}`: objective function value (missing if failed)
- `iterations::Union{Int, Missing}`: number of solver iterations (missing if failed)
- `status::Any`: termination status (type depends on solver/model)
- `success::Bool`: whether the solve succeeded
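A hypothetical direct call following the positional order of the signature (using `0` as a quiet Ipopt print level is an assumption):

```julia
data = CTBenchmarks.solve_and_extract_data(
    :beam, :ipopt, :adnlp, 100, :trapeze,
    1e-8, "adaptive", 0, 1000, 300.0,
)
data.success ? data.objective : data.status
```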
CTBenchmarks.strip_benchmark_value — Method

```julia
strip_benchmark_value(bench)
```

Remove the value field from benchmark outputs (NamedTuple or Dict) to ensure JSON-serializable data while preserving all other statistics.
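An illustrative NamedTuple standing in for a real `@btimed` result:

```julia
bench = (time = 0.123, bytes = 1_048_576, value = :solution_object)  # illustrative stats
CTBenchmarks.strip_benchmark_value(bench)
# expected: (time = 0.123, bytes = 1048576), with :value removed and the stats kept
```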