Public API
This page lists the exported symbols of CTBenchmarks.
Load all public symbols into the current scope with:

```julia
using CTBenchmarks
```

Alternatively, load only the module with:

```julia
import CTBenchmarks
```

and then prefix all calls with `CTBenchmarks.`, as in `CTBenchmarks.<NAME>`.
benchmark

`CTBenchmarks.benchmark` — Function

```julia
benchmark(;
    problems,
    solver_models,
    grid_sizes,
    disc_methods,
    tol,
    ipopt_mu_strategy,
    print_trace,
    max_iter,
    max_wall_time,
)
```
Run benchmarks on optimal control problems and build a JSON-ready payload.
This function performs the following steps:
- Detects CUDA availability and filters out `:exa_gpu` if CUDA is not functional
- Runs benchmarks using `benchmark_data()` to generate a DataFrame of results
- Collects environment metadata (Julia version, OS, machine, timestamp)
- Builds a JSON-friendly payload combining results and metadata
- Returns the payload as a `Dict`
The JSON file can be easily loaded and converted back to a DataFrame using:

```julia
using JSON, DataFrames
data = JSON.parsefile("path/to/data.json")
df = DataFrame(data["results"])
```

Arguments

- `problems`: Vector of problem names (`Symbol`s)
- `solver_models`: Vector of `Pair`s mapping solver => models (e.g., `[:ipopt => [:jump, :adnlp], :madnlp => [:exa, :exa_gpu]]`)
- `grid_sizes`: Vector of grid sizes (`Int`)
- `disc_methods`: Vector of discretization methods (`Symbol`s)
- `tol`: Solver tolerance (`Float64`)
- `ipopt_mu_strategy`: Mu strategy for Ipopt (`String`)
- `print_trace`: Whether to print solver output, for debugging (`Bool`)
- `max_iter`: Maximum number of iterations (`Int`)
- `max_wall_time`: Maximum wall time in seconds (`Float64`)
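Beyond the JSON round-trip shown above, the resulting DataFrame can be summarized with standard DataFrames.jl operations. A minimal sketch, assuming illustrative `solver` and `time` columns (the actual result fields produced by the benchmark may differ):

```julia
using DataFrames, Statistics

# Illustrative stand-in for DataFrame(data["results"]);
# column names here are assumptions, not the package's schema.
df = DataFrame(solver    = ["ipopt", "ipopt", "madnlp", "madnlp"],
               grid_size = [100, 200, 100, 200],
               time      = [0.12, 0.31, 0.09, 0.24])

# Mean solve time per solver
stats = combine(groupby(df, :solver), :time => mean => :mean_time)
```

This is the usual split-apply-combine pattern; any column present in the results can be grouped on in the same way.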
Returns
`Dict`
Example
```julia
julia> using CTBenchmarks

julia> payload = CTBenchmarks.benchmark(
           problems = [:beam],
           solver_models = [:ipopt => [:jump]],
           grid_sizes = [100],
           disc_methods = [:trapeze],
           tol = 1e-6,
           ipopt_mu_strategy = "adaptive",
           print_trace = false,
           max_iter = 1000,
           max_wall_time = 60.0,
       )
Dict{String, Any} with 3 entries:
  "metadata"  => Dict{String, Any}(...)
  "results"   => Vector{Dict}(...)
  "solutions" => Any[...]
```

plot_solutions
`CTBenchmarks.plot_solutions` — Function

```julia
plot_solutions(payload::Dict, output_dir::AbstractString)
```
Generate PDF plots comparing solutions for each (problem, grid_size) pair.
This is the main entry point for visualizing benchmark results. It creates comprehensive comparison plots where all solver-model combinations for a given problem and grid size are overlaid on the same figure, enabling easy visual comparison of solution quality and convergence behavior.
Arguments
- `payload::Dict`: Benchmark results dictionary containing:
  - `"results"`: Vector of result dictionaries with fields `problem`, `grid_size`, `model`, `solver`, etc.
  - `"solutions"`: Vector of solution objects (`OptimalControl.Solution` or `JuMP.Model`)
- `output_dir::AbstractString`: Directory where PDF files will be saved (created if it does not exist)
Output
Generates one PDF file per (problem, grid_size) combination with filename format `<problem>_N<grid_size>.pdf`.
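The filename convention amounts to plain string interpolation; a sketch of equivalent code (not the package's internals):

```julia
# Build the output filename for one (problem, grid_size) pair.
problem = :beam
grid_size = 100
filename = "$(problem)_N$(grid_size).pdf"
```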
Each plot displays:
- State and costate trajectories (2 columns)
- Control trajectories (full width below)
- All solver-model combinations overlaid with consistent colors and markers
- Success/failure indicators (✓/✗) in legend
Details
- OptimalControl solutions are plotted first (simple overlay)
- JuMP solutions are plotted last (for proper subplot layout)
- Uses consistent color and marker schemes via `get_color` and `get_marker_style`
- Handles missing or failed solutions gracefully
Example
```julia
julia> using CTBenchmarks

julia> payload = Dict(
           "results" => [...],   # benchmark results
           "solutions" => [...]  # solution objects
       )

julia> CTBenchmarks.plot_solutions(payload, "plots/")
📊 Generating solution plots...
  - Plotting beam with N=100 (4 solutions)
    ✓ Saved: beam_N100.pdf
✅ All solution plots generated in plots/
```

run
`CTBenchmarks.run` — Function

```julia
run(; ...) -> Dict
run(version::Symbol; filepath, print_trace) -> Dict
```
Run comprehensive benchmarks on optimal control problems with various solvers and discretization methods.
This function executes a predefined benchmark suite that evaluates the performance of different optimal control solvers (Ipopt, MadNLP) across multiple models (JuMP, ADNLP, Exa, Exa-GPU) and problems. Results are collected in a structured dictionary and optionally saved to JSON.
Arguments
- `version::Symbol`: Benchmark suite version to run (default: `:complete`)
  - `:complete`: Full suite with 14 problems, multiple grid sizes (100, 200, 500), and two discretization methods
  - `:minimal`: Quick suite with only the beam problem and grid size 100 (useful for testing)
- `filepath::Union{AbstractString, Nothing}`: Optional path to save results as a JSON file (must end with `.json`). If `nothing`, results are only returned in memory.
- `print_trace::Bool`: Whether to print solver trace information during execution (default: `false`)
Returns
`Dict`: Benchmark results containing timing data, solver statistics, and metadata for each problem-solver-model combination
Throws
- `CTBase.IncorrectArgument`: If `filepath` is provided but does not end with `.json`
- `ErrorException`: If `version` is neither `:complete` nor `:minimal`
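The `.json` requirement is a simple suffix check; a hedged sketch of equivalent caller-side validation (not the package's internal code):

```julia
# Mirror the constraint before calling run: filepath must end with .json.
filepath = "results.json"
valid = endswith(filepath, ".json")
valid || error("filepath must end with .json")
```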
Example
```julia
julia> using CTBenchmarks

julia> # Run the minimal benchmark and save results
julia> results = run(:minimal; filepath="results.json")

julia> # Run the complete benchmark without saving
julia> results = run(:complete)

julia> # Run with solver trace output
julia> results = run(:minimal; print_trace=true)
```

See Also

- `benchmark`: Core benchmarking function with full customization