Private API
This page lists the non-exported (internal) symbols of CTBenchmarks.
Access these symbols with:
import CTBenchmarks
CTBenchmarks.<NAME>
ITERATION
CTBenchmarks.ITERATION — Constant
ITERATION::Base.RefValue{Int}
Internal counter used to track how many times the JuMP solve loop has been executed, in order to adjust the solver print level after the first iteration.
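As a minimal sketch of the pattern (illustrative only, not the package's actual code), a Ref-based counter of this kind can be used like so:

```julia
# Minimal sketch of a Ref-based iteration counter (hypothetical names,
# not the CTBenchmarks implementation).
const COUNTER = Ref(0)

function solve_once!()
    COUNTER[] += 1            # bump the counter on each solve
    verbose = COUNTER[] == 1  # full trace only on the first iteration
    return verbose
end

solve_once!()   # first call: verbose printing enabled
solve_once!()   # later calls: reduced print level
```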
benchmark_data
CTBenchmarks.benchmark_data — Function
benchmark_data(
;
problems,
solver_models,
grid_sizes,
disc_methods,
tol,
ipopt_mu_strategy,
print_trace,
max_iter,
max_wall_time
)
Run benchmarks on optimal control problems and return results as a DataFrame.
For each combination of problem, solver, model, and grid size, this function:
- Sets up and solves the optimization problem
- Captures timing and memory statistics using @btimed or CUDA.@timed
- Extracts solver statistics (objective value, iterations)
- Stores all data in a DataFrame row
Arguments
- problems: Vector of problem names (Symbols)
- solver_models: Vector of Pairs mapping solver => models (e.g., [:ipopt => [:jump, :adnlp], :madnlp => [:exa, :exa_gpu]])
- grid_sizes: Vector of grid sizes (Int)
- disc_methods: Vector of discretization methods (Symbols)
- tol: Solver tolerance (Float64)
- ipopt_mu_strategy: Mu strategy for Ipopt (String)
- print_trace: Boolean - whether to print solver output (for debugging)
- max_iter: Maximum number of iterations (Int)
- max_wall_time: Maximum wall time in seconds (Float64)
Returns
A DataFrame with columns:
- problem: Symbol - problem name
- solver: Symbol - solver used (:ipopt or :madnlp)
- model: Symbol - model type (:jump, :adnlp, :exa, or :exa_gpu)
- disc_method: Symbol - discretization method
- grid_size: Int - number of grid points
- tol: Float64 - solver tolerance
- mu_strategy: Union{String, Missing} - mu strategy for Ipopt (missing for MadNLP)
- max_iter: Int - maximum number of iterations
- max_wall_time: Float64 - maximum wall time in seconds
- benchmark: NamedTuple - full benchmark object from @btimed or CUDA.@timed
- objective: Union{Float64, Missing} - objective function value (missing if failed)
- iterations: Union{Int, Missing} - number of solver iterations (missing if failed)
- status: Any - termination status (type depends on solver/model)
- success: Bool - whether the solve succeeded
- criterion: Union{String, Missing} - optimization sense ("min" or "max", missing if failed)
- solution: Any - underlying solution object (JuMP model or OptimalControl solution)
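A hypothetical invocation might look as follows (all keyword values here are illustrative examples, not package defaults):

```julia
using CTBenchmarks
using DataFrames

# Illustrative call; keyword values are examples, not defaults.
df = CTBenchmarks.benchmark_data(
    problems = [:beam, :chain],
    solver_models = [:ipopt => [:jump, :adnlp], :madnlp => [:exa, :exa_gpu]],
    grid_sizes = [100, 200],
    disc_methods = [:trapeze],
    tol = 1e-8,
    ipopt_mu_strategy = "adaptive",
    print_trace = false,
    max_iter = 1000,
    max_wall_time = 300.0,
)
first(df, 5)  # one row per (problem, solver, model, grid size) combination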
build_payload
CTBenchmarks.build_payload — Function
build_payload(
results::DataFrames.DataFrame,
meta::Dict,
config::Dict
) -> Dict
Combine benchmark results, metadata, and configuration into a JSON-friendly payload.
The results DataFrame is converted to a vector of dictionaries (one per row) for easy JSON serialisation and reconstruction. Solutions are extracted and kept in memory (not serialised to JSON) for later plot generation.
Arguments
- results::DataFrame: Benchmark results table produced by benchmark_data
- meta::Dict: Environment metadata produced by generate_metadata
- config::Dict: Configuration describing the benchmark run (problems, solvers, grids, etc.)
Returns
Dict: Payload with three keys:
- "metadata" – merged metadata and configuration
- "results" – vector of row dictionaries obtained from results
- "solutions" – vector of solution objects (kept in memory only)
Example
julia> using CTBenchmarks
julia> payload = CTBenchmarks.build_payload(results, meta, config)
Dict{String, Any} with 3 entries:
"metadata" => Dict{String, Any}(...)
"results" => Vector{Dict}(...)
"solutions" => Any[...]costate_multiplier
CTBenchmarks.costate_multiplier — Function
costate_multiplier(criterion) -> Int64
Determine the sign used to plot costates based on the optimization criterion.
For maximisation problems, costates are plotted with a positive sign. For minimisation problems (the default), costates are plotted with a negative sign so that their visual behaviour matches the usual optimal control conventions.
Arguments
criterion: Optimization criterion (:min, :max, or missing).
Returns
Int: +1 if the problem is a maximisation, -1 otherwise.
Example
julia> using CTBenchmarks
julia> CTBenchmarks.costate_multiplier(:min)
-1
julia> CTBenchmarks.costate_multiplier(:max)
1
create_jump_layout
CTBenchmarks.create_jump_layout — Function
create_jump_layout(
n::Int64,
m::Int64,
problem::Symbol,
grid_size::Int64,
state_labels::Vector{<:AbstractString},
control_labels::Vector{<:AbstractString}
) -> Any
Create a nested plot layout for JuMP solutions.
Generates a multi-panel layout with states and costates in two columns, and controls spanning the full width below. This layout facilitates easy visual comparison of multiple solutions overlaid on the same plots.
Arguments
- n::Int: Number of states
- m::Int: Number of controls
- problem::Symbol: Problem name (for plot styling)
- grid_size::Int: Grid size (used for sizing calculations)
- state_labels::Vector{<:AbstractString}: Labels for state components
- control_labels::Vector{<:AbstractString}: Labels for control components
Returns
Plots.Plot: Nested plot layout with (n + n + m) accessible subplots
Layout Structure
- Left column: State trajectories (n subplots)
- Right column: Costate trajectories (n subplots)
- Bottom: Control trajectories (m subplots, full width)
Details
Subplots are accessed linearly:
- plt[1:n] = states
- plt[n+1:2n] = costates
- plt[2n+1:2n+m] = controls
Example
julia> using CTBenchmarks
julia> state_labels = ["x₁", "x₂", "x₃"]
julia> control_labels = ["u₁", "u₂"]
julia> plt = CTBenchmarks.create_jump_layout(3, 2, :beam, 100, state_labels, control_labels)
filter_models_for_backend
CTBenchmarks.filter_models_for_backend — Function
filter_models_for_backend(
models::Vector{Symbol},
disc_method::Symbol
) -> Vector{Symbol}
Filter solver models depending on backend availability and discretization support.
- GPU models (ending with _gpu) are kept only if CUDA is available.
- JuMP models are kept only when disc_method == :trapeze.
Arguments
- models::Vector{Symbol}: Candidate model types (e.g. [:jump, :adnlp, :exa, :exa_gpu])
- disc_method::Symbol: Discretization method (:trapeze or :midpoint)
Returns
Vector{Symbol}: Filtered list of models that are compatible with the current backend configuration.
Example
julia> using CTBenchmarks
julia> CTBenchmarks.filter_models_for_backend([:jump, :exa, :exa_gpu], :trapeze)
3-element Vector{Symbol}:
:jump
:exa
:exa_gpu
format_solution_label
CTBenchmarks.format_solution_label — Function
format_solution_label(
model::Symbol,
solver::Symbol,
success::Bool
) -> String
Format a short label for use in plot legends, combining success status with the model and solver names.
The label starts with a tick or cross depending on whether the solution was successful, followed by model-solver.
Arguments
- model::Symbol: Model name (e.g. :jump, :adnlp, :exa)
- solver::Symbol: Solver name (e.g. :ipopt, :madnlp)
- success::Bool: Whether the solve succeeded (true) or failed (false)
Returns
String: A label such as"✓ jump-ipopt"or"✗ exa-madnlp"
Example
julia> using CTBenchmarks
julia> CTBenchmarks.format_solution_label(:jump, :ipopt, true)
"✓ jump-ipopt"
julia> CTBenchmarks.format_solution_label(:exa, :madnlp, false)
"✗ exa-madnlp"generate_metadata
CTBenchmarks.generate_metadata — Function
generate_metadata() -> Dict{String, String}
Collect metadata about the current Julia environment for benchmark reproducibility.
The returned dictionary includes a timestamp, Julia version, OS and machine information, as well as textual snapshots of the package environment.
Returns
Dict{String,String}: Dictionary with keys:
- "timestamp": Current time in UTC (ISO8601-like formatting)
- "julia_version": Julia version string
- "os": Kernel/OS identifier
- "machine": Hostname of the current machine
- "pkg_status": Output of Pkg.status() with ANSI colours
- "versioninfo": Output of versioninfo() with ANSI colours
- "pkg_manifest": Output of Pkg.status(mode=PKGMODE_MANIFEST) with ANSI colours
Example
julia> using CTBenchmarks
julia> meta = CTBenchmarks.generate_metadata()
Dict{String, String} with 7 entries:
"timestamp" => "2025-11-15 18:30:00 UTC"
"julia_version" => "1.10.0"
"os" => "Linux"
⋮
get_color
CTBenchmarks.get_color — Function
get_color(
model::Union{String, Symbol},
solver::Union{String, Symbol},
idx::Int64
) -> Symbol
Return a consistent color for a given (model, solver) pair.
This function ensures visual consistency across plots by assigning fixed colors to known (model, solver) combinations. For unknown combinations, it cycles through a default palette based on the provided index.
Fixed Mappings
- (adnlp, ipopt) → :blue
- (exa, ipopt) → :red
- (adnlp, madnlp) → :green
- (exa, madnlp) → :orange
- (jump, ipopt) → :purple
- (jump, madnlp) → :brown
- (exa_gpu, madnlp) → :cyan
Arguments
- model::Union{Symbol,String}: Model name (case-insensitive)
- solver::Union{Symbol,String}: Solver name (case-insensitive)
- idx::Int: Index for palette fallback (used if pair not in fixed mappings)
Returns
Symbol: Color symbol suitable for Plots.jl (e.g.,:blue,:red)
Example
julia> using CTBenchmarks
julia> CTBenchmarks.get_color(:adnlp, :ipopt, 1)
:blue
julia> CTBenchmarks.get_color(:unknown, :solver, 2)
:red
get_dimensions
CTBenchmarks.get_dimensions — Function
get_dimensions(
group::DataFrames.SubDataFrame
) -> Tuple{Any, Any}
Get state and control dimensions from the first available solution in a group.
Extracts the problem dimensions (number of states and controls) by examining the first solution in the group. Works with both OptimalControl.Solution and JuMP.Model objects.
Arguments
group::SubDataFrame: DataFrame subset with solution rows
Returns
Tuple{Int, Int}:(n, m)where n = number of states, m = number of controls
Example
julia> using CTBenchmarks
julia> n, m = CTBenchmarks.get_dimensions(group)
(3, 2)
get_left_margin
CTBenchmarks.get_left_margin — Function
get_left_margin(problem::Symbol) -> Measures.AbsoluteLength
Get the left margin for plots based on the problem.
Different problems may require different margins to accommodate axis labels and titles. The beam problem uses a smaller margin (5mm) while other problems use 20mm.
Arguments
problem::Symbol: Problem name (e.g.,:beam,:shuttle)
Returns
Plots.Measure: Left margin in millimeters (5mm or 20mm)
Example
julia> using CTBenchmarks
julia> CTBenchmarks.get_left_margin(:beam)
5 mm
julia> CTBenchmarks.get_left_margin(:shuttle)
20 mm
get_marker_indices
CTBenchmarks.get_marker_indices — Function
get_marker_indices(
idx::Int64,
card_g::Int64,
grid_size::Int64,
marker_interval::Int64
) -> StepRange{Int64, Int64}
Calculate marker indices with offset to avoid superposition between curves.
When multiple curves are overlaid on the same plot, markers can overlap and obscure the visualization. This function staggers the marker positions across curves by applying an offset based on the curve index.
Arguments
- idx::Int: Curve index (1-based)
- card_g::Int: Total number of curves
- grid_size::Int: Number of grid points on the curve
- marker_interval::Int: Base spacing between markers
Returns
StepRange{Int, Int}: Range of indices for marker placement
Details
For curve idx out of card_g curves, the first marker is offset by:
offset = (idx - 1) * marker_interval / card_g
Example
julia> using CTBenchmarks
julia> CTBenchmarks.get_marker_indices(1, 3, 100, 20)
1:20:101
julia> CTBenchmarks.get_marker_indices(2, 3, 100, 20)
8:20:101
get_marker_style
CTBenchmarks.get_marker_style — Function
get_marker_style(
model::Union{String, Symbol},
solver::Union{String, Symbol},
idx::Int64,
grid_size::Int64
) -> Tuple{Symbol, Int64}
Get marker shape and spacing for a given (model, solver) pair.
This function provides consistent marker styles for known (model, solver) combinations and automatically calculates appropriate marker spacing based on grid size to avoid visual clutter while maintaining visibility.
Fixed Mappings
- (adnlp, ipopt) → :circle
- (exa, ipopt) → :square
- (adnlp, madnlp) → :diamond
- (exa, madnlp) → :utriangle
- (jump, ipopt) → :dtriangle
- (jump, madnlp) → :star5
- (exa_gpu, madnlp) → :hexagon
Arguments
- model::Union{Symbol,String}: Model name (case-insensitive)
- solver::Union{Symbol,String}: Solver name (case-insensitive)
- idx::Int: Index for marker fallback (used if pair not in fixed mappings)
- grid_size::Int: Number of grid points on the curve
Returns
Tuple{Symbol, Int}: (marker_shape, marker_interval) where:
- marker_shape: Symbol for marker type (e.g., :circle, :square)
- marker_interval: Spacing between markers (calculated as max(1, grid_size ÷ 6))
Example
julia> using CTBenchmarks
julia> CTBenchmarks.get_marker_style(:adnlp, :ipopt, 1, 200)
(:circle, 33)
julia> CTBenchmarks.get_marker_style(:unknown, :solver, 2, 100)
(:square, 16)
get_solution_dimensions
CTBenchmarks.get_solution_dimensions — Function
get_solution_dimensions(
solution::CTModels.Solution
) -> Tuple{Int64, Int64}
Extract state and control dimensions from an OptimalControl solution.
Arguments
solution::OptimalControl.Solution: OptimalControl solution object
Returns
Tuple{Int, Int}:(n, m)where n = number of states, m = number of controls
get_solution_dimensions(
solution::JuMP.Model
) -> Tuple{Any, Any}
Extract state and control dimensions from a JuMP model solution.
Arguments
solution::JuMP.Model: JuMP model solution object
Returns
Tuple{Int, Int}:(n, m)where n = number of states, m = number of controls
is_cuda_on
CTBenchmarks.is_cuda_on — Function
is_cuda_on() -> Bool
Check whether CUDA is available and functional on this machine.
This function is used to decide whether GPU-based models (those whose name ends with _gpu) can be run in the benchmark suite.
Returns
Bool:trueif CUDA is functional,falseotherwise.
Example
julia> using CTBenchmarks
julia> CTBenchmarks.is_cuda_on()
false
plot_jump_group
CTBenchmarks.plot_jump_group — Function
plot_jump_group(
jump_rows::DataFrames.SubDataFrame,
plt,
color_idx::Int64,
problem::Symbol,
grid_size::Int64,
n::Int64,
m::Int64
) -> Tuple{Any, Int64}
plot_jump_group(
jump_rows::DataFrames.SubDataFrame,
plt,
color_idx::Int64,
problem::Symbol,
grid_size::Int64,
n::Int64,
m::Int64,
card_g_override::Union{Nothing, Int64}
) -> Tuple{Any, Int64}
Plot all JuMP solutions in a group with consistent styling.
This function creates the plot layout if plt is nothing, then adds all JuMP solutions from the group. JuMP solutions require special layout handling compared to OptimalControl solutions.
Arguments
- jump_rows::SubDataFrame: Rows containing JuMP solutions
- plt: Existing plot (or nothing to create new)
- color_idx::Int: Current color index for consistent styling
- problem::Symbol: Problem name
- grid_size::Int: Grid size
- n::Int: Number of states
- m::Int: Number of controls
- card_g_override::Union{Int,Nothing}: Override for total number of curves (for marker offset)
Returns
Tuple{Plots.Plot, Int}: Updated plot and next color index
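A hypothetical call pattern, assuming df is a DataFrame produced by benchmark_data (the filter and argument values are illustrative):

```julia
using CTBenchmarks
using DataFrames

# Illustrative usage; `df` is assumed to come from CTBenchmarks.benchmark_data.
jump_rows = filter(r -> r.model == :jump, df; view = true)
plt, next_idx = CTBenchmarks.plot_jump_group(
    jump_rows,   # SubDataFrame of JuMP solutions
    nothing,     # no existing plot: a new layout is created
    1,           # starting color index
    :beam, 100,  # problem name and grid size
    2, 1,        # n states, m controls
)
```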
plot_jump_solution
CTBenchmarks.plot_jump_solution — Function
plot_jump_solution(
solution,
model::Symbol,
solver::Symbol,
success::Bool,
color,
problem::Symbol,
grid_size::Int64,
n::Int64,
m::Int64,
criterion
) -> Any
plot_jump_solution(
solution,
model::Symbol,
solver::Symbol,
success::Bool,
color,
problem::Symbol,
grid_size::Int64,
n::Int64,
m::Int64,
criterion,
marker
) -> Any
plot_jump_solution(
solution,
model::Symbol,
solver::Symbol,
success::Bool,
color,
problem::Symbol,
grid_size::Int64,
n::Int64,
m::Int64,
criterion,
marker,
marker_interval
) -> Any
plot_jump_solution(
solution,
model::Symbol,
solver::Symbol,
success::Bool,
color,
problem::Symbol,
grid_size::Int64,
n::Int64,
m::Int64,
criterion,
marker,
marker_interval,
idx::Int64
) -> Any
plot_jump_solution(
solution,
model::Symbol,
solver::Symbol,
success::Bool,
color,
problem::Symbol,
grid_size::Int64,
n::Int64,
m::Int64,
criterion,
marker,
marker_interval,
idx::Int64,
card_g::Int64
) -> Any
Create a new multi-panel plot for a single JuMP solution.
Generates a comprehensive visualization with state, costate, and control trajectories from a JuMP model, with spaced markers and legend entry indicating success status.
Arguments
- solution: JuMP.Model object
- model::Symbol: Model name (for legend)
- solver::Symbol: Solver name (for legend)
- success::Bool: Whether the solution converged successfully
- color: Color symbol (from get_color)
- problem::Symbol: Problem name (for plot styling)
- grid_size::Int: Grid size
- n::Int: Number of states
- m::Int: Number of controls
- criterion: Optimization criterion (:min or :max, affects costate sign)
- marker: Marker shape symbol (default: :circle)
- marker_interval::Int: Spacing between markers (default: 10)
- idx::Int: Curve index for marker offset (default: 1)
- card_g::Int: Total number of curves for marker offset (default: 1)
Returns
Plots.Plot: Multi-panel plot with (n + n + m) subplots
plot_jump_solution!
CTBenchmarks.plot_jump_solution! — Function
plot_jump_solution!(
plt,
solution,
model::Symbol,
solver::Symbol,
success::Bool,
color,
n::Int64,
m::Int64,
criterion
) -> Any
plot_jump_solution!(
plt,
solution,
model::Symbol,
solver::Symbol,
success::Bool,
color,
n::Int64,
m::Int64,
criterion,
marker
) -> Any
plot_jump_solution!(
plt,
solution,
model::Symbol,
solver::Symbol,
success::Bool,
color,
n::Int64,
m::Int64,
criterion,
marker,
marker_interval
) -> Any
plot_jump_solution!(
plt,
solution,
model::Symbol,
solver::Symbol,
success::Bool,
color,
n::Int64,
m::Int64,
criterion,
marker,
marker_interval,
idx::Int64
) -> Any
plot_jump_solution!(
plt,
solution,
model::Symbol,
solver::Symbol,
success::Bool,
color,
n::Int64,
m::Int64,
criterion,
marker,
marker_interval,
idx::Int64,
card_g::Int64
) -> Any
Add a JuMP solution to an existing multi-panel plot.
Appends state, costate, and control trajectories from a JuMP model to existing subplots with spaced markers and consistent styling. Updates the legend with success status.
Arguments
- plt: Existing Plots.Plot to modify
- solution: JuMP.Model object
- model::Symbol: Model name (for legend)
- solver::Symbol: Solver name (for legend)
- success::Bool: Whether the solution converged successfully
- color: Color symbol (from get_color)
- n::Int: Number of states
- m::Int: Number of controls
- criterion: Optimization criterion (:min or :max, affects costate sign)
- marker: Marker shape symbol (default: :none)
- marker_interval::Int: Spacing between markers (default: 10)
- idx::Int: Curve index for marker offset (default: 1)
- card_g::Int: Total number of curves for marker offset (default: 1)
Returns
Plots.Plot: Modified plot with new solution added
Note
Even with nested layout, subplots are accessed linearly:
- plt[1:n] = states
- plt[n+1:2n] = costates
- plt[2n+1:2n+m] = controls
plot_ocp_group
CTBenchmarks.plot_ocp_group — Function
plot_ocp_group(
ocp_rows::DataFrames.SubDataFrame,
plt,
color_idx::Int64,
problem::Symbol,
grid_size::Int64,
n::Int64,
m::Int64
) -> Tuple{Any, Int64}
plot_ocp_group(
ocp_rows::DataFrames.SubDataFrame,
plt,
color_idx::Int64,
problem::Symbol,
grid_size::Int64,
n::Int64,
m::Int64,
card_g_override::Union{Nothing, Int64}
) -> Tuple{Any, Int64}
Plot all OptimalControl solutions in a group with consistent styling.
This function creates the base plot if plt is nothing, then adds all OptimalControl solutions from the group with consistent colors and markers. It manages color indexing across multiple groups to ensure visual consistency.
Arguments
- ocp_rows::SubDataFrame: Rows containing OptimalControl solutions
- plt: Existing plot (or nothing to create new)
- color_idx::Int: Current color index for consistent styling
- problem::Symbol: Problem name
- grid_size::Int: Grid size
- n::Int: Number of states
- m::Int: Number of controls
- card_g_override::Union{Int,Nothing}: Override for total number of curves (for marker offset)
Returns
Tuple{Plots.Plot, Int}: Updated plot and next color index
plot_ocp_solution
CTBenchmarks.plot_ocp_solution — Function
plot_ocp_solution(
solution,
model::Symbol,
solver::Symbol,
success::Bool,
color,
problem::Symbol,
grid_size::Int64,
n::Int64,
m::Int64,
marker,
marker_interval
) -> Any
plot_ocp_solution(
solution,
model::Symbol,
solver::Symbol,
success::Bool,
color,
problem::Symbol,
grid_size::Int64,
n::Int64,
m::Int64,
marker,
marker_interval,
idx::Int64
) -> Any
plot_ocp_solution(
solution,
model::Symbol,
solver::Symbol,
success::Bool,
color,
problem::Symbol,
grid_size::Int64,
n::Int64,
m::Int64,
marker,
marker_interval,
idx::Int64,
card_g::Int64
) -> Any
Create a new multi-panel plot for a single OptimalControl solution.
Generates a comprehensive visualization with state, costate, and control trajectories, with spaced markers for improved visibility and a legend entry indicating success status.
Arguments
- solution: OptimalControl.Solution object
- model::Symbol: Model name (for legend)
- solver::Symbol: Solver name (for legend)
- success::Bool: Whether the solution converged successfully
- color: Color symbol (from get_color)
- problem::Symbol: Problem name (for plot styling)
- grid_size::Int: Grid size
- n::Int: Number of states
- m::Int: Number of controls
- marker: Marker shape symbol (from get_marker_style)
- marker_interval::Int: Spacing between markers
- idx::Int: Curve index for marker offset (default: 1)
- card_g::Int: Total number of curves for marker offset (default: 1)
Returns
Plots.Plot: Multi-panel plot with (n + n + m) subplots
plot_ocp_solution!
CTBenchmarks.plot_ocp_solution! — Function
plot_ocp_solution!(
plt,
solution,
model::Symbol,
solver::Symbol,
success::Bool,
color,
n::Int64,
m::Int64,
marker,
marker_interval
) -> Any
plot_ocp_solution!(
plt,
solution,
model::Symbol,
solver::Symbol,
success::Bool,
color,
n::Int64,
m::Int64,
marker,
marker_interval,
idx::Int64
) -> Any
plot_ocp_solution!(
plt,
solution,
model::Symbol,
solver::Symbol,
success::Bool,
color,
n::Int64,
m::Int64,
marker,
marker_interval,
idx::Int64,
card_g::Int64
) -> Any
Add an OptimalControl solution to an existing multi-panel plot.
Appends state, costate, and control trajectories to existing subplots with spaced markers and consistent styling. Updates the legend with success status.
Arguments
- plt: Existing Plots.Plot to modify
- solution: OptimalControl.Solution object
- model::Symbol: Model name (for legend)
- solver::Symbol: Solver name (for legend)
- success::Bool: Whether the solution converged successfully
- color: Color symbol (from get_color)
- n::Int: Number of states
- m::Int: Number of controls
- marker: Marker shape symbol (from get_marker_style)
- marker_interval::Int: Spacing between markers
- idx::Int: Curve index for marker offset (default: 1)
- card_g::Int: Total number of curves for marker offset (default: 1)
Returns
Plots.Plot: Modified plot with new solution added
plot_solution_comparison
CTBenchmarks.plot_solution_comparison — Function
plot_solution_comparison(
group::DataFrames.SubDataFrame,
problem::Symbol,
grid_size::Int64
) -> Any
Create a comprehensive comparison plot for all solutions in a group.
This function orchestrates the plotting of all OptimalControl and JuMP solutions for a given problem and grid size, arranging them in a multi-panel layout with consistent styling.
Arguments
- group::SubDataFrame: DataFrame subset with rows for the same (problem, grid_size)
- problem::Symbol: Problem name (used for plot styling, e.g., left margin)
- grid_size::Int: Grid size (used for marker spacing calculations)
Returns
Plots.Plot: Multi-panel plot with states, costates, and controls
Layout
- Top panels: State trajectories (n columns)
- Middle panels: Costate trajectories (n columns)
- Bottom panels: Control trajectories (m columns, full width)
Strategy
- OptimalControl solutions plotted first (simple overlay with plot!)
- JuMP solutions plotted last (for proper subplot layout)
- All solutions use consistent colors and markers via get_color and get_marker_style
- Success/failure indicators (✓/✗) shown in legend
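A hypothetical driver loop, assuming df is a DataFrame produced by benchmark_data (the file names are illustrative):

```julia
using CTBenchmarks
using DataFrames
using Plots

# Illustrative usage; `df` is assumed to come from CTBenchmarks.benchmark_data.
for g in groupby(df, [:problem, :grid_size])
    plt = CTBenchmarks.plot_solution_comparison(
        g,                   # rows sharing the same problem and grid size
        first(g.problem),    # problem name, e.g. :beam
        first(g.grid_size),  # grid size, e.g. 100
    )
    savefig(plt, "comparison_$(first(g.problem))_$(first(g.grid_size)).png")
end
```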
prettymemory
CTBenchmarks.prettymemory — Function
prettymemory(b) -> String
Format a memory footprint b, given in bytes, into a human-readable string using binary prefixes (bytes, KiB, MiB, GiB) with two decimal places.
The function uses standard binary units (1024 bytes = 1 KiB) and automatically selects the most appropriate unit based on the magnitude of the input value.
Arguments
b::Integer: Memory size in bytes (must be non-negative)
Returns
String: Formatted memory string with two decimal places and unit suffix
Example
julia> using CTBenchmarks
julia> CTBenchmarks.prettymemory(512)
"512 bytes"
julia> CTBenchmarks.prettymemory(1048576)
"1.00 MiB"
julia> CTBenchmarks.prettymemory(2147483648)
"2.00 GiB"prettytime
CTBenchmarks.prettytime — Function
prettytime(t) -> String
Format a duration t expressed in seconds into a human-readable string with three decimal places and adaptive units (ns, μs, ms, s).
The function automatically selects the most appropriate unit based on the magnitude of the input value, ensuring readable output across a wide range of timescales.
Arguments
t::Real: Duration in seconds (can be positive or negative)
Returns
String: Formatted time string with three decimal places and unit suffix
Example
julia> using CTBenchmarks
julia> CTBenchmarks.prettytime(0.001234)
"1.234 ms"
julia> CTBenchmarks.prettytime(1.5)
"1.500 s "
julia> CTBenchmarks.prettytime(5.6e-7)
"560.000 ns"print_benchmark_line
CTBenchmarks.print_benchmark_line — Function
print_benchmark_line(model::Symbol, stats::NamedTuple)
Print a colored, formatted line summarizing benchmark statistics for model.
This function formats and displays benchmark results in a human-readable table row, including execution time, memory usage, solver objective value, iteration count, and success status. It automatically detects and handles both CPU benchmarks (from @btimed) and GPU benchmarks (from CUDA.@timed).
Arguments
- model::Symbol: Name of the model being benchmarked (e.g., :jump, :adnlp)
- stats::NamedTuple: Statistics containing:
  - benchmark: Timing and memory data (Dict or NamedTuple) with fields:
    - :time: Execution time in seconds
    - :bytes or :cpu_bytes, :gpu_bytes: Memory allocation
  - objective: Solver objective value (or missing)
  - iterations: Number of solver iterations (or missing)
  - success: Boolean indicating successful completion
  - criterion: Optimization criterion (e.g., :min, :max) or missing
  - status: Error message (used when benchmark is missing)
Output
Prints a colored, formatted line to stdout with:
- Success indicator (✓ in green or ✗ in red)
- Model name in magenta
- Formatted execution time
- Iteration count
- Objective value in scientific notation
- Criterion type
- Memory usage (CPU and/or GPU)
Example
julia> using CTBenchmarks
julia> stats = (
benchmark = (time = 0.123, bytes = 1048576),
objective = 42.5,
iterations = 100,
success = true,
criterion = :min
)
julia> CTBenchmarks.print_benchmark_line(:jump, stats)
✓ | jump | time: 0.123 s | iters: 100 | obj: 4.250000e+01 (min) | CPU: 1.00 MiB
save_json
CTBenchmarks.save_json — Function
save_json(payload::Dict, filepath::AbstractString) -> Int64
Save a JSON payload to a file. Creates the parent directory if needed and uses pretty printing for readability.
The payload is typically produced by build_payload. The "solutions" entry is excluded from serialisation so that the JSON contains only metadata and results.
Arguments
- payload::Dict: Benchmark results with metadata
- filepath::AbstractString: Full path to the output JSON file (including filename)
Returns
Int: Result of the underlying write; the JSON file is written as a side effect.
Example
julia> using CTBenchmarks
julia> payload = CTBenchmarks.build_payload(results, meta, config)
julia> CTBenchmarks.save_json(payload, "benchmarks.json")
set_print_level
CTBenchmarks.set_print_level — Function
set_print_level(
solver::Symbol,
print_trace::Bool
) -> Union{Int64, MadNLP.LogLevels}
Set print level based on solver and print_trace flag.
For Ipopt, this returns an integer verbosity level. For MadNLP, it returns a MadNLP.LogLevels value. The flag print_trace is typically propagated from high-level benchmarking options.
Arguments
- solver::Symbol: Solver name (:ipopt or :madnlp)
- print_trace::Bool: Whether detailed solver output should be printed
Returns
Int or MadNLP.LogLevels: Print level appropriate for the chosen solver
Example
julia> using CTBenchmarks
julia> CTBenchmarks.set_print_level(:ipopt, true)
5
julia> CTBenchmarks.set_print_level(:madnlp, false)
MadNLP.ERROR
solve_and_extract_data
CTBenchmarks.solve_and_extract_data — Function
solve_and_extract_data(
problem::Symbol,
solver::Symbol,
model::Symbol,
grid_size::Int64,
disc_method::Symbol,
tol::Float64,
mu_strategy::Union{Missing, String},
print_trace::Bool,
max_iter::Int64,
max_wall_time::Float64
) -> NamedTuple{(:benchmark, :objective, :iterations, :status, :success, :criterion, :solution)}
Solve an optimal control problem and extract performance and solver statistics.
This internal helper function orchestrates the solve process for different model types (JuMP, adnlp, exa, exa_gpu) and captures timing, memory, and solver statistics. It handles error cases gracefully by returning missing values instead of propagating exceptions.
Arguments
- problem::Symbol: problem name (e.g., :beam, :chain)
- solver::Symbol: solver to use (:ipopt or :madnlp)
- model::Symbol: model type (:jump, :adnlp, :exa, or :exa_gpu)
- grid_size::Int: number of grid points
- disc_method::Symbol: discretization method (:trapeze or :midpoint)
- tol::Float64: solver tolerance
- mu_strategy::Union{String, Missing}: mu strategy for Ipopt (missing for MadNLP)
- print_trace::Bool: whether to emit detailed solver output
- max_iter::Int: maximum number of iterations
- max_wall_time::Float64: maximum wall time in seconds
Returns
A NamedTuple with fields:
- benchmark: full benchmark object from @btimed (CPU) or CUDA.@timed (GPU)
- objective::Union{Float64, Missing}: objective function value (missing if failed)
- iterations::Union{Int, Missing}: number of solver iterations (missing if failed)
- status::Any: termination status (type depends on solver/model)
- success::Bool: whether the solve succeeded
- criterion::Union{String, Missing}: optimization sense ("min" or "max", missing if failed)
- solution::Union{Any, Missing}: the solution object (JuMP model or OCP solution, missing if failed)
Details
Model-specific logic:
- JuMP (:jump): Uses @btimed for CPU benchmarking, requires :trapeze discretization
- GPU (:exa_gpu): Uses CUDA.@timed for GPU benchmarking, requires MadNLP solver and functional CUDA
- OptimalControl (:adnlp, :exa): Uses @btimed for CPU benchmarking with OptimalControl backend
Solver configuration:
- Ipopt: Configured with MUMPS linear solver, mu strategy, and second-order barrier
- MadNLP: Configured with MUMPS linear solver
Print level adjustment: The solver print level is reduced after the first iteration to avoid excessive output during benchmarking (controlled by the ITERATION counter).
Error handling: If any solve fails, returns a NamedTuple with success=false and missing values for objective, iterations, and solution, allowing batch processing to continue.
Throws
AssertionError: If GPU model is used without MadNLP, without functional CUDA, if JuMP model uses non-trapeze discretization, or if Ipopt is used without mu_strategy.
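A hypothetical call, matching the positional signature above (all argument values are illustrative examples, not defaults):

```julia
using CTBenchmarks

# Illustrative call; argument values are examples, not defaults.
result = CTBenchmarks.solve_and_extract_data(
    :beam,       # problem
    :ipopt,      # solver
    :adnlp,      # model
    100,         # grid_size
    :trapeze,    # disc_method
    1e-8,        # tol
    "adaptive",  # mu_strategy (required for Ipopt, missing for MadNLP)
    false,       # print_trace
    1000,        # max_iter
    300.0,       # max_wall_time
)
result.success || @warn "solve failed" result.status
```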
strip_benchmark_value
CTBenchmarks.strip_benchmark_value — Function
strip_benchmark_value(bench) -> NamedTuple
Remove the value field from benchmark outputs (NamedTuple or Dict) to ensure JSON-serializable data while preserving all other statistics.
The value field typically contains the actual return value from the benchmarked code, which may not be JSON-serializable. This function strips it out while keeping timing, memory allocation, and other benchmark statistics intact.
Arguments
bench: Benchmark output (NamedTuple, Dict, or other type)
Returns
- Same type as input, with the value field removed (if present)
Details
Three methods are provided:
- Default: Returns input unchanged (for types without a value field)
- NamedTuple: Reconstructs the NamedTuple without the :value key
- Dict: Creates a new Dict excluding both :value and "value" keys
Example
julia> using CTBenchmarks
julia> bench_nt = (time=0.001, alloc=1024, value=42)
(time = 0.001, alloc = 1024, value = 42)
julia> CTBenchmarks.strip_benchmark_value(bench_nt)
(time = 0.001, alloc = 1024)
julia> bench_dict = Dict("time" => 0.001, "value" => 42)
Dict{String, Float64} with 2 entries:
"time" => 0.001
"value" => 42
julia> CTBenchmarks.strip_benchmark_value(bench_dict)
Dict{String, Float64} with 1 entry:
"time" => 0.001