Documentation Generation Process
This page explains how the CTBenchmarks.jl documentation is generated and how benchmark results are turned into rich documentation pages with figures, tables, and environment information.
It is mainly intended for developers who want to:
- Understand the `docs/make.jl` pipeline.
- Add documentation for a new benchmark.
- Extend the existing template/figure system.
If you only want to add a benchmark to the CI pipeline, see Add a new benchmark first. For documentation-specific details, come back to this page.
High-Level Overview
The documentation build has three main stages:
Prepare the environment and utilities
- Copy `Project.toml` and `Manifest.toml` under `docs/src/assets/toml/`.
- Load documentation utilities from `docs/src/docutils/utils.jl`.
Generate and process templates
- Automatically generate `.md.template` files for per-problem pages (core benchmark problems).
- Process template files (including manual templates) to produce temporary `.md` files that Documenter can read.
- While processing templates, replace special blocks such as `INCLUDE_ENVIRONMENT`, `INCLUDE_FIGURE`, and `INCLUDE_TEXT` (with legacy `INCLUDE_ANALYSIS` still supported as an alias).
Build and deploy documentation
- Call `makedocs` with the processed `.md` files.
- Clean up all generated templates and figures.
- Deploy the documentation to GitHub Pages via `deploydocs`.
All of this is orchestrated by docs/make.jl.
```text
docs/make.jl
├─ copy Project/Manifest → docs/src/assets/toml
├─ include docs/src/docutils/utils.jl
├─ with_processed_template_problems("docs/src") do core_problems
│   └─ with_processed_templates([core/cpu.md, core/gpu.md, core/problems], ...) do
│       └─ makedocs(...)
└─ deploydocs(...)
```
Documentation Utilities Directory (docs/src/docutils)
The docs/src/docutils/ directory contains the Julia code used only at documentation build time:
- `CTBenchmarksDocUtils.jl` – main module that includes all submodules and exports the public API used in templates and `docs/make.jl`.
- `utils.jl` – entry point loaded by `include(joinpath(@__DIR__, "src", "docutils", "utils.jl"))` in `docs/make.jl` and by `@setup BENCH` blocks via `include(normpath(joinpath(@__DIR__, "..", "docutils", "utils.jl")))`.
- `modules/` – helper modules implementing template generation/processing, figure generation, performance profiles, environment/log printers, and text generation.
Unlike docs/src/assets/, which holds static files (TOML, benchmark JSON, generated figures) that are copied to the built site, the docutils directory is an internal implementation detail and is not deployed as part of the web assets.
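To visualize this split, the relevant part of the source tree looks roughly as follows (directory roles taken from this page; the exact subfolder names under `assets/` are assumptions):

```text
docs/src/
├─ assets/        # static files, deployed with the built site
│   ├─ toml/        # copied Project.toml / Manifest.toml
│   ├─ benchmarks/  # benchmark JSON results
│   └─ ...          # generated figures
└─ docutils/      # build-time only, never deployed
    ├─ CTBenchmarksDocUtils.jl
    ├─ utils.jl
    └─ modules/
```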
docs/make.jl: Orchestrating the Build
The main steps in docs/make.jl are:
Configuration
- `draft = false` controls execution of `@example` blocks.
- `exclude_problems_from_draft` can force specific problem pages to execute their examples even in draft mode.
- `debug = false` controls the verbosity of logs from documentation utilities: when set to `true`, additional per-block messages and full stacktraces are printed for easier debugging of template and figure generation.
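A minimal sketch of how these flags might appear at the top of `docs/make.jl` (the flag names come from this page; the types and defaults shown are assumptions):

```julia
# Documentation build configuration (illustrative defaults)
draft = false                            # true: skip executing @example blocks
exclude_problems_from_draft = String[]   # problem pages whose examples always run, even in draft mode
debug = false                            # true: print per-block messages and full stacktraces
```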
Environment files
`Project.toml` and `Manifest.toml` are copied into `docs/src/assets/toml/` so that the exact environment used for the documentation is preserved.
Documentation utilities
`include("src/docutils/utils.jl")` loads all helper modules: template generation, template processing, figure generation, plotting, and log/environment printers.
Template generation for problems
`with_processed_template_problems(joinpath(@__DIR__, "src"); ...) do core_problems`:
- Calls into `TemplateGenerator.write_core_benchmark_templates` to create `.md.template` files for all core benchmark problems.
- Returns a list of generated template paths and the list of problem names `core_problems`.
- Ensures that all generated `.md.template` files are deleted afterwards.
Flow (problems):
```text
with_processed_template_problems(src) do core_problems
├─ write_core_benchmark_templates(src, draft, exclude)
│   ├─ read core benchmark JSONs
│   ├─ collect all problems
│   └─ write core/problems/<problem>.md.template
├─ core_problems = list of problem names
└─ f(core_problems)   # calls into template processing + makedocs
# finally: remove generated .md.template files
end
```
Template processing
Inside the `do core_problems` block, we call:

```julia
with_processed_templates(
    [
        joinpath("core", "cpu.md"),
        joinpath("core", "gpu.md"),
        joinpath("core", "problems"),
    ],
    joinpath(@__DIR__, "src"),
    joinpath(@__DIR__, "src", "assets", "md"),
) do
    makedocs(; ...)
end
```

`with_processed_templates` (from `TemplateProcessor.jl`) takes a list of template files/directories and:
- Resolves them to concrete template paths (e.g., `core/cpu.md.template`, `core/problems/*.md.template`).
- Processes each template, replacing `INCLUDE_ENVIRONMENT`, `INCLUDE_FIGURE`, and `INCLUDE_TEXT` blocks (and legacy `INCLUDE_ANALYSIS` blocks) and writing the resulting `.md` files.
- Collects all figure paths generated during processing.
- Runs `makedocs`.
- Cleans up all generated `.md` files and figures in a `finally` block.
Flow (templates):
```text
with_processed_templates(files, src, assets_md) do
├─ construct_template_files(files, src)
│   └─ expand directories → list of *.md.template
├─ process_templates(...)
│   ├─ for each template:
│   │   ├─ replace_environment_blocks
│   │   └─ replace_figure_blocks → generate figures (SVG + PDF)
│   └─ write processed .md files
├─ makedocs(...)
└─ finally
    ├─ remove generated .md files
    └─ remove generated figures (if any)
end
```
Building and deployment
- `makedocs` builds the HTML documentation.
- `deploydocs` publishes it to GitHub Pages.
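As a reminder of the standard Documenter.jl shape of this final step (the keyword values shown are placeholders, not the project's actual settings):

```julia
using Documenter  # provides makedocs and deploydocs

makedocs(;
    sitename = "CTBenchmarks.jl",
    pages = ["Home" => "index.md"],  # illustrative
)
deploydocs(;
    repo = "github.com/<org>/CTBenchmarks.jl.git",  # placeholder repo slug
)
```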
The important takeaway: problem pages and some benchmark pages are not written by hand. They are generated and then processed via templates.
Automatic Problem Pages
Problem pages under docs/src/core/problems/ are generated automatically from benchmark data using TemplateGenerator.jl.
Core benchmark templates
The function write_core_benchmark_templates:
- Reads the list of core benchmarks (e.g., `core-ubuntu-latest`, `core-moonshot-cpu`, `core-moonshot-gpu`).
- For each benchmark, determines which problems appear in its JSON results (e.g., `beam`, `crane`, ...).
- Builds a set of all problems across all core benchmarks.
- For each problem, calls `generate_template_problem_from_list` to create a `.md.template` file under `core/problems/`.
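The steps above can be condensed into a sketch like the following; `get_problems_in_benchmarks` and `generate_template_problem_from_list` are the real helpers named on this page, but the signatures and loop structure shown here are assumptions:

```julia
# Hedged sketch of write_core_benchmark_templates (not the actual implementation)
function write_core_templates_sketch(src_dir::String, bench_ids::Vector{String})
    problems = Set{String}()
    for id in bench_ids
        # each benchmark's results live under assets/benchmarks/<id>/<id>.json
        union!(problems, get_problems_in_benchmarks(src_dir, id))
    end
    for p in sort!(collect(problems))
        # writes core/problems/<p>.md.template
        generate_template_problem_from_list(p, bench_ids, src_dir)
    end
end
```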
```text
core-*.json (benchmark results)
└─ write_core_benchmark_templates
    ├─ get_problems_in_benchmarks → [problem_1, problem_2, ...]
    └─ for each problem
        └─ generate_template_problem_from_list
            └─ core/problems/<problem>.md.template
```
Structure of a generated problem page
Inside generate_template_problem and generate_template_problem_from_list, a typical problem page contains:
- A title and description for the problem.
- A single `@setup BENCH` block that loads `utils.jl`.
- One section per benchmark configuration (e.g., one for `core-ubuntu-latest`, one for `core-moonshot-cpu`, etc.). For each section:
  - An `INCLUDE_ENVIRONMENT` block that will display environment and configuration information.
  - One or several `INCLUDE_FIGURE` blocks for plots such as:
    - Global performance profiles.
    - Time vs grid size (line and bar plots).
  - A `@example BENCH` block that calls `_print_benchmark_log` with the corresponding `bench_id` to print detailed results.
You do not edit these pages by hand. They are regenerated from templates whenever documentation is built.
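For orientation, a generated template for a hypothetical `beam` problem might look roughly like this. The section layout follows the description above, but the wording, arguments, and the include path (which depends on the page's depth) are illustrative, not the exact generated output:

````markdown
# beam

Short description of the problem.

```@setup BENCH
include(normpath(joinpath(@__DIR__, "..", "..", "docutils", "utils.jl")))
```

## core-ubuntu-latest

<!-- INCLUDE_ENVIRONMENT:
BENCH_ID = "core-ubuntu-latest"
ENV_NAME = BENCH
-->

<!-- INCLUDE_FIGURE:
FUNCTION = _plot_time_vs_grid_size
ARGS = beam, core-ubuntu-latest
-->

```@example BENCH
_print_benchmark_log("core-ubuntu-latest") # hide
```
````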
Template Processing and Special Blocks
Template files (both auto-generated and manual) may contain special blocks of the form:
```
<!-- INCLUDE_ENVIRONMENT: ... -->
<!-- INCLUDE_FIGURE: ... -->
<!-- INCLUDE_TEXT: ... -->
```
These are handled by TemplateProcessor.jl.
INCLUDE_ENVIRONMENT
INCLUDE_ENVIRONMENT blocks are used to inject environment and configuration information for a given benchmark. They look like:
```
<!-- INCLUDE_ENVIRONMENT:
BENCH_ID = "core-ubuntu-latest"
ENV_NAME = BENCH
-->
```

During template processing:
- The parameter block is parsed by `parse_include_params`.
- The environment template `environment.md.template` is loaded.
- Variables such as `BENCH_ID` and `ENV_NAME` are substituted.
- The template is rendered using helper functions from `PrintEnvConfig.jl`, typically including:
  - Download links for `Project.toml`, `Manifest.toml`, and the benchmark script via `_downloads_toml`.
  - Basic metadata (timestamp, Julia version, OS, machine) via `_basic_metadata`.
  - Optional detailed metadata (`_version_info`, `_complete_manifest`, `_print_config`).
The resulting Markdown replaces the original comment block in the generated .md file.
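The parameter syntax above is simple `KEY = value` lines, so parsing can be sketched in a few lines of Julia. This is a self-contained illustration, not the actual `parse_include_params`:

```julia
# Hedged sketch: parse the body of an INCLUDE_* block into a Dict of parameters.
function parse_params_sketch(body::AbstractString)
    params = Dict{String,String}()
    for line in split(body, '\n')
        # matches e.g. `BENCH_ID = "core-ubuntu-latest"` or `ENV_NAME = BENCH`
        m = match(r"^\s*(\w+)\s*=\s*\"?([^\"]*?)\"?\s*$", line)
        m === nothing && continue
        params[m.captures[1]] = m.captures[2]
    end
    return params
end
```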
```text
core/...md.template
└─ <!-- INCLUDE_ENVIRONMENT: BENCH_ID = "core-ubuntu-latest", ... -->
    └─ replace_environment_blocks
        └─ environment.md.template + PrintEnvConfig helpers
            └─ Markdown block (links + metadata + config)
```
INCLUDE_FIGURE
INCLUDE_FIGURE blocks are used to generate and insert plots. For example:
```
<!-- INCLUDE_FIGURE:
FUNCTION = _plot_profile_default_cpu
ARGS = core-ubuntu-latest
-->
```

During processing:
- The function name and arguments are parsed from the block.
- `FigureGeneration.jl` looks up the function in the `FIGURE_FUNCTIONS` registry, which currently includes (among others):
  - `_plot_profile_default_cpu`
  - `_plot_profile_default_iter`
  - `_plot_time_vs_grid_size`
  - `_plot_time_vs_grid_size_bar`
- The plotting function is called in the `BENCH` environment with string arguments.
- Two files are generated in the figures directory (SVG + PDF), with a unique basename derived from the template name, function name, and arguments.
- The template processor emits Markdown that:
  - Embeds the SVG figure in the page.
  - Wraps the SVG in a link pointing to the PDF.
As a result, figures in the documentation are clickable and open a PDF version suitable for high-quality printing.
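One way the processor could emit such a clickable figure is shown below; the exact markup and helper name are assumptions based on the behavior described above:

````julia
# Hedged sketch: build the Documenter @raw html block linking the SVG to its PDF.
function figure_markdown_sketch(svg_rel::String, pdf_rel::String, alt::String)
    return """
    ```@raw html
    <a href="$(pdf_rel)"><img src="$(svg_rel)" alt="$(alt)"></a>
    ```
    """
end
````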
```text
core/...md.template
└─ <!-- INCLUDE_FIGURE: FUNCTION = _plot_profile_default_cpu, ARGS = core-ubuntu-latest -->
    └─ replace_figure_blocks
        ├─ call_figure_function(FUNCTION, ARGS)
        ├─ generate_figure_files → SVG + PDF in assets/plots
        └─ emit @raw html block (img SVG, link PDF)
```
INCLUDE_TEXT
INCLUDE_TEXT blocks are used to generate textual analysis or other Markdown-compatible content from benchmark results, such as performance-profile summaries or tables.
For example:
```
<!-- INCLUDE_TEXT:
FUNCTION = _analyze_profile_default_cpu
ARGS = core-ubuntu-latest
-->
```

During processing:
- The function name and arguments are parsed from the block.
- `TextGeneration.jl` looks up the function in the `TEXT_FUNCTIONS` registry, which currently includes:
  - `_analyze_profile_default_cpu`
  - `_analyze_profile_default_iter`
  - `_print_benchmark_table_results`
- The text function is called with string arguments and must return a Markdown-compatible string (for example, the output of `analyze_performance_profile(pp)`, a benchmark table, or a string containing `@raw html` blocks for more advanced layouts).
- The returned content is inlined directly into the generated `.md` file.
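The registry-plus-dispatch pattern can be sketched as follows; `TEXT_FUNCTIONS` is the real registry name, but the dummy entry and lookup helper here are illustrative only:

```julia
# Hedged sketch of a name → function registry for INCLUDE_TEXT dispatch.
const TEXT_FUNCTIONS_SKETCH = Dict{String,Function}(
    "_print_benchmark_table_results" => args -> "| solver | time (s) |\n|---|---|",  # dummy body
)

function call_text_function_sketch(name::String, args::Vector{String})
    haskey(TEXT_FUNCTIONS_SKETCH, name) ||
        error("Unknown INCLUDE_TEXT function: $name")
    return TEXT_FUNCTIONS_SKETCH[name](args)
end
```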
Example: Dynamic multi-problem benchmark table
A common usage of `INCLUDE_TEXT` is to render benchmark-result tables via `_print_benchmark_table_results`:
```
<!-- INCLUDE_TEXT:
FUNCTION = _print_benchmark_table_results
ARGS = core-ubuntu-latest
-->
```

- If the benchmark contains a single problem, the function returns a standard Markdown table.
- If the benchmark contains multiple problems, it returns a `@raw html` block containing:
  - a `<select>` element listing all problems,
  - one HTML table per problem, each wrapped in a `<div>` and toggled via a small JavaScript snippet,
  - persistence of the last selected problem using `window.localStorage` with a key derived from the benchmark ID.
This allows long per-problem tables to remain compact and navigable in the rendered documentation while being generated from a single INCLUDE_TEXT block in the template.
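A rough illustration of the markup such a block might contain; the element ids, class names, storage key, and script are all assumptions, shown only to make the select/toggle/persist mechanism concrete:

```html
<!-- Illustrative only: the real generated markup may differ -->
<select id="sel-core-ubuntu-latest" onchange="showTable(this)">
  <option value="beam">beam</option>
  <option value="crane">crane</option>
</select>
<div class="bench-table" id="tbl-beam"><!-- Markdown table for beam --></div>
<div class="bench-table" id="tbl-crane" style="display:none"><!-- table for crane --></div>
<script>
function showTable(sel) {
  document.querySelectorAll(".bench-table").forEach(d => d.style.display = "none");
  document.getElementById("tbl-" + sel.value).style.display = "";
  // remember the last selection, keyed by benchmark id
  window.localStorage.setItem("bench-table-core-ubuntu-latest", sel.value);
}
</script>
```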
Figure Types, Analysis, and Helper Functions
Several helper modules provide the concrete plots and textual outputs:
Performance profiles — `PerformanceProfileCore.jl` + `PlotPerformanceProfile.jl` + `AnalyzePerformanceProfile.jl` + `TextGeneration.jl`
- `_plot_profile_default_cpu(bench_id)` / `_plot_profile_default_iter(bench_id)` (called via `INCLUDE_FIGURE`)
- `_analyze_profile_default_cpu(bench_id)` / `_analyze_profile_default_iter(bench_id)` (called via `INCLUDE_TEXT`)
- Together, these functions compute, plot, and summarize Dolan–Moré-style performance profiles over `(problem, grid_size)` instances and `(model, solver)` combinations.
Time vs grid size — `PlotTimeVsGridSize.jl`
- `_plot_time_vs_grid_size(problem, bench_id, src_dir)`
- `_plot_time_vs_grid_size_bar(problem, bench_id, src_dir)`
- Line and bar plots showing mean solve time as a function of grid size.
Benchmark logs — `PrintLogResults.jl`
- `_print_benchmark_log(bench_id, src_dir; problems=nothing)`
- Prints a tree-structured log by problem, solver, discretization, grid size, and model, with colored formatting.
Environment and configuration — `PrintEnvConfig.jl`
- `_downloads_toml(bench_id, src_dir)`
- `_basic_metadata(bench_id, src_dir)`
- `_version_info(bench_id, src_dir)`
- `_complete_manifest(bench_id, src_dir)`
- `_print_config(bench_id, src_dir)`
These functions are all made available by utils.jl and are typically used indirectly via INCLUDE_ENVIRONMENT, INCLUDE_FIGURE, or @example BENCH blocks.
Adding Documentation for a Benchmark
There are two complementary ways benchmark results appear in the documentation.
1. Automatic per-problem pages (core benchmarks)
For core benchmarks, once the benchmark JSON files are present under docs/src/assets/benchmarks/<id>/<id>.json, the corresponding problem pages are generated automatically by write_core_benchmark_templates.
You do not need to create these pages manually. The system inspects the JSON results, discovers which problems were benchmarked, and creates one section per benchmark configuration in the appropriate problem page.
2. Manual benchmark pages
You can also write dedicated pages for specific benchmarks, such as docs/src/core/cpu.md.template or docs/src/benchmark-<name>.md.template.
The general pattern for such a page is:
Add a single `@setup BENCH` block at the top of the page:

```@setup BENCH
# Load utilities
include(normpath(joinpath(@__DIR__, "..", "docutils", "utils.jl")))
```

For each benchmark you want to show, add:

An `INCLUDE_ENVIRONMENT` block with a literal `BENCH_ID`:

```
<!-- INCLUDE_ENVIRONMENT:
BENCH_ID = "core-ubuntu-latest"
ENV_NAME = BENCH
-->
```

One or more `INCLUDE_FIGURE` blocks, for example a CPU-time performance profile and an iterations profile:

```
<!-- INCLUDE_FIGURE:
FUNCTION = _plot_profile_default_cpu
ARGS = core-ubuntu-latest
-->

<!-- INCLUDE_FIGURE:
FUNCTION = _plot_profile_default_iter
ARGS = core-ubuntu-latest
-->
```

Optional `INCLUDE_TEXT` blocks to insert textual analysis:

```
<!-- INCLUDE_TEXT:
FUNCTION = _analyze_profile_default_cpu
ARGS = core-ubuntu-latest
-->

<!-- INCLUDE_TEXT:
FUNCTION = _print_benchmark_table_results
ARGS = core-ubuntu-latest
-->
```

A `@example BENCH` block to print the benchmark log:

```@example BENCH
_print_benchmark_log("core-ubuntu-latest") # hide
```
Add your page to `docs/make.jl` in the `pages` list so that Documenter knows about it.
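For example, a new manual page could be registered like this; the surrounding `pages` structure is an assumption, and note that entries reference the processed `.md` names, not the `.md.template` sources:

```julia
# Illustrative pages entry in docs/make.jl
makedocs(;
    sitename = "CTBenchmarks.jl",
    pages = [
        "Home" => "index.md",
        "Core benchmarks" => [
            "CPU" => "core/cpu.md",
            "GPU" => "core/gpu.md",
        ],
        "My benchmark" => "benchmark-myname.md",  # hypothetical new page
    ],
)
```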
For a minimal template example, see the "Documentation page" step in Add a new benchmark. That section is intentionally concise and defers to this page for full details of the template processing pipeline.
Summary
- `docs/make.jl` drives the whole documentation build: copying environment files, generating templates, processing them, and calling `makedocs`.
- Problem pages for core benchmarks are generated automatically from benchmark results.
- Template processing replaces `INCLUDE_ENVIRONMENT`, `INCLUDE_FIGURE`, and `INCLUDE_TEXT` blocks (or legacy `INCLUDE_ANALYSIS` blocks) with rich content, figures, and textual analyses.
- Helper modules provide plotting, logging, and environment/configuration utilities.
- To document a new benchmark, you can rely on the automatic problem pages and optionally add a manual page following the template pattern above.