GitHub Repository: quarto-dev/quarto-cli
Path: blob/main/tests/README.md

Running tests for Quarto

The tests/ folder is the place for everything related to testing of quarto-cli.

We run several types of tests:

  • Unit tests, located in the unit/ folder

  • Integration tests, located in the integration/ folder

  • Smoke tests, located in the smoke/ folder

Tests are run in our CI workflow on GHA for each commit and each PR.

How are the tests created and organized?

Tests run through the Deno.test() framework, adapted for our Quarto project, and are all written in TypeScript. The infrastructure lives in tests.ts, tests.deps.ts, verify.ts and utils.ts, which contain the helper functions that can be used.

  • unit/, integration/ and smoke/ contain the .ts scripts implementing each test.

  • docs/ is a special folder containing the files and projects used by the tests.
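The layout can be pictured like this (illustrative, not exhaustive):

```
tests/
├── unit/          # unit tests (*.test.ts)
├── integration/   # integration tests (*.test.ts)
├── smoke/         # smoke tests, including smoke-all.test.ts
└── docs/          # documents and projects used by the tests
    └── smoke-all/ # documents run through smoke-all.test.ts
```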

Running the tests locally

Dependencies requirements

Here is what is expected in the environment for the tests:

  • R should be installed and in PATH - rig is a good tool to manage R versions, e.g. rig install 4.4.2 and rig default 4.4.2 to install version 4.4.2 and set it as the default.

    • On Windows, Rtools should be installed too (for source package installation)

  • Python should be installed and in PATH - pyenv is a good option to manage Python versions.

  • Julia should be installed and in PATH - juliaup is a good option to manage Julia versions.

    • On Windows, one way is using winget install julia -s msstore and then adding %LOCALAPPDATA%/Programs/Julia/bin to PATH

Running the tests requires a local environment set up for Quarto development, with TinyTeX, R, Python and Julia.

To help with this configuration, the tests/ folder contains configure-test-env.sh and configure-test-env.ps1. These scripts check for the required tools and update the dependencies to what is used by the Quarto tests. Running the script at least once ensures you are correctly set up. It is then run as part of running the tests so that dependencies are always updated. Set QUARTO_TESTS_NO_CONFIG to skip this step when running tests.

Optional test dependencies

The configure scripts also check for optional tools that some tests require. Tests will gracefully skip when these tools are not available, but having them installed enables full test coverage:

Java (version 8, 11, 17, or 21)

  • Required for: PDF standard validation tests using veraPDF

  • If Java is found, the script will install veraPDF automatically using quarto install verapdf

Node.js (version 18 or later) and npm

  • Required for: Playwright integration tests and JATS/MECA validation

  • Installation: Download from https://nodejs.org/ or use a version manager like nvm

  • The script will:

    • Check Node.js version and warn if < 18

    • Install the meca package globally for MECA validation

    • Install Playwright and its dependencies

    • Set up the multiplex server for Playwright tests

    • Install Playwright browsers (Chrome, Firefox, etc.)

pdftotext (from poppler)

  • Required for: Some PDF text extraction tests

  • Installation:

    • Ubuntu/Debian: sudo apt-get install poppler-utils

    • macOS: brew install poppler

    • Windows: scoop install poppler (auto-installed if Scoop is available)

rsvg-convert (from librsvg)

  • Required for: PDF tests with SVG image conversion

  • Installation:

    • Ubuntu/Debian: sudo apt-get install librsvg2-bin

    • macOS: brew install librsvg

    • Windows: scoop install librsvg (auto-installed if Scoop is available)

On Windows, the scripts will attempt to auto-install poppler and librsvg via Scoop if it's available on your system.

Dependencies are managed using the following tools:

R

We use renv. The renv.lock file and renv/ folder are used to recreate the R environment.

renv.lock is updated using renv::snapshot(). The file shouldn't be modified manually.

Our project uses explicit dependency discovery through a DESCRIPTION file, to avoid costly scanning of all files in tests/ to guess R dependencies. This means that if you need to add a test with a new R package dependency:

  • Add package(s) to DESCRIPTION in tests/

  • renv::install() the package into the project library

  • Finish working on your test

  • renv::snapshot() to record the new dependency in the renv.lock

  • Commit the new DESCRIPTION and renv.lock

See documentation if you need to tweak the R environment.

After a dependency update, you can run configure-test-env.sh or configure-test-env.ps1 to update the environment, or manually run renv::restore() to recreate the environment with the new versions. Be sure to update your R version if needed.

Python

We now use uv (previously, it was pipenv) to manage dependencies and recreate the environment easily on all OSes. uv is not installed as part of the configuration - it needs to be installed manually - see the various ways at: https://docs.astral.sh/uv/getting-started/installation/

uv will handle the Python versions, including their installation, based on the .python-version file we have in the tests/ folder. It will also manage the virtual environment in the .venv folder.

A virtual environment will be created locally in the .venv folder (ignored by git) and activated when running tests. uv run can help activate the environment outside of running tests, to run a command in the environment.

pyproject.toml contains the dependency requirements for the tests project. It can be updated manually, but it is best to just use uv commands. For instance, adding a new dependency can be done with uv add plotly: it updates the file, updates uv.lock, and installs the package into the virtual environment. uv.lock should never be updated manually; it is tracked by git, as it allows recreating the exact same environment everywhere (Linux, macOS, Windows, locally and on CI).

See the other uv commands if you need to do more.

For a change of Python version, .python-version needs to be updated, and uv will take care of the rest. The configure-test-env script will check for uv and, if installed, call uv sync to make sure the project virtual environment is up to date with the lockfile.

Note that ./run-tests.sh and ./run-tests.ps1:

  • run the configure-test-env script by default, unless the QUARTO_TESTS_NO_CONFIG environment variable is set to a non-empty value.

  • activate the local virtualenv expected in .venv. Set QUARTO_TESTS_FORCE_NO_VENV to a non-empty value to prevent this behavior. (It replaces QUARTO_TESTS_FORCE_NO_PIPENV, which is still honored for backward compatibility but deprecated.)

Julia

Julia uses its built-in package manager Pkg.jl - we provide Project.toml and Manifest.toml to recreate the environment.

Project.toml contains our direct dependencies, and Manifest.toml is the lock file that gets created (Pkg.resolve()).

Important: All test dependencies must be in the main tests/ environment. Julia searches UP the directory tree for Project.toml starting from the document being rendered.

Adding a new package dependency:

```
cd tests
julia --project=. -e 'using Pkg; Pkg.add("PackageName")'
./configure-test-env.sh   # or configure-test-env.ps1 on Windows
```

Do NOT create local Project.toml files in test subdirectories (e.g., tests/docs/*/Project.toml). Julia will use that environment instead of the main tests/ environment. The configure-test-env scripts only manage the main environment, so tests with local environments will fail in CI even if they work locally.
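To illustrate the rule above (paths are illustrative):

```
tests/
├── Project.toml          # ✓ the single managed Julia environment
├── Manifest.toml
└── docs/smoke-all/.../
    └── Project.toml      # ✗ do NOT add local environments here
```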

Note: This applies to ALL engines (Julia, Python, R). Python and R will also use local .venv/ or renv.lock if present. The quarto-cli test infrastructure uses a single managed environment per language at tests/, and CI only configures these main environments.

See the documentation on how to add, remove, or update packages if you need to tweak the Julia environment.

How to run tests locally?

Tests are run using run-tests.sh on UNIX, and run-tests.ps1 on Windows.

```
# run all tests
./run-tests.sh
# run a specific test file
./run-tests.sh smoke/extensions/extension-render-doc.test.ts
```

```
# run all tests
./run-tests.ps1
# run a specific test file
./run-tests.ps1 smoke/extensions/extension-render-doc.test.ts
```

Test environment variables

The test scripts support several environment variables to control their behavior:

QUARTO_TESTS_NO_CONFIG

  • Skip running configure-test-env scripts

  • Useful for faster test runs when the environment is already configured

  • Tests will still activate .venv if present

```
# Linux/macOS
QUARTO_TESTS_NO_CONFIG="true" ./run-tests.sh
```

```
# Windows
$env:QUARTO_TESTS_NO_CONFIG="true"
./run-tests.ps1
```

QUARTO_TESTS_FORCE_NO_VENV (replaces deprecated QUARTO_TESTS_FORCE_NO_PIPENV)

  • Skip activating the .venv virtual environment

  • Tests will use system Python packages instead of UV-managed dependencies

  • Use with caution: Python tests may fail if dependencies aren't in system Python

```
# Linux/macOS
QUARTO_TESTS_FORCE_NO_VENV="true" ./run-tests.sh
```

```
# Windows
$env:QUARTO_TESTS_FORCE_NO_VENV="true"
./run-tests.ps1
```

Quick test runs with run-fast-tests scripts

For convenience, run-fast-tests.sh and run-fast-tests.ps1 are provided to skip environment configuration:

```
# Linux/macOS
./run-fast-tests.sh
```

```
# Windows
./run-fast-tests.ps1
```

These scripts set QUARTO_TESTS_NO_CONFIG automatically. Use after running configure-test-env at least once.

QUARTO_TEST_KEEP_OUTPUTS (or use --keep-outputs/-k flag)

  • Keep test output artifacts instead of cleaning them up

  • Useful for debugging test failures or inspecting generated files

  • Can be set via environment variable or command-line flag

```
# Using the flag
./run-tests.sh --keep-outputs
./run-tests.sh -k
# Using the environment variable
QUARTO_TEST_KEEP_OUTPUTS="true" ./run-tests.sh
```

```
# Using the flag
./run-tests.ps1 --keep-outputs
./run-tests.ps1 -k
# Using the environment variable
$env:QUARTO_TEST_KEEP_OUTPUTS="true"
./run-tests.ps1
```

Other environment variables

  • QUARTO_TEST_VERBOSE - Enable verbose test output

  • QUARTO_TESTS_NO_CHECK - Not currently used (legacy variable)

About smoke-all tests

docs/smoke-all/ is a specific folder to run tests written directly within .qmd, .md or .ipynb files (files starting with _ are ignored). They are run through the smoke/smoke-all.test.ts script. To ease running smoke-all tests, run-tests.sh has a special behavior: it runs ./smoke/smoke-all.test.ts when passed a .qmd, .md or .ipynb file not starting with _.

```
# run tests for all documents in docs/smoke-all/
./run-tests.sh smoke/smoke-all.test.ts
# run tests for some `.qmd` documents in a specific place (using a glob)
./run-tests.sh docs/smoke-all/2022/**/*.qmd
# or using the longer version
./run-tests.sh smoke/smoke-all.test.ts -- docs/smoke-all/2022/**/*.qmd
# run the test for a specific document
./run-tests.sh docs/smoke-all/2023/01/04/issue-3847.qmd
# or using the longer version
./run-tests.sh smoke/smoke-all.test.ts -- docs/smoke-all/2023/01/04/issue-3847.qmd
```
Example of test output:

```
$ ./run-tests.sh smoke/smoke-all.test.ts -- docs/smoke-all/2023/01/04/issue-3847.qmd
> Checking and configuring environment for tests
>>>> Configuring R environment
* The library is already synchronized with the lockfile.
>>>> Configuring Python environment
Setting up python environnement with pipenv
Installing dependencies from Pipfile.lock (0ded54)...
To activate this project's virtualenv, run pipenv shell.
Alternatively, run a command inside the virtualenv with pipenv run.
>>>> Configuring Julia environment
Setting up Julia environment
Building Conda ─→ `~/.julia/scratchspaces/44cfe95a-1eb2-52ea-b672-e2afdf69b78f/e32a90da027ca45d84678b826fffd3110bb3fc90/build.log`
Building IJulia → `~/.julia/scratchspaces/44cfe95a-1eb2-52ea-b672-e2afdf69b78f/59e19713542dd9dd02f31d59edbada69530d6a14/build.log`
>>>> Configuring TinyTeX environment
Setting GH_TOKEN env var for Github Download.
tinytex is already installed and up to date.
> Activating virtualenv for Python tests
Check file:///home/cderv/project/quarto-cli/tests/smoke/smoke-all.test.ts
running 1 test from ./smoke/smoke-all.test.ts
[smoke] > quarto render docs/smoke-all/2023/01/04/issue-3847.qmd --to html ...
------- output -------
[verify] > No Errors or Warnings
----- output end -----
[smoke] > quarto render docs/smoke-all/2023/01/04/issue-3847.qmd --to html ... ok (320ms)
ok | 1 passed | 0 failed (1s)
> Exiting virtualenv activated for tests
```
```
# run tests for all documents in docs/smoke-all/
./run-tests.ps1 smoke/smoke-all.test.ts
# run tests for some `.qmd` documents in a specific place (using a glob)
./run-tests.ps1 docs/smoke-all/2022/**/*.qmd
# or using the longer version
./run-tests.ps1 smoke/smoke-all.test.ts -- docs/smoke-all/2022/**/*.qmd
# run the test for a specific document
./run-tests.ps1 docs/smoke-all/2023/01/04/issue-3847.qmd
# or using the longer version
./run-tests.ps1 smoke/smoke-all.test.ts -- docs/smoke-all/2023/01/04/issue-3847.qmd
```
Example of test output:

```
./run-tests.ps1 smoke/smoke-all.test.ts -- docs/smoke-all/2023/01/04/issue-3847.qmd
> Setting all the paths required...
> Checking and configuring environment for tests
>>>> Configuring R environment
* The library is already synchronized with the lockfile.
>>>> Configuring python environment
Setting up python environnement with pipenv
Installing dependencies from Pipfile.lock (0ded54)...
To activate this project's virtualenv, run pipenv shell.
Alternatively, run a command inside the virtualenv with pipenv run.
>>>> Configuring Julia environment
Setting up Julia environment
Building Conda ─→ `C:\Users\chris\.julia\scratchspaces\44cfe95a-1eb2-52ea-b672-e2afdf69b78f\e32a90da027ca45d84678b826fffd3110bb3fc90\build.log`
Building IJulia → `C:\Users\chris\.julia\scratchspaces\44cfe95a-1eb2-52ea-b672-e2afdf69b78f\59e19713542dd9dd02f31d59edbada69530d6a14\build.log`
>>>> Configuring TinyTeX environment
tinytex is already installed and up to date.
> Preparing running tests...
> Activating virtualenv for Python tests
> Running tests with "C:\Users\chris\Documents\DEV_R\quarto-cli\package\dist\bin\tools\deno.exe test --config test-conf.json --unstable-ffi --allow-read --allow-write --allow-run --allow-env --allow-net --check --importmap=C:\Users\chris\Documents\DEV_R\quarto-cli\src\dev_import_map.json smoke/smoke-all.test.ts -- docs/smoke-all/2023/01/04/issue-3847.qmd"
running 1 test from ./smoke/smoke-all.test.ts
[smoke] > quarto render docs\smoke-all\2023\01\04\issue-3847.qmd --to html ...
------- output -------
[verify] > No Errors or Warnings
----- output end -----
[smoke] > quarto render docs\smoke-all\2023\01\04\issue-3847.qmd --to html ... ok (650ms)
ok | 1 passed | 0 failed (2s)
```
Controlling test execution with metadata

Smoke-all tests support metadata in the _quarto.tests.run key to control when tests are run:

  • Skip test unconditionally:

    ```yaml
    _quarto:
      tests:
        run:
          skip: true                              # skip with the default message
          # skip: "Reason for skipping this test" # skip with a custom message
    ```

    Use this when a test needs to be temporarily disabled while an issue is being investigated, or when a test is pending an upstream fix. Include a descriptive message explaining why.

  • Skip tests on CI:

    ```yaml
    _quarto:
      tests:
        run:
          ci: false
    ```
  • Skip tests on specific operating systems (blacklist):

    ```yaml
    _quarto:
      tests:
        run:
          not_os: linux             # don't run on Linux
          # not_os: [linux, darwin] # don't run on Linux or macOS
          # not_os: windows         # don't run on Windows
    ```
  • Run tests only on specific operating systems (whitelist):

    ```yaml
    _quarto:
      tests:
        run:
          os: darwin              # run only on macOS
          # os: [windows, darwin] # run only on Windows or macOS
    ```

Valid OS values are: linux, darwin (macOS), and windows.

This is useful when tests require platform-specific dependencies or have known platform-specific issues that need separate investigation.
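For instance, a document that should only run locally on Linux could combine the keys above (a hypothetical combination, using only the keys documented here):

```yaml
_quarto:
  tests:
    run:
      ci: false     # skip on CI
      os: linux     # and run only on Linux
```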

Snapshot testing

Use ensureSnapshotMatches to compare rendered output against a saved snapshot file:

```yaml
_quarto:
  tests:
    html:
      ensureSnapshotMatches: []
```

The snapshot file should be saved alongside the output with a .snapshot extension (e.g., output.html.snapshot).

When a snapshot test fails:

  • A unified diff is displayed with colored output (red for removed, green for added)

  • A word-level diff shows changes with surrounding context

  • For whitespace-only changes, special markers visualize invisible characters:

    • dedicated markers for newlines and tabs, and · for spaces

  • A .diff file is saved next to the output for later inspection

  • The .diff file is automatically cleaned up when the snapshot passes

Limitations

  • smoke-all.test.ts accepts only one argument. You need to use a glob pattern to run several smoke-all test documents.

  • Individual smoke-all tests and other tests can't be run at the same time with run-tests.[sh|ps1]. This is because smoke-all.test.ts requires arguments. If a smoke-all document and another smoke test are passed as arguments, the smoke-all test is prioritized and the others are ignored (with a warning).

Example with Linux:

You can do

```
# run all smoke-all tests and another smoke test
./run-tests.sh smoke/extensions/extension-render-doc.test.ts smoke/smoke-all.test.ts
# run tests for some `.qmd` documents in a specific place (using a glob)
./run-tests.sh docs/smoke-all/2022/**/*.qmd
```

Don't do

```
# run a .qmd smoke-all test and another smoke test - the smoke-all test takes
# priority and the other is ignored (with a warning)
./run-tests.sh smoke/extensions/extension-render-doc.test.ts ./docs/smoke-all/2023/01/04/issue-3847.qmd
# run smoke-all.test.ts with arguments and another smoke test
./run-tests.sh smoke/extensions/extension-render-doc.test.ts smoke/smoke-all.test.ts -- ./docs/smoke-all/2023/01/04/issue-3847.qmd
```

Debugging within tests

.vscode/launch.json has a Run Quarto test configuration that can be used to debug tests. You need to modify the program and args fields to match the test to run.

Example:

```
"program": "smoke/smoke-all.test.ts",                             // test script here
"args": ["--", "tests/docs/smoke-all/2023/01/04/issue-3847.qmd"], // args to the script, as on the command line
```

The short version can't be used here, as we are calling deno test directly and not the run-tests.sh script.

Parallel testing

Linux only

This lives in run-parallel-tests.ts and is called through run-parallel-tests.sh.

How does it work?

  • It requires a text file with timed tests following a specific format. (The default is timing.txt, and there is an example in our repo.)

  • Based on this file, the tests are split into buckets to minimize the overall test time (each test goes into the bucket with the minimum overall time).

  • Then ./run-tests.sh is run for each bucket from Deno using Promise.all(), with run-tests.sh receiving the whole bucket's test files, so that the buckets run in parallel.

This is a simple way to run all the tests or a subset of tests in parallel locally.
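The bucket-filling strategy described above can be sketched as follows. This is a simplified Python illustration, not the actual run-parallel-tests.ts code; the function and variable names are invented, and "longest test first" is just a common balancing heuristic:

```python
def fill_buckets(timings, n):
    """Greedy split: each test goes into the currently least-loaded bucket."""
    buckets = [{"total": 0.0, "tests": []} for _ in range(n)]
    # Processing the longest tests first usually gives a better balance.
    for test, seconds in sorted(timings.items(), key=lambda kv: -kv[1]):
        target = min(buckets, key=lambda b: b["total"])  # least-loaded bucket
        target["tests"].append(test)
        target["total"] += seconds
    return buckets

# Timings taken from the example timing file shown later in this README.
timings = {
    "./smoke/filters/filters.test.ts": 3.26,
    "./smoke/engine/include-engine-detection.test.ts": 3.61,
    "./smoke/directives/include-fixups.test.ts": 0.02,
    "./smoke/filters/editor-support.test.ts": 0.72,
}
buckets = fill_buckets(timings, 2)
# Each bucket would then be handed to ./run-tests.sh and run in parallel.
```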

About timed tests

To create a timing file like timing.txt, this command needs to be run:

```
QUARTO_TEST_TIMING='timing.txt' ./run-tests.sh
```

When this is done, any other argument will be ignored, and the following happens:

  • All the *.test.ts files are found and run individually, using /usr/bin/time to store the timing in the file.

  • When smoke-all.test.ts is found, all the *.qmd, *.md and *.ipynb files in docs/smoke-all/ not starting with _ are found and run individually using the same logic. This means each smoke-all test is timed.

The results are written to the $QUARTO_TEST_TIMING file. Here is an example:

```
./smoke/directives/include-fixups.test.ts
        0.02 real         0.02 user         0.00 sys
./smoke/filters/filters.test.ts
        3.26 real         3.79 user         0.47 sys
./smoke/filters/editor-support.test.ts
        0.72 real         0.58 user         0.14 sys
./smoke/engine/include-engine-detection.test.ts
        3.61 real         3.11 user         0.24 sys
./smoke/smoke-all.test.ts -- docs/smoke-all/2022/12/12/code-annotation.qmd
        4.81 real         4.32 user         0.33 sys
./smoke/smoke-all.test.ts -- docs/smoke-all/2022/12/9/jats/computations.out.ipynb
        2.22 real         2.83 user         0.27 sys
```

This file is read by run-parallel-tests.ts, which uses the real value to fill the buckets.
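As a rough sketch (not the actual parser), extracting the real seconds from a /usr/bin/time-style result line could look like this; `real_seconds` is a hypothetical helper name:

```python
import re

def real_seconds(timing_line):
    """Pull the `real` time (in seconds) out of a /usr/bin/time result line."""
    match = re.search(r"([\d.]+)\s+real", timing_line)
    return float(match.group(1)) if match else None

line = "        3.26 real         3.79 user         0.47 sys"
print(real_seconds(line))  # → 3.26
```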

Specific behavior for smoke-all.test.ts

smoke-all tests are special because they take the form of individual .qmd or .ipynb documents that need to be run through the smoke-all.test.ts script, with arguments. Unfortunately, this prevents running individual smoke-all documents in the same buckets as other individual smoke tests (which are their own .test.ts files).

So, if the timing file contains individual timings for smoke-all documents like this:

```
./smoke/smoke-all.test.ts -- docs/smoke-all/2022/12/12/code-annotation.qmd
```

then they are ignored and smoke-all.test.ts is run in its own bucket. It will usually be the longest test run.

Individual smoke-all test timings are useful for Quarto's parallelized smoke tests on GHA CI, where the buckets are split across their own runners and each test in a bucket is run using run-tests.sh. This allows a bucket to contain some *.test.ts files but also some *.qmd or *.ipynb documents. More details in test-smokes.yml and test-smokes-parallel.yml.

Arguments that control behavior

  • -n=: the number of buckets to create and run in parallel. run-parallel-tests.sh -n=5 splits the tests into 5 buckets and runs them at the same time. For a local run, n should be the number of cores. For a CI run, n will be the number of runners to use at the same time (multiplied by 2, because Linux and Windows are both run on CI).

  • --verbose: show some verbosity. Otherwise, no specific logging is done in the console.

  • --dry-run: show the buckets of tests, but do not run them.

  • --timing-file=: which file to use as the timed-tests information to create the buckets (defaults to timing.txt). run-parallel-tests.sh --timing-file='timing2.txt' will use timing2.txt.

  • --json-for-ci: a special flag to trigger splitting the tests into buckets for the parallel run on CI; it makes run-parallel-tests.sh output a JSON string specifically formatted for GHA processing.

About tests in CI with GHA

  • test-smokes-parallel.yml will be triggered to load timing-for-ci.txt and split the tests into buckets. It creates a matrix to trigger test-smokes.yml on a workflow_call event for each bucket.

    • PRs against main and commits to main will trigger this workflow, and the tests will run in parallel jobs.

    • A workflow_dispatch event can be used to trigger it through an API call, the gh CLI tool, or the GHA web UI.

  • test-smokes.yml is the main CI workflow, which configures the environment and runs the tests on Ubuntu and Windows.

    • If it was triggered by workflow_call, it runs each test using run-tests.[sh|ps1] in a for-loop.

    • Scheduled tests are still run daily in their sequential version.