Running tests for Quarto
The tests/ folder is the place for everything related to testing of quarto-cli.
We run several types of tests:

- Unit tests, located in the `unit/` folder
- Integration tests, located in the `integration/` folder
- Smoke tests, located in the `smoke/` folder
Tests are run in our CI workflow on GHA at each commit, and for each PR.
How are the tests created and organized?
Tests run through the Deno.test() framework, adapted for our Quarto project, and are all written in TypeScript. The infrastructure lives in `tests.ts`, `tests.deps.ts`, `verify.ts` and `utils.ts`, which contain the helper functions that can be used.
`unit/`, `integration/` and `smoke/` contain the `.ts` scripts, one per test. `docs/` is a special folder containing all the necessary files and projects used by the tests.
Running the tests locally
Dependencies requirements
Here is what is expected in the environment for the tests:
- R should be installed and in PATH - rig is a good tool to manage R versions, e.g. `rig install 4.4.2` and `rig default 4.4.2` to install version 4.4.2 and set it as the default. On Windows, Rtools should be installed too (for source package installation).
- Python should be installed and in PATH - pyenv is a good option to manage Python versions. On Windows, it will be pyenv-win to manage versions. Otherwise, install from https://www.python.org/ manually or using winget.
- Julia should be installed and in PATH - juliaup is a good option to manage Julia versions. On Windows, one way is using `winget install julia -s msstore` and then adding `%LOCALAPPDATA%/Programs/Julia/bin` to PATH.
Running the tests requires a local environment set up with Quarto development, TinyTeX, R, Python and Julia.
To help with this configuration, the tests/ folder contains configure-test-env.sh and configure-test-env.ps1. These scripts check for the tools and update the dependencies to what is used by the Quarto tests. Running the script at least once will ensure you are correctly set up. It is then run as part of running the tests so that dependencies are always up to date. Set QUARTO_TESTS_NO_CONFIG to skip this step when running tests.
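In practice, a typical setup and subsequent fast runs look like this sketch (paths relative to the repository's `tests/` folder):

```sh
cd tests

# first run: check tools and sync all language dependencies
./configure-test-env.sh

# later runs: skip the (re)configuration step for speed
QUARTO_TESTS_NO_CONFIG=1 ./run-tests.sh
```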
Optional test dependencies
The configure scripts also check for optional tools that some tests require. Tests will gracefully skip when these tools are not available, but having them installed enables full test coverage:
Java (version 8, 11, 17, or 21)
Required for: PDF standard validation tests using veraPDF
If Java is found, the script will install veraPDF automatically using `quarto install verapdf`.
Node.js (version 18 or later) and npm
Required for: Playwright integration tests and JATS/MECA validation
Installation: Download from https://nodejs.org/ or use a version manager like nvm
The script will:

- Check the Node.js version and warn if < 18
- Install the `meca` package globally for MECA validation
- Install Playwright and its dependencies
- Set up the multiplex server for Playwright tests
- Install Playwright browsers (Chrome, Firefox, etc.)
pdftotext (from poppler)
Required for: Some PDF text extraction tests
Installation:

- Ubuntu/Debian: `sudo apt-get install poppler-utils`
- macOS: `brew install poppler`
- Windows: `scoop install poppler` (auto-installed if Scoop is available)
rsvg-convert (from librsvg)
Required for: PDF tests with SVG image conversion
Installation:

- Ubuntu/Debian: `sudo apt-get install librsvg2-bin`
- macOS: `brew install librsvg`
- Windows: `scoop install librsvg` (auto-installed if Scoop is available)
On Windows, the scripts will attempt to auto-install poppler and librsvg via Scoop if it's available on your system.
Dependencies are managed using the following tools:
R
We use renv. The renv.lock file and the renv/ folder are used to recreate the R environment.
Updating renv.lock is done using renv::snapshot(). The file shouldn't be modified manually.
Our project uses explicit dependency discovery through a DESCRIPTION file. This avoids a costly scan of all files in tests/ to guess R dependencies. It means that if you need to add a test with a new R package dependency:
- Add the package(s) to `DESCRIPTION` in `tests/`
- `renv::install()` the package into the project library
- Finish working on your test
- `renv::snapshot()` to record the new dependency in `renv.lock`
- Commit the new `DESCRIPTION` and `renv.lock`
See documentation if you need to tweak the R environment.
After a dependency update, you can run configure-test-env.sh or configure-test-env.ps1 to update the environment, or manually run renv::restore() to recreate the environment with the new versions. Be sure to update your R version if needed.
Python
We now use uv (previously, it was pipenv) to manage dependencies and recreate them easily on all OSes. uv will not be installed as part of the configuration - it needs to be installed manually - see the various ways at: https://docs.astral.sh/uv/getting-started/installation/
uv will handle the Python versions, including their installation, based on the .python-version file we have in the tests/ folder. It will also manage the virtual environment in the .venv folder.
The virtual environment is created locally in the .venv folder (ignored by git) and activated when running tests. `uv run` can help activate the environment outside of running tests, to run a command in the environment.
pyproject.toml contains our dependency requirements for the tests project. It can be updated manually, but it is best to just use uv commands. For instance, adding a new dependency can be done with `uv add plotly`: it will update the file, update uv.lock and install into the virtual environment. uv.lock should never be updated manually, and it is tracked by git, as it allows recreating the exact environment on different platforms (Linux, macOS, Windows, locally and on CI).
See the other uv commands if you need to do more.
For a change of Python version, .python-version needs to be updated, and uv will take care of the rest. The configure-test-env script will check for uv and, if installed, call `uv sync` to make sure the project virtual environment is up to date with the lockfile.
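A typical dependency-update cycle with uv looks like the following sketch (the `plotly` package is only an illustration):

```sh
cd tests

# add a dependency: updates pyproject.toml and uv.lock, installs into .venv
uv add plotly

# refresh the environment from the lockfile (also run by configure-test-env)
uv sync

# run an ad-hoc command inside the project environment
uv run python -c "import plotly; print(plotly.__version__)"
```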
Note that ./run-tests.ps1 and ./run-tests.sh:

- Run the `configure-test-env` script by default, unless the `QUARTO_TESTS_NO_CONFIG` environment variable is set to a non-empty value.
- Activate the local virtualenv expected in `.venv`. Set `QUARTO_TESTS_FORCE_NO_VENV` to a non-empty value to prevent this behavior. (It replaces `QUARTO_TESTS_FORCE_NO_PIPENV`, which is still honored for backward compatibility but deprecated.)
Julia
Julia uses the built-in package manager Pkg.jl - we provide Project.toml and Manifest.toml to recreate the environment.
Project.toml contains our direct dependencies, and Manifest.toml is the lock file that will be created (Pkg.resolve()).
Important: All test dependencies must be in the main tests/ environment. Julia searches UP the directory tree for Project.toml starting from the document being rendered.
Adding a new package dependency:
Do NOT create local Project.toml files in test subdirectories (e.g., tests/docs/*/Project.toml). Julia will use that environment instead of the main tests/ environment. The configure-test-env scripts only manage the main environment, so tests with local environments will fail in CI even if they work locally.
Note: This applies to ALL engines (Julia, Python, R). Python and R will also use local .venv/ or renv.lock if present. The quarto-cli test infrastructure uses a single managed environment per language at tests/, and CI only configures these main environments.
See documentation on how to add, remove, update if you need to tweak the Julia environment.
How to run tests locally?
Tests are run using run-tests.sh on UNIX, and run-tests.ps1 on Windows.
Test environment variables
The test scripts support several environment variables to control their behavior:
QUARTO_TESTS_NO_CONFIG
- Skip running the `configure-test-env` scripts
- Useful for faster test runs when the environment is already configured
- Tests will still activate `.venv` if present
QUARTO_TESTS_FORCE_NO_VENV (replaces deprecated QUARTO_TESTS_FORCE_NO_PIPENV)
- Skip activating the `.venv` virtual environment
- Tests will use system Python packages instead of uv-managed dependencies
- Use with caution: Python tests may fail if dependencies aren't in the system Python
Quick test runs with run-fast-tests scripts
For convenience, run-fast-tests.sh and run-fast-tests.ps1 are provided to skip environment configuration:
These scripts set QUARTO_TESTS_NO_CONFIG automatically. Use after running configure-test-env at least once.
QUARTO_TEST_KEEP_OUTPUTS (or use --keep-outputs/-k flag)
Keep test output artifacts instead of cleaning them up
Useful for debugging test failures or inspecting generated files
Can be set via environment variable or command-line flag
Other environment variables
- `QUARTO_TEST_VERBOSE` - Enable verbose test output
- `QUARTO_TESTS_NO_CHECK` - Not currently used (legacy variable)
About smoke-all tests
docs/smoke-all/ is a specific folder for tests written directly within `.qmd`, `.md` or `.ipynb` files (files starting with `_` are ignored). They are run through the `smoke/smoke-all.test.ts` script. To ease running smoke-all tests, `run-tests.sh` has a special behavior: when passed a `.qmd`, `.md` or `.ipynb` file not starting with `_`, it runs `./smoke/smoke-all.test.ts` on it.
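For instance, a single smoke-all document can be run directly (the document path below is hypothetical):

```sh
# run-tests.sh detects the .qmd extension and dispatches the document
# to the smoke-all test script
./run-tests.sh docs/smoke-all/2024/01/my-issue.qmd
```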
Examples of test output after a run:
Controlling test execution with metadata
Smoke-all tests support metadata in the _quarto.tests.run key to control when tests are run:
Skip test unconditionally:
Use this when a test needs to be temporarily disabled while an issue is being investigated, or when a test is pending an upstream fix. Include a descriptive message explaining why.
Skip tests on CI:
Skip tests on specific operating systems (blacklist):
Run tests only on specific operating systems (whitelist):
Valid OS values are: linux, darwin (macOS), windows
This is useful when tests require platform-specific dependencies or have known platform-specific issues that need separate investigation.
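Putting the four cases together, the front matter of a smoke-all document could look like the sketch below. The key names under `_quarto.tests.run` used here (`skip`, `skip-ci`, `skip-os`, `run-os`) are assumptions for illustration - check `smoke/smoke-all.test.ts` and existing documents under `docs/smoke-all/` for the real schema - and in practice you would use only one of these controls per document:

```yaml
---
format: html
_quarto:
  tests:
    run:
      # skip unconditionally, with a descriptive reason (assumed key name)
      skip: "Temporarily disabled while an upstream fix is pending"
      # skip only when running on CI (assumed key name)
      skip-ci: true
      # blacklist: do not run on these operating systems (assumed key name)
      skip-os: [windows]
      # whitelist: run only on these operating systems (assumed key name)
      run-os: [linux, darwin]
---
```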
Snapshot testing
Use ensureSnapshotMatches to compare rendered output against a saved snapshot file:
The snapshot file should be saved alongside the output with a .snapshot extension (e.g., output.html.snapshot).
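In a smoke-all document, this verification typically sits in the `_quarto.tests` metadata; the exact placement below is an assumption (check existing documents under `docs/smoke-all/` for the real shape):

```yaml
---
format: html
_quarto:
  tests:
    html:
      # compare the rendered output against <output>.snapshot (assumed placement)
      ensureSnapshotMatches: true
---
```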
When a snapshot test fails:

- A unified diff is displayed with colored output (red for removed, green for added)
- A word-level diff shows changes with surrounding context
- For whitespace-only changes, special markers visualize invisible characters: `⏎` for newlines, `→` for tabs, `·` for spaces
- A `.diff` file is saved next to the output for later inspection
- The `.diff` file is automatically cleaned up when the snapshot passes
Limitations
- `smoke-all.test.ts` accepts only one argument. You need to use a glob pattern to run several smoke-all test documents.
- Individual `smoke-all` tests and other tests can't be run at the same time with `run-tests.[sh|ps1]`. This is because `smoke-all.test.ts` requires arguments. If a smoke-all document and another smoke test are passed as arguments, the smoke-all test will be prioritized and the others will be ignored (with a warning).
Example with Linux:
You can do
Don't do
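To illustrate (all paths are hypothetical):

```sh
# OK: one glob pattern expanding to several smoke-all documents
./run-tests.sh docs/smoke-all/2024/**/*.qmd

# OK: several regular smoke tests
./run-tests.sh smoke/extensions/*.test.ts

# Don't: mixing a smoke-all document with another smoke test;
# the smoke-all document wins and the .test.ts file is ignored (with a warning)
./run-tests.sh docs/smoke-all/2024/01/my-issue.qmd smoke/extensions/install.test.ts
```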
Debugging within tests
.vscode/launch.json has a Run Quarto test configuration that can be used to debug when running tests. One needs to modify the program and args fields to match the test to run.
Example:
The short version can't be used here, as we are calling deno test directly and not the run-tests.sh script.
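For illustration, a Deno debug configuration of this general shape could be used; every value below is an assumption (the real configuration ships in `.vscode/launch.json`), and the document path is hypothetical:

```json
{
  "name": "Run Quarto test",
  "type": "node",
  "request": "launch",
  "cwd": "${workspaceFolder}/tests",
  "runtimeExecutable": "deno",
  "runtimeArgs": ["test", "--inspect-brk", "--allow-all"],
  "program": "smoke/smoke-all.test.ts",
  "args": ["docs/smoke-all/2024/01/my-issue.qmd"],
  "attachSimplePort": 9229
}
```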
Parallel testing
Linux only
This lives in run-parallel-tests.ts and is called through run-parallel-tests.sh.
How does it work?

- It requires a text file with timed tests following a specific format. (Default is `timing.txt`, and there is an example in our repo.)
- Based on this file, the tests are split into buckets to minimize the overall test time (buckets are filled by their minimum overall time).
- Then `./run-tests.sh` is run for each bucket from Deno, using `Promise.all()` and `run-tests.sh` on the whole bucket's test files, so that the buckets run in parallel.
This is a simple way to run all the tests or a subset of tests in parallel locally.
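The bucket-filling strategy described above can be sketched as a toy script: take tests longest-first and add each one to the currently lightest bucket. The timing data and file names below are made up; the real logic lives in `run-parallel-tests.ts`:

```shell
#!/usr/bin/env bash
# Hypothetical timing data: "<seconds> <test file>" per line.
timings="12 a.test.ts
7 b.test.ts
5 c.test.ts
3 d.test.ts"

n=2                                 # number of buckets (like -n=2)
declare -a total bucket
for ((i = 0; i < n; i++)); do total[i]=0; done

# longest tests first, each into the bucket with the smallest accumulated time
while read -r secs file; do
  min=0
  for ((i = 1; i < n; i++)); do
    if ((total[i] < total[min])); then min=$i; fi
  done
  total[min]=$((total[min] + secs))
  bucket[min]+="$file "
done < <(printf '%s\n' "$timings" | sort -rn)

for ((i = 0; i < n; i++)); do
  echo "bucket $i (${total[i]}s): ${bucket[i]}"
done
```

With the data above, bucket 0 ends up with a and d (15s) and bucket 1 with b and c (12s), which is a more balanced split than a naive half/half assignment.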
About timed tests
To create a timed test file like timing.txt, this command needs to be run.
When this is done, any other arguments will be ignored, and the following happens:

- All the `*.test.ts` files are found and run individually using `/usr/bin/time` to store the timing in the file
- When `smoke-all.test.ts` is found, all the `*.qmd`, `*.md` and `*.ipynb` files in `docs/smoke-all/` not starting with `_` are found and run individually using the same logic. This means each `smoke-all` test is timed.
The results are written to the $QUARTO_TEST_TIMING file. Here is an example:
This will be read by run-parallel-tests.ts to get the real value and fill the bucket based on it.
Specific behavior for smoke-all.test.ts
smoke-all tests are special because they take the form of individual .qmd or .ipynb documents that need to be run using the smoke-all.test.ts script, with arguments. Unfortunately, this prevents running individual smoke-all documents in the same buckets as other individual smoke tests (which are their own .test.ts files).
So, if the timed file contains individual timings for smoke-all documents like this,
then they are ignored and smoke-all.test.ts will be run in its own bucket. It will usually be the longest test run.
Individual smoke-all test timings are useful for Quarto's parallelized smoke tests on GHA CI, as the buckets are split across their own runners and each test in a bucket is run using run-tests.sh. This allows a bucket to contain some *.test.ts files but also some *.qmd or *.ipynb documents. More details in test-smokes.yml and test-smokes-parallel.yml
Arguments that control behavior
- `-n=`: Number of buckets to create and run in parallel. `run-parallel-tests.sh -n=5` splits the tests into 5 buckets and runs them at the same time. For a local run, `n` should be the number of cores; for a CI run, `n` will be the number of runners to use at the same time (multiplied by 2, because Linux and Windows are both run on CI).
- `--verbose`: Show some verbosity. Otherwise, no specific logging is done in the console.
- `--dry-run`: Show the buckets of tests, but do not run them. Otherwise, they are run.
- `--timing-file=`: Which file to use as the timed-tests information to create the buckets (defaults to `timing.txt`). `run-parallel-tests.sh --timing-file='timing2.txt'` will use `timing2.txt`.
- `--json-for-ci`: Special flag for splitting tests into buckets for the parallel run on CI; it makes `run-parallel-tests.sh` output a JSON string specifically formatted for GHA processing.
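For example (the bucket count is chosen arbitrarily here):

```sh
# preview how tests would be split into 4 buckets, without running them
./run-parallel-tests.sh -n=4 --dry-run --verbose

# run them in parallel, using a specific timing file
./run-parallel-tests.sh -n=4 --timing-file='timing.txt'
```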
About tests in CI with GHA
- `test-smokes-parallel.yml` is triggered to load `timing-for-ci.txt` and split the tests into buckets. It creates a matrix that triggers `test-smokes.yml` on a `workflow_call` event for each bucket.
  - PRs against main and commits to main trigger this workflow, and the tests are run in parallel jobs.
  - A `workflow_dispatch` event can be used to trigger it through an API call, the `gh` CLI tool, or the GHA web UI.
- `test-smokes.yml` is the main CI workflow, which configures the environment and runs the tests on Ubuntu and Windows.
  - If triggered by `workflow_call`, it runs each test using `run-tests.[sh|ps1]` in a for-loop.
  - Scheduled tests are still run daily in their sequential version.