# How-to Run Tests
For the purpose of this example, we assume you have an OpenJ9 SDK ready for testing. Below are the specific commands you'd run to clone the test framework (TKG), then compile and run tests. Details are explained in the Tasks in OpenJ9 Test section below.
## Prerequisites
Please read Prerequisites.md for details on what tools should be installed on your test machine to run tests.
## Tasks in OpenJ9 Test
### 1. Configure environment
#### Environment variables

Required:

Optional:
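A sketch of a typical setup, with the variable roles assumed from the rest of this guide (`TEST_JDK_HOME` points at the SDK under test; `BUILD_LIST` narrows which test directories are built); the paths and values are illustrative:

```shell
# Required: the SDK that the tests will run against (path illustrative)
export TEST_JDK_HOME=/opt/jdk-11

# Optional: limit compilation and runs to particular test directories
export BUILD_LIST=functional
```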
#### Auto detection
By default, AUTO_DETECT is turned on, and SPEC, JDK_VERSION, and JDK_IMPL do not need to be exported. If you do not wish to use auto detection, export AUTO_DETECT=false and set SPEC, JDK_VERSION, and JDK_IMPL manually. For example:
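A sketch of a manual configuration (the SPEC value is illustrative; see the *.spec files in buildspecs for the real values):

```shell
# Disable auto detection and set the platform values by hand
export AUTO_DETECT=false
export SPEC=linux_x86-64   # illustrative; pick a value from buildspecs/*.spec
export JDK_VERSION=11
export JDK_IMPL=openj9
```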
Please refer to the *.spec files in buildspecs for possible SPEC values.
#### Dependent libs for tests
Please read DependentLibs.md for details.
### 2. Compile tests

- Compile and run all tests (found in the directories set by the BUILD_LIST environment variable)
- Only compile, but do not run, tests (found in the directories set by the BUILD_LIST variable)
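A sketch of the two invocations (the target names `compile` and `test` are assumptions; verify them against your TKG checkout). The commands are printed here rather than executed, since they require a configured TKG environment:

```shell
compile_only="make compile"    # compile the tests in BUILD_LIST, do not run them (assumed target)
compile_and_run="make test"    # compile and then run the tests (assumed target)
echo "$compile_only"
echo "$compile_and_run"
```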
### 3. Add more tests
#### For new functionality
If you have added new features to OpenJ9, you will likely need to add new tests. Check out openj9/test/functional/TestExample/src/org/openj9/test/MyTest.java for the format to use.
If you have many new test cases to add, or special build requirements, you may want to copy the TestExample project and update the build.xml and playlist.xml files to match your new test class names. The playlist.xml format is defined in TKG/playlist.xsd.
A test can be tagged with the following elements:

- `level`: [sanity|extended|special] (default: `extended`)
- `group`: [functional|system|openjdk|external|perf|jck] (exactly one group per test is required)
- `type`: [regular|native] (default: `regular`; `native` means the test needs to run with the test image (native test libs); `NATIVE_TEST_LIBS` needs to be set for local testing; if Grinder is used, a native test libs download link needs to be provided in addition to the SDK download link in `CUSTOMIZED_SDK_URL`; for details, please refer to How-to-Run-a-Grinder-Build-on-Jenkins)
- `impl`: [openj9|hotspot|ibm] (filters the test based on the exported JDK_IMPL value; a test can be tagged with multiple impls at the same time; default: all impls)
- `version`: [8|8+|9|9+|10|10+|11|11+|Panama|Valhalla] (filters the test based on the exported JDK_VERSION value; a test can be tagged with multiple versions at the same time; a bare number (e.g., 8) must match JDK_VERSION exactly, while a number followed by `+` matches that JDK_VERSION and any later one; default: always match)
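In sketch form, a minimal playlist entry using these elements might look as follows (the target name and command are illustrative; check element ordering against TKG/playlist.xsd):

```xml
<test>
	<testCaseName>MyTest</testCaseName>
	<!-- command is illustrative -->
	<command>$(JAVA_COMMAND) $(JVM_OPTIONS) -cp $(TEST_RESROOT)/MyTest.jar org.openj9.test.MyTest</command>
	<levels>
		<level>sanity</level>
	</levels>
	<groups>
		<group>functional</group>
	</groups>
</test>
```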
Most OpenJ9 FV tests are written with TestNG. We leverage TestNG groups to create test make targets, which means that, minimally, your test source code should belong to either the level.sanity or level.extended group to be included in main OpenJ9 builds.
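Schematically, the group annotation looks like the following fragment (class and method names are illustrative; it requires TestNG on the classpath, so it is not runnable standalone):

```java
import org.testng.annotations.Test;

// Class-level group places every test method in the sanity level
@Test(groups = { "level.sanity" })
public class MyTest {
    @Test
    public void testSomething() {
        // assertions go here
    }
}
```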
### 4. Run tests
#### Run a group of tests (where group is one of functional|system|openjdk|external|perf)

```
make _group
```

e.g., `make _functional`
#### Run a level of tests (where level is one of sanity|extended|special)

```
make _level
```

e.g., `make _sanity`
#### Run a type of test (where type is one of regular|native)

```
make _type
```

e.g., `make _native`
#### Run a level of tests with a specified group

```
make _level.group
```

e.g., `make _sanity.functional`
#### Run a level of tests with a specified type

```
make _level.type
```

e.g., `make _sanity.native`
#### Run a group of tests with a specified type

```
make _group.type
```

e.g., `make _functional.native`
#### Run a specified level, group, and type together

```
make _level.group.type
```

Note that with each `.` in the make target, the breadth of tests narrows (`_sanity` > `_sanity.functional` > `_sanity.functional.native`), e.g., `make _sanity.functional.native`.
#### Run a specific individual test target (test targets are defined in the playlist files; see TestExample)

```
make _testTargetName_xxx
```

The suffix number refers to the variation in the playlist.xml file: the 1st variation listed is suffixed by `_0`, the 2nd by `_1`, and so forth. e.g., `make _testExample_0`
#### Run all variations in a test target

```
make _testTargetName
```

e.g., `make _testExample`, which runs all possible variations of the `_testExample` target.
#### Run a list of tests

```
make _testList TESTLIST=testTargetName1,testTargetName2,testTargetName3
```

e.g., `make _testList TESTLIST=testExample1,testExample2` (target names are illustrative)
#### Run all tests

- compile and run all tests
- run all tests without recompiling them
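A sketch of the two invocations (the target names `test` and `runtest` are assumptions; verify them against your TKG checkout). The commands are printed here rather than executed:

```shell
compile_and_run="make test"   # compile and run all tests (assumed target name)
run_only="make runtest"       # run all tests without recompiling (assumed target name)
echo "$compile_and_run"
echo "$run_only"
```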
#### Run tests against a specific SDK (e.g., HotSpot 8)

`<impl>` and `<version>` elements are used to annotate tests in playlist.xml so that the tests run only against the targeted JDK_IMPL and JDK_VERSION (which are determined by the SDK defined in the TEST_JDK_HOME variable). For example, adding a `<versions><version>8</version></versions>` block to the target definition of TestExample means that the test only runs against JDK 8 and is skipped for other JDK versions. If `<versions>` or `<impls>` are not included in the target definition, then ALL versions and implementations are assumed to be valid for that test target.
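In sketch form, the annotations sit inside the target definition (surrounding elements elided; values taken from the HotSpot 8 example above):

```xml
<test>
	<testCaseName>TestExample</testCaseName>
	<!-- ... rest of the target definition ... -->
	<versions>
		<version>8</version>
	</versions>
	<impls>
		<impl>hotspot</impl>
	</impls>
</test>
```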
#### Rerun the failed tests from the last run
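TKG can generate a rerun target from the previous run's failures; the sketch below prints the command rather than executing it (the `_failed` target name is an assumption; check your TKG version):

```shell
rerun_failed="make _failed"   # assumed TKG target that reruns the last run's failed tests
echo "$rerun_failed"
```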
#### With a different set of JVM options
There are three ways to add options to your test run:

If you simply want to add an option for a one-time run, you can override the original options with `JVM_OPTIONS="your options"`.

If you want to append options to the set that is already there, use `EXTRA_OPTIONS="your extra options"`; the extra options are appended to those already in the make target.
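For instance (the target name and option values are illustrative; the commands are printed rather than executed):

```shell
override='JVM_OPTIONS="-Xint" make _testExample'   # replaces the target's options
append='EXTRA_OPTIONS="-Xint" make _testExample'   # appends to the target's options
echo "$override"
echo "$append"
```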
When appending an `-Xjit` option that contains braces, you'll need to either enclose the braces in quotes or escape them. Quotes won't work for tests that use the STF framework (e.g., system tests), so for those you'll need to escape the braces. This is because STF processes the options before forwarding them to the test JVM, and it won't forward anything that it doesn't understand.
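A sketch of an escaped `-Xjit` method filter (the method selector and suboptions are illustrative); the command is printed rather than executed:

```shell
# Braces and parentheses escaped so the shell and STF pass the option through unchanged
escaped='EXTRA_OPTIONS=-Xjit:count=0,\{java/lang/Math.abs*\}\(traceFull\) make _testExample'
echo "$escaped"
```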
If you want to change the test options permanently, update playlist.xml in the corresponding test project.
#### Run a test or group of tests multiple times
Use TEST_ITERATIONS to specify the number of iterations. The test (for a group target, each test in the group) is executed that many times, and passes only if all iterations pass. This feature is designed for repetitively running small test targets; use it with caution for top-level test targets that may take a long time to run.
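Either form below should work, per the description above (the target name is illustrative); the commands are printed rather than executed:

```shell
inline="make _testExample TEST_ITERATIONS=5"            # pass the count on the make command line
exported="export TEST_ITERATIONS=5; make _testExample"  # or export it beforehand
echo "$inline"
echo "$exported"
```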
### 5. Exclude tests
#### Automatically exclude a test target
Instead of manually creating a PR to disable test targets, they can be automatically disabled via a GitHub workflow (see autoTestPR.yml). In the issue that describes the test failure, add a comment in the following format:
```
auto exclude test <testName>
```
If the testName matches the testCaseName defined in the `<testCaseName>` element of playlist.xml, the entire test suite is excluded. If the testName is a testCaseName followed by `_n`, only the (n+1)th variation is excluded.
For example:

To exclude the entire suite:

```
auto exclude test jdk_test
```

To exclude the 2nd variation listed, which is assigned suffix `_1` (variation `-Xmx1024m`):

```
auto exclude test jdk_test_1
```

To exclude the test for openj9 only:

```
auto exclude test jdk_test impl=openj9
```

To exclude the test for the adoptopenjdk vendor only:

```
auto exclude test jdk_test vendor=adoptopenjdk
```

To exclude the test for java 8 only:

```
auto exclude test jdk_test ver=8
```

To exclude the test for all linux platforms:

```
auto exclude test jdk_test plat=.*linux.*
```

plat is defined as a regular expression. All platforms can be found here: https://github.com/adoptium/aqa-tests/blob/master/buildenv/jenkins/openjdk_tests

To exclude the 2nd variation listed (suffix `_1`, variation `-Xmx1024m`) against adoptopenjdk openj9 java 8 on windows only:

```
auto exclude test jdk_test_1 impl=openj9 vendor=adoptopenjdk ver=8 plat=.*windows.*
```
After the comment is left, an automatic PR with the exclusion change to playlist.xml is created and linked to the issue. If the testName cannot be found in the repo, no PR is created, and a comment is left in the issue linking to the failed workflow run for more details. If a parameter contains space-separated values, use single quotes to group them.
#### Manually exclude a test target
Search for the test name to find its playlist.xml file. Add a `<disables>` element after the `<testCaseName>` element. The `<disables>` element encapsulates all `<disable>` elements. Each `<disable>` should contain a `<comment>` element that specifies the related issue URL (or issue comment URL).
For example:
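A sketch of such an entry (the test name is taken from the examples in this guide; the issue URL is illustrative):

```xml
<test>
	<testCaseName>jdk_test</testCaseName>
	<disables>
		<disable>
			<comment>https://github.com/eclipse-openj9/openj9/issues/1234</comment>
		</disable>
	</disables>
	<!-- rest of the target definition unchanged -->
</test>
```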
This disables the entire test suite. The following sections describe how to disable specific test cases.
##### Exclude a specific test variation

Add a `<variation>` element inside the `<disable>` element to specify the variation. The `<variation>` element must match an element defined in the `<variations>` element.
For example, to exclude the test case with variation `-Xmx1024m`:
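A sketch (issue URL illustrative):

```xml
<disable>
	<comment>https://github.com/eclipse-openj9/openj9/issues/1234</comment>
	<variation>-Xmx1024m</variation>
</disable>
```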
##### Exclude a test against a specific java implementation

Add an `<impl>` element inside the `<disable>` element to specify the implementation.
For example, to exclude the test for openj9 only:
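A sketch (issue URL illustrative):

```xml
<disable>
	<comment>https://github.com/eclipse-openj9/openj9/issues/1234</comment>
	<impl>openj9</impl>
</disable>
```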
##### Exclude a test against a specific java vendor

Add a `<vendor>` element inside the `<disable>` element to specify the vendor information.
For example, to exclude the test for AdoptOpenJDK only:
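A sketch (issue URL illustrative):

```xml
<disable>
	<comment>https://github.com/eclipse-openj9/openj9/issues/1234</comment>
	<vendor>adoptopenjdk</vendor>
</disable>
```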
##### Exclude a test against a specific java version

Add a `<version>` element inside the `<disable>` element to specify the version.
For example, to exclude the test for java 11 and up:
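A sketch (issue URL illustrative; `11+` matches java 11 and later, per the version tagging rules above):

```xml
<disable>
	<comment>https://github.com/eclipse-openj9/openj9/issues/1234</comment>
	<version>11+</version>
</disable>
```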
##### Exclude a test against a specific platform

Add a `<platform>` element inside the `<disable>` element to specify the platform as a regular expression. All platforms can be found here: https://github.com/adoptium/aqa-tests/blob/master/buildenv/jenkins/openjdk_tests
For example, to exclude the test for all linux platforms:
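A sketch (issue URL illustrative):

```xml
<disable>
	<comment>https://github.com/eclipse-openj9/openj9/issues/1234</comment>
	<platform>.*linux.*</platform>
</disable>
```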
##### Exclude a test against multiple criteria

Define a combination of `<variation>`, `<impl>`, `<vendor>`, `<version>`, and `<platform>` in the `<disable>` element.
For example, to exclude the test with variation `-Xmx1024m` against adoptopenjdk openj9 java 8 on windows only:
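A sketch combining all the criteria from the example (issue URL illustrative):

```xml
<disable>
	<comment>https://github.com/eclipse-openj9/openj9/issues/1234</comment>
	<variation>-Xmx1024m</variation>
	<impl>openj9</impl>
	<vendor>adoptopenjdk</vendor>
	<version>8</version>
	<platform>.*windows.*</platform>
</disable>
```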
Note: the same element cannot be defined multiple times inside one `<disable>` element, because the elements inside a `<disable>` element are in an AND relationship. For example, to exclude a test against both hotspot and openj9, you must define multiple `<disable>` elements, each with a single `<impl>` element inside:
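A sketch (issue URL illustrative):

```xml
<disables>
	<disable>
		<comment>https://github.com/eclipse-openj9/openj9/issues/1234</comment>
		<impl>hotspot</impl>
	</disable>
	<disable>
		<comment>https://github.com/eclipse-openj9/openj9/issues/1234</comment>
		<impl>openj9</impl>
	</disable>
</disables>
```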
Or remove the `<impl>` element entirely to exclude the test against all implementations:
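A sketch (issue URL illustrative):

```xml
<disable>
	<comment>https://github.com/eclipse-openj9/openj9/issues/1234</comment>
</disable>
```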
#### Execute excluded test target
If a test is disabled using the `<disable>` tag in playlist.xml, it can still be executed by specifying its test target directly, or by adding `disabled` in front of its top-level test target. Disabled tests and their reasons can also be printed by adding `echo.disabled` in front of the regular target.
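In sketch form (the prefix spellings are assumed from the description above, and the target names are illustrative; verify the exact target spellings against your TKG checkout). The commands are printed rather than executed:

```shell
direct="make _testExample_0"                   # run a disabled test by naming its target directly
prefixed="make disabled.sanity.functional"     # run disabled tests under a top-level target (assumed spelling)
report="make echo.disabled.sanity.functional"  # print disabled tests and their reasons (assumed spelling)
echo "$direct"
echo "$prefixed"
echo "$report"
```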
#### More granular exclusion for TestNG tests
##### Exclude temporarily on all platforms
Depending on the JDK_VERSION, add a line to the test/TestConfig/resources/excludes/latest_exclude_$(JDK_VERSION).txt file. It uses the same format as the OpenJDK tests: name of the test, defect number, platforms to exclude.
To exclude on all platforms, use generic-all. For example:
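A sketch of such a line (the fully qualified class name and defect number are illustrative; the method name matches the note below):

```
org.openj9.test.TestOperatingSystemMXBean:testGetProcessCPULoad 1234 generic-all
```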
Note that we additionally added support for excluding individual methods of a test class by appending `:methodName` to the class name (OpenJDK does not currently support this). In the example, only the testGetProcessCPULoad method of that class is excluded (on all platforms/specs).
##### Exclude temporarily on specific platforms or architectures
As with excluding on all platforms, add a line to the latest_exclude_$(JDK_VERSION).txt file, but with the specific specs to exclude, for example:
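A sketch of such a line (the fully qualified class name and defect number are illustrative; the spec value matches the note below):

```
org.openj9.test.TestOperatingSystemMXBean 1234 linux_x86-64
```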
This example excludes all test methods of TestOperatingSystemMXBean from running on the linux_x86-64 platform. Note: in OpenJ9, the defect numbers associate with git issue numbers (OpenJDK defect numbers associate with their bug tracking system).
##### Exclude permanently on all or specific platforms/archs
For tests that should NEVER run on particular platforms or architectures, we should not use the default_exclude.txt file. To disable those tests, we annotate the test class to be disabled. To exclude MyTest from running on the aix platform, for example:
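A sketch of such an annotation, assuming TestNG group-based exclusion groups of the form `disabled.os.<platform>` (the exact group names are an assumption; check the supported exclusion groups listed below). It requires TestNG on the classpath, so it is not runnable standalone:

```java
import org.testng.annotations.Test;

// "disabled.os.aix" is a hypothetical exclusion group name;
// "level.sanity" is the regular level group described earlier in this guide
@Test(groups = { "level.sanity", "disabled.os.aix" })
public class MyTest {
    @Test
    public void testSomething() { /* ... */ }
}
```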
We currently support the following exclusion groups:
### 6. View results
#### Results in the console
OpenJ9 tests written in TestNG format take advantage of the TestNG logger. If you want your test to print output, you are required to use the TestNG logger (not System.out.print statements). This way, we can direct the output not only to the console but also to various other clients (WIP). At the end of a test run, the results are summarized to show which tests passed, failed, were disabled, or were skipped, giving you a quick view of the test names and numbers in each category. If you've piped the output to a file, or if you like scrolling up, you can search for and find the specific output of the tests that failed (exceptions or any other logging that the test produces).
##### SKIPPED tests

If a test is skipped, it means that this test cannot be run on this platform due to jvm options, platform requirements, and/or test capabilities.
##### DISABLED tests

If a test is disabled, it means that this test is disabled using the `<disable>` tag in playlist.xml.
#### Results in html files
TestNG tests produce HTML (and XML) output, which is stored in a test_output_xxxtimestamp folder in the TKG directory (or wherever you ran "make test"). The output is organized by test, each test having its own set of output. If you open the index.html file in a web browser, you can see which tests passed, failed, or were skipped, along with other information such as execution time, error messages, exceptions, and logs from the individual test methods.
#### TAP result files
Because some of the tests are in neither TestNG nor JUnit format, a simple standardized format for test output was needed so that all tests are reported in the same way for a complete test summary. Depending on the requirement, there are three different diagnostic levels.
### 7. Attach a debugger
#### To a particular test
The command line that is run for each particular test is echoed to the console, so you can easily copy it. You can then run the command directly (it is a direct call to the java executable), adding any additional options, including those to attach a debugger.
### 8. Move tests into different make targets (layers)
#### From extended to sanity (or vice versa)
- For TestNG tests, change the group annotated at the top of the test class from `level.extended` to `level.sanity`.
- Change `<level>` from `extended` to `sanity` in playlist.xml.
# How-to Reproduce Test Failures
A common scenario is that automated testing finds a failure and a developer is asked to reproduce it. An OpenJ9 issue is created reporting the failing test. The issue should contain:
- link(s) to the Jenkins job (which contains all of the info you need, if you get to it before the Jenkins job is deleted; jobs are deleted quickly to save space on the Jenkins master)
- test target name (TARGET)
- test group / test directory name (one of: functional, system, openjdk, external, perf) (BUILD_LIST)
- test level (one of: sanity, extended, special)
- platform(s) the test fails on (Jenkinsfile)
- version the test fails in (JDK_VERSION)
- implementation the test fails against (JDK_IMPL)
- SDK build that was used by the test (the console output of the Jenkins job contains java -version info and a link to the SDK used)
For a specific example, Issue 6555 "Test_openjdk13_j9_sanity.system_ppc64le_linux TestIBMJlmRemoteMemoryAuth_0 crash", we get the following info (captured in the name of the issue):

- TARGET = TestIBMJlmRemoteMemoryAuth_0
- BUILD_LIST = system
- JDK_VERSION = 13
- JDK_IMPL = openj9 (implied, since the failure was found in OpenJ9 testing at OpenJ9's Jenkins)
- Jenkinsfile = openjdk_ppc64le_linux (corresponds to the platform to run on)
Since only a link to the Jenkins job was provided in the example issue 6555, we do not have java -version info; we have to go to the job link to find out the exact SDK build, though it may be sufficient just to rerun the test with the latest nightly build. Given those pieces of information, we have enough to try to rerun this test, either in a Grinder job on Jenkins or locally on a machine (on the same platform as the test failure).
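Pulling the pieces together, a local rerun might be set up as in the sketch below (the SDK path is illustrative, and the `make compile` step and target spelling are assumptions; the make commands are printed rather than executed, since they require a configured TKG checkout):

```shell
# Values taken from the issue described above
export TEST_JDK_HOME=/opt/jdk-13   # the SDK build from the issue (path illustrative)
export BUILD_LIST=system           # test directory from the issue
export JDK_VERSION=13
export JDK_IMPL=openj9
# then, from the TKG directory:
echo "make compile"
echo "make _TestIBMJlmRemoteMemoryAuth_0"
```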
For more details on launching a Grinder job, see these instructions on how to run a grinder job.
To reproduce the failure locally, see this wiki for guidance on reproducing failures locally.