The basic principle of test plan XML and its tool support is that you can use any executable for testing. Test results are checked from exit codes (automated testing) or from a prompt (manual testing). Executed test plan XMLs produce XML results, wrapping a variety of test methods in the consistent format required by test automation and data processing.
The test plan information stored in Test Definition XML files consists of:
1. suite, set, case: Hierarchical structure of the tests.
2. feature, subfeature, requirement: Information about why the tested software has been implemented in the first place (which is why it is being tested as well).
3. type: Information about the viewpoint of the tests (which quality aspect of the software they are testing, see DevelopmentTestArea in Agile Testing Wiki).
4. level: Information about which test level the tests belong to.
5. domain: Information about which architectural domains the tests are focused on.
6. description: Descriptions of the tests (what each test is, and what it is supposed to do).
7. step: Execution instructions (for automated tests), which determine the actual commands to execute to run each test.
Note that not all of the above are mandatory. The mandatory fields for executing tests are defined in the test plan XML validation schema.
There are certain mandatory things you’ll need to provide in a test definition XML.
The structure of the XML and the possible attributes/values are defined in the Test Definition XML schema. It’s important to validate (see validation) your XML against the up-to-date schema (also shown on this page).
The schema does not directly tell you which attributes are mandatory, which values are possible, or where you should get the values used in the definition; this information is available in the appendices section of this page.
The main thing in a test plan is the test cases, which define what to execute (test steps), what to expect from the execution (expected results), and some additional information for reporting purposes. An example test case could be:
<case name="my-first-case">
  <description>Creating my very first test case</description>
  <step>ls</step>
  <step>uname -r</step>
</case>
What did we just do? We created a simple test case named
my-first-case, which executes two steps and expects that they return the common “success return value” 0. Want to check for some other return value? Use the expected_result attribute on the step.
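For instance, a step that is expected to fail with exit code 1 could be written as follows (the case name and command are made up for illustration; grep returns 1 when the pattern is not found):

```xml
<case name="expect-failure">
  <!-- grep exits with 1 when the pattern does not match any line -->
  <step expected_result="1">grep no-such-user /etc/passwd</step>
</case>
```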
In case you feel that some test cases are insignificant, i.e. that they should not be taken into consideration when deciding whether the test run passed or failed, you can use the insignificant attribute (defaults to false, i.e. every case is considered significant):
<case name="not-so-important-test" insignificant="true">
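A complete case using the attribute might look like this (the step command is illustrative, not a real tool):

```xml
<case name="not-so-important-test" insignificant="true">
  <description>Failure here does not fail the whole test run</description>
  <!-- hypothetical command; its exit code is recorded but not counted -->
  <step>run_flaky_check</step>
</case>
```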
In addition to test execution, the “story” of the test case should also be told, and for telling this story, we have fields for test type (what quality characteristics are tested) and test level (what level of the system is being exercised). The possible values for both of these are listed in the appendices. Let’s add level and type to our test case:
<case name="my-first-case" level="Product" type="Functional">
  <description>Creating my very first test case</description>
  <step expected_result="-1">ls</step>
  <step>uname -r</step>
</case>
And that’s our first test case! Not that difficult...
Grouping cases into sets and suites makes your life easier and more organized. A good idea is to group test cases that test the same features into the same sets, and sets that test the same architectural domain under the same suites. E.g.:
<suite name="my-multimedia-tests" domain="Multimedia">
  <set name="video-playback-tests" feature="Video Playback">
    <description>Video</description>
    <case ...
    <case ...
  </set>
</suite>
So, sets contain test cases, and may also contain a description (the same way as the cases do). More about sets in the Test Definition Execution part.
Note: There are several attributes that need to be defined on the test case, test set, or test suite level in the test definition XML.
You can find more information about these attributes in the Appendices section of this page.
Test steps time out by default after 90 seconds. You can change the default timeout by adding a timeout attribute to your test case, for example:
<case name="my-test-case" timeout="120">
Note that this timeout applies to a single test step, not to the whole case.
For pre- and post-steps the default timeout is 180 seconds; it can also be changed:
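As a sketch, assuming the pre_steps and post_steps tags accept the same timeout attribute as cases do (verify this against the current schema), overriding the pre-step timeout could look like:

```xml
<set name="setup-heavy-tests">
  <!-- Assumed: a timeout attribute here overrides the 180-second default -->
  <pre_steps timeout="300">
    <step>prepare_large_test_data</step>
  </pre_steps>
  <case name="case1">
    <step>run_test</step>
  </case>
</set>
```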
Now that you have some knowledge of grouping and test cases in general, it is time to put it all together. Before showing an example, it is important to note that all the mentioned case attributes (e.g. level, type) are inheritable. So, if you have, for example, a set which contains only a certain type of cases, you can define the type at the set level instead of writing it separately into each case.
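As a sketch of this inheritance (the names here are made up), type and level defined once on the set apply to both cases:

```xml
<set name="calculator-tests" feature="Calculator" type="Functional" level="Component">
  <!-- Both cases inherit type="Functional" and level="Component" from the set -->
  <case name="addition">
    <step>test_addition</step>
  </case>
  <case name="subtraction">
    <step>test_subtraction</step>
  </case>
</set>
```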
Now, the example. Let’s first add the mandatory tags, which each test definition should have:
<?xml version="1.0" encoding="UTF-8"?>
<testdefinition version="1.0">
</testdefinition>
That’s the mandatory part; we defined that we have an XML document, and that this particular one is a test definition, version 1.0. Let’s add more beef to the bones by adding one suite with a couple of sets:
<?xml version="1.0" encoding="UTF-8"?>
<testdefinition version="1.0">
  <suite name="my-multimedia-tests" domain="Multimedia">
    <description>Testing AF stuff</description>
    <set name="video-playback-tests" feature="Video Playback">
      <description>Video playback tests</description>
    </set>
    <set name="video-recording-tests" feature="Video Recording">
      <description>Video recording tests</description>
    </set>
  </suite>
</testdefinition>
Okay, we have one suite named
my-multimedia-tests with two sets, testing video playback and recording features. Cases are still missing:
<?xml version="1.0" encoding="UTF-8"?>
<testdefinition version="1.0">
  <suite name="my-multimedia-tests" domain="Multimedia">
    <description>Testing AF stuff</description>
    <set name="video-playback-tests" feature="Video Playback">
      <description>Video playback tests</description>
      <case name="playback1" type="Functional" level="Component">
        <step>execute_playback_test</step>
      </case>
    </set>
    <set name="video-recording-tests" feature="Video Recording">
      <description>Video recording tests</description>
      <case ...
    </set>
  </suite>
</testdefinition>
And that’s the basic story. Next chapters cover more details about the definition, but you should now have the general understanding of the subject!
In case you want to do some setup and cleaning before and after the cases are executed, sets may have pre- and post-steps in them:
<set ...>
  <pre_steps>
    <step>do_some_setup</step>
  </pre_steps>
  <case ...
  <post_steps>
    <step>clean_up</step>
  </post_steps>
</set>
If you want to ensure that a pre-step is executed properly before starting the real testing, you can use an expected result in those steps as well:
<pre_steps>
  <step expected_result="1">do_some_setup_that_may_fail</step>
  ...
If different test sets are required for different hardware, the hwiddetect feature can be utilised. Within the hwiddetect tag, you can define a command used to get a hardware identifier. The hardware identifier returned by the command is matched against the optional hwid attribute of each test set. If they are not equal, the test cases in the set are skipped and are not written to the result file. A test set is never skipped if the hwid attribute has not been defined for it. You can also define multiple comma-separated hwid values for a set.
The command defined by hwiddetect can be a shell command or a separate executable. The executable should be included in the test package. Testrunner-lite removes extra whitespace and linefeeds from the output of the hwiddetect command, so the test developer does not need to care about it.
Example usage of hwiddetect:
<?xml version="1.0" encoding="UTF-8"?>
<testdefinition version="1.0">
  <hwiddetect>/usr/bin/getmyhwid</hwiddetect>
  <suite name="suite1">
    <set name="test_feature_X_on_hw_bar" hwid="bar">
      <case name="test_X_1">
        <step>echo "hwid is bar"</step>
      </case>
    </set>
    <set name="test_feature_X_on_hw_foo" hwid="foo">
      <case name="test_X_1">
        <step>echo "hwid is foo"</step>
      </case>
    </set>
    <set name="test_feature_X_on_hw_foo_or_bar" hwid="foo,bar">
      <case name="test_X_1">
        <step>echo "hwid is foo or bar"</step>
      </case>
    </set>
  </suite>
</testdefinition>
In addition to the normal result file, you can also fetch whatever files you need with the get tag:
<set ...>
  ...
  <get>
    <file>/tmp/myadditionalresult.1</file>
    <file>/tmp/myadditionalresult.2</file>
  </get>
</set>
Unless specified otherwise with the "manual" attribute, all cases are automatic. The value of the attribute is inherited from the higher entity (set->case->step). By a semi-automatic test case we mean a manual case that has some automatic parts; the idea being that only those steps that have to be manual are manual. Note that this does not work the other way around: we cannot have an automatic test case with manual steps (it would be semantically weird, thus our tools do not support it).
The example below tries to clarify the above. The "example_set" shows a manual, automatic and semi-automatic case. The example works with testrunner-lite/testrunner-ui, give it a try: File:Example definition.xml.
<?xml version="1.0" encoding="UTF-8"?>
<testdefinition version="1.0">
  <suite name="example_suite">
    <set name="example_set">
      <description>Example test set with manual, automatic and semi-automatic case.</description>
      <case manual="true" name="manual_case">
        <description>Manual test case with three steps inside one step tag.</description>
        <step>Step 1: execute command ttt on shell.
              Step 2: write something into edit box.
              Step 3: press ok button.
              Expected: Text should be updated into label.
        </step>
      </case>
      <case timeout="96" name="automatic_case">
        <description>Automatic test case that executes some shell commands.</description>
        <step>ls /tmp</step>
        <step expected_result="2">ls /nosuchfile</step>
        <step>pwd</step>
      </case>
      <case manual="true" name="semi_automatic_case">
        <description>A case with two automatic and two manual steps.</description>
        <step manual="false">xcalc &amp;</step>
        <step>Step: Type in 2 + 2 =. Expected: 4 is displayed</step>
        <step>Press x² button. Expected: 16 is displayed.</step>
        <step manual="false">killall xcalc</step>
      </case>
    </set>
  </suite>
</testdefinition>
(A real test developer / tester might come up with nicer examples, any input welcome).
Test definitions are run with the test runner (“testrunner-lite”), which reads the definition and produces a result XML file. We also provide a testrunner user interface to support testing; please see the repositories to install it, and the demo video.