WORK IN PROGRESS - please contribute by editing this page or posting your comments on the discussion page.
The Test Plan defines the Quality Assurance procedures used to verify MeeGo 1.1 release (incl. Core OS, Netbook UX and Handheld UX verticals).
The purpose of the Test Plan is to describe <please contribute>. The intended readers of this document are <please contribute>.
The Test Plan is intended to provide the vehicle for customers to confirm the functionality and completeness of the MeeGo 1.1 release. It is expected that the satisfaction of the complete series of criteria contained in this plan will signify successful functionality of the integrated release.
The goal is to deliver a product with no open bugs with a severity level of critical and a minimal number of open bugs with a severity level of major.
During the release phases leading up to the final release, test cases will be executed. The test cases are described in test documents kept in <our_test_case_management_tool>. The actual test case code may, depending on the license type and test level, be part of the source package. Each test case also contains test-specific criteria that decide test case success or failure.
The following verdicts shall be assigned to a test case after execution:
For each feature listed on bugs.meego.com, in principle, one or more test cases will exist (see the detailed test plans).
Each test case described in the detailed test plan contains the following fields: <please contribute according to test packaging, XML and DTD>
Components can be tested by a combination of component tests (unit, module and component integration testing put together), integration tests and system tests.

For a system test, the component is typically tested by launching the application in the test environment, exercising the component's functionality through the user interface, and verifying the outcome against an expected result. Where manual verification is required, the expected outcome is specified in the test case. Where possible, all or part of the test steps are automated, but in all cases there will be a manually executed 'Acceptance Test' for each GUI component, in which the module is, at a minimum, verified against a standard checklist.

For an integration test, the component is typically tested by launching the component itself (if it has a GUI), or other applications that make use of the tested component, inside the test environment and exercising its functionality through the user interface. Where applicable, parts of the system may be replaced by stubs to partially insulate the component from its environment.

Unit tests are so named because each one tests a single unit of code. These tests are usually written by developers as they work on the code (white-box style) to ensure that a specific function works as expected. Whether a module of code has hundreds of unit tests or only five is irrelevant; one function might have multiple tests to catch corner cases or other branches in the code. A suite of unit tests should never cross process boundaries in a program, let alone network connections, because doing so introduces delays that make the tests run slowly and discourage developers from running the whole suite.

A module test exercises a module of code in a similar way to unit tests; the difference is that the tests are written black-box style, without reference to the internal structure of the module.
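As an illustration of the white-box unit-test style described above, a minimal example in Python might look like the following. The function under test is purely hypothetical and is not part of MeeGo; note how one function gets several tests to cover corner cases and branches.

```python
import unittest

# Hypothetical unit under test (illustrative only, not a MeeGo function):
# clamp a requested volume level into the supported range.
def clamp_volume(level, minimum=0, maximum=100):
    """Return level limited to the inclusive range [minimum, maximum]."""
    if level < minimum:
        return minimum
    if level > maximum:
        return maximum
    return level

class ClampVolumeTest(unittest.TestCase):
    # One function, several tests: each exercises a different branch.
    def test_within_range(self):
        self.assertEqual(clamp_volume(50), 50)

    def test_below_minimum(self):
        self.assertEqual(clamp_volume(-5), 0)

    def test_above_maximum(self):
        self.assertEqual(clamp_volume(150), 100)
```

Such a suite would typically be run with a standard test runner (e.g. `python -m unittest`); because it touches no other process or network, it runs fast enough that developers can execute it constantly.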
Introducing dependencies on external modules or data also turns unit tests into component integration tests. If one module misbehaves in a chain of interrelated modules, it is not immediately clear where to look for the cause of the failure. Component integration tests exercise a finished part of the component, using simulators (stubs and test drivers) in place of missing external dependencies.
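To illustrate the use of a stub in place of a missing external dependency, consider the following sketch. All names are hypothetical and are not actual MeeGo interfaces; the point is that the component under test receives its dependency by injection, so the real hardware back end can be replaced with a deterministic stand-in.

```python
class BatteryMonitor:
    """Component under test: reports a warning when charge is low."""
    def __init__(self, hw_service):
        self.hw = hw_service  # external dependency, injected

    def status(self):
        charge = self.hw.read_charge()
        return "low" if charge < 20 else "ok"

class StubHardwareService:
    """Stub standing in for the real hardware service, which may not be
    available in the test environment."""
    def __init__(self, charge):
        self.charge = charge

    def read_charge(self):
        return self.charge

# The stub lets the test pick each scenario deterministically, which would
# be difficult against real hardware (e.g. forcing a low-battery state).
assert BatteryMonitor(StubHardwareService(charge=10)).status() == "low"
assert BatteryMonitor(StubHardwareService(charge=80)).status() == "ok"
```

Because the stub insulates the component from its environment, a failure here points at the component itself rather than somewhere in a chain of interrelated modules.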
Tests are written on a highest-priority-first basis and, secondly, for broad coverage.
At this stage we are aiming towards <please contribute>
MeeGo 1.1 is tested in a number of configurations, both in a <Virtual> environment and on reference devices. The public configurations used for this release are <please contribute>
All automated tests are executed in a MeeGo QA automated environment, and typically test results are available for each build. Manual tests are executed regularly, but certainly before each release.
<please contribute> Additional tools such as Valgrind may be used for deep analysis of test details and failures, and tools such as <please contribute> are used in the documentation process. These tools provide additional data used to analyse the quality of the product, but they are not necessarily used in the Go/No-Go decision-making process.
<please contribute> The basic functionality checklist is categorized as below; the <link to quality awareness wiki-page> has many more details:
<please contribute> Below is the usability checklist that is referred to when performing usability feedback for the Netbook UX.
The performance target is to produce a quality product with performance that is competitive in the market. This will be achieved by first looking at the different areas that affect the performance of the MeeGo 1.1 release on actual devices, and then measuring these values to find out where performance needs to be improved. The focus areas are:
Based on the target system and our goal of a fast and responsive system, we have defined the following goals: <please contribute> These are our initial targets only. Once we have reached these targets, we will continue to improve and create a new set of targets to aim for.
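One simple way to obtain the kind of measurements described above is to time operations repeatedly and average the results. The sketch below is purely illustrative (the command and function names are not part of any MeeGo tooling), and uses wall-clock time to completion as a crude proxy for responsiveness; real launch-time measurement on a device would track the time until the UI is actually usable.

```python
import subprocess
import time

def measure_launch_time(command, runs=5):
    """Run a command several times and return the average wall-clock
    duration in seconds. Illustrative only: a crude responsiveness proxy."""
    timings = []
    for _ in range(runs):
        start = time.monotonic()
        subprocess.run(command, check=True,
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        timings.append(time.monotonic() - start)
    return sum(timings) / len(timings)

if __name__ == "__main__":
    # Placeholder command; on a device this would be the application
    # whose startup time is being tracked against the target.
    avg = measure_launch_time(["true"])
    print("average launch time: %.3f s" % avg)
```

Averaging over several runs smooths out cache effects and scheduler noise, which matters when comparing a measured value against a numeric target.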
MeeGo can be tested on devices as well as in a <Virtual> environment. Both cases have value and may provide data that would be hard or impossible to acquire 'on the other side'.
Once MeeGo is installed in the <Virtual> environment or onto a device, no additional environment setup should be required beyond what is already required by MeeGo itself. It is important, though, that the environment is identical for each test run, i.e. the same background processes are running for each run, and only the required background processes are running. Certain aspects of MeeGo can only be tested if the associated hardware modules are available on the device or desktop machine on which the test is executed. For instance:
Please refer to the manuals that come with these items to setup correctly.
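The requirement above that the same background processes run for every test run can be checked mechanically: snapshot the set of running processes before each run and compare it against a recorded baseline. The sketch below is illustrative (the baseline contents are hypothetical, not a prescribed MeeGo process list) and reads process names from Linux's /proc filesystem.

```python
import os

def running_processes():
    """Snapshot the command names of all running processes via /proc."""
    procs = set()
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open("/proc/%s/comm" % pid) as f:
                procs.add(f.read().strip())
        except OSError:
            continue  # process exited while we were scanning
    return procs

def environment_diff(baseline, current):
    """Report process names that appeared or disappeared vs. the baseline."""
    return {
        "started": sorted(current - baseline),
        "stopped": sorted(baseline - current),
    }

# Hypothetical baseline and snapshot, for illustration: a stray debugger
# left running would be flagged before the test run starts.
baseline = {"init", "dbus-daemon", "Xorg"}
current = {"init", "dbus-daemon", "Xorg", "gdb"}
assert environment_diff(baseline, current) == {"started": ["gdb"], "stopped": []}
```

A test harness could refuse to start (or at least log a warning) when the diff is non-empty, so that results from different runs remain comparable.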
To test MeeGo in a <virtual> environment, MeeGo needs to be built for the desktop which is accomplished by <please contribute>
To test MeeGo on a device, MeeGo needs to be built for that specific device which is accomplished by <please contribute>