Proposal for MeeGo 1.2. This plan is under review and is being actively revised and updated.
This is the overall test plan for MeeGo Core of the MeeGo open source project. It defines the test scope, test strategy, test configurations and test execution cycle for MeeGo Core, and gives readers an overview of validation activities for MeeGo Core in MeeGo open source releases. A series of component test plans is linked from this overall test plan to cover the detailed test approaches and test design for the ingredients of the MeeGo Core stack. The plan is a joint effort of the MeeGo community.
This plan describes MeeGo Core OS QA for MeeGo 1.2 and onwards. The test plan is revised for each MeeGo release and should therefore be considered an evolving document. New testing methods, practicalities and approaches are described in revisions.
The objective of MeeGo Core OS QA is to validate the functionality of the entire MeeGo Core OS software delivery by performing hourly, daily and weekly testing for weekly releases. More information about the testing cycle and test types can be found later in this document. The target is to ensure that:
Weekly testing is cumulative in terms of test coverage. The test cases included in the test run vary from week to week and new test cases are introduced, so test case coverage increases constantly. The increase depends on the release content (how many new features were released in a specific weekly release).
For these activities MeeGo Core OS QA follows the iteration cycle and process described in Release Engineering's Process.
In addition to fast-cycle testing, more thorough testing (Full Pass) is done for MeeGo releases. Full Pass testing is a massive test execution covering the entire test asset available at the time. With a full pass all features are re-verified and regression is measured. This activity takes place after MeeGo Release Feature Complete. The target is to ensure that:
Verifying the features of MeeGo Core OS requires exhaustive documentation of the feature under test. Insufficient documentation has a negative impact on test asset quality, as stated in the MCTS Development Guidelines.
For these activities MeeGo Core OS QA follows the release cycle described in Release Engineering's release timeline.
The overall objective of MeeGo Core QA is to ensure that the MeeGo middleware and OS Base provide stable, hardware- and usage-model-independent application services and APIs for building the vertical-specific user experience. Each core component carries a different quality risk with regard to MeeGo integration. For example, some core components are mature upstream and MeeGo does the integration without heavy new feature development, while other components are contributed and open-sourced from proprietary components with heavy ongoing development. Considering that most MeeGo Core components will be adopted by multiple vertical usages and run on a number of MeeGo devices, test execution efficiency shall be taken into account when creating the test cases. Given this, the following strategy considerations apply:
MeeGo Core OS is verified with test assets available in MeeGo GIT in different projects and in other open source projects. The test cases used can be divided into two types:
Typical characteristics of these component test cases are that they verify a specific component or library with an extensive set of parameters. They often lack an E2E approach where the entire stack is exercised, and they do not necessarily extend beyond the component under test.
Typical characteristics of these system test cases are that they are based on use cases or user stories, often test the entire stack from the topmost interfaces provided by the MeeGo Core OS Middleware, and exercise the HW peripherals beneath the SW stack. These test suites are the most efficient ones for measuring, and providing visibility into, the maturity of MeeGo Core OS.
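To illustrate the distinction, below is a minimal, hypothetical Python sketch (class names, parameters and assertions invented for illustration, not part of MCTS): a component-level case that exercises a single API with a parameter matrix, next to a system-level case that drives a whole write/read path end to end.

```python
import os
import tempfile
import unittest


class ComponentLevelTest(unittest.TestCase):
    """Component-level sketch: exercises one library API with many
    parameter combinations, without touching the rest of the stack."""

    def test_join_paths(self):
        # Hypothetical parameter matrix for a single function under test.
        for a, b in [("/usr", "bin"), ("/", "etc"), ("relative", "dir")]:
            self.assertTrue(os.path.join(a, b).endswith(b))


class SystemLevelTest(unittest.TestCase):
    """System-level sketch: a user-story style test that drives the whole
    stack (VFS, filesystem, storage) end to end."""

    def test_write_read_roundtrip(self):
        with tempfile.NamedTemporaryFile(delete=False) as f:
            f.write(b"meego")
            path = f.name
        try:
            with open(path, "rb") as f:
                self.assertEqual(f.read(), b"meego")
        finally:
            os.unlink(path)


if __name__ == "__main__":
    unittest.main()
```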
MeeGo Core OS QA mainly uses the test frameworks and other testing tools provided by the MeeGo QA Tools Team.
In order to ensure that MeeGo is a competitive SW platform, MeeGo Core OS QA executes performance test cases and drives performance improvements into the MeeGo Core OS stack. The majority of the performance test cases measure the raw performance of the system and do not necessarily measure the end user experience. End user experience (response time measurements) is measured by Handset UX QA; for more detailed information on end user experience testing, see the Handset UX Test Plan.
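As a rough illustration of what "raw performance" means here, the following Python sketch times a single system-facing operation on a Linux target and reports median latency rather than any end-user response time. The probed file and the sample counts are arbitrary choices, not part of the MeeGo performance asset.

```python
import statistics
import timeit

# Hypothetical raw-performance probe: time a syscall-heavy operation many
# times and report median per-call latency, not end-user response time.
def read_proc_uptime():
    with open("/proc/uptime") as f:  # Linux-only pseudo-file
        f.read()

samples = timeit.repeat(read_proc_uptime, number=1000, repeat=5)
per_call_us = [s / 1000 * 1e6 for s in samples]
print("median latency: %.1f us, spread: %.1f us"
      % (statistics.median(per_call_us),
         max(per_call_us) - min(per_call_us)))
```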
In order to ensure the reliability of MeeGo, MeeGo Core OS QA executes reliability test cases and drives reliability improvements into the MeeGo Core OS stack. In addition to conventional test types such as long-lasting and iterative tests, Feature Interaction Testing is done as part of reliability testing. Feature Interaction Testing is based on user scenarios.
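A sketch of an iterative reliability loop is shown below; the scenario body, iteration count and resource bookkeeping are illustrative placeholders, not an actual MCTS test. A long-lasting variant would run against a wall-clock duration instead of an iteration count.

```python
import resource

ITERATIONS = 10000  # long-lasting runs would use a duration instead

def scenario():
    """Hypothetical user scenario, kept deliberately trivial here; a real
    feature-interaction test would combine several features (for example,
    network traffic while audio is playing)."""
    data = b"x" * 4096
    assert data == data[:]  # placeholder for the real feature calls

failures = 0
for i in range(ITERATIONS):
    try:
        scenario()
    except Exception:
        failures += 1
    if i % 1000 == 0:
        # Track memory growth across iterations (kB on Linux).
        rss_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
        print("iter %d, failures %d, max RSS %d kB" % (i, failures, rss_kb))
```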
Test cases follow the test type definitions aligned with the ISO/IEC 9126-1 Software Quality Model and the ISTQB Advanced Level Syllabus. Test types are defined in test areas.
Quality Assurance cannot create the quality of a release by doing exhaustive testing. Quality is built in the development phase by the developers contributing to MeeGo.
Developers also have a significant role in QA. Here are QA recommendations for developers contributing to MeeGo:
At the end of the day, the developer is responsible for the quality of his or her delivery.
Even though testing code with code has certain problems, it is a very efficient way of testing because:
In order to take advantage of the items described above, the test asset shall follow demanding quality standards. A test asset that produces lots of false positives and negatives confuses the community, provides wrong information about release quality and sends developers on a wild goose chase. This shall never happen. To ensure this, MCTS code will follow the quality requirements described in the MCTS Development Guidelines.
The QA target is to validate the MeeGo distribution.
Testability of MeeGo 1.2 Core OS features is ensured.
Testability can be seen as the main key to the success of QA. In order to ensure high quality QA, the testability percentage of MeeGo 1.2 features shall be 90% or higher.
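The testability percentage itself is simple arithmetic, as the following sketch shows; the feature counts are made-up numbers, not real MeeGo 1.2 data.

```python
# Illustrative check of the 90% testability target; counts are invented.
features_total = 120
features_testable = 111  # features with adequate documentation and test access

testability_pct = 100.0 * features_testable / features_total
print("testability: %.1f%%" % testability_pct)  # 92.5%
assert testability_pct >= 90.0, "testability target not met"
```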
Well defined test cases are the key to success in MeeGo Core OS testing. While the objective of testing is to assist developers in creating software that functions correctly, testing quite often falls into the trap of attempting to demonstrate that the software works. This shall be avoided. In test case development the following should be considered:
MeeGo Core OS Test Design follows the spirit of the MeeGo QA Common Test Design Process and Guidelines, the specifics being:
Overall, MeeGo Core testing will cover the MeeGo OS Middleware layer and the MeeGo OS Base layer in the MeeGo Architecture:
The specific features to be tested will be aligned with the features under the MeeGo Core OS Features product in MeeGo Featurezilla.
The following feature category won't be covered in MeeGo Core validation for MeeGo open source releases.
In order to use resources efficiently and to control the risk level related to component maturity, testing is done at different levels as follows:
In order to understand how well a certain component is covered with test cases, test coverage shall be measured. Coverage links directly to the risk level of the specific component. Test coverage is based on function/method coverage per API.
The MCTS API analysis describes the methods to be used for test coverage measurement.
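For illustration only, the following sketch shows the arithmetic behind function/method coverage per API: the exported functions of a hypothetical component are compared against the functions its tests actually exercise. The function names are invented; the real measurement methods are those in the MCTS API analysis.

```python
# Hypothetical coverage computation for one component's API.
api_functions = {
    "conn_open", "conn_close", "conn_send", "conn_recv", "conn_set_opt",
}
functions_hit_by_tests = {"conn_open", "conn_close", "conn_send", "conn_recv"}

covered = api_functions & functions_hit_by_tests
coverage_pct = 100.0 * len(covered) / len(api_functions)
print("function coverage: %.0f%% (%d of %d)"
      % (coverage_pct, len(covered), len(api_functions)))
print("uncovered:", sorted(api_functions - covered))  # ['conn_set_opt']
```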
Each component will have a detailed test plan. The test plans are available in the MeeGo Core Test Suite (each component name is a link to its test plan).
The test plan shall follow the structure described below:
See the Video playback driver test plan for reference.
MeeGo Core will be tested at the following test execution levels. The Testing Gear Box is as follows.
Testing is done against Trunk:Testing. It runs a portion of the fully automated test cases for core components and aims to provide quick acceptance testing to support incremental package integration. It is conducted under OTS (Open Test System).
Testing is done against Trunk and also for the weekly release prior to the release announcement, to provide visibility into release quality and to ensure that the last fixes do not cause regressions in the release. Release Engineering includes links to the test reports in the release announcement. Sanity testing is a static set of test cases which is modified on a need basis; the sanity test set may therefore contain test cases for functionality which has not been introduced yet. Such test cases are marked as N/A with a comment that the feature is not integrated yet. Sanity testing consists of:
Daily sanity testing aims to identify regressions as early as possible and to provide easy-to-understand visibility into SW maturity. This testing answers questions like:
Because the test cycle needs to be fast, reliability is not reasonable to measure in daily testing.
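The N/A marking described above can be illustrated with a small sketch; the feature and test case names are hypothetical.

```python
# Sketch of reporting a static sanity set when some features are not
# integrated yet; names are invented for illustration.
integrated_features = {"bluetooth-pairing", "wifi-connect"}

sanity_set = [
    ("wifi-connect-basic", "wifi-connect"),
    ("bt-pair-headset", "bluetooth-pairing"),
    ("nfc-tag-read", "nfc-support"),  # feature not integrated yet
]

for case, feature in sanity_set:
    if feature not in integrated_features:
        print("%s: N/A (feature %r not integrated yet)" % (case, feature))
    else:
        print("%s: RUN" % case)
```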
Weekly testing is a test cycle run against the weekly preview images released by Release Engineering. The test frequency is once a week, aligned with the distribution's weekly image release cadence. Weekly testing is incremental, and its targets are to:
New features are verified as soon as they are ready for testing. QA owners follow Release Engineering's release plans and the feature status in Featurezilla. When a feature is turned to the Released state in Featurezilla, the test cases mapped to this feature are taken into the next weekly testing execution. If the test cases for a specific feature pass, the feature shall be marked as verified in Featurezilla.
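A hypothetical sketch of this state-driven selection is shown below; the records mimic Featurezilla entries, but the fields, IDs and test case names are invented for illustration.

```python
# Select test cases for the next weekly run from feature states.
features = [
    {"id": 101, "state": "Released", "verified": False},
    {"id": 102, "state": "In Progress", "verified": False},
    {"id": 103, "state": "Released", "verified": True},
]
tests_by_feature = {101: ["tc-101-a", "tc-101-b"], 103: ["tc-103-a"]}

weekly_run = []
for f in features:
    if f["state"] == "Released" and not f["verified"]:
        weekly_run.extend(tests_by_feature.get(f["id"], []))

print("next weekly execution:", weekly_run)  # ['tc-101-a', 'tc-101-b']
# Once all mapped cases pass, the feature would be marked verified.
```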
Regression test cases are chosen from among the test cases designed for newly verified features and are included in the next weekly testing round. Regression testing is a set of tests verifying that changes since the last build (code enhancements, bug fixes) do not introduce new issues into previously working code, and that new features work as expected. This cycle includes tests for previous major bug fixes and for the areas of code affected by new implementation. The regression tests will be taken into the following test cycle:
Bug verification is done on a weekly basis to make sure bug fixes are verified within one week after fixing.
Performance and reliability test cases are not included in weekly testing by default. They are included only if a bug fix has been provided against a performance- or reliability-related Bugzilla item, or if there are other suspicious changes in the release content which may affect performance or reliability.
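The inclusion rule can be stated compactly, as in the sketch below; the field names are assumptions for illustration, not actual Bugzilla or release metadata fields.

```python
# Decide whether to add performance/reliability cases to this weekly run.
def include_perf_reliability(release):
    return (release["has_perf_or_reliability_bugfix"]
            or release["suspicious_changes"])

release = {"has_perf_or_reliability_bugfix": True,
           "suspicious_changes": False}
print(include_perf_reliability(release))  # True -> include those cases
```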
In the formal test cycle for a milestone, after a new build has passed the weekly test, QA will start full validation testing for it. The following tests are involved:
The purpose of the Full Pass is to measure release maturity at a detailed level. In Full Pass testing the entire test asset is executed for all the features released and previously marked as verified, giving visibility and detailed information about release maturity. The target is to have two Full Pass testing cycles during the release life cycle: the first Full Pass test round starts at Feature Complete, and the last round ends a few days prior to the release date.
Between these two rounds, failing cases and related bugs are followed closely in weekly testing. If there are very good grounds, Full Pass testing can be executed more than twice during the release life cycle.
MeeGo Core OS is tested with numerous reference devices. The public reference configurations used are:
Each test suite shall contain a README file describing the test environment at a detailed level. In complex cases a specific test environment description can be provided, with a reference to it in the README file.
The test environment description includes everything needed to run the test or tests. This includes the hardware and software configuration of the device under test, as well as any equipment (and its configuration) outside the device itself.
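As an illustration, the software side of such a description could be collected automatically along the lines of the sketch below; hardware and external equipment details would still be recorded by hand. The /etc/meego-release path is assumed to be present on MeeGo images and is probed defensively.

```python
import json
import platform

# Capture the software configuration of the device under test.
env = {
    "kernel": platform.release(),
    "machine": platform.machine(),
    "python": platform.python_version(),
}
try:
    with open("/etc/meego-release") as f:  # assumed on MeeGo images
        env["os_release"] = f.read().strip()
except OSError:
    env["os_release"] = "unknown"

print(json.dumps(env, indent=2))
```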