Monday 4 May 2009

Test framework - Multi-threading

From http://abouttesting.blogspot.com/2008/06/test-framework-brief-description.html

Known limits:
Parallel execution is not implemented (you cannot use multiple test tool instances for a test run in order to decrease test execution time).


Check. Implemented today :-)

Solution: call a stored procedure with the test run id and the test run option id (including the test tool type); it returns the next compatible test case in the execution queue, including a Go/NoGo flag (on NoGo, the instance waits until the Go criterion is met).
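A minimal sketch of what each test tool instance's dispatch loop could look like, assuming a hypothetical stored procedure name (dbo.GetNextTestCase), hypothetical column names (go_nogo, test_case_id) and a pyodbc connection; the actual schema and procedure are not shown here:

import time
import pyodbc

POLL_SECONDS = 30  # how long an instance waits on NoGo before asking again

def run_instance(conn_str, test_run_id, test_run_option_id):
    conn = pyodbc.connect(conn_str)
    cursor = conn.cursor()
    while True:
        # Ask the database for the next compatible test case for this
        # tool type, plus a Go/NoGo flag (hypothetical procedure/columns).
        row = cursor.execute("{CALL dbo.GetNextTestCase (?, ?)}",
                             test_run_id, test_run_option_id).fetchone()
        if row is None:
            break  # queue is empty: this instance is done
        if row.go_nogo == "NoGo":
            time.sleep(POLL_SECONDS)  # wait until the Go criterion is met
            continue
        execute_test_case(row.test_case_id)  # the tool's existing runner

def execute_test_case(test_case_id):
    ...  # drive the AUT through the steps of this test case

Because each instance pulls its own work from the database, adding a third or fourth instance needs no scheduler changes, only another process running the same loop.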

Is this a necessary feature?

I have many test suites that are end-to-end based and span several test days. Since each day requires a lot of batch processing (included in the test suite as well), you want to combine many test scenarios into one test suite instead of having one test suite per test scenario. The largest test suite now has over 900 test cases (multi-step)...so anything that reduces overall execution time is very welcome :-)

4 comments:

Unknown said...

>so anything that reduces overall execution time is very welcome

Indeed :)

Stefan Thelenius said...

Update:

I am down to approximately 5 hours of execution time for 925 test cases (8 test days) using 2 test tool instances...

Unknown said...

I wonder whether your test scripts/results are reliable enough... How much time does it usually take you (or anyone on your team) to review the results of such a test run (925 test cases)?

Stefan Thelenius said...

Good question!

Q: I wonder whether your test scripts/results are reliable enough... How much time does it usually take you (or anyone on your team) to review the results of such a test run (925 test cases)?
A: I believe it is reliable enough, even though there is always a risk of a false positive result. The trick is to reduce that risk. Here is how we handle it.

1. Each test case has verification points that check expected states, statuses, amounts, etc.
2. Several times during test case execution, a special error check function is called to look for error messages, exception stack traces, etc., typically after each submit and always at the end of a test case (a sketch of items 1 and 2 follows this list).
3. Almost all test cases are nested in scenarios, so if an undetected error occurs in test case A, test case B or a later one will most likely fail somehow.
4. Our AUT has a lot of validation in all application layers.
5. At the end of each test day, daily batches process the data and throw errors when problems are found.
6. We have a special validation batch that checks whether the data is valid (typically used on migrated data).
7. Sometimes we create SQL reports in order to check specific cases.
8. When creating new test cases, I consult product area experts to review the results.
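As a rough illustration of items 1 and 2, here is what a verification point and the error check function could look like in Python; all names here are hypothetical, since the framework's real API is not shown in this post:

ERROR_MARKERS = ("ERROR", "Exception", "stack trace")

def check_for_errors(page_text, log_lines):
    # Item 2: scan the current page and the logs for error markers;
    # called after each submit and at the end of every test case.
    for marker in ERROR_MARKERS:
        if marker in page_text:
            raise AssertionError("Error marker '%s' found on page" % marker)
    for line in log_lines:
        if any(marker in line for marker in ERROR_MARKERS):
            raise AssertionError("Error marker found in log: " + line.strip())

def verify(description, expected, actual):
    # Item 1: a verification point comparing an expected and an actual
    # value (state, status, amount, ...).
    if expected != actual:
        raise AssertionError("%s: expected %r, got %r"
                             % (description, expected, actual))

A failure raised by either function marks the test case as failed, which is what keeps a false positive from slipping through silently.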

So far, manual review of the results has rarely been necessary, except for new test cases or in high-risk areas where you want to do some extra checks.

I guess it is always a balance between value and cost...

Regards

/Stefan