IVM top 10 principles of testing

1. Ensure 100% test coverage

The planned tests should cover 100% of the software and hardware functionality that has been defined. No – actually, they should cover 100% of what has been provided, not just what has been planned. Something provided but not planned will end up being used in some way, and when it is not adequately tested it has often been the cause of failure in past high-reliability projects.
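As a minimal sketch of the difference (the function and its unplanned sentinel behaviour are invented for illustration), the test harness should exercise provided behaviour even when it never appeared in the plan:

    #include <assert.h>

    /* Hypothetical module under test: clamp() was specified, but the
     * implementer also shipped an undocumented pass-through for the
     * sentinel value -1. It was provided, so it must be tested. */
    static int clamp(int v, int lo, int hi)
    {
        if (v == -1)              /* unplanned "feature" users will find */
            return v;
        if (v < lo) return lo;
        if (v > hi) return hi;
        return v;
    }

    int main(void)
    {
        /* Planned behaviour, from the functional requirements. */
        assert(clamp(5, 0, 10) == 5);
        assert(clamp(-3, 0, 10) == 0);
        assert(clamp(42, 0, 10) == 10);

        /* Provided-but-unplanned behaviour: test it or remove it. */
        assert(clamp(-1, 0, 10) == -1);
        return 0;
    }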

2. Consider coverage density

Quite obviously, spend more time and effort testing important and/or commonly used items: for example, single points of failure in hardware and software, or underlying system code. The temptation is to spend the most time on the items that are easiest to test.

3. Choose an appropriate set of test methods

There are, obviously, different types of test. The various forms of software test that I can think of are listed here. Different parts of a system with different characteristics and different levels of importance will attract different testing methodologies. This is a pragmatic and common-sense approach... just make sure that a note is kept of which method is used for which module, and why that method was chosen.
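One lightweight way to keep that record (the module, method and plan names here are invented) is a standard comment block at the head of each test file:

    /*
     * Module:      uart_driver.c (hypothetical)
     * Test method: black-box functional + boundary-value analysis
     * Why:         hardware-facing single point of failure; internal
     *              state is not observable, so white-box tests add little
     * Test plan:   TP-UART-003, v1.2
     */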
4. Designers have obligations and should be trusted

I constantly battle people whose demands lead to 'documentation blowout'. This is the term I use for when the documentation becomes so extensive, and has to cover so many small things, that producing it ends up consuming a large proportion of the available work effort.
The solution is two-fold: firstly, be pragmatic about what is documented (get the most return for the investment); secondly, trust the designer. Yes, the designer will make mistakes, and some of these may even slip through, but if you don't trust the designer, how can you agree with the design? Conversely, this places a large and weighty responsibility on the designer not to cut corners and to do the job right.
5. Plan the tests in advance, update the test plan later

The test plan should be written before work on the module under consideration is begun – not the prototype stuff, but the actual end-product design. At this stage the test plan tests predominantly against the functional requirements.
But once the design is complete, the test plan should be revisited to add to and refine the tests in light of what is now known about the design. I would be worried if that caused the test plan to be reduced in scope... but it is a good time to add tests that exercise any part of the design that the designer feels is prone to being flaky.
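As a sketch of what planning first can look like in practice (the function, its specification and the requirement number are all hypothetical), the tests are written against the functional requirement alone, before any design exists; the file deliberately will not link until the module is implemented:

    #include <assert.h>

    /* Written against hypothetical requirement REQ-017: "checksum()
     * shall return the sum of all bytes modulo 256, and 0 for an
     * empty buffer." Only the agreed interface is declared here. */
    unsigned char checksum(const unsigned char *buf, unsigned len);

    int main(void)
    {
        static const unsigned char msg[] = { 0x01, 0x02, 0x03 };

        assert(checksum(0, 0) == 0);    /* REQ-017: empty buffer */
        assert(checksum(msg, 3) == 6);  /* REQ-017: (1+2+3) mod 256 */
        return 0;
    }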
6. Finding errors is a good thing

All humans make mistakes. The designer who finds faults with his system during the test phase has done a good job and should be praised. Imagine the consequences of that error NOT being found. Let me repeat: all designs have errors, and it is good to find them.
7. Version control is your friend

Of course we know this is a good thing! But version control should be applied not only to the software, but also to the documents used to plan and design that software, to the test plan itself, to the test vectors, to the applied tests, and to the test report and numerical test results. All of these version numbers should be recorded in the test result document.
If the version number of any of these parts changes from what was tested, the test MUST be repeated. Even seemingly simple changes can have unforeseen consequences; with embedded software, this is even truer.
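One cheap way to enforce the recording (all names and version strings here are illustrative) is to have the test harness stamp every participating version into the results file itself:

    #include <stdio.h>

    /* Versions of everything that took part in this test run; if any
     * one of these changes, the test MUST be repeated. */
    #define SW_VERSION       "uart_driver 2.4.1"
    #define TESTPLAN_VERSION "TP-UART-003 v1.2"
    #define VECTORS_VERSION  "uart_vectors.dat v7"

    int main(void)
    {
        /* Stamp the result log so every result is traceable to the
         * exact versions that produced it. */
        printf("software:  %s\n", SW_VERSION);
        printf("test plan: %s\n", TESTPLAN_VERSION);
        printf("vectors:   %s\n", VECTORS_VERSION);
        /* ... run the tests and print pass/fail results below ... */
        return 0;
    }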
8. Archive the test results

Any numerical results or test vectors, even obvious ones, should be stored. It may only be necessary to do this for the latest version of the test, but if in doubt, keep everything.
You can refer to this data later. You may well need it, and it could be the only proof you have that the design you made actually works (and that any failure really is someone else's fault).
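A minimal archiving habit, assuming a simple file-per-run naming scheme of my own invention, is to name each results file after the version and date so that nothing is ever overwritten:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        char name[64];
        time_t now = time(0);

        /* Name the archive after the software version and the run
         * timestamp so that old results are never overwritten. */
        strftime(name, sizeof name,
                 "results_v2.4.1_%Y%m%d_%H%M%S.txt", localtime(&now));

        FILE *f = fopen(name, "w");
        if (f) {
            /* Illustrative result line; real vectors and numbers go here. */
            fprintf(f, "vector 1: expected 0x3A, got 0x3A, PASS\n");
            fclose(f);
        }
        return 0;
    }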

9. Do not be tempted to integrate before testing

This may be good fun in a prototype system, but it is the kiss of death to a real production system. Think of it this way – your module contains a virus (an error/mistake). Any other module it touches can become infected. The module should remain in isolation until testing has cleared it as virus-free.

With software this is obvious – copying and merging code is rarely as simple as copying a single function over. There may be globals, function prototypes, header files, constants and so on that need to change, and the calling context may need to change too. It is difficult to remember these changes and reverse them later if the copied code is removed or altered. Over time, especially if this is done more than once, errors will creep in.
In hardware, overshoot or undershoot may cause device lifetime problems. In firmware, anyone who has programmed in VHDL knows the clutter that exists in the “Work” directory. Legacy functions can persist in there long after the source code has been removed from a design!
Unless the interface is very, very well defined and protected (for both software and hardware), it really is better either to avoid this temptation or, in software, to do it and then throw the code away afterwards... keep a very well-defined roll-back point.
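In software, one way to keep a module isolated while still testing it thoroughly (the module and stub names are invented) is to link its tests against stubs rather than against the real neighbouring modules:

    #include <assert.h>

    /* Interface normally supplied by the real ADC module. */
    int adc_read(int channel);

    /* Stub standing in for the real ADC module, so the module under
     * test runs in complete isolation before any integration. */
    static int stub_value;
    int adc_read(int channel) { (void)channel; return stub_value; }

    /* Module under test; would normally live in sensor_filter.c. */
    int sensor_average(int channel, int samples)
    {
        int sum = 0;
        for (int i = 0; i < samples; i++)
            sum += adc_read(channel);
        return sum / samples;
    }

    int main(void)
    {
        stub_value = 100;
        assert(sensor_average(0, 4) == 100); /* constant input, constant mean */

        stub_value = 0;
        assert(sensor_average(0, 4) == 0);
        return 0;
    }

The stub gives total control over the module's inputs, and nothing the module touches during testing can become 'infected'.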

10. Testing is unglamorous

Yes, but so is being the one person who lets the team down.
You cannot guarantee good code by working hard alone, or even by being a 'good' programmer. Good programmers test their code properly; bad ones do not.
