What are some workable strategies to reduce costs and increase quality?
\"What are some workable strategies to reduce costs and increase quality? How can teams balance demands to reduce costs, meet increasingly complex regulations, and demonstrate agility? Good software quality must, of course, be built-in and not tested-in at the end. So how do we build good quality software?\"
Solution
Companies, project teams, and individuals only get what they want from a tool if they know their requirements for how they are going to use it. So the first task is to understand the intended-use requirements and get agreement that this is how the tool will be used. Understand the tool's native functionality when used "as is," and what needs to be configured or customized to get the performance needed. Plan the work in a validation plan or a project plan; identify what will be done, by whom, and when. Then go to work on setting up the tool. Documentation of the process is the key to success. Document what can be done with the tool's native configuration and any customizations needed to assure that the tool will work as expected. Review the native functions and the configurations and map them back to the intended-use requirements to assure that every need stated in the intended-use requirements has been covered. Document the intended uses of the tool, in the context of the processes that will use it, in written procedures. Procedures need to be written at the level needed by the people in the organization who will use them.
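As an illustration only, here is a minimal sketch of the kind of requirements-to-function trace check described above. The requirement IDs, function names, and data structures are hypothetical, not drawn from any particular tool:

```python
# Hypothetical trace matrix: each intended-use requirement maps to the
# native functions or customizations that satisfy it.
INTENDED_USE_REQUIREMENTS = {
    "REQ-01": "Analyzer shall flag use of banned functions",
    "REQ-02": "Analyzer shall export results for review records",
    "REQ-03": "Analyzer shall support custom rule suppression",
}

TRACE_MATRIX = {
    "REQ-01": ["native: rule set 'banned-functions'"],
    "REQ-02": ["customization: report-export plugin"],
    # REQ-03 intentionally left untraced to show the gap report.
}

def report_coverage(requirements, trace):
    """Print which intended-use requirements are covered and which are not."""
    for req_id, text in requirements.items():
        functions = trace.get(req_id, [])
        status = "COVERED" if functions else "GAP"
        print(f"{req_id} [{status}] {text}")
        for fn in functions:
            print(f"    satisfied by {fn}")

if __name__ == "__main__":
    report_coverage(INTENDED_USE_REQUIREMENTS, TRACE_MATRIX)
```

Any requirement reported as a gap is one the validation plan must close, either through configuration, customization, or a documented procedural control.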
Test the tool with code that has already been tested or verified through other means. Determine the amount and extent of testing or verification needed based on the risk of failure of the tool. Low-risk tools still require testing and verification that the tool will meet your intended uses. Document your testing with test cases or scripts. Document the results, including the objective evidence of the actual results so they can be compared to the expected results; do not just record that the tool worked "as expected" or "passed." Use both good and poor code to test the tool, and keep those pieces of code as part of the test documentation. Do fault insertion to assure that the tool behaves properly in all the expected scenarios. Do testing in the context of the intended use. Understand that risk is based on the intended use of the tool. Know the tool's limitations.
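A minimal sketch of what documented, evidence-producing tool tests might look like, assuming a hypothetical analyze() function standing in for the tool under validation; the test IDs, samples, and expected findings are invented for illustration, and the deliberately faulty sample plays the fault-insertion role described above:

```python
import json
from datetime import datetime, timezone

def analyze(source):
    """Stand-in for the tool under validation: returns finding names.
    Here it simply flags use of eval() as a placeholder behavior."""
    return ["use-of-eval"] if "eval(" in source else []

# Known-good and deliberately faulty samples, kept with the test records.
TEST_CASES = [
    {"id": "TC-01", "source": "x = 1 + 1\n", "expected": []},
    {"id": "TC-02", "source": "eval(user_input)\n", "expected": ["use-of-eval"]},  # fault insertion
]

def run_suite(cases):
    """Run each case and record objective evidence, not just pass/fail."""
    records = []
    for case in cases:
        actual = analyze(case["source"])
        records.append({
            "test_id": case["id"],
            "run_at": datetime.now(timezone.utc).isoformat(),
            "expected": case["expected"],
            "actual": actual,  # the objective evidence, retained verbatim
            "result": "pass" if actual == case["expected"] else "fail",
        })
    return records

if __name__ == "__main__":
    print(json.dumps(run_suite(TEST_CASES), indent=2))
```

The point of the record structure is that actual results are captured alongside expected results, so a reviewer can compare them directly rather than trusting a bare "passed."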
TWO EXAMPLES OF COMMONLY USED TESTING TOOLS

Static Code Analyzer

A static code analyzer is a commonly used code review tool. FDA recommends the use of static code analyzers in high-risk applications, especially for medical device software. Static code analyzers also flag issues that software engineers know are not actual problems with the code: many false positives are found that the software engineers then have to evaluate and investigate. Static code analyzers have coding rules that they enforce, and if the code does not conform to those rules, the software engineers will get many issues that they need to review and resolve. Once the software engineers have resolved them, the information on the false positives routinely identified by the tool can be used so that the next time the same false error is found by the static code analyzer, there is no need to do another investigation. Some companies change their coding standards to conform to the tool for all future development. However, because they often build on old code bases, they maintain a list of false errors they will ignore or not investigate fully; naturally, this decision is based on the risk of not investigating those errors. Static code analyzers also allow software engineers or the tool vendor to create custom configurations. These customizations tell the analyzer to ignore some of the false errors that the tool finds but that the developers know are not issues with the code. A company should do extensive research to determine the best static code analyzer for its needs. Code modules with common errors, complexities, and poor coding practices, as well as modules with good code, should be developed if not already available to the software engineers. Several different manufacturers' static code analyzers should then be evaluated, and the tool that finds the most types of errors and issues should be chosen. The limitations of the chosen analyzer should be documented so that other verification methods can be used to find the errors it missed or was not intended to find. A list of the common false errors the tool finds should be compiled, along with the investigation into each one, its results, and what the software engineers will do with each false error when it is found in their own code modules. These become part of the standard operating procedures for the use of the tool; a minimal sketch of such a false-error list appears after the next section.

"Record and Replay"

Another commonly used tool is an automated "record and replay" testing tool. These tools allow testers to record each keystroke as they run a test manually. The testing team can then schedule the testing to be performed at any time in the future, and the tool will replay each keystroke as if a tester were doing it manually. The advantage of this approach is that the documentation accumulated while recording, such as screenshots, is used not only to show that the code passed the test when run manually but also to compare against the tool's output each time the test is replayed.
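As promised above, here is a minimal sketch of how a documented false-error list might be encoded so that known false positives are filtered out of an analyzer's report without re-investigation. The rule IDs, file names, reference numbers, and report format are hypothetical, not those of any real analyzer:

```python
# Hypothetical analyzer findings: (rule_id, file, line, message).
FINDINGS = [
    ("RULE-17", "motor.c", 120, "possible null dereference"),
    ("RULE-32", "legacy_io.c", 44, "implicit cast loses precision"),
]

# Documented false errors: each entry records the investigation outcome
# so future hits on the same rule and file need no re-investigation.
KNOWN_FALSE_ERRORS = {
    ("RULE-32", "legacy_io.c"): "Investigated: cast is intentional, value "
                                "range verified; see report FE-0042.",
}

def triage(findings, known_false):
    """Split findings into those needing review and documented false errors."""
    to_review, suppressed = [], []
    for rule_id, path, line, msg in findings:
        note = known_false.get((rule_id, path))
        if note:
            suppressed.append((rule_id, path, line, note))
        else:
            to_review.append((rule_id, path, line, msg))
    return to_review, suppressed

if __name__ == "__main__":
    review, suppressed = triage(FINDINGS, KNOWN_FALSE_ERRORS)
    print("Needs review:", review)
    print("Suppressed (documented false errors):", suppressed)
```

Keeping the investigation note with each suppression, rather than silently ignoring the rule, preserves the risk-based rationale the procedures call for.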

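Finally, a minimal sketch of the record-and-replay idea itself, assuming a toy app() function standing in for the system under test; real tools capture keystrokes and screenshots, which this stand-in reduces to recorded inputs and expected outputs:

```python
import json

def app(command):
    """Toy stand-in for the application under test."""
    return command.upper()

def record(session_inputs, path="recording.json"):
    """Record each input and the output observed during the manual run."""
    steps = [{"input": cmd, "expected_output": app(cmd)} for cmd in session_inputs]
    with open(path, "w") as f:
        json.dump(steps, f, indent=2)

def replay(path="recording.json"):
    """Replay every recorded input and compare against the recorded output."""
    with open(path) as f:
        steps = json.load(f)
    for step in steps:
        actual = app(step["input"])
        status = "pass" if actual == step["expected_output"] else "FAIL"
        print(f"{status}: input={step['input']!r} actual={actual!r}")

if __name__ == "__main__":
    record(["login admin", "open batch 42"])  # manual run, captured once
    replay()                                  # later, unattended replay
```

Consistent with the documentation emphasis above, the recording file itself can be retained as part of the objective test evidence.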