Verification

Efficient test case writing and execution

Not long ago, I sat down with three test managers I have recently worked with. They all have extensive backgrounds in managing test teams and supervising the writing and execution of test cases in large Medical Device projects. Since we have observed that about 50% of the total DHF consists of tests, I had long been pondering how the test activities could be carried out more efficiently.

We talked about how to find the right "review and release" effort (the Goldilocks principle: "not too much, not too little"), the optimal test case size, the optimal number of fields in a test case and how to deal with the ever-recurring problem of volatile specifications. I got some interesting input on all topics and was very satisfied with how the conversation went.

After a while, one of them said, "Mr Larsson, it is all well and good that you want to optimize the test case writing and execution. I understand your intentions. But, you know, testing is more than just writing and executing. In my opinion, only 30% of the total test effort consists of the writing and execution activities you talk about. 70% is about setting the table. If I were you, I would take a look at those 70%."

Only 30% of the test effort is about writing test cases

I must confess that I did not really understand what he was talking about. In my world, testing is writing and executing test cases. And what did he mean by "setting the table"?

After some prying, we got closer to the heart of the matter: setting the table involves activities such as:

  • Setting up infrastructure (computers, user accounts, instruments etc.)
  • Training testers – getting to know the instrument, the "lingo", the templates and the processes
  • Setting up / calibrating the instruments to test
  • Learning simulation tools, log parsers etc.
  • Generating test data
  • Reviewing specs
  • Dry runs and exploratory testing
  • Collecting test data

These are all auxiliary test activities that lay the foundation on which efficient test case writing and execution is subsequently performed. They might not look particularly impressive at first, but experience has shown that performing these activities carefully, conscientiously and consistently pays off immensely. The reverse is also true; failing to give these activities their proper attention will have a severe impact on testing efficiency.

Finally, another test manager said, "Writing and executing a test case is the end of a long journey. It is the result of a long array of preparatory activities. It is how you get to this point that determines how efficient your writing and execution will be."

Medical Devices and Exploratory Testing

Not long ago I sat down and had a really good talk with Ilari Henrik Aegerter from House Of Test. Ilari has an extensive background in high-quality testing and is one of the leading opinion makers in the area of context-driven testing. Ilari is also a strong advocate of Exploratory Testing and I challenged him to explain how Exploratory Testing could fit into the world of strict and regulated medical device development.

"One way of thinking about testing is value creation. Value is created by increasing the knowledge about the system.” starts Ilari, “Through testing, you gather information about your system’s desirable and undesirable behavior. Now, some testing methodologies accomplish this more efficiently than others.”

"In traditional, scripted testing, the task is to verify that the system behaves according to a formally specified input, a.k.a "confirmatory checking". I see this approach a lot in the medical device industry, often accompanied by extensive documentation, meticulously describing the input and output.” 

“The drawback of this approach is that it only traverses a very narrow path through the system. The generated results are equally meager when it comes to unveiling previously unknown knowledge about the system. It also abstains from tapping into the creativity and system expertise of the tester.

Let me make this clearer with an illustration.”

[Illustration: User Needs, Specifications and Implementation as three overlapping areas]

“This diagram describes a medical device. As you can see, we have three ways of looking at the product:

  1. The User Needs: this area represents the feature set the customer actually wants.
  2. The formal Specifications: this area is made up of the Design Controls, where the manufacturer attempts to formally define the system it is planning to build.
  3. The actual Implementation: this final area represents the feature set the instrument actually supports.

In the ideal case, a medical device is characterized by a perfect overlap of all three areas.

Now, let's evaluate what happens when this is not the case.

"Specified but not implemented" - This area translates to turn up as failed tests in the medical device documentation, something that becomes accentuated in companies that focus on verifying formal specifications. Not uncommon in the medical device industry.

"Implemented and specified but not covered by User Needs" - Sometimes called "Gold Plating". From a scripted test perspective, this area is going to transform into passed test cases. One might question the value of implementing things that the Users don't need, but there might exist business reasons that explains this area.

"Implementation and User Needs intersect but are not covered by Specifications" - This is the result of an experienced developer who knows what the User Needs even though it is not specified. Unfortunately, this part is not going to be covered by the formal verification so we can only hope that the implementation actually works.

Finally, the most important area is represented by the "unmet User Needs". This area is going to come back as bugs and change requests in the post-market phase. Despite having done things right, we have apparently failed to do the right things. Some of these user needs might have been explicitly left out for cost reasons or with the intention of implementing them later. However, the critical part consists of the "things we didn't think about".

And voilà, here is where exploratory testing can make a big difference. By applying a broader testing mindset, not constrained by the narrow path of formal specifications, a wider test coverage is reached and more knowledge is obtained about the system. More knowledge means more value. The best part is that studies have shown that exploratory testing finds considerably more bugs per time unit than traditional scripted testing."
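
As a side note of my own: the three areas Ilari describes behave like plain sets, so a minimal Python sketch, with feature names invented purely for illustration, shows how the regions he just walked through fall out of simple set operations:

    # Toy example only: three views of the same device as plain sets.
    user_needs     = {"measure glucose", "export results", "audible alarm", "one-handed operation"}
    specifications = {"measure glucose", "export results", "self-test on boot"}
    implementation = {"measure glucose", "audible alarm", "self-test on boot"}

    # Specified but not implemented -> shows up as failed tests
    print(specifications - implementation)                    # {'export results'}

    # Implemented and specified but not needed -> "gold plating", passed tests
    print((implementation & specifications) - user_needs)     # {'self-test on boot'}

    # Implemented and needed but never specified -> works, yet no formal verification covers it
    print((implementation & user_needs) - specifications)     # {'audible alarm'}

    # Needed but neither specified nor implemented -> post-market bugs and change requests
    print(user_needs - specifications - implementation)       # {'one-handed operation'}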

At this point, I asked Ilari whether these unmet User Needs would not be uncovered during Design Validation. Wasn't this exactly the objective of validation, to check that we have done the right things?

"Partly", says Ilari, "the notions are related but not the same."

"In recent years, we have seen an increased focus on usability and human factors engineering in medical device development. We have also seen an increased regulatory focus on performing proper validation for each new release. The whole point of these disciplines is to engage the user perspective, since the user probably knows more about the final usage than the manufacturer. Usability and human factors engineering are valuable tools to emphasize, chart and corroborate user needs in the design process.

Exploratory testing focuses on similar issues. It leverages the creative and critical thinking of the tester, mimicking the thought process of a user. It challenges the tester as a domain expert and explicitly attempts to uncover tacit and unspecified needs."

I was still skeptical. "But, Ilari, we still need to be compliant with the FDA. We still have to show that the specifications are met and we still need documented proof. Exploratory testing advocates make a big point of how traditional, scripted testing obsesses over repeatability and documentation, implying that these are not important."

"That is not true.", Ilari responds, "I think this is a misconception about exploratory testing, i.e. that it does not test specifications and does not produce documented evidence. Of course exploratory tests are documented. It's just that it might not use the same level of detail and that it better highlights undesired behavior, instead of only focusing on meticulously writing down everything that works as specified.” 

"My opinion is that exploratory testing generates more knowledge about the system. This is particularly valuable during the early development stages. If you have worked in the industry, you know exactly what I am talking about. At this stage, the specs are constantly changing, the device is continuously modified, a lot of factors have not yet stabilized and it is simply not efficient to start writing formalized test scripts. During this phase, exploratory testing is the superior testing method for generating knowledge about the current system state."

"This idea is partly reflected in the FDA document Design Considerations for Pivotal Clinical Investigations for Medical Devices - Guidance for Industry, Clinical Investigators, Institutional Review Boards and Food and Drug Administration Staff. The document states that exploratory studies in the early, and even pivotal, stages of a clinical investigation are advantageous, simply because they generate more knowledge about the system. In this line of thought, exploratory testing does not replace scripted testing, but certainly precedes and complements it."

I must say that Ilari is building a case here. We leave the discussion at this point, both excited about the prospects and challenges of exploratory testing in a medical device context.

Test Management in Medical Device Development

Test managers often have to deal with a superhuman juggling of timelines, resource allocations and continuously changing specifications while facing increasing pressure from management as the shipping date draws closer. Furthermore, the test manager is responsible for making sure that the traceability is complete, that test data integrity remains intact and that change management procedures are followed correctly, even in situations of extreme stress.

Efficient change management, planning, tracking and reuse of test executions are therefore much appreciated tools in the V&V Manager's toolbox. The Aligned Elements Test Run aims to address these challenges: it manages the planning, allocation, execution and tracking of test activities. Let's dive into the details.

Planning the Test Run

The Test Run Planning section is the place to define the Test Run context, much as you would write a Verification Plan.

The Test Run Planning information includes:

  • What to test i.e. information about the Object Under Test and the addressed configurations
  • When to perform the tests i.e. the planned start and end date for the Test Run
  • Who participates in the test effort i.e. the team of testers and their allocated work
  • How to test the device i.e. which test strategy to use by selecting among the existing test types
  • Why the test is being done i.e. the purpose or reason for performing this particular set of tests on this Object Under Test
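
To make the what/when/who/how/why structure concrete, here is a minimal, purely illustrative Python sketch of how such a plan could be captured. The field names and project details are invented for the example and do not reflect Aligned Elements' actual data model:

    # Hypothetical representation of a Test Run plan (illustration only).
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class TestRunPlan:
        object_under_test: str       # what: the device/build being tested
        configurations: list[str]    # what: the addressed configurations
        start: date                  # when: planned start date
        end: date                    # when: planned end date
        testers: list[str]           # who: the test team
        test_types: list[str]        # how: the selected test strategy/types
        purpose: str                 # why: the reason for this particular Test Run

    plan = TestRunPlan(
        object_under_test="Infusion pump, firmware 2.4.1",
        configurations=["EU 230V", "US 110V"],
        start=date(2024, 3, 4),
        end=date(2024, 3, 15),
        testers=["Anna", "Marc"],
        test_types=["System Test", "Regression Test"],
        purpose="Verification of the 2.4.1 maintenance release",
    )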

[Screenshot: Quality Assurance in Aligned Elements]

Allocate existing Test Cases from your Test Case library to the Test Run using simple drag-and-drop. You can use any number of tests of any available type from any number of projects. If needed, add Test Run specific information to the Test Cases and assign the Test Cases to the members of the Test Team.

Once the planning phase is completed, the Test Execution can begin.

The Test Execution phase 

Test Case data is kept separate from the Test Execution result data, permitting a high degree of test case reuse and a clear separation between Test Instructions and Test Results.
Consistency checks are optionally carried out on the Test Case before execution in order to ensure that tests cannot be performed until all foreseen process steps are completed.
During execution, the test input data is optionally kept read-only, preventing testers from modifying a reviewed and released Test Case during execution.
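
As a rough illustration of this separation (hypothetical names, not the actual Aligned Elements data model), the test instructions live in one record while every execution result is a separate record that merely references it:

    # Illustration only: instructions and results are kept apart.
    from dataclasses import dataclass

    @dataclass(frozen=True)          # frozen ~ read-only input data during execution
    class TestCase:
        case_id: str
        steps: tuple[str, ...]
        released: bool               # reviewed-and-released flag

    @dataclass
    class TestExecution:
        case_id: str                 # reference back to the executed Test Case
        test_run: str
        configuration: str
        verdict: str                 # e.g. "passed" / "failed"
        tester: str

    case = TestCase("TC-042", ("Power on", "Run self-test", "Check status LED"), released=True)
    assert case.released, "consistency check: only released tests may be executed"
    result = TestExecution(case.case_id, "Run 2024-03", "EU 230V", "passed", "Anna")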

All Test Team Members can access the Test Run, simultaneously execute tests and continuously monitor how the Execution phase progresses.

[Screenshot: Test Run progress bar]

Real-Time Test Execution data is presented through:

  • Individual Test Execution results including any found defects as well as colour coded feedback on states
  • Colour coded test progression statistics, with the possibility to drill down on e.g. individual Testers or Test Types
  • Burn down charts, showing how the planned test progress over time corresponds to the actual progression (sketched below)
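
To make the burn down idea tangible, here is a small, generic sketch with toy numbers (not Aligned Elements code) that compares the planned number of remaining tests per day with the actual remaining count:

    # Generic burn down sketch: remaining tests per day, planned (linear) vs. actual.
    from datetime import date, timedelta

    total_tests = 40
    start, end = date(2024, 3, 4), date(2024, 3, 8)
    days = (end - start).days + 1                      # 5 days in this toy example

    completed_per_day = {                              # executions completed per day (toy data)
        date(2024, 3, 4): 5, date(2024, 3, 5): 6, date(2024, 3, 6): 12,
        date(2024, 3, 7): 9, date(2024, 3, 8): 8,
    }

    remaining = total_tests
    for i in range(days):
        day = start + timedelta(days=i)
        planned_remaining = total_tests - round(total_tests * (i + 1) / days)
        remaining -= completed_per_day.get(day, 0)
        print(day, "planned:", planned_remaining, "actual:", remaining)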

Defect Tracking

During Test Execution, Defects and Anomalies can be created and traced on-the-fly without having to leave the Test Execution context. The Defects can be tracked in Aligned Elements' internal Issue Management system, in already integrated external Issue Trackers such as Jira, TFS, GitHub, Trac or Countersoft Gemini, or in any mix of these systems. Created Defects and their corresponding status are displayed in the Test Run.
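
The essence of the defect link, sketched here in a tracker-agnostic and purely hypothetical form, is that each defect record keeps a reference back to the execution that produced it plus a handle to wherever it is tracked, internally or in an external system:

    # Illustration only: a defect linked to its originating execution and its tracker.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Defect:
        title: str
        found_in_execution: str      # e.g. "TC-042 / Run 2024-03"
        tracker: str                 # "internal", "Jira", "TFS", ...
        external_key: Optional[str]  # issue key in the external tracker, None if internal
        status: str                  # the status shown in the Test Run overview

    defect = Defect("Status LED stays off after self-test", "TC-042 / Run 2024-03",
                    tracker="Jira", external_key="ISSUE-123", status="Open")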

[Screenshot: Test Case list]

Test Case Change Management

When developing medical devices, it is of paramount importance to keep your Design Controls under tight change control.
The Test Run assists testers and test managers in several ways to accomplish this goal, including the following optional actions:

  • Preventing inconsistent tests from being executed
  • Preventing input data from being modified during test execution
  • Allowing Test Managers to lock/freeze Design Input and Tests during execution
  • Alerting testers when they attempt to modify tests for which valid results already exist
  • Signalling whether a Test Case has been reviewed and released or not
  • Allowing the user to explicitly invalidate existing test results when a test is updated with major changes (sketched below)
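
The last point can be illustrated with a rough, made-up sketch (not how Aligned Elements implements it): results remember the revision of the test case they were produced against, and a major change bumps the revision and flags the older results for invalidation:

    # Made-up sketch of result invalidation on major test case changes.
    test_case = {"id": "TC-042", "revision": 3, "released": True}
    results = [
        {"case_id": "TC-042", "case_revision": 3, "verdict": "passed", "valid": True},
        {"case_id": "TC-007", "case_revision": 1, "verdict": "passed", "valid": True},
    ]

    def apply_major_change(case, all_results):
        """Bump the test case revision and invalidate results produced against older revisions."""
        case["revision"] += 1
        case["released"] = False                      # must be re-reviewed before the next execution
        for r in all_results:
            if r["case_id"] == case["id"] and r["case_revision"] < case["revision"]:
                r["valid"] = False                    # explicit invalidation of stale results

    apply_major_change(test_case, results)
    print([r for r in results if not r["valid"]])     # only the TC-042 result is invalidated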

Testing Variants of the Object Under Test

If several variants of the Object Under Test exist, it is sometimes desirable to create variant-specific test results for common test cases and subsequently create separate traceabilities for the variants. The Test Run uses a concept called "Configurations" to achieve this behaviour. A Test Case is executed for one or more Configurations to make sure that variant-specific test results are kept separate.

The exact data composition of a Configuration is customizable to fit the needs of each customer.
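
Conceptually, and again with made-up names, this simply means that a test result is keyed by both the Test Case and the Configuration it was executed for, so each variant gets its own traceable outcome:

    # Conceptual sketch: results keyed by (test case, configuration).
    results = {}
    results[("TC-042", "EU 230V")] = "passed"
    results[("TC-042", "US 110V")] = "failed"

    # Variant-specific traceability: collect the results belonging to one configuration.
    us_results = {case: verdict for (case, cfg), verdict in results.items() if cfg == "US 110V"}
    print(us_results)    # {'TC-042': 'failed'}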

Complete the Test Run

Once all Test Cases have been completed, the Test Run and all its content are set to read-only. Optionally, a Snapshot of all Test Run relevant items is created as part of the completion procedure. A Test Run report containing all the necessary Test Run information can be inserted into any Word document using the Aligned Elements Word Integration with a single drag-and-drop action.

The Test Run is a Test Manager's best friend, providing the flexibility needed during test planning and full transparency during test execution, making it possible to react quickly as real-time test events unfold.

Note: Burn Down Charts are under development at the time of writing and are planned for release in the next service pack.