Aligned AG - Verification

Efficient test case writing and execution

Not long ago, I sat down with three test managers I have recently worked with. They all have extensive backgrounds in managing test teams and supervise the writing and execution of test cases in large Medical Device projects. Since we had observed that tests make up about 50% of the total DHF, I had long been pondering how test activities could be carried out more efficiently.

We talked about how to find the right "review and release" effort (the Goldilocks principle: "not too much, not too little"), the optimal test case size, the optimal number of fields in a test case, and how to deal with the ever-recurring problem of volatile specifications. I got interesting input on all topics and was very satisfied with how the conversation went.

After a while, one of them said, "Mr. Larsson, it is all well and good that you want to optimize the test case writing and execution. I understand your intentions. But, you know, testing is more than just writing and executing. In my opinion, only 30% of the total test effort consists of the writing and execution activities you talk about. 70% is about setting the table. If I were you, I would take a look at that 70%."

Only 30% of the test effort is about writing test cases

I must confess that I did not really understand what he was talking about. In my world, testing is writing and executing test cases. And what did he mean by "setting the table"?

After some prying, we got closer to the heart of the matter: setting the table implies activities such as:

  • Setting up infrastructure (computers, user accounts, instruments, etc.)
  • Training testers – get to know the instrument, the “lingo”, the templates, and the processes
  • Setting up / calibrating the instruments to test
  • Learning simulation tools, log parsers, etc.
  • Generating test data
  • Reviewing specs
  • Dry runs and exploratory testing
  • Collecting test data

These are all auxiliary test activities that lay the foundation on which efficient test case writing and execution are subsequently performed. They might not look particularly impressive at first, but experience has shown that performing these activities carefully, consciously, and consistently pays off immensely. The reverse is also true; failing to give these activities their proper attention will have a severe impact on testing efficiency.

Finally, another test manager said, "Writing and executing a test case is the end of a long journey. It is the result of a long array of preparatory activities. How you get to this point determines how efficient your writing and execution will be."

Medical Devices and Exploratory Testing

Not long ago I sat down and had a really good talk with Ilari Henrik Aegerter from House Of Test. Ilari has an extensive background in high-quality testing and is one of the leading opinion makers in the area of context-driven testing. Ilari is also a strong advocate of Exploratory Testing and I challenged him to explain how Exploratory Testing could fit into the world of strict and regulated medical device development.

"One way of thinking about testing is value creation. Value is created by increasing the knowledge about the system.” starts Ilari, “Through testing, you gather information about your system’s desirable and undesirable behavior. Now, some testing methodologies accomplish this more efficiently than others.”

"In traditional, scripted testing, the task is to verify that the system behaves according to a formally specified input, a.k.a "confirmatory checking". I see this approach a lot in the medical device industry, often accompanied by extensive documentation, meticulously describing the input and output.” 

“The drawback of this approach is that it only traverses a very narrow path of the system. The generated results are equally meager when it comes to unveiling previously unknown knowledge about the system. It also abstains from tapping into the creativity and system expertise of the tester. 

Let me make this clearer with an illustration.”


“This diagram describes a medical device. As you can see, we have three ways of looking at the product:

  1. The User Needs: this area represents the feature set the customer actually wants.
  2. The formal Specifications: this area is made up of the Design Controls, where the manufacturer attempts to formally define the system they are planning to build.
  3. The actual Implementation: this final area represents the feature set the instrument actually supports.

In the ideal case, a medical device is characterized by a perfect intersection between all three areas.

Now, let's evaluate what happens when this is not the case.

"Specified but not implemented" - This area translates to turn up as failed tests in the medical device documentation, something that becomes accentuated in companies that focus on verifying formal specifications. Not uncommon in the medical device industry.

"Implemented and specified but not covered by User Needs" - Sometimes called "Gold Plating". From a scripted test perspective, this area is going to transform into passed test cases. One might question the value of implementing things that the Users don't need, but there might exist business reasons that explain this area.

"Implementation and User Needs intersect but are not covered by Specifications" - This is the result of an experienced developer who knows what the User Needs even though it is not specified. Unfortunately, this part is not going to be covered by the formal verification so we can only hope that the implementation actually works.

Finally, the most important area is represented by the "unmet User Needs". This area is going to come back as bugs and change requests in the post-market phase. Despite having done things right, we have apparently failed to do the right things. Some of these user needs might have been explicitly left out for cost reasons or with the intention of implementing them later. However, the critical part consists of the "things we didn't think about".

And voilà, here is where exploratory testing can make a big difference. By applying a broader testing mindset, unconstrained by the narrow path of formal specifications, wider test coverage is reached and more knowledge is obtained about the system. More knowledge means more value. The best part is that studies have shown that exploratory testing is much more efficient at finding bugs per time unit than traditional testing."

At this point, I asked Ilari whether these unmet User Needs would not be uncovered during Design Validation. Wasn't that exactly the objective of validation, to check that we have done the right things?

"Partly", says Ilari, "the notions are related but not the same."

“In recent years, we have seen an increased focus on usability and human factors engineering in medical device development. We have also seen an increased regulatory focus on performing proper validation for each new release. The whole point of these disciplines is to engage the user's perspective, since the user probably knows more about the final usage than the manufacturer. Usability and human factors engineering are valuable tools to emphasize, charter, and corroborate user needs in the design process.

Exploratory testing focuses on similar issues. It leverages the creative and critical thinking of the tester, mimicking the thought process of a user. It challenges the tester as a domain expert and explicitly attempts to uncover tacit and unspecified needs."

I was still skeptical. "But, Ilari, we still need to be compliant with the FDA. We still have to show that the specifications are met, and we still need documented proof. Exploratory testing makes a big point of how traditional, scripted testing obsesses over repeatability and documentation, implying that these are not important steps."

"That is not true.", Ilari responds, "I think this is a misconception about exploratory testing, i.e. that it does not test specifications and does not produce documented evidence. Of course, exploratory tests are documented. It's just that it might not use the same level of detail and that it better highlights undesired behavior, instead of only focusing on meticulously writing down everything that works as specified.” 

“My opinion is that exploratory testing generates more knowledge about the system. This is particularly valuable during the early development stages. If you have worked in the industry, you know exactly what I am talking about. At this stage, the specs are constantly changing, the device is continuously modified, a lot of factors have not yet stabilized and it is simply not efficient to start writing formalized test scripts at this point. During this phase, exploratory testing is the superior testing method for generating knowledge about the current system state.” 

“This idea is partly reflected in the FDA document Design Considerations for Pivotal Clinical Investigations for Medical Devices - Guidance for Industry, Clinical Investigators, Institutional Review Boards, and Food and Drug Administration Staff. The document states that exploratory studies in the early, and even pivotal, stages of clinical investigations are advantageous, simply because they generate more knowledge about the system. In this line of thought, exploratory testing does not replace scripted testing, but certainly precedes and complements it.”

I must say that Ilari is building a case here. We leave the discussion at this point, both excited about the prospects and challenges of exploratory testing in a medical device context.

Reaping the Benefits of Agile Testing in Medical Device Development

Traditionally, many medical device manufacturers have chosen the Waterfall process as the best way of managing their development with the assumption that regulatory bodies preferred this method.

While the regulations do not prescribe a method for designing and developing a product, some FDA QSR-related guidance, such as the FDA’s “Design Control Guidance for Medical Device Manufacturers”, uses language that points in this particular direction.




Due to the uncompromising consequences of non-compliance, medical device manufacturers have always focused on creating documents and reviewing those documents - with organization, compliance, and quality being more important than end-user-focused and efficient development. Testing processes have stayed true to this path. As most development in the medical device industry is subjected to these forces, we often see the following “testing truths”:

  • Testing starts on completion of components at the end of the development cycle
  • Since formal testing is documentation heavy, most testing is one-off.
  • Involving the end-user tends to begin at design validation after the product is either completed or nearly completed
  • The rush to decrease the time-to-market causes hasty testing without nearly enough meaningful test coverage
  • Manual testing is done really manually – meaning printed tests, checkoff sheets, and then feedback and results are input manually back into the Word test plan.
  • Regression testing is also done manually which causes burned-out testers who are more prone to make errors over time.

With time, these drawbacks have become more and more obvious to the industry as they result in increased cost, lower quality, and lower end-user acceptance.

Although Agile development appears to be a promising alternative, its (unwarranted) reputation of being unstructured and informal, focusing on speed rather than quality, has made the traditional medical device manufacturer shun it.

After all, does it not seem wiser to use a tried and tested, compliant development process, known and accepted by notified bodies, than to gamble with an unknown alternative, even though the cost might be a bit higher?

On the contrary. The truth is that many of the industry’s major players, including GE Healthcare, Medtronic, Roche, St. Jude Medical, Cochlear, Boston Scientific and Toshiba have adopted the agile, iterative development approach. However, going agile is not always an all-or-nothing proposition.

It is up to the manufacturer to pick and choose the parts of the agile methodology that make the most sense for their business.


Using an iterative approach allows you to enjoy earlier and more frequent testing. One of the clear benefits of the Agile process is that testing is introduced earlier in the development process. With shorter development cycles, testers have the advantage of finding issues earlier, which should improve overall quality further downstream. This is also the case for formative usability testing, in line with IEC 62366-1:2015. The earlier we receive feedback from our target customer, the better we can steer development towards producing a viable and accepted product upon release.

Test early, fix early

Introducing test activities early in the development process allows early fixing of detected quality issues. It is a long-proven fact that it is much easier and less expensive to fix issues and solve problems early in the development cycle, when there are fewer dependencies and the design is fresh in mind.

Once your development has progressed, it is natural to forget why something was done, and the difficulty of avoiding serious impact when introducing a change increases dramatically.

By having frequent release cycles, you have an opportunity to adjust as you go so that your path fits your testing – both for errors as well as for acceptance. Requirements never stop changing and evolving - your testing should mirror this and your development should be in line with the results.

Involve the end-user on a regular basis

The idea of Agile, or the iterative development approach, is to release working prototypes often, review the work done, and improve on that work in the next iteration or sprint.

While prototypes may not always fit the development needs of medical devices, the iterative approach focuses on feature-driven iterations which can be tested. Either way, the focus is customer-based – accepting and implementing requirement changes as they come in to better fit the customers’ expectations.

Shorter release schedules allow for more frequent reviews, which helps the development team stay on track while improving both quality and compliance adherence over time.

Testing – especially usability testing, an activity prescribed by the MDR and FDA QSR 820 – will be frequent and timely, enabling teams to build quality into the iterations at a much faster speed. This will in turn shorten time-to-market and deliver a higher-quality product, as you will see an increase in test coverage capabilities with a more frequent and earlier testing process.

Working features will be produced in every iteration, and verification will be achieved on these frequent builds through unit tests, manual verification, and regression testing. By finding issues early, development risks are significantly lowered for the project and ultimately the product.

Addressing high risks with early testing

Using a risk-based approach, an established best practice in medical device development and prescribed by standards like ISO 13485, implies prioritizing design areas of high risk, addressing these early, and directing efforts and resources to tasks targeted at minimizing said risks.

This includes not only the implementation of the corresponding risk-mitigating features but also the verification of the effectiveness of these features, as stated in ISO 14971.

Only by verifying that an implemented feature effectively reduces the identified risk, can it be proven that the initial risk has been reduced. If it cannot be proven that a feature actually reduces risk, the risk is considered to be unmitigated which can jeopardize the entire project.

An early proof of risk-reduction effectiveness through verification will in turn lower the business risk of the project.

Automated testing in medical device development

To increase efficiency and continue to lower time-to-market, automating testing wherever possible is a great way to increase test coverage, decrease cost, and lower overall project risk. Regression testing in particular is very labor-intensive and can lead to mistakes due to the stress it inherently causes.

By automating these and other tests, you will see the reliability and predictability of your test plan increase. Testing visibility and transparency will enable you to better budget future projects in terms of labor, finance, and effort.
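
As a minimal sketch of what such automation can look like, the following Python snippet pins a hypothetical calculation against baseline outputs captured from a previously verified build. The function name, inputs, and values are invented for illustration and do not come from any real device:

```python
# Minimal sketch of an automated regression suite. The unit under test,
# dose_rate_ml_per_h, is a hypothetical example, not a real device function.

def dose_rate_ml_per_h(dose_mg_per_kg_h, weight_kg, concentration_mg_per_ml):
    """Convert a prescribed dose into a pump rate (hypothetical example)."""
    if weight_kg <= 0 or concentration_mg_per_ml <= 0:
        raise ValueError("weight and concentration must be positive")
    return dose_mg_per_kg_h * weight_kg / concentration_mg_per_ml

# Each entry pairs inputs with the output observed in a previously verified
# release; the suite re-runs on every build instead of being checked by hand.
REGRESSION_CASES = [
    ((2.0, 70.0, 10.0), 14.0),
    ((0.5, 80.0, 5.0), 8.0),
]

def run_regression(cases=REGRESSION_CASES):
    """Return the cases whose current output deviates from the baseline."""
    failures = []
    for args, expected in cases:
        actual = dose_rate_ml_per_h(*args)
        if abs(actual - expected) > 1e-9:
            failures.append((args, expected, actual))
    return failures
```

An empty return value means the build still matches the verified baseline; anything else is an immediate, repeatable signal to the tester, with no checkoff sheets involved.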

The key to avoiding delays and lowering development risk is to shorten development iterations, which enables you to test early, test frequently, and adjust development to fit the user needs sooner rather than later.

Early identification of risks, issues, and product validation problems can help overcome them before they become project or product killers. Usability testing, when possible, should be done throughout development to maintain the validity of the project and keep development on track.

Automate regression and any other labor-intensive testing wherever possible.

Happy testers tend to be more accurate testers.

A well-tested, compliant product with early usability acceptance should be everybody’s goal – especially when it arrives on schedule.

Test Management in Medical Device Development

Test managers often have to deal with superhuman juggling of timelines, resource allocations and continuously changing specifications while facing increasing pressure from management as the shipping-date draws closer. Furthermore, the test manager is responsible for making sure that the traceability is completed, that test data integrity remains intact, and that change management procedures are followed correctly, even under situations of extreme stress.

Efficient change management, planning, tracking, and reuse of test executions are therefore much-appreciated tools in the V&V Manager's toolbox. The Aligned Elements Test Run feature aims to address these challenges. Test Runs manage the planning, allocation, execution, and tracking of test activities. Let's dive into the details.

Planning the Test Run

The Test Run Planning section is the place to define the Test Run context, much as you would write a Verification Plan.

The Test Run Planning information includes:

  • What to test, i.e. information about the Object Under Test and the addressed configurations
  • When to perform the tests, i.e. the planned start and end dates for the Test Run
  • Who participates in the test effort, i.e. the team of testers and their allocated work
  • How to test the device, i.e. which test strategy to use, selected among the existing test types
  • Why the test is being done, i.e. the purpose or reason for performing this particular set of tests on this Object Under Test


Allocate existing Test Cases from your Test Case library to the Test Run using simple drag-and-drop. You can use any number of tests of any available type from any number of projects. If needed, add Test Run-specific information to the Test Cases and assign the Test Cases to the members of the Test Team.

Once the planning phase is completed, the Test Execution can begin.

The Test Execution phase 

Test Case data is kept separate from the Test Execution result data, permitting a high degree of test case reuse and a clear separation between Test Instructions and Test Results.

Consistency checks can optionally be carried out on a Test Case before execution, ensuring that tests cannot be performed until all foreseen process steps are completed.

During execution, the test input data is optionally kept read-only, preventing testers from modifying a reviewed and released Test Case during execution.

All Test Team Members can access the Test Run and simultaneously Execute Tests as well as continuously monitor how the Execution phase progresses.

Test Run Progress Bar

Real-Time Test Execution data is presented through:

  • Individual Test Execution results including any found defects as well as colour-coded feedback on states
  • Colour coded test progression statistics, with the possibility to drill down on e.g. individual Testers or Test Types
  • Burndown charts, showing how planned Test progress over time corresponds to the actual progression
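
To illustrate the burndown idea mentioned above, here is a small Python sketch of the two curves such a chart compares: the ideal, linear "planned remaining" line versus the actual remaining-test count per day. The function names and data are invented for illustration, not taken from Aligned Elements:

```python
from datetime import date, timedelta

def planned_remaining(total_tests, start, end):
    """Ideal burndown: a linearly decreasing remaining-test count per day."""
    days = (end - start).days
    return {start + timedelta(days=d): total_tests * (1 - d / days)
            for d in range(days + 1)}

def actual_remaining(total_tests, completions):
    """completions maps a date to the number of tests completed that day."""
    remaining, curve = total_tests, {}
    for day in sorted(completions):
        remaining -= completions[day]
        curve[day] = remaining
    return curve
```

Plotting both curves on the same axis shows at a glance whether the team is ahead of or behind the plan on any given day.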

Defect Tracking

During Test Execution, Defects and Anomalies can be created and traced on the fly without having to leave the Test Execution context. Defects can be tracked in Aligned Elements' internal Issue Management system, in already integrated Issue Trackers such as Jira, TFS, GitHub, Trac, or Countersoft Gemini, or in any mix of these systems. Created Defects and their corresponding status are displayed in the Test Run.


Test Case Change Management

When developing medical devices, it is of paramount importance to keep your Design Control under tight change control.
The Test Run assists testers and test managers in several ways to accomplish this goal, including the following optional actions:

  • Preventing inconsistent tests from being executed
  • Preventing input data from being modified during test execution
  • Allowing Test Managers to lock/freeze Design Input and Tests during execution
  • Alerting testers when they attempt to modify tests for which valid results already exist
  • Signaling whether a Test Case has been reviewed and released or not
  • Allowing the user to explicitly invalidate existing test results when a test is updated with major changes
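
As a rough sketch of how the last point above might work, the following Python snippet ties each recorded result to a test case revision, so that a major change leaves earlier results invalid. This is an illustrative model under assumed names, not the actual Aligned Elements implementation:

```python
# Hypothetical sketch: results are recorded against a test case revision,
# so a major change to the test case invalidates earlier results.

class TestCase:
    def __init__(self, case_id):
        self.case_id = case_id
        self.revision = 1
        self._results = []  # list of (revision, verdict) pairs

    def record_result(self, verdict):
        self._results.append((self.revision, verdict))

    def update(self, major):
        """Apply a change; a major change bumps the revision."""
        if major:
            self.revision += 1

    def valid_results(self):
        """Only results recorded against the current revision remain valid."""
        return [v for rev, v in self._results if rev == self.revision]
```

After a major update, valid_results() comes back empty, which is exactly the signal a test manager needs to schedule a re-execution.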

Testing Variants of the Object Under Test

If several variants of the Object Under Test exist, it is sometimes desirable to create variant-specific test results for common test cases and subsequently create separate traceabilities for the variants. The Test Run uses a concept called "Configurations" to achieve this behaviour. A Test Case is executed for one or more Configurations to make sure that variant-specific test results are kept separate.

The exact data composition of a Configuration is customizable to fit the needs of each customer.
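
One way to picture this separation, as a hypothetical sketch rather than the actual Aligned Elements data model, is to key every verdict by the pair of test case and Configuration, so that variant results can never overwrite each other:

```python
# Hypothetical sketch of Configuration-keyed results: verdicts are stored
# per (test case, configuration) pair; all identifiers are invented.

class TestRunResults:
    def __init__(self):
        self._verdicts = {}

    def record(self, test_case_id, configuration, verdict):
        self._verdicts[(test_case_id, configuration)] = verdict

    def verdict(self, test_case_id, configuration):
        return self._verdicts.get((test_case_id, configuration), "not run")

    def trace(self, configuration):
        """Variant-specific traceability: all results for one configuration."""
        return {case: v for (case, cfg), v in self._verdicts.items()
                if cfg == configuration}
```

A single shared test case can then pass for one variant and fail for another, and the trace for each variant stays self-contained.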

Complete the Test Run

Once all Test Cases have been completed, the Test Run and all of its content are set to read-only. Optionally, a Snapshot of all Test Run-relevant items is created as part of the completion procedure. A Test Run report containing all the necessary Test Run information can be inserted into any Word document using the Aligned Elements Word Integration with a single drag-and-drop action.

The Test Run is a Test Manager's best friend, providing the flexibility needed during test planning and full transparency during test execution, making it possible to react quickly as real-time test events unfold.

Note: Burn Down Charts are under development at the time of writing and are planned for release in the next service pack.