
IEC 62304:2021 is dead. Long live IEC 62304!



After the 2019 edition was never released, the 2021 release was also officially cancelled in May this year.

This implies that IEC 62304:2006 + Amd1:2015 is still the most current and official version of the standard. Even if the last two attempts failed, this doesn’t mean that we cannot learn for the future. Sooner or later there will be a new edition of IEC 62304, just wait and see! Let's take a look at the proposed changes in Version 2 that we will not be able to enjoy, given the cancellation.

New extended Scope

The older version (IEC 62304:2006 + Amd1:2015) was, as its title “Medical device software – Software life cycle processes” already implies, targeted at software for medical devices. The Version 2.0 title implied an enlarged scope covering any health software, hence the proposed new name, “Health software – Software life cycle processes”.

The scope boundaries are clarified by Figure 1:

Figure 1 – HEALTH SOFTWARE field of application


Any software-only medical device is still covered by IEC 82304-1:2016, but the whole sector of health software would then have been covered by IEC 62304 Version 2.
Some examples of these categories:

  1. Software as a part of a medical device: software that is an integral part of a device, such as an infusion pump or dialysis machine.
  2. Software as part of specific health hardware: patient wristband software, healthcare software, health app on specific wearable hardware (e.g. watch, wristband, chest band).
  3. Software as a medical device (SaMD): software that is itself a medical device, such as a software application that performs diagnostic image analysis for making treatment decisions.
  4. Software-only product for other health use: hospital information systems, electronic health records, electronic medical records, mobile applications running on devices without physiologic sensors or detectors, software as a service, i.e. software executed in an external environment, providing calculation results that fulfil the definition of a medical device.

Software Safety Class is now Software Process Rigour Level

IEC 62304 has taught us to work with a Safety Classification for the Software system or parts of the Software system. In the proposed draft of Version 2, this would change towards classifying the process needed to develop the Software system. The 2021 edition calls this the Software process rigour level. Determining it is intended to be the first activity in any software project: a risk analysis of the worst-case scenarios of the software’s intended use is performed up front to determine what process rigour to apply.

The purpose of the software process rigour level is to determine the required rigour of the software processes prior to their start.

The classifications would still be:

A - Any software system failure cannot contribute to a hazardous situation leading to injury or death, and, considering external risk control measures, no risk controls are needed within the software system to reduce the risk of the hazardous situation.

B - A software system failure can contribute to a hazardous situation which results in non-serious injury.

C - When neither A nor B applies, i.e. for serious injuries or death.

The proposed decision flow is described in Figure 2:


Figure 2 - Assigning software process rigour level
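
As a rough illustration, and not taken from the draft itself, the decision flow in Figure 2 boils down to something like the following sketch, where both inputs are assumed to come from the upfront risk analysis and external risk control measures have already been taken into account:

```python
from enum import Enum

class RigourLevel(Enum):
    A = "A"  # lowest process rigour
    B = "B"
    C = "C"  # highest process rigour

def assign_rigour_level(failure_can_lead_to_injury_or_death: bool,
                        worst_case_is_serious_injury_or_death: bool) -> RigourLevel:
    """Assign the software process rigour level from the worst-case outcome of a
    software system failure, external risk control measures already considered."""
    if not failure_can_lead_to_injury_or_death:
        return RigourLevel.A  # failure cannot contribute to injury or death
    if not worst_case_is_serious_injury_or_death:
        return RigourLevel.B  # worst case is a non-serious injury
    return RigourLevel.C      # serious injury or death
```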


Now, the new approach actually bears a strong resemblance to how many companies have tried to tackle the challenge of finding a suitable Safety Class. The most common approach that we have seen is that the complete system is labelled as A, B or C. Only rarely do we see different classifications used within the same development project. The classification system should in theory give us the possibility to gauge the documentation effort according to the risk posed by a particular part of the software, i.e. it is a system that should save us time and effort on the documentation level. So why not use it? The excuses we have come across usually land in one of the following three categories:

  1. the effort of disentangling a software architecture that works great technically into an architecture that would isolate risky parts for "the sake of IEC 62304" is "not worthwhile"
  2. the effort of documenting the argumentation for why some parts are less risky is not worth it
  3. all parts of the software genuinely belong to the same risk class

The possibility of using different rigour levels for different parts of the software system is still part of the proposed V2 draft, as are all the well-known decomposition levels ‘SW Item, SOUP, SW Unit and Detailed Design’. As we have become used to from the 2006 + Amd1:2015 edition, you need to argue for some means of segregation between SW Items to assign different levels of rigour in the development process.

So why did IEC 62304:2021 fail?

How is it possible to work for so long on an update only to finally cancel its publication? It is, honestly, the first time I have ever heard of such a cancellation. How bad must the result have been to end up in the dustbin? For an in-depth explanation, one of the committee members presents his view on the events that led to the cancellation in this article.

Mr. Kaestner later also published a poll on whether the software safety classifications should be deleted from the standard as a whole. A whopping 66% of the voters were happy to keep the Software Safety Classifications as they exist today, which is, on the whole, proof that the community values a risk-based approach to medical device software development.

The hidden treasure in your Technical File


Like many of my peers in the medical device industry, I am still, after 20 years in the business, amazed at the amount of time and effort spent on technical documentation.

The innovation and progress in the field of technical documentation don't, in any way, reflect the progress that has been made in the medical device innovations themselves.

Up to 50% of the total development effort of medical devices is spent on documentation activities. 

Furthermore, it is apparent that very few people prefer spending time with documentation activities compared to innovation activities.

This is obviously a competitive dimension in our industry with massive efficiency gains to be made.

The one who masters technical documentation better than their competitors will

  • shift resources to innovation and build better products
  • gain faster market access
  • hit the revenue stream earlier

There is simply gold to be picked up from the floor here.

So why are we not talking more about this? Does it have to be this way?

In the video below, presented at MedConf 2020, we elaborate on how medical device manufacturers can trim their documentation efforts while still remaining compliant.

 

 

 

If you want to assess your own documentation process, reach out to us by email.

Delayed but not forgotten – MDR/IVDR should make you work smarter


Since Covid-19 delayed the introduction of EU-MDR and EU-IVDR until 2021, many companies have been granted an unexpected reprieve from adhering to the changes replacing the Medical Device Directive (MDD). This extra time should enable many firms to prepare more adequately and roll out these changes in an orderly fashion.

MDR may also finally compel device manufacturers to eschew their paper- or MS Office-based processes, given the amount of documentation needed to comply.

The actual changes in regulations are not as drastic as first appearances may lead one to think, but they are more encompassing. At 174 pages for MDR and 157 for IVDR, the new regulatory documents are physically 3 times longer than the 60-page predecessor, MDD, or the very thin 37 pages for IVDD, and have many more annexes than the former regulations.

That said, is it really such a huge difference for many companies?

Yes, yes, a thousand times yes!

For many companies, the answer is simply “yes”. Many more medical devices will fall under the new regulations and necessary documentation required by MDR. For many midsized and smaller companies, this can be problematic. Larger device manufacturers have the infrastructure already in place to handle the additional QA/RA efforts needed. For those who were able to avoid regulatory control, MDR/IVDR will be a huge change and challenge.

Bigger and broader

The definition of “medical device” will now include many previously non-medical and cosmetic devices. Devices used for sterilizing, disinfecting, and cleaning other devices, as well as products such as epilation lasers and contact lenses, will now be regulated as medical devices.

If that isn’t enough, many medical devices will move to a higher risk class and there will be a new classification for reusable surgical devices. IVDs in particular will see a large increase in oversight: there will now be 4 risk classes, and these will cover around 90% of those devices.

Previously, only 10% were covered by risk classifications, whereas with IVDR the definition of an IVD has been expanded to include software and companion diagnostics.

Going forward, even laboratory devices, cleaning & sterilizing devices, and instruments are covered by Class A, while Classes C and D handle life-threatening devices, with D being the highest risk classification. Class B will now be the “default group” covering lower-risk devices such as cholesterol, pregnancy, fertility, and other urine-based testing.

Safety, safety, and more safety

There is also a much larger focus on “safety” – the word appears 290 times in MDR while it was only mentioned 40 times in the former MDD. To prove safety and performance claims, device manufacturers will have to create and maintain more in-depth clinical data than previously expected.

There will be a centralized EU portal that device manufacturers will be required to use when reporting their incidents, injuries, and deaths. This will provide a bit more transparency regarding safety-related information for patients, as they will now have access to this data for a longer time frame.

Not only is the initial risk covered; the risk over the entire lifespan of the device must also be continuously assessed. In addition to an increased focus on risk, the clinical effectiveness of the device must be documented.

Wider QMS coverage

What all of this means is that almost every medical device company will require a quality management system (QMS). And not just a QMS that complies with ISO 13485:2016, but one that includes post-market surveillance for each device. The goal of this is to assist manufacturers in better understanding their device throughout its entire life cycle. What is also very important is that no products will be “grandfathered”, which means you cannot simply reuse your existing CE marking documentation to get this done.

All devices – even those currently on the market – must conform to the new MDR/IVDR requirements. As a result, most companies’ Technical File (now called your Technical Documentation) will increase in size. As per EU regulators, your clinical evaluation (CE) Technical Documentation will include all technical information demonstrating how performance and safety are verified and validated by your risk assessment, manufacturing, PMS, design controls, etc.

Device manufacturers must also describe how to handle and document changes to the Technical Documentation in a formal procedure. There is no longer a distinction between a Technical File and a Design Dossier.

In the EU MDR, Annex I replaces the “Essential Requirements” with what are now named the “General Safety and Performance Requirements.” On the surface, the requirements are quite similar, but they now also include requirements for active implantable devices and introduce the use of a benefit-risk ratio.

Bigger documentation demands better tools

All of this additional information and documentation will create an all-encompassing technical library of your device. This means many companies should strongly consider abandoning the “paper / MS Office” and e-mail based processes in favor of software tools “built for purpose”.

Such artifact-based systems are better able to support the transparency, traceability, and control needed to meet the new requirements of MDR/IVDR. Having the ability to link the additional safety analysis that is required to requirements, specifications, mitigations, verifications, and validations in a very transparent and efficient manner will benefit both the device manufacturer and auditors.

Being able to find inconsistencies prior to an audit should always be your goal as this will save you time and money. If the change to the regulations due to MDR/IVDR is going to be smooth sailing for your company, you will need to be as confident as possible in your Technical Documentation.

By adopting an artifact-based system that adheres to your process, the learning curve and other barriers to entry are lowered or completely eliminated. With software tools, you can decrease costs, speed up development, lower testing time, decrease risks and have complete faith in the Technical Documentation you are producing.

Utilize the need to adopt MDR/IVDR as an opportunity to effect changes that modernize your processes.

This will enable you to start to work smarter instead of harder.

 

5 points when evaluating an ALM for Medical Device Development


So you have realized that an Application Lifecycle Management (ALM) tool would probably make your medical device development more efficient?

Maybe you have experienced that your team:

  • spends a lot of time documenting Design Controls?
  • has difficulty maintaining the traceability for your product?
  • lacks confidence in your current technical file?

There are many good ALM tools out there so it should not be too hard to pick one, right?

However, acquiring an ALM tool is a significant investment and you want to make sure that you pick one that addresses the particular challenges facing your team.

So where do you start?

Writing a Request for Proposal

A common starting point is to establish your organization's needs in a “Request for Proposal” document, a.k.a. RFP. The RFP will be sent to a number of vendors and will help you compare their responses in a structured way.

This approach has several benefits. First of all, explicitly spelling out what the tool shall do for you will force you to concretize your needs. You should also include other stakeholders to make sure you are informed about and address their concerns as well. This exercise will also reveal conflicting and ambiguous requirements, which are always better to address sooner rather than later.

We recommend this RFP practice since it provides structure to the evaluation process. However, the RFP phase can sometimes lead to an “Analysis Paralysis” situation where a disproportionate effort is spent on establishing the “complete” RFP. Sometimes this is caused by incorporating too many stakeholders in the process, by conflicting visions or needs, or simply by a fear of failure when making a significant investment. Worst case, the RFP process can drag on for months, swell uncontrollably in size and, in the end, cost much more than the tool itself.

We recommend prioritizing the core needs that initially triggered the evaluation. Keep the RFP short and concise!

For a medical device ALM, this RFP should at least include requirements for:

  • Requirement Management
  • Establishing Traceability
  • Risk Analysis capabilities
  • Management of Verification & Validation activities
  • Proper versioning of Documentation and Artifacts
  • Establishing reports that satisfy standards like ISO 13485 and ISO 14971
  • An automatic Audit Trail of all changes
  • Enabling you to fulfill your Quality Management System

(We consider these the minimum features required for compliant medical device development).

Since we operate within the medical device scope, 21 CFR Part 11 Electronic Records (most of the artifacts in an ALM!) and Electronic Signatures (depending on how you implement the sign-off in your organisation) must be addressed by the ALM. This will generate a number of requirements, here summarized as:

  • 21 CFR Part 11 Ready (note that a tool in itself can never be 21 CFR Part 11 compliant. Final compliance always depends on processes within the organization)

As always, remember to make sure the requirements are testable.

Testability equals CSV

In our line of business, a Computer Software Validation (CSV, more on that here) will eventually be used to validate the ALM. The RFP can provide valuable input to this activity, and you should be able to derive your CSV User Requirements from your RFP.

Even though low CSV efforts may not be a formal requirement in itself, the CSV activity will definitely be a cost driver that should have an impact on your purchasing decision. In some cases, the CSV cost can surpass the cost of the tool itself.

The CSV effort is typically driven by:

  • The complexity of your QMS (e.g., number of Design Item types, number of different relationships between design items as well as number of SOPs to test in the validation).
  • The complexity of your CSV process (the rigorousness and granularity required)
  • The approach used to tailor the tool itself, i.e. whether it is customizable or configurable.

Finding out the vendor’s tailoring approach must definitely be part of the RFP. Do not forget to consider required integrations between the ALM and external tools. Integrations are notorious for requiring additional programming efforts.

If the tool has to be scripted/programmed to tailor it to your organization, the system falls into the CSV category “customizable”, for which the validation effort is expected to be higher (including deeper testing, managing risks, code reviews, etc.).

A configurable ALM is therefore preferable since:

  • The initial tailoring costs are lower
  • The initial and subsequent CSV costs are lower
  • Customization may impact compatibility with future ALM releases
  • Administering customizations over time requires specialized personnel

Require the ALM to fit with your Quality Management System

Your QMS, development processes, and templates have been carefully developed to specifically address your company’s unique situation. Compromising your QMS may have significant effects on compliance and efficiency.

Some ALM tools are pre-configured and require you to make substantial changes to your QMS in order to fully utilize their potential. Unless you are considering rewriting your QMS for other reasons, we would discourage you from making major changes to an existing QMS, since it may affect existing certifications and jeopardize compliance.

Another option to bridge the difference between the QMS and the ALM is to set up processes and documentation that translate/map between the tool and your QMS. However, operations that require manual mapping as described are notoriously error-prone and subject your technical files to an increased risk of errors and inconsistencies.

Instead, the ALM should ideally adapt to your QMS, reflect your processes, use your templates and present your terms and notions in the GUI. This will lead to a higher level of acceptance, lower the learning curve, speed up the tool adoption process and reduce the risk of errors. For these reasons, we at Aligned always strive to make the tool follow the QMS.

The impact on your QMS should ideally not be more than an additional work instruction/SOP, stating that an ALM is used to fulfill certain activities defined by QMS. This work instruction/SOP will be a valuable input to your CSV activities.

To get you started, we have put together an example RFP spreadsheet. Download it here: Medical Device ALM Request for Proposal Template.xls

One Tool to Rule Them All or a Landscape of Tools?

What is the required scope of your ALM? This is a very common question often discussed in terms of cost. Intuitively, one IT tool must be cheaper to maintain than multiple tools, right?

A large tool spanning many areas of activity may have the benefit of providing integration, but from experience, dedicated tools are usually really good at what they do and definitely have their place as well. Within a large organisation there will be tool boundaries. This fact may even simplify the introduction of an ALM tool and lower the business risk if you aim at building a working tool landscape rather than replacing it all at once.

The consideration of integrating with other tools can also be part of the RFP. We recommend, however, keeping the priorities right between core functionality and nice-to-haves. The need to integrate different tools may be fulfilled over time.

Go through your RFP again, check whether tools already existing in your organisation fulfill some of your requirements, and adjust your priorities accordingly.

Next Step, gather Hands-on Experience!

Your RFP is not going to get you answers to all of your questions. Now it is time to experience the tool first hand. This is a crucial step in your evaluation. Most vendors will affirm every one of your requirements but you will quickly realize that the “how” is more important than the “what”.

Ask if you can access and evaluate the tool yourself. The vendor’s response will give you some important indications. Do they trust the ALM to be self-explanatory or will they require you to take training?

At this stage, there are a few things you should be able to expect from the vendor:

  • You shouldn’t need to buy any infrastructure to install and run the evaluation version.
  • The evaluation should not require a significant time investment on your part
  • Any training and/or customization cost for the evaluation shall be minimal

Try to account for the subjective impressions from the hands-on experience in your evaluation. They will be an important factor in how well the tool is embraced within your organisation.

Estimating the total cost

To prepare for the last step, it is time to request quotes from your vendors. Make sure these quotes include all cost-driving factors throughout the ALM’s lifecycle to facilitate an accurate comparison between vendors; a simple worked sketch follows the lists below.

An accurate lifetime cost estimation shall include:

  • Cost of Licenses
  • Cost of Service and Support (find out what is included)
  • Cost of configuration/customizing (how long until the system can be used?)
  • Cost of training
  • Cost of CSV activities
  • Cost of import/transfer of legacy data into the ALM
  • Cost of IT Infrastructure (include cloud costs, software licenses for OS, databases etc.)

Lifetime costs will inevitably arise from maintaining the tool:

  • Is the tool so complex to administer that you need to build an internal team covering this new expertise (or buy this service from the vendor or a 3rd-party service provider)?
  • Or can you rely on the vendor for maintenance (ideally this is already part of the support agreement)?
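
To see how these factors add up over the tool’s lifetime, here is a minimal worked sketch with purely hypothetical numbers; replace every figure with the actual quotes you receive:

```python
# A minimal lifetime-cost sketch with purely hypothetical numbers.

LIFETIME_YEARS = 5

one_off_costs = {
    "configuration / customization": 15_000,
    "training": 5_000,
    "initial CSV activities": 10_000,
    "import of legacy data": 8_000,
}

yearly_costs = {
    "licenses": 12_000,
    "service and support": 4_000,
    "IT infrastructure (cloud, OS, databases)": 6_000,
    "maintenance / administration": 5_000,
}

total = sum(one_off_costs.values()) + LIFETIME_YEARS * sum(yearly_costs.values())
print(f"Estimated {LIFETIME_YEARS}-year cost: {total:,} EUR")  # 173,000 EUR
```

Comparing vendors on this total, rather than on the licence line alone, is what the quotes should make possible.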

Wrapping up the evaluation

Having the input from the RFP and the hands-on experience, you can now hopefully make a well-informed decision based on the capabilities of the ALM. You have gathered knowledge about how the tool will fit into your organisation and IT landscape.

At this stage, a note of caution: a given tool will not solve all your problems. Tools do speed up processes and provide structure and checks, but in essence they rely on improving something that already works.

To summarize the points:

  1. Formulate an RFP, focused on medical device development
  2. Plan ahead for CSV
  3. Check for fit with your QMS
  4. Gain hands-on experience
  5. Establish total lifetime cost

Best of luck with your evaluation!

 

An author of IEC 62304 revisits the standard after 14 years


Some of our most popular blog posts have involved discussions with Christian Kaestner, co-author of IEC 62304 and IEC 82304. He has brought us great insights into the underlying ideas of the people writing these standards and the standards' implications and interpretation throughout the industry.

You can tap into Mr. Kaestner's knowledge in his online course on medical device software and IEC 62304, created in collaboration with Medical Device HQ.

The full course is found here and a free short version is available on YouTube.

Once again, we have been fortunate enough to get a few minutes with Mr. Kaestner, who in this post elaborates on the current state of IEC 62304, 14 years after its initial release.

Q: With your extensive experience of both writing the standard and seeing how it has been used by the industry, how do today’s modern software companies approach and implement IEC 62304? Has IEC 62304 successfully stood the test of time? 

A: Well, both yes and no. Yes, because after almost 15 years the standard is now well known and established in the field. No, because the software environment has changed a lot since its first release in 2006. For example, the way we today use apps, cloud-based solutions, and AI, not to mention security aspects and the shift towards using Agile methods for software development! 

Q: Can we expect this to be covered by the upcoming edition 2? 

A: No, not really. I would say the main goal for edition 2 is to align better with IEC 82304-1, which is a product standard for health software (including medical device software). The scope of IEC 62304 edition 2 will be broadened from medical device software to health software.

This expansion has created some challenges. For example, ISO 14971 for risk management is currently a normative reference, which means you must also comply with ISO 14971 to comply with IEC 62304. This cannot be required of general health software developers, while risk management is still a vital part of a medical device software development process!

Q: Many medical device software developers want to work Agile but find it difficult when reading IEC 62304, do you have any suggestions? 

A: It is no secret that the standard is written very sequentially and implies a classical V-model approach, but you are free to work in any way you want! If you struggle with implementing the standard, I suggest you consider the standard as a list of requirements that you have to fulfill but you can disregard the order in which they are written.

For example, there is a requirement to “Verify software requirements” but if you find it appropriate, you can do this as part of your software release. I will not say that it is formally incorrect, but I would argue it is a risky approach since late discoveries during the verification might generate a lot of re-work.

Likewise, if you use Scrum, it might be appropriate to include the activity “Verify software requirements” in for example sprint planning and verify the requirements contained in a sprint. You could even consider verifying software requirements on a story-based level!  

 

Figure 1: Incremental approach to verification of software requirements  
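
As a purely illustrative sketch, with an invented requirement SRS-042 and invented numbers, a story-level verification could be as small as an automated test that is traceable to the requirement and runs in the sprint where the story is implemented:

```python
# Hypothetical software requirement SRS-042: "The pump shall raise an
# occlusion alarm within 5 seconds of a detected occlusion."

OCCLUSION_ALARM_LIMIT_S = 5.0

def time_to_alarm_after_occlusion() -> float:
    """Stand-in for driving the real software under test."""
    return 3.2  # simulated measurement, for the sake of the sketch

def test_srs_042_occlusion_alarm_within_limit():
    """Verifies software requirement SRS-042 as part of the sprint."""
    assert time_to_alarm_after_occlusion() <= OCCLUSION_ALARM_LIMIT_S
```

Running such tests in every sprint, and collecting their results against the requirement identifiers, is one way of producing the verification evidence incrementally rather than in a single late verification phase.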

Q: The IEC 62304 Software Safety Classification seems to be the most contested part of the standard.  Is the industry using the Software Safety Classification the way it was intended according to you? Are there any aspects of it that you feel should receive more attention? 

A: Interesting questions indeed… There have been many discussions over the past years about whether Software Safety Classification is needed or not. Some argue that Class C software development is the current state-of-the-art, and anything less than that is simply not acceptable these days. Still, Software Safety Classification will survive and remain in edition 2 (with some editorial changes).

Personally, I like the risk-based approach for two reasons; firstly, it increases the likelihood that low-risk software devices, which otherwise would have been too expensive to develop, make it to the market. Secondly, it increases the awareness of what parts of your software can be dangerous.

Q: Can you elaborate a bit more about your second point about awareness? 

A: Early awareness of what can become harmful is a super valuable input to the architectural design because, with this information at hand, you can separate risky and non-risky functionality into different items. If this is done correctly, you can assign the appropriate classification to items and concentrate your safety efforts where it makes the most sense! And this is the essence of IEC 62304: focus your efforts on the risky parts!

Figure 2: Items in a software system can be classified differently 

A final point on Software Safety Classification: note that the concept is based on injury, not harm as in ISO 14971. Harm is a much broader concept than injury and includes damage to property and environment which strictly speaking does not need to be considered when determining a Software Safety Classification.  

Q: The IEC 62304 Problem Resolution chapter has received some heat regarding the (at least perceived) overly cumbersome approach. Do you think there is any substance to these claims? Are there any best practices for how to best tackle the Problem Resolution requirements of the standard?  

A: Yes, the Problem Resolution is a bit strange to follow in the standard. Throughout the standard, there are requirements to use a “problem resolution process”.

However, in most cases, it does not make sense to use such a process as it is defined by clause 9 “Software problem resolution process”. For example, if a bug is discovered during a system test and the software is not yet on the market, it does not make sense to “advise relevant parties”!

My approach is to use a scalable process, with different approaches depending on whether the problem relates to released functionality or not. Where a problem relates to released functionality, regardless of whether it was found internally (“Internal feedback”) or externally (“Feedback”), all parts of clause 9 apply.

For all other problems, I use selected parts of clause 9 and work with a generic software problem resolution process. 

Figure 3: An approach to manage software problem resolution process
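
As a purely illustrative sketch of such a scalable routing decision (the activity names are paraphrased, not quoted from clause 9), the choice could look like this:

```python
def problem_resolution_activities(relates_to_released_functionality: bool) -> list[str]:
    """Sketch of a scalable software problem resolution process.

    'Released' means the problem concerns functionality already on the market.
    """
    activities = [
        "prepare a problem report",
        "investigate the problem and evaluate its relevance to safety",
        "handle the fix through the change control process",
        "verify the resolution",
        "maintain records and analyse problems for trends",
    ]
    if relates_to_released_functionality:
        # the full clause 9 applies, including informing those affected
        activities.insert(2, "advise relevant parties")
    return activities
```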

About Christian Kaestner 

Christian has recently released an online course about medical device software and IEC 62304 in collaboration with Medical Device HQ. The full course can be found here, and if you are interested in a free short version, you can find it here on YouTube.

Christian Kaestner is a freelance software medical device consultant who often works in close collaboration with QAdvis AB in Sweden. He is a member of the project team authoring IEC 62304 and was also part of the project team developing IEC 82304-1.

Risk vs Benefit => Residual


Whenever we come across risk management SOPs from different companies, we always keep an eye open for how that company has solved requirement 7.4 in ISO 14971, to perform a Risk vs Benefit Analysis, as well as the handling of Residual Risks.

In Excel-based approaches it is not uncommon to see something similar to the table below:

The QMS expects that each risk (representing one row) should be addressed in the spreadsheet. When I see this, I personally feel inclined to ask:

- What happens if someone in the team does not agree that the risk is acceptable, claims that the risk does not outweigh the benefits or simply cannot decide?

In many cases, these extra columns made it into the template after an audit of the QMS to quickly fix any audit findings, a somewhat unfortunate result of increasing regulations.

Let’s look at these requirements in a bit more detail.

Is the risk acceptable?

Answering this question is very much like shooting from the hip. Either the answer is trivial, e.g. the risk is frequent and severe, in which case we should definitely control the risk as per the procedure. Or the answer is complex and depends on a lot of different factors. The complex scenario can only be answered in the scope of the benefit of the device. Still, we often come across it as an individual item in the risk analysis.

The solution is straightforward

The principle of lowering risks as much as possible should be applied, and when all possibilities for applying risk control measures have been looked at, the company needs to do a proper Risk vs Benefit Assessment for the complete device.

Hint: The risk management report is a good location for this. 

Benefit-Risk Analysis

One could imagine that looking at an isolated function and weighing its risk against its benefit may give us insight into whether the function is acceptable for the device. In most cases, however, a function cannot be handled as an isolated part of the system, nor can the benefits of that function be easily compared to the risks it may impose on a user, patient, or operator. E.g. trying to argue that a power unit may impose risks to a patient but has the benefit that the device needs electricity to work is not a meaningful exercise. ISO 14971 (2019) is exceptionally clear in this matter:

 

Good reasoning and sensible pros and cons are asked for

Here we recommend looking at the device as a whole. Will the discovered residual risks still make it beneficial in the scope of the intended use of the device? In case of doubt, try to apply additional Risk Control Measures to open risks. Remember that all identified risks, acceptable or not, are considered residual risks for the device. See ISO 14971 (2019) Section 6:

 

Once again: The risk management report is a good location for this. 

New Risks?

Claiming that there are no residual risks involved is, in our opinion, not possible to handle in a spreadsheet column. The question cannot simply be answered with a yes or no. Let me be a bit provocative and suggest that this question alone could replace any risk analysis method altogether. Something like this:

Although I’ve seen similar approaches at very large established medical device manufacturers, I do not recommend this approach!

Here is what ISO 14971 (2019) has to say about it:

Bring the analysis to completion

Here we need to use our toolset properly and link the risk control measures to any implementing functions and continue with a new loop of risk analysis.

The outcome of that task will answer whether there are still any residual risks present. This exercise may need to be repeated for any suggested risk control measures.

Finally, summarize all your findings in the risk management report and do not forget to remove these columns from your risk analysis templates!

Using existing software when developing a medical device: a gift from the gods or a Trojan horse?


 

Are you developing software for a medical device?

Then you probably know about IEC 62304 – “Medical device software – Software life cycle processes”. This standard describes what is expected of you when building software for your medical device. As it covers the entire device life cycle, it specifies not only the development requirements but also covers maintenance, risk management, configuration, and problem resolution activities necessary for compliance.

This article will not analyze the IEC 62304 activities in detail (should you be looking for a more general overview of software development according to IEC 62304, I can highly recommend this presentation).

Instead, we shall focus on a specific situation: you have an existing piece of software. It was not developed according to IEC 62304, but you want to use it in a device.

What are your options? Can you use that software at all? Is complete re-development the best, or even the only option?

Before answering these questions, it is necessary to step back and consider why this software was not developed according to the IEC 62304.

Is it a SOUP?

Let’s say the software was developed by a 3rd party, commercially or as open source. If so, this case is covered by treating the software as SOUP (Software of Unknown Provenance) according to IEC 62304.

“Unknown” in this context is not about the identity of the maker, but rather the lack of knowledge about the procedures used to develop the SOUP. As a consequence, there is an issue of trust regarding the software’s quality, its level of safety, and its potential risk of failure.

How does IEC 62304 let you get to grips with this lack of trust?

Start out by documenting all the functional/technical requirements that your use of the software results in. This gives you a good basis for writing the corresponding test cases that verify correct functionality. It also serves as a starting point for identifying and assessing the risks associated with the functions of the SOUP.

After having analyzed the functional risks, you should proceed with identifying and assessing any risks from the integration of the software into your device. As a further risk identification source, you should, if available, get hold of a list of all the known bugs of the SOUP.

For all the identified and assessed risks, derived from the requirements, the integration, and the list of known anomalies, you need to reduce the risks by applying appropriate measures.

Now, this might get a bit tricky. In many cases, modifying the SOUP itself might not be a viable option.

One known way forward is to provide dedicated error-handling and comprehensive, over-arching software risk mitigations in the software that uses the SOUP, which itself must be developed according to IEC 62304.

If you can foresee the reuse of this SOUP in other devices, you should consider designing an “insulation layer” around the SOUP. As a result, you get a wrapped SOUP which, in the context of your products, is safe to use.
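
As a purely illustrative sketch (the SOUP function, the class, and all limits are invented for the example), such an insulation layer funnels every call to the SOUP through a single place where input checks, plausibility checks, and error handling are enforced:

```python
from typing import Callable, Sequence

class SoupFailure(Exception):
    """Raised when the wrapped SOUP misbehaves; the caller decides the safe state."""

class HeartRateFilter:
    """Insulation layer: the only place the rest of the software calls the SOUP from."""

    MIN_SAMPLES = 256              # assumed limitation taken from the SOUP documentation
    PLAUSIBLE_BPM = (20.0, 300.0)  # plausibility check on the SOUP's output

    def __init__(self, soup_estimate: Callable[[Sequence[float]], float]):
        # soup_estimate is the (hypothetical) SOUP function being wrapped
        self._soup_estimate = soup_estimate

    def filtered_heart_rate(self, samples: Sequence[float]) -> float:
        if len(samples) < self.MIN_SAMPLES:
            raise SoupFailure("not enough samples for a reliable estimate")
        try:
            bpm = self._soup_estimate(samples)    # the only call into the SOUP
        except Exception as exc:                  # never let SOUP errors escape unwrapped
            raise SoupFailure("SOUP raised an unexpected error") from exc
        low, high = self.PLAUSIBLE_BPM
        if not low <= bpm <= high:
            raise SoupFailure(f"implausible SOUP output: {bpm} bpm")
        return bpm
```

The rest of the device software only ever sees SoupFailure and can react by entering its defined safe state, which keeps the risk-mitigating error handling for the SOUP in one reviewable location.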

However, note that all risk analysis activities described above to create a valid, safe SOUP are conducted with a particular device’s intended use and risk acceptance profile in mind.

Thus, re-use of the SOUP, leveraging the SOUP documentation described above, is only valid if the target devices have similar intended uses and near-identical risk acceptance profiles. Using the SOUP in an HIV detecting device (Class C) requires a different risk assessment than when used in an allergy detecting device (Class A).

Is it Legacy Software?

Another scenario is that the software was developed in-house and put on the market before IEC 62304 was released.

If you now want to modify the software or use it in a different context, say with updated hardware, you can consider the software as “legacy software”. Amendment I to IEC 62304, released in 2015, describes the necessary steps.

Start out with a risk assessment. As the software is already on the market, you will have Post Market Data to include. Leverage this data as best you can. If your identified risks do not show up in years of carefully collected Post Market Data, you have a strong case for declaring your identified risks as acceptable! Truly a nice shortcut!

On the other hand, if you find reported risks that are not sufficiently addressed, you need to perform the necessary risk mitigation process steps described in 4.4.3 of the standard.

If you plan to modify the software before the next release, the changes must be part of the risk analysis. These modifications need to be made according to your IEC 62304 maintenance process, described in chapter 6 of the IEC 62304.

Amendment I to IEC 62304 can be your friend, especially if your risk assessment concludes that the risks originating from the use of this software are acceptable.

Is it a Prototype?

The most common scenario for having to bring existing software into the fold of IEC 62304 is this: you developed a software prototype and, let’s be honest, you cut some quality corners during the rapid proof-of-concept phase.

Now it turns out that management loves your work! They are truly impressed by its dazzling features and, moreover, how fast you were able to achieve such great results! Just quickly make it IEC 62304 compliant and get it to the market!

Do you really have to start all over again? If your prototype is really just a prototype, then you should. But what if it turns out to have a clean architecture, modules separated by interfaces, and even some unit tests in place? Wouldn’t that be good enough for IEC 62304?

No, not strictly speaking. IEC 62304 is a process standard. It is concerned with how you develop your software, not with the software itself. Since it is not possible to alter the past, “retrofitting the software into IEC 62304 compliance” is formally not a way forward.

BUT. Let’s be a bit more pragmatic here.

It is rarely the case that software development is performed in the exact prescribed order defined by a software development process. Therefore, more often than not, there will be a set of remaining development and documentation tasks to finalize after your software is up and running. It could be those additional tests to write, that code review to add, or that detailed design document to update.

This is not specific to the IEC 62304 but applies to any software development or more general software lifecycle process. At times, the actual development and the process are simply slightly out of synch. As long as everything runs well in the end, all tests are passed, the process is largely followed and the documentation is complete and consistent, there is a good case for claiming compliance.

It would therefore be possible to argue that a prototype “catching up” with IEC 62304 is just an extreme case of this “getting back in synch” scenario.

Are there any downsides with this “catching up” approach?

Yes, potentially. IEC 62304 recognizes that it requires an extensive testing and documentation effort by the developers. It also recognizes that some parts of the software are less risky than others. As a result, IEC 62304 provides an opportunity to do a little less testing and documentation for the lower-risk parts, as long as you are rigorous about the high-risk parts. If this separation can be made, a lot of work can be avoided.

Why is this a problem for existing software? Can’t I just do a risk analysis on existing parts and then divide them into “safe” and “risky” ones?

Well, not quite. You will discover that a software architecture that is efficient and easy to maintain, which is what most software developers intuitively strive for, is not the same as an architecture that separates high-risk and low-risk parts from a patient safety perspective.

If grouping of safety-relevant functionality into dedicated parts is not done from the start, the riskiness tends to spread out and “contaminate” all parts of the software. An architecture with many high-risk software items (parts) and very few low-risk items will ensue.

You can of course accept this situation and do the additional required steps. IEC 62304 will allow it. But it is a missed opportunity compared to designing your software with IEC 62304 in mind from the very start.


Conclusion

Bottom line: is using existing software that was not developed according to IEC 62304 worth the trouble?

As described above, depending on the situation, there are definitely options. It all boils down to comparing the inherited risks of the software to the benefit of using it. A very complex piece of software that is hard to rewrite but does not introduce significant risk is a good candidate for re-use.

However, if the number of inherited risks is high and the offered functionality could also be realized by developing and documenting new software according to an IEC 62304-compliant process, you should strongly consider doing just that.

 

Medical Device Validation: what is it? Wait! Are there several kinds?


"Validation" is one of those terms that can be confusing for people new to medical device development.

This word, "validation",  is used in (at least) three different contexts that are not necessarily related.

These are:

  • Design Validation
  • Process Validation
  • Computer Software / System Validation

Let's take a look at each one of these in turn.

Design Validation

What is it?

The regulatory lingo (in this example from FDA 21 CFR 820.3) for "Design Validation" declares that it is about “establishing by objective evidence that device specifications conform with user needs and intended use(s).”

It is a mandatory step in the medical device development process and if you are developing a medical device you will have to perform it at some point.

Why is it done?

Design Validation is about proving that we have "built the right thing". This is often contrasted with Design Verification which is meant to prove that we "built the thing right".

The difference lies in that the latter checks that the device was designed/built according to specifications (which is very important), whereas the former checks that the device works as it is intended to work for the end-user, where the intention is documented in the User Needs and to some extent in the intended use.

As you can see, the regulations have foreseen that, when developing the device, there might be a discrepancy between the intention and the specifications derived from that intention.

It might be the case that the designers have not solved the user needs in a way that makes the device usable.

"Anecdata" provides endless examples of devices that are built "correctly" according to specs but turn out to be unusable or dangerous when used by end-users in a real environment, e.g. a glucose monitor alarm that is not loud enough to be heard when traveling on a noisy bus.

Does this all sound like "usability" and "human factors analysis" activities to you? Then you are exactly right. Design Validation bears many similarities with usability activities. However, Design Validation also includes clinical tests in order to establish that clinical results correspond to the expected levels under intended use.

What is the input for the activity?

How do you know what to validate? Simply put, your User Needs. Therefore, it is important to define the User Needs in a manner that makes them possible to validate in a practical way.

Hence, the next time you write down your User Needs, think hard about how you plan to validate them.

When is it done?

Design Validation is usually performed at the end of the design process, often after the Design Verification has been completed but before the medical device is transferred to production.

The instances of your medical device used in the validation should ideally exist in a production-equivalent state.

Who is involved?

The Design Validation tests are generally performed by real users (doctors, nurses, patients, etc.), preferably in the actual intended environment (in the hospital, in the lab, in the ambulance, at home, etc.). The idea is to get as close as possible to "real use by real users".

Engaging this type of personnel can be both complicated and costly. It is therefore important to plan ahead and secure necessary resources (location, personnel, etc.) well in advance.

If the validation reveals serious flaws in the device, which requires re-design, severe consequences on development timelines and costs are at stake.

It is therefore important to involve real users throughout the design process and not only at the end.

How is it documented?

Like all other Design Control stages, the Design Validation needs to be planned, which is documented in a Design Validation Plan, and then executed, which is often documented in validation test cases and results. The summary and outcome are documented in the Design Validation Report.

Traceability, consistency, and completeness are of course important at this stage. Therefore, Aligned Elements is a great tool to use for these activities.

Process Validation / Production Validation

What is it?

Process Validation most often means validation of your manufacturing process or parts thereof (although many other processes can be subject to this validation as well).

Or in FDA lingo, "process validation means establishing by objective evidence that a process consistently produces a result or product meeting its predetermined specifications."

Why is it done?

So why is this important? FDA requires (21 CFR 820.75) the "results of the process" ("results" = your medical device, "process" = your manufacturing process) to be "fully verified by subsequent inspection and test".

This is a way of saying that each unit of your produced medical device must undergo "full" final testing and inspection, thereby ensuring that this particular instance of the medical device is safe for the user to use.

Now, there are cases when it is not possible to verify a safety-relevant characteristic of the device using testing and inspections, with sterile devices as the prime example.

To verify a device being sterile, one would need to break the sterilization barrier (open the packaging) which would of course make the device non-sterile and therefore unusable/unsellable.

So when it is not possible to "fully" inspect and test the device, you can resort to process validation.

Thus, you employ process validation to check that the production process gives your medical device that particular, safety-relevant characteristic every time for every device.

What is the input for the activity?

Although ISO 13485 and FDA QSR 820.75 both list process validation as something very important, they are both a bit elusive on exactly which parts to validate. Basically, all the characteristics that cannot be fully covered in the final inspection are eligible for process validation.

For more information, the go-to document for Process Validation, although published in 2004, is the GHTF/SG3/N99-10:2004 process validation guidance.

When is it done?

Generally, process validation is a pre-production activity, meaning that the production line is fully operational but you have not yet started production "for real".

Who is involved?

Just like for Computer Software Validation, the legal manufacturer is the responsible party. However, it is not uncommon to outsource parts of the work to 3rd-party suppliers or OEM manufacturers.

How is it documented?

The GHTF/SG3/N99-10:2004 guidance goes a long way in describing the recommended documents. Very broadly speaking, there ought to be a Process Validation Plan, sections describing IQ, OQ, and PQ activities, and finally a Process Validation Report.

 FDA explicitly requires the following from the validation, which ought to be properly documented:

  • All activities which have been carried out must be recorded, including date and signature.
  • Procedures for monitoring and controlling process parameters must be established.
  • Only qualified personnel may validate a process.
  • Methods and data used for controlling and monitoring processes, the date of execution, persons carrying out the validation, as well as relevant equipment must be documented.
  • In case of changes, the manufacturer must assess whether re-validation is necessary and must carry it out if needed.

Computer Software / System Validation

What is it?

This type of validation, often shortened as "CSV", concerns data processing systems "used as part of production or the quality system" (from FDA QSR 820.70) or, as freely interpreted from ISO 13485:2016, all computerized systems being used in any of the processes regulated by the QM system.

Thus, it does not concern the software in your device. Rather it refers to computer systems used to design and produce your medical device. Aligned Elements is an example of such a system. Other examples include Excel Spreadsheets, e-QMS systems, measuring equipment in production, and so forth.

Why is it done?

A multitude of regulations and guidance documents (FDA 21 CFR 820, 21 CFR Part 11, ISO 13485, GAMP 5) recognizes that patients/users are potentially exposed to safety risks if the computer systems employed in the design and production process of the device do not behave as intended.

Computer Software Validation is the suggested remedy to this perceived risk.

What is the input for the activity?

The input for the CSV is the User Requirements, which should not, as one might think, specify what the software under test can do, but rather what the organization plans/intends/wants to do with it.

If you plan to use Excel for calculating a tolerance, then the User Requirement should state exactly that.

It is recommended to take a risk-based approach to the testing and focus on the User Requirements that render the most severe risk impact if not adequately fulfilled.

So what about the validation of the software used in your device? Since you are then validating the design of your device, go check the Design Validation section. 

When is it done?

CSV should be performed on the target computer system before it is put in use.

Who is involved?

The responsible party for getting the CSV activities planned and executed is clearly the organization that uses the computer system. However, the actual legwork is sometimes outsourced to 3rd-party suppliers. Note that it is not mandatory for end-users to be involved in the actual testing.

How is it documented?

How the organization performs CSV shall be defined in a company SOP. The SOP shall define which computer system types fall under CSV, how to plan and execute the CSV activities, and under which criteria a re-validation is applicable.

In its most basic form, a CSV plan should state the User Requirements which captures the intended use of the systems. The User Requirements are then tested in IQ, OQ, and PQs, and the results of these tests and, if applicable, deviations are summarized in the Validation Report.

Conclusion

As we can see, although these three types of Validation do not have a lot to do with each other, they are very important to Medical Device manufacturers.

A common trait for all activity groups is that they need to be properly documented, often following the same type of scheme, starting out with a plan document stipulating the "things to check for" (requirements), a number of test protocols to be executed, collection of objective evidence, traceability and report generation.

Aligned Elements is a perfect tool to structure and manage these types of validation activities and manufacturers can therefore benefit from Aligned Elements in all three areas.

For a free trial of Aligned Elements, start here.   

Reaping the Benefits of Agile Testing in Medical Device Development


Traditionally, many medical device manufacturers have chosen the Waterfall process as the best way of managing their development with the assumption that regulatory bodies preferred this method.

While the regulations do not prescribe a method for designing and developing a product, some FDA QSR related guidance, such as the FDA’s “Design Control Guidance for Medical Device Manufacturers”, use a language that points in this particular direction.

 


 

The medical device manufacturers’ focus, due to the uncompromising effects of not being compliant, has always been on creating documents and reviewing those documents – with organization, compliance, and quality being more important than end-user-focused and efficient development. Testing processes have also stayed true to this path. As most of the development in the medical device industry is subjected to these forces, we often see the following “testing truths”:

  • Testing starts on completion of components at the end of the development cycle
  • Since formal testing is documentation heavy, most testing is one-off.
  • Involving the end-user tends to begin at design validation after the product is either completed or nearly completed
  • The rush to decrease the time-to-market causes hasty testing without nearly enough meaningful test coverage
  • Manual testing is done really manually – meaning printed tests, checkoff sheets, and then feedback and results are input manually back into the Word test plan.
  • Regression testing is also done manually which causes burned-out testers who are more prone to make errors over time.

With time, these drawbacks have become more and more obvious to the industry as they result in increased cost, lower quality, and lower end-user acceptance.

Although Agile development appears to be a promising alternative, its (unwarranted) reputation of being unstructured and informal, focusing on speed rather than quality, has made the traditional medical device manufacturer shun it.

After all, does it not seem wiser to use a tried and tested, compliant development process, known and accepted by notified bodies, than to gamble with an unknown alternative, even though the cost might be a bit higher?

On the contrary. The truth is that many of the industry’s major players, including GE Healthcare, Medtronic, Roche, St. Jude Medical, Cochlear, Boston Scientific and Toshiba have adopted the agile, iterative development approach. However, going agile is not always an all-or-nothing proposition.

It is up to the manufacturer to pick and choose the parts from the agile methodology that makes the most sense for their business.

agile4

Using an iterative approach will allow you to enjoy earlier and more frequent testing. One of the clear benefits of the Agile process is that testing is introduced earlier in the development process. With shorter development cycles, testers have the advantage of finding issues earlier, which should provide improved overall quality further downstream. This is also the case for formative usability testing, in line with IEC 62366-1:2015. The earlier we receive feedback from our target customer, the better we can steer development towards producing a viable and accepted product upon release.

Test early, fix early

Introducing test activities early in the development process allows early fixing of detected quality issues. It is a long-proven fact that it is much easier and less expensive to fix issues and solve problems early in the development cycle, when there are fewer dependencies and the design is fresh in mind.

Once your development has progressed, it is natural to forget why something was done, and the difficulty of avoiding serious impact when introducing a change increases dramatically.

By having frequent release cycles, you have an opportunity to adjust as you go so that your path fits your testing – both for errors as well as for acceptance. Requirements never stop changing and evolving - your testing should mirror this and your development should be in line with the results.

Involve the end-user on a regular basis

The idea of Agile, or the iterative development approach, is to release working prototypes often, review the work done, and improve on that work in the next iteration or sprint.

While prototypes may not always fit the development needs of medical devices, the iterative approach focuses on feature-driven iterations which can be tested. Either way, the focus is customer-based – accepting and implementing requirement changes as they come in to better fit the customers’ expectations.

Shorter release schedules allow for more frequent reviews, which helps the development team stay on track while improving both quality and compliance adherence over time.

Testing – especially usability testing, an activity prescribed by the MDR and FDA QSR 820 – will be frequent and timely, enabling teams to build quality into the iterations at a much faster pace. This will in turn shorten time-to-market and deliver a higher-quality product, as you will see an increase in test coverage with a more frequent and earlier testing process.

Working features will be produced in every iteration and verification will be achieved on these frequent builds through unit tests, manual verification & regression testing. By finding the issues early, there is a significant lowering of development risks for the project and ultimately the product.

Addressing high risks with early testing

Using a risk-based approach, an established best practice in medical device development and prescribed by standards like ISO 13485, implies prioritizing design areas of high risk, addressing them early, and directing efforts and resources to tasks that minimize said risks.

This includes not only the implementation of the corresponding risk-mitigating features but also the verification of the effectiveness of these features, as stated in ISO 14971.

Only by verifying that an implemented feature effectively reduces the identified risk can it be proven that the initial risk has been reduced. If it cannot be proven that a feature actually reduces risk, the risk is considered unmitigated, which can jeopardize the entire project.

An early proof of risk reduction effectiveness through verification will in turn lower the business risk of the project.

Automated testing in medical device development

To increase efficiency and continue to lower time-to-market, automating testing wherever possible is a great way to increase test coverage, decrease cost, and lower overall project risk. Regression testing in particular is very labor-intensive and can lead to mistakes due to the stress that it inherently causes.

By automating these and other tests you will see the reliability and predictability of your test plan increase. Testing visibility and transparency will enable you to better budget for future projects in terms of labor, finance, and effort.  
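
To make the idea concrete, here is a minimal sketch of an automated regression check using pytest; the dose calculation function, its limits, and the test values are invented for the example and are not taken from any particular device.

```python
# test_dose_regression.py -- illustrative only; function and values are hypothetical
import pytest

def calculate_dose_ml(weight_kg: float, concentration_mg_per_ml: float, dose_mg_per_kg: float) -> float:
    """Toy implementation standing in for the production code under test."""
    if weight_kg <= 0 or concentration_mg_per_ml <= 0:
        raise ValueError("weight and concentration must be positive")
    return (weight_kg * dose_mg_per_kg) / concentration_mg_per_ml

@pytest.mark.parametrize(
    "weight_kg, concentration, dose_per_kg, expected_ml",
    [
        (70.0, 5.0, 0.5, 7.0),   # nominal adult case
        (3.5, 5.0, 0.5, 0.35),   # low-weight boundary case
    ],
)
def test_dose_calculation_regression(weight_kg, concentration, dose_per_kg, expected_ml):
    # Re-run on every build so a change elsewhere cannot silently alter the result
    assert calculate_dose_ml(weight_kg, concentration, dose_per_kg) == pytest.approx(expected_ml)

def test_invalid_input_is_rejected():
    with pytest.raises(ValueError):
        calculate_dose_ml(-1.0, 5.0, 0.5)
```

Once such checks exist, every nightly build re-executes them at no extra manual effort, which is exactly the kind of repetitive workload described above.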

The key to avoiding delays and lowering development risk is to shorten development iterations, which enables you to test early, test frequently, and adjust development to fit the user needs sooner rather than later.

Early identification of risks, issues, and product validation problems can help overcome them before they become project or product killers. Usability testing, when possible, should be done throughout development to maintain the validity of the project and keep development on track.

Automate regression and any other labor-intensive testing wherever possible.

Happy testers tend to be more accurate testers.

A well-tested, compliant product with early usability acceptance should be everybody’s goal – especially when it arrives on schedule.

Escape the Office

If you are developing medical devices, there are an awful lot of Design Control documents that need to be created.

There are several different ways of approaching that task.

Speed up your Computer Software Validation activities

{fastsocialshare}

Computer Software Validation is something we discuss a lot where I work. Since Aligned Elements is software relevant to a Medical Device Quality Management System, our customers make sure that their Aligned Elements installations and configurations are validated.

It is a necessary activity, although not always perceived as bringing a lot of value. Sometimes the cost of the CSV activities actually exceeds the cost of acquiring the software itself.

I sometimes get asked whether I know of any best practices when it comes to Computer Software Validation, and I have made a few observations from my CSV experiences.

Let me tell you about the top 3 things that have an impact on the Computer Software Validation effort:

1) How many test cases do you perform?

It is dead simple. More test cases (note: not necessarily "more requirements") mean more work. But do YOU have to perform all those test cases? What if the supplier has already verified them?

You are very much allowed to leverage existing supplier documentation and testing records. A properly conducted Supplier/Vendor assessment can lead you to the conclusion that the supplier's verification documentation suffices. Remember that the risk we are trying to mitigate with a Computer Software Validation effort is primarily about patient safety. Software like Aligned Elements is not directly involved in patient safety, and this fact can be leveraged when assessing and deciding on the validation scope.

2) How do you record the test results?

The bulk of the CSV work lies in performing and recording the test results. The way you record those results can have a significant impact on the overall effort. To record that the actual behavior of a test step corresponds to the expected behavior, is it enough to tick a box (passed/failed)? Or do you need to write a text? Or do you need to take screenshots? As you can imagine, there is a huge difference between the former and the latter.

"But are we not required to take screenshots?" The short answer is "No, you are not" - not if you do not think it proves anything more than the tester checking a box. The FDA requires you to select your own methods and tools for quality assurance. If you have a good case for not making screenshots (which I think you have), you do not have to.

3) Who (and how many) has to sign all these CSV documents?

This might sound a bit odd, but more than once I have run into cases where the validation is completed and all that is missing is a signature from some top management figure. And now we run into a buy-in problem. If this person has not been involved in the CSV approach and suddenly disagrees with how it was conducted ("BUT THERE ARE NO SCREENSHOTS?!?"), it can have a significant impact (significant as in "redo the validation").

So the lesson here is to get early buy-in from the people that sign the document. On a general level, reducing the number of signatures will speed up any documentation process. And you might want to contemplate the necessity of having the IQ, OQ, and PQ plans/reports in different documents (more documents to sign) or if you can combine them.

Validating Aligned Elements

When you acquire Aligned Elements, you get free access to our internal verification documents to use in your vendor assessments, as well as pre-filled Validation documents to kick-start your validation. Contact us for more information.

What's up ahead regarding Computer Software Validation?

FDA announced that their much anticipated "Computer Software Assurance for Manufacturing, Operations, and Quality System Software" draft guidance should be out in 2019 but now it seems like it has been postponed to 2020. The new guidance is supposed to use a more agile approach, including a risk-based and value-creating perspective on CSV activities.

 

Medical Device Cyber Security Requirements from the Johner Institute

 {fastsocialshare}

Finally! State-of-the-art Medical Device IT Security Requirements! And they are free! And you can download them!

For those of us who (in vain) have pored over IT Security standards and guidelines of variable quality in order to distil useful requirements: look no further! A state-of-the-art, usable Medical Device IT Security guideline is finally here!

The Johner Institute has in collaboration with TÜV SÜD, TÜV Nord and Dr. Heidenreich (Siemens) compiled an excellent set of Medical Device Cyber Security Process and Product requirements and made it available to the industry for free.

The Guideline contains roughly 150 IT Security requirements, covering both process and product requirements, including the level of expertise needed to implement them, organised in the following structure:

Process requirements

Requirements for the development process

  • Intended purpose and stakeholder requirements
  • System and software requirements
  • System and software architecture
  • Implementation and development of the software
  • Evaluation of software units
  • System and software tests
  • Product release

Requirements for the post-development phase

  • Production, distribution, installation
  • Market surveillance
  • Incident response plan

Product requirements

  • Preliminary remarks and general requirements
  • System requirements
  • System and software architecture
  • Support materials

This IT Security Guideline is directed to Medical Device Manufacturers as well as Auditors, Reviewers, and Hospital Management.

Dr. Johner and his collaborators have in this guideline managed to deliver concrete, best-practice guidance, something that most other standards and regulations certainly tend to lack.

Patches

The entire guideline is available in the GitHub repository "IT Security Guideline" (https://github.com/johner-institut/it-security-guideline/) and is a recommended read for everyone concerned with Medical Device cybersecurity. You can also download Excel files with the requirements from the Johner Institute website.

We have made the Product IT Security Requirements available as a downloadable extension for Aligned Elements. It is recommended to use them in conjunction with the material in the mentioned GitHub-repository, which contains valuable additional information and footnotes that explain the rationale and context for some of the requirements.

 

5 Tips for Efficient Risk Assessments

{fastsocialshare}

Risk Assessments play a central role in Medical Device development. All medical device manufacturers apply risk management (they should because they have to!). All of them claim to be compliant with ISO 14971. And all of them do it differently.

I have worked with a large number of clients and I have seen more Risk Assessment variants than I can count. Some are good, some have, let's say, "potential".  

zeppelinwtext

From this experience, I can deduce a few best practices that will reduce the risk assessment effort considerably.

Here are my top five tips:

Don't brainstorm to identify risks

You are required to identify and assess ALL potential risks. How do you find them ALL? That can be a daunting question for someone new to the medical device industry.

However, the solution is to be structured, i.e. to use a structured approach to systematically identify risks. Several well-known methods exist for this, including:

  • Task Analysis (analysing the use process)
  • System Analysis (analysing the system through decomposition)
  • Using the ISO 14971 annex questions
  • Using existing risk reports of similar devices

Whichever approaches you select, brainstorming should not be one of them. There are a number of well-known reasons for this, the most important one being that you will miss important risks.

Next time around, try a structured technique. You will identify more risks. I promise.

Use both top-down and bottom-up Risk Assessments

Some companies rely on EITHER bottom-up OR top-down risk assessment techniques and miss out on the fact that both approaches deliver vital and often DIFFERENT risks.

Top-down risk assessment techniques (such as PHA or Task Analysis) can be applied early in the development process, without much knowledge about the actual design of the device. They are great tools for identifying use errors and foreseeable misuse early.

Once the device design is known, the selected design itself must be analysed for risks (such as materials used, geometry, movements, energy emittance, etc.) through a bottom-up risk assessment. FMEAs are very popular and well suited for this purpose. The two techniques complement each other and should both be conducted by any serious medical device manufacturer.

Don't keep Design Controls and Risk Management in separate systems

Design drives risk. And Risk drives design. This will become apparent when you need to follow up on the implementation and verification of mitigations as well as the further analysis if mitigations introduce new risks. The glue between the design and the risks is traceability. The effort of managing this traceability in a paper-based documentation system will be VERY high (those of you who have done it will nod now!).

So is applying software tools the solution? Not necessarily, since proper traceability monitoring cannot be done until the requirement management tool is integrated with the risk management tool (or vice versa). Only by automatically managing the traceability between the Risk Assessment Items and the Design Items, preferably in a single tool, can true trace monitoring be obtained.
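
To make the idea of automatic trace monitoring concrete, here is a minimal sketch in Python; the item types and trace rules are simplified assumptions and do not represent the data model of Aligned Elements or any other specific tool.

```python
# Illustrative trace check: every risk must trace to a mitigation (a design item),
# and that design item must trace to a verification. Simplified assumption,
# not the schema of any particular requirement or risk management tool.
risks = {
    "RISK-001": {"mitigations": ["REQ-010"]},
    "RISK-002": {"mitigations": []},            # unmitigated -> finding
}
requirements = {
    "REQ-010": {"verified_by": ["TEST-100"]},
    "REQ-020": {"verified_by": []},
}

def trace_findings(risks, requirements):
    findings = []
    for risk_id, risk in risks.items():
        if not risk["mitigations"]:
            findings.append(f"{risk_id}: no mitigation traced")
        for req_id in risk["mitigations"]:
            req = requirements.get(req_id)
            if req is None:
                findings.append(f"{risk_id}: traces to unknown item {req_id}")
            elif not req["verified_by"]:
                findings.append(f"{risk_id}: mitigation {req_id} has no verification")
    return findings

for finding in trace_findings(risks, requirements):
    print(finding)   # -> "RISK-002: no mitigation traced"
```

Running such a check on every change is what turns traceability from a reporting chore into continuous monitoring.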

Use reasonable probability and severity scales

I am glad to see a clear trend of tightening the probability and severity scales used during risk evaluations. From previously having used up to 10 steps, current practice tends towards five or six steps or fewer. People simply have a very hard time judging whether a probability should be six or seven on a 1-10 scale and spend too much time pondering such questions. The range of options is simply too large to be effective!

For the probability axis, I would like to endorse Dr. Johner's approach of having each step represent two orders of magnitude. He explains this very well: apart from letting the probability axis span more than eight orders of magnitude, "...the factor 100 indicates the precision which we can appreciate... If you ask a group of people how long it takes (on average) for a hard disk to be defective, the estimates vary between 2 years and 10 years. But everyone realizes that this average is greater than one month and less than 10 years. And between these two values is about a factor of 100."
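
As an illustration of what such a scale could look like, here is a minimal sketch; the bounds and labels are example values, not a prescription.

```python
# Illustrative semi-quantitative probability scale where each step spans
# two orders of magnitude; the bounds and labels are example values only.
PROBABILITY_LEVELS = [
    ("improbable", 0.0,  1e-8),
    ("remote",     1e-8, 1e-6),
    ("occasional", 1e-6, 1e-4),
    ("probable",   1e-4, 1e-2),
    ("frequent",   1e-2, 1.0),
]

def classify_probability(p_per_use: float) -> str:
    """Map an estimated probability of occurrence per use onto the scale."""
    for label, low, high in PROBABILITY_LEVELS:
        if low <= p_per_use < high:
            return label
    return "frequent"  # p_per_use at or above the last lower bound

print(classify_probability(3e-5))  # -> "occasional"
```

The point is not the exact numbers but that neighbouring levels are far enough apart for estimators to place a risk without endless debate.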

Make use of existing mitigations

In many cases, the risk assessment is carried out when the design is already known. In such cases: when coming up with mitigations for your identified risks, use the already existing mitigations in your current design!

I bet your current design already contains a whole bunch of design decisions that are risk mitigations without you really considering them as such. The absolute majority of design teams I have encountered are very, very good at designing innovative and safe devices. However, many of the design decisions taken are based on previous experience, industry state-of-the-art, or simply old habits having been refined over time. Since these engineers are often better designers than document writers, they simply do not see their design (often already in place) through the lens of risk management.

Bottom line: your current design already contains an undiscovered treasure of existing mitigations. Try to use your existing design as mitigations when performing your next risk assessment.

Aligned Elements, our medical device ALM, assists you in performing structured risk assessments. Its highly customizable risk assessment configuration can be set up for a large array of risk analysis variants. Should you be interested in a demonstration, contact us.

5 Tips for writing better User Needs

 {fastsocialshare}

In the beginning, there was the User Need.

According to FDA, the User Needs are the starting point for building a safe and efficient medical device. However, User Needs can be elusive to an engineering team used to rigid techniques and accustomed to having "full information" when approaching a problem.

It is rather rare that the developer has the real and deep experience a User has, and it can therefore be precarious to leave the User Need elicitation process in the hands of an engineering team. Here are some tips for eliciting better User Needs.

Consider the Intended Use

It has been said 100 times before, but I say it again: start out by considering the Intended Use and Indication for Use.

These two items will tell you:

  • who is the device for (or rather, who do you say the device is for)
  • why (or for what) is the user using this device (or rather, for what you say the device is to be used for)

The whole idea here is to get as close as possible to the user and the situation in which the user applies the device. What medical condition is the device intended to address? Where and under which circumstances is the device being used? In a hospital or an ambulance? During the night? In the rain? By a child? By a blind person?

Answering these questions will put you in the shoes of the user and will let you describe the user's needs in his or her own words. Imagining being the user in the situation where the need for using the device arises will also give you an idea of the constraints facing the user at the time of application.

Thus, considering the "who" and the "why" is generally a more fruitful starting point than elaborating on the "how" (which is what engineers tend to do).

Who is the user?

In a majority of cases, a medical device is handled by more than one type of User during the product's life cycle. This becomes clear when considering non-core usage tasks such as:

  • transportation
  • installation
  • calibration
  • maintenance
  • service
  • decommissioning

How did the device arrive at the usage location? How and by whom was it deployed? Was it carried in a pocket? Was it transported by air? 

Clearly, it is important to consider all people involved with the handling of the device, since their actions may have safety implications. Again, do not second-guess the needs of these users but involve them in the process.

User Needs can be vague

Using fuzzy language (such as adverbs) when documenting requirements is a known bad practice. However, User Needs may be written in a less prescriptive way if they capture important aspects of the User and the Usage. We know that, in the end, the Medical Device will end up being very concrete and void of any fuzziness.

The challenge thus lies in translating the potentially fuzzy language used in User Needs into concrete specifications in the Design Input Requirements and concrete Validation Tests. Make sure that the User Needs are well understood by the team deriving the Design Input Requirements and Validation Tests from the User Needs.

Not all input is User Needs

Costs, brand colors, and production constraints are examples of important design input that are not necessarily User Needs. I usually recommend that my clients add an additional Design Control type for Stakeholder Needs in order to pick up this input, which definitely influences the design although it is not always required to be verified or validated.

When setting up the validation activities, you must be explicit about what you intend to validate and explicitly define the criteria applied to make this selection. Separating the input into the two mentioned Design Control types makes it easy to explain which Design Controls are intended for validation (User Needs) and which are not (Stakeholder Needs).

Write User Needs with Validation in mind

The way User Needs are written will heavily influence the Validation activities. Since Validation is a resource-intensive (and therefore expensive) activity, it makes sense to keep a close eye on the validation work that a User Need will give rise to while writing it.

If some of the validation activities are already known at an early stage, the team can use this knowledge to formulate the User Needs in a way that maximizes the coverage of the known validation activities. By this, I do not mean that important User Needs should be left out, but rather that how the combined User Needs are formulated and structured can have a positive or negative impact on the validation effort.

By following these simple guidelines, you should be able to get more bang for the buck next time you elicit User Needs.

Aligned Elements, the medical device ALM, manages end-to-end traceability of all Design Control items, including User Needs, Design Input Requirements, Validation and Verification Testing. If you are interested in an online demonstration of Aligned Elements, let us know.

Free Risk Management Templates

{fastsocialshare}

Risk Management is a crucial part of Medical Device Development and if you are about to develop a Medical Device, you and your team are likely to find yourselves spending many hours compiling Risk Assessments.

There exist several techniques for performing a proper Risk Assessment but they all follow the same basic steps:

  • Define your risk policy (risk acceptance criteria)
  • Identify the Hazards through a structured analysis
  • Evaluate the Risks by estimating severities and probabilities
  • Mitigate the Risks that are not acceptable
  • Implement and verify the mitigations for effectiveness

To get you started, we have made two free Risk Assessment Excel templates available for download.

Download Free Risk Assessment Templates

The first demonstrates a Failure Mode and Effects Analysis (FMEA) approach, a widespread technique used in many areas and industries. We often see it in bottom-up types of Risk Assessment.

The second one uses a Preliminary Hazard Analysis (PHA) approach, an excellent top-down technique for early in the design cycle, when many of the design details are not yet known.

Both these techniques are available in Aligned Elements and we have compared and contrasted them in earlier posts.
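
If you prefer to see the evaluation step as code rather than as a spreadsheet formula, here is a minimal sketch of a severity-times-probability evaluation against a risk acceptance matrix; the scales, the thresholds, and the example row are invented for illustration and do not reproduce the content of the templates.

```python
# Illustrative risk evaluation: severity and probability are rated on 1-5
# scales and compared against acceptance thresholds defined by the risk policy.
# Scales, thresholds, and example values are assumptions, not the template content.
ACCEPTABLE, INVESTIGATE, UNACCEPTABLE = "acceptable", "investigate", "unacceptable"

def evaluate(severity: int, probability: int) -> str:
    """Return the risk acceptability for a severity/probability pair (1-5 each)."""
    score = severity * probability
    if score <= 4:
        return ACCEPTABLE
    if score <= 9:
        return INVESTIGATE
    return UNACCEPTABLE

# One FMEA-style row: failure mode, severity, probability
row = {"failure_mode": "Pump occlusion not detected", "severity": 5, "probability": 2}
print(evaluate(row["severity"], row["probability"]))  # -> "unacceptable", so mitigation is required
```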

Task Analysis as Requirement Management method in Medical Device Development

{fastsocialshare}

Requirements Management is strange. It is a well-researched area which each year yields an impressive number of articles, conferences, and known best practices. Still, this body of knowledge remains remarkably underused by the people who would gain from it most. In many of the organisations I encounter, well-established requirements elicitation techniques are simply not applied.

Perhaps this has to do with the deceivingly simple task at hand. "I just need to write down what the device should be able to do. How hard can it be?". Hard enough it seems, if one considers the many reports stating how mismanaged requirements lead to enormous costs down the line.

This is exactly what Prof. Dr. Samuel Fricker and his team have established in their paper "Requirements Engineering: Best Practices" (2015).

He concludes that although each and every one of the 419 participating organisations "...elicited, planned, analysed, specified, checked, and managed requirements...", very few of them apply formal requirement elicitation techniques.

For those who did "...only three techniques correlated with requirements engineering success: scenarios of system use, business cases, and stakeholder workshops."

The common idea with these techniques is to:

  1. bring structure into the elicitation process
  2. involve many people in the discussion

Both parts are essential to success. Doing 1) without 2) misses out on critical knowledge in the organisation. Doing 2) without 1) is just wasteful. Personally, I have had particularly good results applying a variant of Use Scenarios called "Task Analysis / Task Risk Management" described by Andy Brisk. In short, this is a basic Task Analysis method, where processes are analysed step by step for requirements and risks.

Opposing the natural inclination of engineers to decompose a system into parts for analysis, this method focuses on how you use the device, i.e. the system is decomposed into the scenarios where the user interacts with the system.

In other words, we let use drive design.

Mr. Brisk lists a number of process examples that coincide well with those applicable to a generic medical device, such as:

  • Unpacking
  • Setup / Installation
  • Calibration
  • Startup
  • Daily Operation
  • Shutdown
  • Maintenance
  • Service
  • Alarms and Alerts
  • Decommission

If the found processes are too large to analyse, Mr. Brisk advises decomposing them further into manageable sizes.

Once the processes are identified, Mr. Brisk prioritizes the processes according to risk, in terms of:

  • Are there significant damaging consequences if the task is performed incorrectly? (High Severity of potential Harms)
  • Is there a reasonable likelihood that tasks will be performed incorrectly? (High Probability of Use Errors)

The idea of this risk-based prioritization is to focus on high-risk processes and analyse them early and thoroughly.
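
As a minimal sketch of what such a prioritization could look like (the processes and the 1-to-5 ratings below are invented example values):

```python
# Illustrative risk-based ordering of processes to analyse first.
# Ratings (1-5) for severity of potential harm and likelihood of use error
# are invented example values.
processes = [
    {"name": "Daily Operation", "severity": 5, "use_error_likelihood": 3},
    {"name": "Unpacking",       "severity": 1, "use_error_likelihood": 2},
    {"name": "Calibration",     "severity": 4, "use_error_likelihood": 4},
]

# Analyse the highest-risk processes first
for process in sorted(processes, key=lambda p: p["severity"] * p["use_error_likelihood"], reverse=True):
    print(process["name"], process["severity"] * process["use_error_likelihood"])
# -> Calibration (16), Daily Operation (15), Unpacking (2)
```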

The processes are now analysed step-by-step, preferably by a group of people. They imagine and discuss the steps an imaginary user goes through to perform a task. How is the system accessed? How does the user communicate his intentions to the system and vice versa? What potential errors might the users make?

Task Analysis

Coloured post-it notes can be used to describe the steps on a wall. A tip is to use different colours for process steps and identified potential use errors. This approach manages to create both a common vocabulary of describing the system and its use, as well as creating a common understanding of how the device is meant to be used. It also tends to highlight areas where people had a different understanding of how the system should be used (by simply measuring the loudness of the discussion).

The Task Analysis described above is an easy and straightforward way of eliciting requirements AND uncovering high-level risks in a usage context. What makes it particularly suitable for Medical Device manufacturers is that it combines risk and requirement elicitation in a single, common activity. Much too often, these two tasks are performed by separate groups in separate contexts.

A further bonus is that it intrinsically produces both Use Scenarios and Use Errors, which are substantial and essential parts of the Usability Management File.

Some important lessons learned using this method include:

  • Try to use a similar step granularity in all processes
  • Before starting, try to agree on what shall be considered "known" about the system
  • Define a clear Goal as well as Start and Endpoint for each process in order to declare the scope
  • Be generous with risks, rather too many than too few
  • Do not forget to write down the elicited Scenarios and Risks before you leave the room

Aligned Elements supports the documentation of Use Scenarios and their associated Use Errors as described above in our IEC 62366 Configuration. The Use Errors are applied in the overall High-Level Risk Assessment to drive further design decisions.

Do electronic signatures reduce cost for medical device manufacturers?

{fastsocialshare}

That medical device development entails a lot of documentation should not be a surprise to anyone. Hundreds of documents are created, reviewed, released, then modified, reviewed, and released again. The majority of these documents need to be signed, often by two persons or more. Collecting signatures, although it seems like a trivial task, becomes a significant nuisance when the number of documents and releases increases.

One of our customers insisted on having 3 people sign each test case before release. Their 18 000 test cases yielded a combined signature collection effort of 5 man-years (based on their own estimate of 30 minutes to collect a single signature).

It is rare to find medical device manufacturers that enjoy writing medical device development documents, but it is even rarer to find those who gladly spend their days collecting signatures for the said documents.

The obvious question is hence: how can we spend less time on document signatures?

For many medical device manufacturers, the equally obvious answer seems to be electronic (or digital) signatures.

So why is it so hard to get a signature?

There are several potential reasons for this. Maybe the Signer is a very busy person and simply has no time for this task. Maybe the Signer is away due to work, travel, vacation, or other reasons. Or perhaps the previous Signer did not pass the document on to the next Signer in line. Or maybe a formal signature sequence (order) is enforced by the document process in question, so that a Signer who actually is available is prevented from carrying out the task because a Signer further up the signing sequence has not yet fulfilled hers. Or it might be that the document in question cannot be signed before some other related document has been released (i.e. signed).

Thus, there can be formal reasons but also trivial reasons why a signature does not get timely collected.

The most trivial is of course that it is just hard to physically get the document in front of the Signer (or the other way around) for some reason or another. Once you get that far, the literal "stroke of the pen" is usually a quick affair. This is the perceived major efficiency benefit of Electronic Signatures. You do not have to physically get the document in front of the Signer. The document does not need to be passed around. The Signer can pull it up (from an E-Signing System) whenever he wants, from wherever he is. This allows a quasi-parallel execution of signatures. Two people on different sides of the planet can sign the same document at the same time (almost)! Costs associated with printing, sending, scanning, and storing the paper copy are eliminated. E-Signatures also bring increased security, enhanced authenticity, resistance to tampering, and accurate signature audit trails.

Before explaining how to introduce an E-Signing System, let me say a few words about Digital and Electronic Signatures.

Digital and Electronic Signatures

Even though the terms are often used interchangeably, there are some notable differences between the two concepts.

According to the FDA, an Electronic Signature is a compilation of data (user name/password, dongles, biometrics) that is unique to a person. It can be used to sign documents and is as legally binding as a "wet signature". The signature and its association with the signed entity (the document) are stored in a database of the Signature System. Furthermore, not all E-Signing Systems leave a visual mark on the signed document indicating that it has actually been signed.

Digital Signatures, on the other hand, require a Digital Certificate that ensures the identity of the signer. A part of that Digital Certificate gets embedded in the signed document during the signing process. As a result, the validity of the signature can be checked independently of the E-Signing System.

So, someone needs to guarantee the identity of the signer.

For Electronic Signatures, the organisation (the manufacturer) does this by using the validated E-Signing System.

For Digital Signatures, it is the issuer of the Digital Certificate that ensures the identity of the signer. Digitally signed documents often also contain a visible signature.

Obviously, there seem to be several advantages to using Digital Signatures. The validity of the signed document can be inspected independently of the E-Signing System, which is an advantage if the E-Signing System goes down, is corrupted or the system vendor goes bankrupt.
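
To illustrate why that independence holds, here is a minimal sketch using the Python cryptography package; it is a generic sign-and-verify example with an invented document and an on-the-fly key pair, not the workflow of any particular E-Signing System or Digital Certificate issuer.

```python
# Minimal sketch of a digital signature: the signature travels with the
# document and can be verified by anyone holding the signer's public key,
# independently of the system that produced it. Generic example only.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# In practice the key pair comes with a Digital Certificate issued by a
# certificate authority; here we generate one on the fly for illustration.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

document = b"Verification Report VR-042, released 2021-03-01"   # invented content
signature = private_key.sign(document, padding.PKCS1v15(), hashes.SHA256())

# Verification needs only the document, the signature, and the public key.
# It raises InvalidSignature if the document has been tampered with.
public_key.verify(signature, document, padding.PKCS1v15(), hashes.SHA256())
print("signature is valid")
```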

Any drawbacks?

Yes, a few. Obtaining a Digital Certificate from a third-party vendor is expensive and requires an administrative effort. There is the obvious question of where and how to store these certificates, as well as how to associate them with the user. They also have the annoying habit of expiring after a while and therefore need to be renewed regularly. People also tend to marry and change their names, etc., which also leads to renewals. Furthermore, it is not guaranteed that the validity shows up correctly in third-party viewers (like Acrobat PDF Reader), for technical reasons having to do with root certificates.

An organisation can circumvent all this by issuing its own Digital Certificates. This is somewhat of an IT "adventure" but it can be done. Costs can then be lowered somewhat but there is still a significant administrative effort. Moreover, internally generated Digital Certificates can of course not be validated by third-party viewers (like Acrobat PDF Reader).

So, there are pros and cons with both options.

However, they have several similarities, and most important of all, both methods are recognized by the FDA.

Let's do E-Signing!

Let's say we want to engage in eSigning (Electronic or Digital). What kind of effort can we expect to get this up and running?

Here is a shortlist of some of the steps:

  • Assess the E-Signing System for Part 11 / Annex 11 compliance
  • Qualify the E-Signing System Vendor as Supplier according to your QMS
  • Assign responsible roles and people for the E-Signing System
  • Install and configure the E-Signing System
  • Make or buy the Digital Certificates (if used)
  • Adapt your QMS to recognize E-Signatures and describe how they are intended to work
  • Prepare all the Document Templates to be used for E-Signing (the system needs to know where in the document the signature shall be placed: page number, location on the page, margins, spacing, etc.)
  • Validate the E-Signing System
  • Create E-Signature User Guidelines and train all users in how to use it
  • Notify the FDA (which is compulsory)

There is thus a non-negligible initial effort to set up the E-Signing System, and also an effort to keep it maintained, both from a process as well as from an IT perspective.

There are also several other things to consider before you decide to go down the Electronic Signature path:

Document Life Cycle

All documents have a life cycle and the signing is only a very small part of this process. You need to consider how the document gets into the E-Signing System, how it interfaces with other systems such as Document Management Systems, workflow engines, or e-Forms of which the document may be a part.

You also need to pay attention to how you plan to archive the electronic documents. This might seem like a trivial question, but there is more depth here than you think.

External Users

If external users (as in external to your organisation) are going to use the system, you need to prepare a process where they get access to the E-Signing System, including setting up a corresponding user in the system with the appropriate Digital Certificate if applicable. These external users also need to get trained in how to use the system.

Hybrid Signature Situations

Are you going to end up with documents that are partly signed electronically and partly with traditional "wet signatures"? If so, you need a described process for this as well.

Ownership

Last but not least you need to establish who has the ownership of the E-Signing System. Is it the IT Department that usually acquires and maintains IT systems? Or is it the R&D department that probably is the most frequent user of the system? Or is it the HR department that is concerned with the identity of the people working in the organisation? This needs to be clarified before you start.

Predicted Outcome

As mentioned, an E-Signing System will decrease the effort of placing the document in front of the Signer. It will reduce costs associated with transporting the paper copy of the document. It will also potentially increase the security and authenticity of the documents.

But there are things an E-Signing System cannot do. Regardless of how deep you entrench E-Signatures as a paradigm in your organisation, you will almost inevitably have a residual number of documents that are signed with ink. Thus, no matter how much you push E-Signatures you will end up with a hybrid system, composed of documents signed electronically and documents signed with ink. Be prepared for this.

Then, a Signature System is per se an IT system, with all the work that entails. It needs to be validated and maintained, people will repeatedly forget their credentials if they do not use the system frequently, and there will be the ubiquitous bugs and errors. All of this means increased costs that need to be compared and contrasted with the costs of using a manual system.

Finally, an E-Signature System does not make bad processes good just by digitizing them. Overloaded employees will still remain overloaded regardless.

In which situations do E-Signatures make sense?

E-Signatures make sense when signing is a routine operation i.e. when a user makes several signings per week. E-signing for the occasional (or maybe even singular) CEO signature on a Product Requirements Document does not warrant the effort.  

Document types that are well suited for E-Signatures are those that exist in many instances and that have a comparably small amount of actual content (as in "quick to read"). Examples of such document types are time reports, expense reports, purchasing approvals, and test case documents.

Last but not least, maintenance is of course made easier if all Signers are part of the organisation (as opposed to involving multiple external users).

Efficient with and without E-signatures

If you find it cumbersome to collect signatures today, there are several ways you can scrutinize your organisation for efficiency improvements.

Analyse current signing process

Are all these signatures really necessary? Ask yourself why they were added ("It is required by our process" is not a valid answer) and most importantly, what does the particular signature mean? In what way does a particular signature make the document "better"?

Don’t get dependent on busy Signers

The overloaded Project Manager or CTO that never has time for signing is a common bottleneck in many organisations. Appoint deputies to all signing functions (the deputies shall also have deputies). Try to avoid sequentially forced signature sequences. They cost more than they bring. Finally, simply planning the signing occasion like a regular meeting (set up a meeting in the calendar) might yield some good results.

I hope this post has highlighted some of the pros and cons of employing Electronic Signatures. If there is anything I want you to take home it is probably this:

  • Signature efficiency stands and falls with the process, not the system
  • Analyse and improve the process first!!
  • E-signatures can be very beneficial in specific situations
  • E-signature gains (i.e. speed gains) must be weighed against costs

Aligned Elements supports electronic as well as digital signatures of documents with automatic relaying to external Document Management Systems.

If you would like to get a demonstration of e-Signatures in Aligned Elements, just let us know.

Test Management in Medical Device Development

{fastsocialshare}

Test managers often have to deal with superhuman juggling of timelines, resource allocations and continuously changing specifications while facing increasing pressure from management as the shipping-date draws closer. Furthermore, the test manager is responsible for making sure that the traceability is completed, that test data integrity remains intact, and that change management procedures are followed correctly, even under situations of extreme stress.

Efficient change management, planning, tracking, and reuse of test executions are therefore much-appreciated tools in the V&V Manager's toolbox. The Aligned Elements Test Run aims to address these challenges. It manages the planning, allocation, execution, and tracking of test activities. Let's dive into the details.

Planning the Test Run

The Test Run Planning section is the place to define the Test Run context, much as you would write a Verification Plan.

The Test Run Planning information includes:

  • What to test i.e. information about the Object Under Test and the addressed configurations
  • When to perform the tests i.e. the planned start and end date for the Test Run
  • Who participates in the test effort i.e. the team of testers and their allocated work
  • How to test the device i.e. which test strategy to use by selecting among the existing test types
  • Why the test is being done i.e. the purpose or reason for performing this particular set of tests on this Object Under Test

Quality Assurance in Aligned Elements

Allocate existing Test Cases from your Test Case library to the Test Run by using simple drag-and-drop. You can use any number of tests of any available types from any number of projects. If needed, add Test Run specific information to the Test Cases and designate the Test Cases to the members of the Test Team.

Once the planning phase is completed, the Test Execution can begin.

The Test Execution phase 

Test Case data is kept separate from the Test Execution result data, permitting a high degree of test case reuse and a clear separation between Test Instructions and Test Results.
Consistency checks are optionally carried out on the Test Case before execution in order to ensure that tests cannot be performed until all foreseen process steps are completed.
During execution, the test input data is optionally kept read-only, preventing testers from modifying a reviewed and released Test Case during execution.
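
As a generic illustration of this separation (simplified, and not the actual Aligned Elements data model), test instructions and test results can be modelled as separate records that reference each other:

```python
# Generic illustration of keeping test instructions separate from results,
# so one released Test Case can be executed many times. Simplified example,
# not the actual Aligned Elements data model.
from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)          # frozen: instructions stay read-only once released
class TestCase:
    case_id: str
    steps: tuple
    expected: tuple

@dataclass
class TestExecution:
    case_id: str                 # reference to the (unchanged) Test Case
    tester: str
    executed_on: date
    results: list = field(default_factory=list)   # one pass/fail entry per step

tc = TestCase("TC-12", ("Power on device", "Press self-test"), ("LED green", "Self-test passes"))
run1 = TestExecution(tc.case_id, "A. Tester", date(2021, 5, 3), ["pass", "pass"])
run2 = TestExecution(tc.case_id, "B. Tester", date(2021, 6, 1), ["pass", "fail"])
```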

All Test Team Members can access the Test Run and simultaneously Execute Tests as well as continuously monitor how the Execution phase progresses.

Test Run Progress Bar

Real-Time Test Execution data is presented through:

  • Individual Test Execution results including any found defects as well as colour-coded feedback on states
  • Colour coded test progression statistics, with the possibility to drill down on e.g. individual Testers or Test Types
  • Burndown charts, showing how planned Test progress over time corresponds to the actual progression

Defect Tracking

During Test Execution, Defects and Anomalies can be created and traced on-the-fly without having to leave the Test Execution context. The Defects can be tracked in Aligned Elements' internal Issue Management system, in already integrated Issue Trackers such as Jira, TFS, GitHub, Trac, or Countersoft Gemini, or in any mix of these systems. Created Defects and their corresponding status are displayed in the Test Run.

TestCaseList

Test Case Change Management

When developing medical devices, it is of paramount importance to keep your Design Control under tight change control.
The Test Run assists the testers and test managers in several ways to accomplish this goal, including the following optional measures:

  • Preventing inconsistent tests from being executed
  • Preventing input data from being modified during test execution
  • Allowing Test Managers to lock/freeze Design Input and Tests during execution
  • Alerting testers when they attempt to modify tests for which valid results already exist
  • Signalling whether a Test Case has been reviewed and released or not
  • Allowing the user to explicitly invalidate existing test results when a test is updated with major changes

Testing Variants of the Object Under Test

If several variants of the Object Under Test exist, it is sometimes desirable to create variant-specific test results for common test cases and subsequently create separate traceabilities for the variants. The Test Run uses a concept called "Configurations" to achieve this behaviour. A Test Case is executed for one or more Configurations to make sure that variant-specific test results are kept separate.

The exact data composition of a Configuration is customizable to fit the needs of each customer.

Complete the Test Run

Once all Test Cases have been completed, the Test Run and all its content are set to read-only. Optionally, a Snapshot of all Test Run relevant items is created as part of the completion procedure. A Test Run report containing all the necessary Test Run information can be inserted into any Word document using the Aligned Elements Word Integration with a single drag-and-drop action.

The Test Run is a Test Manager's best friend, providing the flexibility needed during test planning and full transparency during test execution, making it possible to react quickly as real-time test events unfold.

Note: Burn Down Charts are under development at the time of writing and are planned for release in the next service pack.

Cybersecurity Risk Assessments for Medical Devices - 5 important aspects

{fastsocialshare}

The medical device as a stand-alone product is a waning concept.

The desire to make use of the information collected in a medical device in other health systems, coupled with recent advancements in networks and interconnectivity has resulted in more devices being connected to the Internet. As a consequence, they have become more vulnerable to hackers.

A 2015 KPMG survey found that 81 percent of health care organizations had their data compromised within the previous two years.

The FBI reports (FBI Cyber Division PIN (Private Industry Notification) #140408-010) that due to the transition from paper to electronic health records, lax cybersecurity standards, and higher financial payouts for medical records on the black market, the "cyber actors will likely increase cyber intrusions against health care systems".

During 2016, several ransom-attacks were launched against health institutions in the United States with wide-ranging consequences. 

FDA has shown heightened interest in cybersecurity issues and released three guidelines during the last two years. Medical Device manufacturers are likely to focus more on cybersecurity risk management activities in the near future and assign additional resources accordingly.

One integral part of the cybersecurity risk management process constitutes the risk assessment.

Performing risk assessments is a core activity in the medical device industry and many of the available techniques are well-known and well-used by industry professionals. Having long and thorough experience of risk management might lead the same professionals to believe that cybersecurity issues can be harnessed by the tools already at hand.

There are, however, key aspects where cybersecurity risks differ from traditional medical device risks.

Risk identification and the building blocks of a cybersecurity risk

The fundamental challenge during risk identification is to ensure that all relevant risks have been identified.

Verifying that this criterion has been fulfilled is often difficult, and structural help and practical advice are therefore very welcome.

Due to the wide variety of possible medical device types, ISO 14971, the standard for medical device risk management, understandably has a hard time defining concrete identification techniques that are relevant enough to provide value for every kind of medical device. 

IT risk management, operating in a narrower technical scope, provides a host of techniques, tailored to this domain. Many of these techniques are based around the asset-vulnerability-threat model. 

These components can be described as follows.

  • Assets are the entities a cyber-intruder attempts to access and control. An asset has a value to the patient and therefore also to an intruder. In a safety-related context, we can exemplify assets as device configurations, health data, medical device functions, or battery power.
  • Vulnerabilities represent weaknesses in the medical device that, when exploited, can give an intruder access to an asset. Software bugs, application design flaws, and insufficient input validation are examples of software vulnerabilities. But vulnerabilities can also be found in hardware, business processes, organizational structures, and interpersonal communication.
  • A threat is defined as an event with the potential to have an adverse impact on an asset. The threat is executed by a threat agent (i.e. an intruder) exploiting a vulnerability in order to access an asset.

A combination of these building blocks describes a cybersecurity risk. 
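
As a minimal sketch of how such a risk could be recorded (the field names and the example values are invented for illustration):

```python
# Illustrative record of a cybersecurity risk built from the three
# building blocks described above; field names and example values are invented.
from dataclasses import dataclass

@dataclass
class CybersecurityRisk:
    asset: str            # what the intruder is after
    vulnerability: str    # the weakness that gives access to the asset
    threat: str           # the event/agent exploiting the vulnerability
    potential_harm: str   # the clinical consequence if the threat succeeds

risk = CybersecurityRisk(
    asset="Infusion rate configuration",
    vulnerability="Unauthenticated maintenance interface",
    threat="Remote attacker changes the configured rate",
    potential_harm="Over-infusion of medication",
)
```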

As a starting point, the manufacturer should be able to enumerate the assets of the given device. Analyzing how these assets can be targeted, e.g. by performing a threat modeling analysis, using a data flow analysis of the application, and identifying "data-at-rest" (data storage) and "data-in-motion" (data transfer), will further help the manufacturer identify vulnerabilities and threats particular to the medical device.

Bottom-line: Identifying assets, threats, and vulnerabilities will support the manufacturer when enumerating potential cybersecurity risks. The identified items should be listed in the risk management documentation.

What is the probability of a cyber attack?

The Medical Device industry takes its risk assessment cues from ISO 14971, which defines risk as the combination of the probability of the occurrence of harm and the severity of that harm. 

The computer security industry, on the other hand, has used several kinds of assessment methods to estimate the "riskiness" of cybersecurity risks. None of them rely solely on the probability and severity of an event. Instead, these techniques estimate and/or quantify aspects of the assets, threats, and vulnerabilities that in combination say something about the computer security risk. 

So how can the medical device risk assessment methods concerned with the safety of patients be connected with techniques whose primary concern is information security?

In the AAMI paper "TIR 57 - Principles for medical device security - Risk Management", the authors address this discrepancy and attempt to transpose the cybersecurity line of thinking into the domain of ISO 14971. 

It is easy to see how the "potential adverse effect" can be regarded as analogous to the "Harm" in ISO 14971 and be quantified with a corresponding severity.

The probability factor in ISO 14971 does not have a direct equivalent in the cybersecurity risk domain. A composite factor called "exploitability", combining characteristics of the vulnerability, the threat agent, and the medical device itself, is mentioned in the FDA guidelines. This factor is intended to indicate the amount of work required to mount a successful attack.

The AAMI TIR57 group suggests a two-pronged approach to establish something similar, combining two likelihood factors.

The first factor, the “Threat Likelihood”, defines the likelihood of the threat agent having the motivation, skills, and resources to exploit a given vulnerability.

The second factor estimates the likelihood of harm being the effect of an exploited vulnerability. These two factors in combination make up the probability of the cybersecurity risk.

Note that this approach is similar to the P1 (probability of the hazardous situation occurring) / P2 (probability of the hazardous situation causing harm) approach frequently used in the medical device community.
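
A minimal sketch of how the two likelihood factors could be combined with a severity rating is shown below; the numbers and the combination rule are assumptions for the example, not the method prescribed by AAMI TIR57 or the FDA guidance.

```python
# Illustrative combination of the two likelihood factors described above.
# The values are invented example estimates, not a prescribed method.
def cyber_risk(threat_likelihood: float, likelihood_of_harm: float, severity: int):
    """
    threat_likelihood:  likelihood that an agent exploits the vulnerability (per year)
    likelihood_of_harm: likelihood that a successful exploit leads to harm
    severity:           severity of that harm on the manufacturer's own scale (1-5)
    """
    probability_of_harm = threat_likelihood * likelihood_of_harm   # analogous to P1 x P2
    return probability_of_harm, severity

p, s = cyber_risk(threat_likelihood=0.1, likelihood_of_harm=0.01, severity=4)
print(p, s)  # 0.001 per year, severity 4 -> evaluate against the risk acceptance matrix
```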

Information security theory recognizes that aspects and characteristics of the threat agent, the vulnerability, and the device itself drive these factors.

Bottom-line: caution shall be taken when estimating probability/likelihood/exploitability for cybersecurity risks. The manufacturer's Risk Management SOP shall address this concern accordingly and be adapted if necessary.

The drivers of cybersecurity risks

During classic medical device risk assessments, the identified causes leading to hazardous situations and potentially to harms are in most cases accidental. Although malicious use should be considered, this area often receives considerably less attention than harms caused by accidental events. 

For cybersecurity risks, the opposite is true. Here, a significant factor in the causal chain constitutes the intentional behavior of a threat agent causing harm by exploiting a vulnerability.

Whereas the software flaw might have been created accidentally, exploiting it is an intentional act.

A malicious agent may come in many shapes and forms, such as a criminal organization, a competitor, or disgruntled employees. Each of these has its own motivation, skill set, and reach when it comes to detecting and exploiting vulnerabilities, which affects the likelihood of a vulnerability being exploited as we have seen above.

Therefore, not only does the focus need to shift from accidentally caused risks to intentionally caused risks; the manufacturer also benefits from analyzing the characteristics of the malicious intruder in order to perform a proper risk assessment.

Bottom line: Manufacturers will need to shift perspective from accidentally caused risks to explicit maliciously caused risks during the identification and classification of the risk assessment.

The poisoned SOUP

Just like in the rest of the software industry, medical device companies use third-party libraries to increase productivity when developing medical devices.

These third-party libraries (or frameworks or applications), sometimes referred to as "SOUP" components (Software of Unknown Provenance) are of course also subject to cybersecurity scrutiny.

It is debatable if third-party components, and in particular open source components, are inherently more unsafe than proprietary produced applications (there are several arguments against such claims).

Regardless of this claim, it can be safely assumed that many of these libraries were not developed with a medical device cybersecurity context in mind. The security firm Contrast Security reports that 26% of downloaded SOUPs contained known vulnerabilities and were still applied.

It shall be noted that SOUPs themselves often rely on and integrate third-party software, which increases the scope of the problem accordingly.

The medical device manufacturer is responsible for any vulnerabilities caused by SOUPs in his device and must therefore analyse his SOUPs accordingly.

This includes a systematic inventory of SOUPs, actively collecting information on current vulnerabilities in SOUPs, and applying timely updates of the SOUPs when they become available.

Bottom-line: The manufacturer must have a clear plan for how to handle potential cybersecurity risks in SOUPs.
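
A minimal sketch of what a systematic SOUP inventory check could look like follows; the component list and the vulnerability entries are invented, and in practice the advisory information would come from vendor bulletins or a vulnerability database rather than a hard-coded dictionary.

```python
# Illustrative SOUP inventory check. Components and vulnerability data are
# invented; in practice the advisory data would come from vendor bulletins
# or a vulnerability database, not a hard-coded dictionary.
soup_inventory = [
    {"name": "jsonlib",  "version": "2.4.1"},
    {"name": "netstack", "version": "1.0.9"},
]

known_vulnerable = {
    ("jsonlib", "2.4.1"): "CVE-XXXX-0001: buffer overflow in parser",
}

for component in soup_inventory:
    key = (component["name"], component["version"])
    if key in known_vulnerable:
        print(f"{component['name']} {component['version']}: {known_vulnerable[key]} -> plan an update")
    else:
        print(f"{component['name']} {component['version']}: no known vulnerability on record")
```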

Skills required to assess cybersecurity risks

Cybersecurity vulnerabilities are largely made up of software flaws. Understanding the technical details of how such vulnerabilities come into being, the technical influence they have, and how they can be mitigated requires precise software knowledge. Including software developers in the assessment group is, therefore, a highly recommended measure.

It is a misconception that all software professionals are software security professionals. The majority of today's information security problems can be traced to flaws in code, yet many software developers lack basic training in and understanding of cybersecurity.

As already mentioned, the security scope for an interconnected medical device is much larger than the device itself. The network infrastructure in which connected medical devices operate is often outside the control of the medical device manufacturer but still has a great impact on the overall risk exposure and needs to be included in the risk assessment.

It is likewise a misconception that cybersecurity is a strictly technical field. Cybersecurity vulnerabilities are not exclusively found in hardware and software but also in business processes, organizational structures, human behavior, and the environment in which the device operates. A thorough understanding of where, how, and by whom the medical device will be operated is therefore important input for the risk assessment.

Last but not least, clinical experts are needed to estimate the potential harm of compromised availability, confidentiality, and integrity of the associated data.

All these competencies are required to perform a comprehensive cybersecurity risk assessment, and the task clearly stretches beyond being a purely internal R&D activity.

The risk assessment benefits from involving stakeholders beyond the immediate software development team. If specific cybersecurity expertise is lacking in the organization, the manufacturer should consider employing or training its own experts, or outsourcing these functions to a competent partner.

Bottom-line: the knowledge required to perform a medical device cybersecurity risk assessment is both broader and deeper than what is often immediately available in many medical device companies. 

Conclusion

The medical device industry has extensive experience with risk management and risk assessment techniques.

The cybersecurity dimension of medical device development is an additional aspect where risk assessments can enhance safety.

However, traditional medical device risk assessment techniques need adaptation in order to successfully be applied to cybersecurity risks.

Cybersecurity risks have different characteristics and drivers than "classic" medical device risks and are best mitigated by recognizing these differences.

The inside story of an MDSAP audit


One audit to rule them all. 

Sounds good, doesn't it?


If you have spent a lot of time in audits lately, then you are certainly not alone. More and more company resources are devoted to a continuous string of auditing activities. The Medical Device Single Audit Program is an initiative by the IMDRF intended to curb the audit avalanche.  

But does the promise hold? Or is it too good to be true?

Read this inside story from a recent MDSAP audit experience.

Start your road to efficient medical device documentation here!


You spend too much time and money on documentation.

Agreed?

Good.

So what is your plan?

The regulations of the medical device industry require us to produce a pretty hefty chunk of documentation to show that the device is safe and efficient. If the documentation is not compliant, then it does not matter how safe, secure, and performant the device itself is. Therefore, the documentation aspect that receives the largest share of attention is compliance. The most common path to a compliant stack of documents is to throw heaps of man-hours at the problem. Quantity seems to be the weapon of choice in many firms.

As a consequence, a large number of people get involved in the documentation creation and maintenance, especially people residing in the R&D part of the organization as most companies do not have Document Officers or documentation experts in their organization.

However, engineers and scientists are not necessarily the best writers (and they probably have no ambition to be). Staff who are needed for other tasks struggle to find the time to write these documents, and so the documents they do produce may be of lower quality than their usual work.

The effect on these people is often a suffocating feeling of inefficiency and the frustration of spending a disproportionately large part of the working day on menial documentation tasks, deciphering SOPs and impractical standards to compile documents that no one reads (apart from the auditor).


Our studies show that up to 30% of the total project effort is spent on documentation required by regulations. There is also an overwhelming consensus in the industry that this is far too much. Money and time are inefficiently spent and morale buckles as the workload increases.

The good news is that these problems can be fixed. The bad news is that you are going to lose time, money, and people until you fix them.

Excelling at documentation efficiency is not intuitive to many R&D-centric organizations. However, considering the situation described above, good documentation practices are an investment. In the medical device industry, they are even a competitive dimension.

Your company can be efficient!

Some good starting points are:

  • Make documentation efficiency a prioritized objective using measurable goals
  • Assign a responsible Manager
  • Set up a tight collaboration between the people writing templates and SOPs (Quality people) and the people using the templates and SOPs (R&D people)
  • Analyze your documentation processes
  • Apply the right software tools to automate documentation tasks

You can start right away!

Download our Medical Device Documentation Self-assessment paper and take a few minutes to complete it. We assure you that you will have started the road to more efficient documentation within the next 15 minutes!

Med Tech Industry Study: "Quality and documentation requirements" ranks as top challenge


This year's Swiss Medtech Industry (SMTI) sector study surveyed more than 340 medical device companies, uncovering the current opinions and views in the Swiss Medtech market.

The study claims that "the ever-increasing quality and documentation requirements" and the difficulty to "preserve innovative capacity" are the primary challenges to remaining competitive in the current market.

For those of us working in the industry for some time, this does not come as a surprise.

Knowing the dynamics operating in medical device development, it is obvious how these two challenges inter-relate. How is it possible to sustain a competitive and innovative R&D program when more and more of the development effort is devoted to satisfying regulatory and documentation requirements?

Our own studies show that up to 30% of the total medical device development effort is spent on documentation tasks. As in many other industries, the digitalization of processes offers some prospects of streamlining the medical device documentation burden.

Our users say that significant efficiency improvements can be achieved by applying the right tools. Download your free copy today or ask us for a free online presentation.


What is lurking in the Design History File? - a manufacturer's perspective


Generating the medical device development documentation that makes up the product's Design History File accounts for up to 30% of the total development effort. Investing such substantial resources ought to result in top-notch quality and very few errors in the documentation.

Still, "Documentation not available" and "Documentation not adequate" are frequently cited deviations in FDA Warning Letters. The reason for this flood of FDA 483 observations and Warning Letters, addressing seemingly obvious and simple errors, is not that medical device manufacturers are ignorant or incompetent.

The documentation requirements are many and detailed, and development projects often span a long time period and involve a large number of changing team members, all contributing to the large set of deliverables that make up the Design History File / Technical File. The deliverables are highly interdependent, and a small change can cause unexpected ripple effects across large parts of the documentation.


This combination of large volume, frequent changes, and a high level of document interdependency makes it impossible for any single employee to get a comprehensive view of the current documentation consistency at a particular point in time.

This is where automatic checks by a dedicated computer system really help out. To illustrate this point, let us share some insights from a medical device manufacturer that ported existing Design Control documentation into our Medical Device ALM Aligned Elements and analyzed it using the automatic consistency checks.

Seeing is believing

"We found inconsistencies in the areas of design control structure, missing and inadequate traces as well as formal aspects of the risk management." says the responsible engineer. "We knew about the problem with the risk management and did not have too much confidence about the traces, but the other inconsistencies were new to us".

Applying the sort of systematic, automated analysis and support a tool can offer not only unveils gaps in the documentation; it also makes deviations from formal documentation requirements explicit. "In the past, we did not work with a tool and then you have more freedom to find solutions, which in themselves are not wrong per se, but at the same time may be a bit outside of the ideal structure," he continues. "As time goes by, these design control documents are modified and deviate further and further away from the ideal structure."

This is a typical way in which inconsistencies undetectably find their way into the documentation. No up-front errors are committed, but over time, the collective aggregation of minor, grey-zone adaptations undermines the overall documentation coherence. In the end, the consistency of the documentation is unknown and the confidence in the work deteriorates.

"We might have underestimated the benefit of adhering to a clear documentation structure. At the moment, we know that we have structural points to fix. When this is done we can focus on the content. I am confident that, in the end, it will be easier to defend the documentation when the formal aspects are not in question.", concludes the customer.

FDA List of Highest Priority Devices for Human Factors Review


FDA keeps pushing the topic of the importance of Human Factors and Usability Engineering, trying to maximize the likelihood that new medical devices will be safe and effective for the intended users, uses and use environments. 

The FDA's draft guidance List of Highest Priority Devices for Human Factors Review, issued in February 2016, enumerates a number of device types for which human factors data should be included in premarket submissions. This draft guidance is currently under review. The submitted human factors data should include a report that summarizes the human factors or usability engineering processes the manufacturer has followed, including any preliminary analyses and evaluations, human factors validation testing, results, and conclusions.

  • Ablation generators (associated with ablation systems, e.g., LPB, OAD, OAE, OCM, OCL)
  • Anesthesia machines (e.g., BSZ)
  • Artificial pancreas systems (e.g., OZO, OZP, OZQ)
  • Auto injectors (when CDRH is lead Center; e.g., KZE, KZH, NSC )
  • Automated external defibrillators (e.g., MKJ, NSA )
  • Duodenoscopes (on the reprocessing; e.g., FDT) with elevator channels
  • Gastroenterology-urology endoscopic ultrasound systems (on the reprocessing; e.g., ODG) with elevator channels
  • Hemodialysis and peritoneal dialysis systems (e.g., FKP, FKT, FKX, KDI, KPF, ODX, ONW)
  • Implanted infusion pumps (e.g., LKK, MDY)
  • Infusion pumps (e.g., FRN, LZH, MEA, MRZ )
  • Insulin delivery systems (e.g., LZG, OPP)
  • Negative-pressure wound therapy (e.g., OKO, OMP) intended for use in the home
  • Robotic catheter manipulation systems (e.g., DXX)
  • Robotic surgery devices (e.g., NAY)
  • Ventilators (e.g., CBK, NOU, ONZ)
  • Ventricular assist devices (e.g., DSQ, PCK)

According to the FDA, these devices were selected because they have clear potential for serious harm resulting from use error.


However, for device types not on the list, the guidance is very clear that submissions should contain human factors data if an analysis of risk indicates that users performing tasks incorrectly, or failing to perform tasks, could result in serious harm. This is a strong indication that, regardless of device type, Human Factors engineering, Usability Engineering, and the documentation of detected and mitigated risks caused by Use Errors will receive increased attention from the agency in the future.

The Aligned Elements IEC 62366 configuration is available for download in our extension library.

For a live demonstration of the Aligned Elements IEC 62366 configuration, please contact us to set up an appointment.

IEC 62304: Unit tests are not Unit tests


Sometimes when I ask development teams about their Software Unit verification, they fire up Nunit (or MSUnit or some other unit test tool) and say “Look here, we have more than 1800 unit tests with a test coverage way over 70%! Regarding Software Unit tests, we are certainly well covered!”

It is obvious to readers that the guys who wrote IEC 62304 explicitly tried to stay away from established software notions. You don’t see any “components”, “classes”, “modules”, “diagrams”, “dlls” or “objects” in the text (at least not in the English version). The authors did this on purpose in order to stay out of the nitty-gritty details of software development, not to make any technical prescriptions, and to avoid a minefield of interpretations of concepts that have existed for decades.


That’s why the norm speaks of software development in tech- and methodology-neutral terms. We have “architecture”, “items”, “interfaces” and “versions”. And, unfortunately, “units”.

This is just about the only notion where there seems to be confusion. Maybe not about units per se, but certainly about “Software Units verification”, which in some companies is translated to “Unit Testing”.

What is a “Unit”?

Unit testing was popularized by the practice of Test-Driven Development and extreme programming in the late ’90s. JUnit was the first xUnit testing framework to become widely used, and the concept was reused for other development stacks. The xUnit testing frameworks have since evolved and are today widely used for automated testing in many industries, including the medical device industry. Over time, the frameworks have become increasingly versatile and are now used far beyond the original scope of classical unit testing.

IEC 62304 defines the Software Unit as a Software item “not subdivided into other items”. According to the standard, it is up to the manufacturer to decide the granularity of items and therefore also the criterion for divisibility, making the definition somewhat arbitrary. Members of the medical device community have through lengthy discussions tried to agree on a practical interpretation of what a "Software Unit" is. Suggestions include:

  • Class
  • File
  • Package
  • Namespace
  • Module

Unfortunately, there seems not to be a commonly accepted definition. This leaves us, the implementers of the standard, with much freedom and little support. 

Regardless of the manufacturers' definition of the “Software Unit”, the standard does require the unit to be “Verified”. Again, the IEC 62304 leaves it up to the manufacturer to define according to which acceptance criteria and through which means the verification shall be conducted.

Note that the verification of a Software Unit does not necessarily have to be a test (depending on the software safety classification). The verification strategy can be a walkthrough, inspection, review, results from static testing tools, or anything else that is valid and appropriate for the Software safety classification of the unit. Furthermore, the verification strategy, methods, and acceptance criteria are permitted to vary from Unit to Unit as long as this is appropriately documented.

Let’s now move the focus from the definition of a “Software Unit” in IEC 62304 to the definition of a “Unit” in classical "Unit Testing". Wikipedia suggests several definitions (which are all variants of the word “small”) but concludes that at the business end of unit testing, we find a piece of code that can be tested in isolation. This is in general a much finer level of granularity than "Software Units" in IEC 62304. Whereas a “Software Unit” in IEC 62304 is an architectural building block, a “Unit” in Unit Testing is simply something that can be tested in isolation with no explicit relation to the software architecture.    

Thus, to summarize so far:

  1. A “Unit” as in Unit Testing is not the same thing as a “Software Unit” in IEC 62304.
  2. A Test being executed in an xUnit Testing Framework does not automatically make it: 
    1. a classical Unit Test
    2. a Software Unit Verification.
  3. A Software Unit can be verified by other means than tests (depending on the software safety classification).

 “But I really want to use my xUnit Tests for Software Unit Verification! What do I need to do?”

It is of course entirely possible to use unit tests run by an xUnit Testing Framework (or manually if preferred) as a verification method for Software Units. However, the tests themselves are only a part of the required deliverables. Be careful to:

  • Document the test strategies, the test methods, and the procedures used.
  • Evaluate the procedures for adequacy and document the results.
  • Establish and document acceptance criteria (more stringent for class C than for A and B; see IEC 62304)
  • Perform the tests and document the result.
  • Explicitly evaluate if (and document that) the results fulfill the acceptance criteria.
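
As an illustration, here is a minimal NUnit-style sketch of a test used as part of Software Unit verification; the unit, the acceptance criterion, and all identifiers are invented for this example and would in practice come from your own plans and SOPs:

    using NUnit.Framework;

    // Hypothetical Software Unit under verification: a dose-limit check.
    public static class DoseLimitChecker
    {
        // Returns true if the requested dose is positive and within the configured hard limit.
        public static bool IsWithinLimit(double requestedDose, double hardLimit) =>
            requestedDose > 0 && requestedDose <= hardLimit;
    }

    [TestFixture]
    // Traceability note (hypothetical IDs): verifies Software Unit SWU-042,
    // per Unit Verification Plan UVP-007, acceptance criterion AC-3:
    // "Doses above the configured hard limit shall be rejected."
    public class DoseLimitCheckerVerification
    {
        [TestCase(5.0, 10.0, ExpectedResult = true)]   // nominal dose accepted
        [TestCase(10.0, 10.0, ExpectedResult = true)]  // boundary value accepted
        [TestCase(10.1, 10.0, ExpectedResult = false)] // above limit rejected (AC-3)
        [TestCase(-1.0, 10.0, ExpectedResult = false)] // invalid input rejected
        public bool RequestedDose_IsCheckedAgainstHardLimit(double dose, double limit) =>
            DoseLimitChecker.IsWithinLimit(dose, limit);
    }

Note that the test code itself only addresses the last two bullets; the strategy, its evaluation for adequacy, and the explicit comparison of results against the acceptance criteria still have to be documented alongside it.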

Learn more about how Aligned Elements can help you support IEC 62304

Request a live demo and let us show you how Aligned Elements can help you to comply with IEC 62304

Mobile Health Apps - Which federal laws do I need to follow?


The Mobile Health Application market is glowing hot at the moment. New players are entering the market from left and right. Traditional blue-chip players are moving in and new start-ups are popping up by the day. However, the medical device market is a regulated market, and finding out which laws apply to your mobile health app might not be obvious. Start-up companies may not even realize that their apps are subject to FDA regulations.


For those unsure about the legal status of their mobile health apps, the US Federal Trade Commission has provided an excellent interactive online tool.

Take it for a spin right here! 

Learn more about how Aligned Elements can help with achieving regulatory compliance for your app

Request a live demo and let us show you how Aligned Elements can manage your documentation for your app

ISO 13485:2016 is finally here!

On March 3rd, the 2016 revision of ISO 13485 was finally released. The new revision is essentially an evolution of the 2003 revision and includes a number of changes and clarifications.

For Aligned Elements users, a change in section 4.1.6 might have some important implications. Whereas ISO 13485:2003 did not explicitly require computer systems used in the quality management system to be validated, the 2016 edition certainly does. ISO 13485 is now finally on par with FDA 21 CFR 820 on this matter.


Computer Software Validation (CSV) is used to ensure that each computer system fulfills its intended purpose. It prevents problems with the software from reaching the production environment. CSV is used in many regulated industries and is today regarded as good manufacturing practice.
Aligned Elements certainly falls into the category of computer systems that must be validated according to ISO 13485:2016 and FDA 21 CFR 820. If you do not have a CSV process in place, we have some things that may help you.

Why is Aligned Elements not validated by Aligned AG?

Aligned Elements falls into GAMP 5 software category 4, "Configurable Software". AE is highly configurable software intended to map the customer's QMS, as opposed to forcing the customer to change their processes and templates to match Aligned Elements.
When validating Category 4 software, it is of course the particular configuration of the software that is validated. Since each Aligned Elements customer uses a different configuration (each customer has its own individual QMS), we cannot foresee which configuration our customers will use.
As mentioned, we do supply a number of useful tools to make the validation process faster.

What do I need in order to validate Aligned Elements?

Even though ISO 13485:2016 and FDA 21 CFR 820 require computer systems used in the quality management system to be validated, they do not explicitly describe how to do it. This is up to each organisation to decide.
Without claiming to be a complete list, you need at least the following:

  1. A Standard Operating Procedure (SOP) that describes how Computer Software is validated in your organisation
  2. A validation plan, that describes:
    • INTENDED USE of the computer system in question
    • WHAT are you validating, i.e. which name, version, and configuration of the software including manuals and supplier information and target environment
    • WHY are you validating this software to this particular extent (hint: the GAMP software categories are a good starting place. A risk-based approach is also a viable starting point.)
    • WHO is responsible for the software, for the maintenance of the software and who is responsible for the validation
    • HOW do you intend to validate the software, what the acceptance criteria are, and in particular how do you intend to handle software errors detected during the validation
    • DELIVERABLES i.e. the documentation you intend to provide
  3. Requirements and/or use cases describing how you intend to use the Computer Software.
  4. Tests verifying the use cases and documenting any deviations found.
  5. A Validation Summary or Report stating the result of the validation, preferably with some kind of traceability, and whether the Computer Software should be cleared for use in the production environment

To make this process a bit simpler, we provide our customers with a pre-filled example validation kit of Aligned Elements that can be adapted to the individual needs of each organization.

Can Aligned Elements be used to document the validation of other systems?

Absolutely. Aligned Elements is very well equipped for documenting the validation of other systems (provided AE has been validated, of course). AE supports all steps of the CSV process, from evaluating the validation extent via checklists or using a risk-based approach, documenting requirements, use cases and test cases, executing test cases, analysing the test results, and finally supplying the necessary traceability reports.

Learn more about how Aligned Elements can help with achieving regulatory compliance

Request a live demo and let us show you how Aligned Elements can manage your documentation

How Software Safety Classifications changed in IEC 62304:2015 Amendment 1


The first amendment to the IEC 62304 was released in June 2015 and contains some welcome contributions, including:

  • Clarification on the scope of the standard
  • Information on how to approach Legacy Software
  • Increased number of clauses applicable to Class A

There were also some interesting changes made to Software Safety Classifications in section 4.3.

For those familiar with the original IEC 62304 text, the following section describes how to assign a Software Safety Classification:

"The MANUFACTURER shall assign to each SOFTWARE SYSTEM a software safety class (A, B, or C) according to the possible effects on the patient, operator, or other people resulting from a HAZARD (being a potential source of Harm) to which the SOFTWARE SYSTEM can contribute.

The software safety classes shall initially be assigned based on severity as follows: 

Class A: No injury or damage to health is possible

Class B: Non-SERIOUS INJURY is possible

Class C: Death or SERIOUS INJURY is possible

If the HAZARD (i.e. the potential source of harm) could arise from a failure of the SOFTWARE SYSTEM to behave as specified, the probability of such failure shall be assumed to be 100 percent."

This is essentially saying that severity alone decides the classification of your software system/item/unit. Since there is no consensus on how to determine the probability of software failure, the probability of a failure to occur is assumed to be 100%, effectively eliminating the probability factor from having any kind of influence on the software safety classification. 

Now, software in itself has never killed anyone. When harm occurs due to a software failure, there is always some other executing agent involved, e.g. some piece of hardware or a human actor. Consequently, for harm to occur, there must exist a causal chain of events, tying the software to the harm via that external agent. A causal chain of events occurs with some probability, sometimes called the probability of harm.

Probability of harm did not play any prominent part in the original release of IEC 62304. Effectively removing the probability of failure from the equation, due to the difficulty of establishing it in quantitative terms, sometimes led to more or less absurd results.

In examples where a failure has severe consequences but is extremely unlikely to result in any kind of harm, the software safety class is C according to IEC 62304:2006 (if no hardware mitigations exist), regardless of how unlikely the risk of harm is.

And here is where the authors of the IEC 62304:2015 Amendment 1 have done a great job reformulating the Software Safety Classification section.

The IEC 62304:2015 Amd 1 section 4.3 point a) now reads:

"a) The MANUFACTURER shall assign to each SOFTWARE SYSTEM a software safety class (A, B, or C) according to the RISK of HARM to the patient, operator, or other people resulting from a HAZARDOUS SITUATION to which the SOFTWARE SYSTEM can contribute in a worst-case scenario as indicated in Figure 3.

[Figure 3 – Assigning the software safety class, IEC 62304:2015 Amd 1]

The SOFTWARE SYSTEM is software safety class A if:
– the SOFTWARE SYSTEM cannot contribute to a HAZARDOUS SITUATION; or
– the SOFTWARE SYSTEM can contribute to a HAZARDOUS SITUATION which does not result in unacceptable RISK after consideration of RISK CONTROL measures external to the SOFTWARE SYSTEM.

The SOFTWARE SYSTEM is software safety class B if:
– the SOFTWARE SYSTEM can contribute to a HAZARDOUS SITUATION which results in unacceptable RISK after consideration of RISK CONTROL measures external to the SOFTWARE SYSTEM and the resulting possible HARM is non-SERIOUS INJURY.

The SOFTWARE SYSTEM is software safety class C if:
– the SOFTWARE SYSTEM can contribute to a HAZARDOUS SITUATION which results in unacceptable RISK after consideration of RISK CONTROL measures external to the SOFTWARE SYSTEM and the resulting possible HARM is death or SERIOUS INJURY."

The pivotal point lies in the use of the terms "RISK of HARM" and "unacceptable risk". RISK, in this case, being a combination of severity AND probability.

Now, the probability of harm, (the probability that someone gets hurt) is different from the probability of failure (the probability that the software malfunctions).

The combination of these two probabilities becomes the probability of occurrence of harm. IEC 62304:2015 Amd 1 explains this further in section B.4.3 and also includes a figure (B.2) from ISO 14971.
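
Expressed as a simple formula (a paraphrase of the relationship described in the text, not a quotation from the standard):

    P_{\text{occurrence of harm}} = P_{\text{software failure}} \times P_{\text{failure leads to harm}}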

[Figure B.2 from ISO 14971, as reproduced in IEC 62304:2015 Amd 1, section B.4.3]

This means that it makes sense to incorporate both the probability of failure and the probability of harm in our risk assessments. We will still stay true to IEC 62304:2006 by setting the probability of failure to 1 (100%), thereby avoiding the problematic discussion of the probability of a software failure, and concentrate our efforts on correctly estimating the probability of harm.

The amendment of the standard also states that clinical knowledge might be necessary to correctly estimate the probability of harm following a hazardous situation, in order to “distinguish between hazardous situations where clinical practice would be likely to prevent HARM, and hazardous situations that would be more likely to cause HARM.” This certainly makes sense, since the causal chain of events leading from a hazardous situation to a harm typically takes place in a clinical context.

There are also further complications. Where it was previously sufficient to map severity to the “no injury”, “non-serious injury”, and “serious injury” categories, which is fairly straightforward, we now additionally have to bring the risk's acceptability into the picture.

Establishing severity and probability can be done fairly objectively, but arguing in a rational manner why a particular combination of these factors is “unacceptable” or “acceptable” is subjective at best, opening the software safety classification to a certain amount of arbitrariness. On the other hand, "unacceptable" and "acceptable" risks are terms defined in ISO 14971 and should therefore not be new territory for the average medical device manufacturer.

The software safety classification method in IEC 62304:2015 Amendment 1 has certainly become more intuitive. The price for this change lies in the extra effort of:

  1. Establishing the probability of harm following a hazardous situation, with the involvement of clinical expertise if and where applicable.
  2. Establishing and rationalizing what makes a particular risk acceptable or unacceptable, if not already defined in the general risk management process. 
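
Purely as an illustration of the amended decision logic, assuming the worst-case evaluation has already been performed (the type names and boolean inputs are inventions for this sketch, not terminology from the standard):

    // Illustrative only - a simplified reading of IEC 62304:2015 Amd 1, 4.3 a).
    enum SoftwareSafetyClass { A, B, C }
    enum WorstCaseHarm { NonSeriousInjury, SeriousInjuryOrDeath }

    static class SafetyClassification
    {
        public static SoftwareSafetyClass Assign(
            bool canContributeToHazardousSituation,
            bool riskAcceptableAfterExternalControls,
            WorstCaseHarm worstCaseHarm)
        {
            // Class A: no contribution to a hazardous situation, or the remaining
            // risk is acceptable after external risk control measures.
            if (!canContributeToHazardousSituation) return SoftwareSafetyClass.A;
            if (riskAcceptableAfterExternalControls) return SoftwareSafetyClass.A;

            // Otherwise, the severity of the worst-case harm separates B from C.
            return worstCaseHarm == WorstCaseHarm.SeriousInjuryOrDeath
                ? SoftwareSafetyClass.C
                : SoftwareSafetyClass.B;
        }
    }

The point of the sketch is merely that risk acceptability after external risk controls now enters the decision before severity separates class B from class C.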

To finalize this discussion on Software Safety Classification in IEC 62304:2015 Amd. 1, I would like to point out two sections in the standard that have received some welcome clarifications.

Segregation

The new version of the standard clarifies that segregation of software items does not necessarily have to be physical. In the 2006 version, the only segregation exemplified was hardware-related, which has led to the false belief that segregation between items has to be physical. This is not the case. The 2015 Amendment makes it clear that the main concern is to ensure that one item does not negatively affect another. Furthermore, the segregation applied shall make sense in the context in which it is used, and it shall be clearly documented and rationalized.

Software Items implementing risk controls

A software item implementing a software risk control (i.e. not external risk controls, which can have a positive effect on the classification) shall be assigned the same software safety classification as the software item containing the risk it is controlling. This idea is applicable not only at System level (as described in the 2006 version) but also at Item/Unit level.

Learn more about how Aligned Elements can help you support IEC 62304

Request a live demo and let us show you how Aligned Elements can help you to comply with IEC 62304

5 steps to effective medical device defect reporting


"Defect rejected - not reproducible” - we have all experienced the gnawing feeling of closing an unreproducible defect. We suspect that the error is still in there somewhere, but without being able to reproduce it, we are fumbling in the dark. In a Medical Device context, where a considerable risk is potentially lurking in this evasive defect, we end up in a very uncomfortable spot.

To minimize these kinds of risks, it is therefore of utmost importance that defect reports contain the information necessary for timely reproduction. 


"We are all interested in getting defects fixed as soon as possible. However, more often than not, defect reproduction takes more time than the actual fixing. The tester has a major role in providing the information necessary to reproduce a defect. However, he can only excel in this role if the underlying concepts of providing accurate diagnostic information are in place, to begin with.", claims Mr.  Michael Stegmann, Chief Executive Director in the Stegmann Innovation Team. 

Mr. Stegmann, having an extensive background in Medical Device Verification and Validation, singles out five critical software aspects that have a decisive effect on effective defect reporting.

Versioning of the software under test

"If the software I test is not unambiguously identifiable, the chance that the development team will be able to reproduce the defect decreases significantly. "V1.0.0.0" or "found in current trunk" will not help anyone. Sending patched dll:s back and forth is a risky practice that eventually will make your environment completely opaque. Therefore, a clear version schema of the software tested needs to be in place from day one." 

Nightly builds with versioned Installers

"Half-baked deployments from various developers will invariably deteriorate your test environment. In the end, you will simply not know what you are testing. Better to go with a centralized build system that automatically creates versioned Installers. 

In that way, all testers have a common source of versioned, predictable deployments that also correctly clean out all files and dependencies during an uninstall. Setting up a build server does not have to be complicated or expensive; there are many free and low-priced options on the market."

 Logs, logs, logs

"Logs are one of the main diagnostic tools for finding and fixing defects. When the software developer wants to reproduce the defect, he will certainly ask for the logs. This, of course, presupposes that there exist logs in the first place.

Characteristics of a good logging concept include:

  • Logging shall be easy to apply in the code
  • Log messages shall have timestamps
  • The Log files shall be humanly readable on any computer
  • Message severity categories to enable quick visual identification of abnormal entries
  • In multi-threaded applications, the logging shall be thread-safe
  • The logs shall be stored in an unambiguous and accessible location 
  • The log files shall be of a manageable size for viewing and sending; 500 kB is a good rule of thumb
  • Log message metadata that enables a Log Viewing Application to efficiently filter large amounts of data"
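
To make these characteristics concrete, here is a minimal, self-contained C# sketch of a thread-safe logger with timestamps and severity categories. A real project would more likely use an established logging library, so treat this only as an illustration; the file path in the usage comment is hypothetical:

    using System;
    using System.IO;

    enum LogSeverity { Debug, Info, Warning, Error }

    sealed class SimpleLogger
    {
        private readonly object _sync = new object();   // makes concurrent writes thread-safe
        private readonly string _path;

        public SimpleLogger(string path) => _path = path;

        // One human-readable line per message: timestamp, severity category, text.
        public void Log(LogSeverity severity, string message)
        {
            var line = $"{DateTime.UtcNow:yyyy-MM-dd HH:mm:ss.fff} [{severity,-7}] {message}";
            lock (_sync)
            {
                File.AppendAllText(_path, line + Environment.NewLine);
            }
        }
    }

    // Usage (hypothetical path and messages):
    // var log = new SimpleLogger(@"C:\Logs\device.log");
    // log.Log(LogSeverity.Info, "Pump initialized");
    // log.Log(LogSeverity.Error, "Pressure sensor timeout in channel 2");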

 An Error Handling Concept

"Error handling includes the anticipation, detection, resolution, and communication of errors. When it comes to effective defect tracking the last word, communication, is the most important. If the developer deals with an error situation, seeming ever so improbable, a clear message of what, why, and where the problem occurred, communicated both in logs and on-screen, will go a long way.

Details on the error context, stack traces, and probable causes shall be communicated in a way that is detectable and collectible for the tester. Only then can the information serve as accurate feedback to the developer."
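
A hedged sketch of the "what, why, and where" idea in C#; the error code, message texts, and the log delegate are invented for the example:

    using System;
    using System.IO;

    static class ErrorReportingExample
    {
        // Wraps an operation so a failure is reported with context, both in the
        // log (for the developer) and on screen (for the user or tester).
        public static void SaveTreatmentProfile(Action<string> logError)
        {
            try
            {
                // ... the actual save operation would go here ...
                throw new IOException("Disk full");   // simulated failure for the example
            }
            catch (Exception ex)
            {
                // WHAT failed, WHY, and WHERE - including the stack trace for the developer.
                logError($"ERR-1203 Saving treatment profile failed: {ex.Message}{Environment.NewLine}{ex.StackTrace}");

                // The same (hypothetical) error code is shown on screen so the tester
                // can match the observed behavior to the log entry.
                Console.WriteLine("Could not save the treatment profile (ERR-1203). See log for details.");
            }
        }
    }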

Automatic Collection of Diagnostic information

"Collecting test evidence often includes the finding and bundling of log files, screenshots, PC Information, event log files, configuration file, crash dumps, etc. This is often a tedious and time consuming activity. Automating this process can greatly speed up the verification process and also make sure that the evidence collected is uniform and according to expectations. 

Integrating such a capability in the software will have the additional benefit of making it easier for users and field service engineers to collect and send diagnostic information back to the company in case a defect has slipped through the verification net."
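
As an illustration of automating the evidence collection, a minimal C# sketch that bundles an existing diagnostics folder into a single time-stamped archive; the folder layout and paths are hypothetical:

    using System;
    using System.IO;
    using System.IO.Compression;

    static class DiagnosticBundle
    {
        // Collects nothing by itself - it simply zips an existing folder
        // (logs, screenshots, configuration files, crash dumps, ...) into one
        // archive that the tester can attach to the defect report.
        public static string Create(string diagnosticsFolder, string outputFolder)
        {
            var archive = Path.Combine(
                outputFolder,
                $"diagnostics_{DateTime.UtcNow:yyyyMMdd_HHmmss}.zip");

            ZipFile.CreateFromDirectory(diagnosticsFolder, archive);
            return archive;
        }
    }

    // Usage (hypothetical paths):
    // var bundle = DiagnosticBundle.Create(@"C:\Device\Diagnostics", @"C:\Device\Evidence");
    // Console.WriteLine($"Diagnostic bundle created: {bundle}");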

"These aspects shall be designed and implemented in due time before verification starts. In a medical device development environment where risk management is a central factor, a solid foundation for collecting valid and accurate diagnostic information is a requirement, not an option.”

Request a live demo and let us show you how Aligned Elements can help you with your reporting