
Preliminary Hazard Analysis (PHA) is a risk assessment technique performed in the early design phases. It is a powerful method for documenting and assessing device risks before the implementation is complete, and for highlighting problematic risk areas early on.


We have previously discussed how the PHA compares with the FMEA method in Aligned Elements and how they complement each other. The templates needed are available here.

Learn more about Aligned Elements and Risk Management

Request a live demo and let us show you how Aligned Elements helps you with your Preliminary Hazard Analysis

Not long ago, I sat down with three test managers I have recently worked with. They all have extensive backgrounds in managing test teams and supervise the writing and execution of test cases in large medical device projects. Since we have observed that about 50% of the total DHF consists of tests, I had long been pondering how the test activities could be done more efficiently.

We talked about how to find the right "review and release" effort (the Goldilocks principle: "not too much, not too little"), the optimal test case size, the optimal number of fields in a test case, and how to deal with the ever-recurring problem of volatile specifications. I got some interesting input on all topics and was very satisfied with how the conversation went.

After a while, one of them said, "Mr. Larsson, it is all well and good that you want to optimize the test case writing and execution. I understand your intentions. But, you know, testing is more than just writing and executing. In my opinion, only 30% of the total test effort consists of the writing and execution activities you talk about. 70% is about setting the table. If I were you, I would take a look at that 70%."

Only 30% of the test effort is about writing test cases

I must confess that I did not really understand what he was talking about. In my world, testing is writing and executing test cases. And what did he mean by "setting the table"?

After some prying, we got closer to the heart of the matter: setting the table implies activities such as:

  • Setting up infrastructure (computers, user accounts, instruments, etc.)
  • Training testers – get to know the instrument, the “lingo”, the templates, and the processes
  • Setting up / calibrating the instruments to test
  • Learning simulation tools, log parsers, etc.
  • Generating test data
  • Reviewing specs
  • Dry runs and exploratory testing
  • Collecting test data

These are all auxiliary test activities that lay the foundation on which efficient test case writing and execution are subsequently performed. They might not look particularly impressive at first, but experience has shown that performing these activities carefully, consciously, and consistently pays off immensely. The reverse is also true; failing to give these activities their proper attention will have a severe impact on testing efficiency.

Finally, another test manager said, "Writing and executing a test case is the end of a long journey. It is the result of a long array of preparatory activities. It is how you get to this point that decides how efficiently your writing and execution will be".


Traces are the glue.

Traces are the links that tie the development documentation together. They provide the basis for verifying completeness and consistency.

Traceability, the discipline of setting and managing traces, is a core activity in the medical device development process. Norms and directives such as ISO 13485, ISO 14971, IEC 62304, IEC 62366, and FDA 21 CFR Part 820 (QSR) explicitly prescribe it.

Traces in a medical device project

In essence, establishing traces between artefacts is a deceptively straightforward task. You just declare the trace and that's it. So why is traceability perceived as such an annoying piece of work?

The main reason is change.

As mentioned above, a trace establishes a relation between dependent artefacts (e.g. a requirement and a specification). When change occurs to one party in this relationship (the requirement), the state of the traced artefact (the specification) may become invalid.

Word documents and Excel sheets are notoriously inept at handling this kind of problem since the artefacts are not aware of each other.

Traces in an Excel sheet are just dead text.

On the other hand, this is why traceability software tools such as Aligned Elements are so popular. Traceability software tools not only allow traces to be declared but are also able to track changes made to the traced artefacts.

A change in an artefact automatically highlights the affected traces and signals that they might have become invalid due to the change. In Aligned Elements, we call such a possibly invalidated relationship a "suspect trace".

The user can then inspect the implicated artefacts and update them if needed, or simply confirm that the change did not have any impact on the traced artefacts. In Aligned Elements, an impact analysis entry is made in the audit trail as the user completes this action, in order to ensure accountability throughout the development process.

However, these gains are quickly lost if different artefact types (e.g. tests and risks) are kept in separate systems. Managing traces across system boundaries is often only possible by resorting to manual measures. Such as Excel. And then we are back to where we started.

Keeping all traced artefacts in a single system further helps to disentangle a range of more mundane problems, such as finding out:

  • If some artefacts have not been traced
  • If some artefacts trace to artefacts of the wrong type
  • If some traces simply don't make sense
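To make the mechanics concrete, here is a minimal sketch of how a single-system trace model can flag suspect traces and run the consistency checks above. All class names, the revision-based suspect logic, and the `ALLOWED` trace-type table are illustrative assumptions, not Aligned Elements' actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Artefact:
    id: str
    type: str          # e.g. "Requirement", "Specification", "Test"
    revision: int = 1

@dataclass
class Trace:
    source: Artefact
    target: Artefact
    # revisions of both ends at the moment the trace was last confirmed
    confirmed: tuple = field(default=None)

    def confirm(self):
        self.confirmed = (self.source.revision, self.target.revision)

    @property
    def suspect(self):
        # A trace becomes suspect when either end changed after confirmation
        return self.confirmed != (self.source.revision, self.target.revision)

# Hypothetical table of legal trace directions
ALLOWED = {("Requirement", "Specification"), ("Specification", "Test")}

def consistency_report(artefacts, traces):
    traced = {t.source.id for t in traces} | {t.target.id for t in traces}
    return {
        "untraced": [a.id for a in artefacts if a.id not in traced],
        "illegal": [(t.source.id, t.target.id) for t in traces
                    if (t.source.type, t.target.type) not in ALLOWED],
        "suspect": [(t.source.id, t.target.id) for t in traces if t.suspect],
    }
```

Bumping the revision of one artefact is enough to turn its traces suspect, which is exactly why dead text in an Excel column cannot do this job.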

Trace consistency

In Aligned Elements, we have implemented traceability support following a few simple guidelines:

  • Usability. It shall be easy to set or remove a trace, regardless of the working context. Therefore we have five different mechanisms for setting traces.
  • Accountability. All trace-related operations shall be recorded chronologically in the audit trail.
  • Transparency. Missing, illegal and suspect traces must be immediately and automatically highlighted.
  • Completeness. End-to-end traceability must be facilitated without having to cross system boundaries.
  • Reportability. Trace reports shall be customizable and configurable to match the quality requirements and look-and-feel criteria of each customer.

Again, traces are the glue.

Managed correctly with the right ALM support, they are (besides being regulatory necessities) a real asset to your design history file management. The traces help you ensure that completeness and consistency have been achieved throughout your development documentation. We are doing our best to make sure this becomes a smooth journey.


Since the release of Aligned Elements V2.1 SP 1, it is possible to integrate Work Item lists from your existing Team Foundation Server in Aligned Elements.

With the appropriate configuration, coupling your Aligned Elements user with your TFS user, your Work Items dynamically appear in a separate Item view. 


In Aligned Elements, the Work items behave and are treated very much like regular Document Objects, meaning that you can open and edit them in a Document Object form, set traces to them, set up inconsistency rules for real-time checks, etc.


When saving changes made to a Work Item in Aligned Elements, the attribute changes are routed to the TFS server, effectively separating the two repositories according to their respective responsibilities. 

Using the "Go to Original" link in the Document Object Form automatically opens a browser and navigates to the Work Item on your TFS server.


Using the Aligned Elements TFS integration lets you benefit from your existing Team Foundation Server infrastructure in your Design History File management, working more smoothly, faster, and with fewer interruptions.



In my profession, I often observe manufacturers of active (electrical) medical devices making the mistake of implicitly assuming that safety-relevant aspects are already adequately addressed during development by circuit design, material selection, or individual component testing.

For IEC 60601-1 compliance, this is not enough.

These manufacturers are overlooking the explicit requirement to identify, justify, and document critical components as specified in the standard.

Even when the requirement is recognized, uncertainty frequently arises as to which components must be classified as critical. This uncertainty becomes particularly evident when assessments are not performed in a consistently risk-based manner.

Furthermore, manufacturers often lack a clear understanding of which types of components can fall into the category of critical components.

These are not limited to electrical parts; mechanical, thermal, or structural elements may also be safety-relevant and thus fall into this category.

The result: incomplete evidence in accordance with IEC 60601-1 or follow-up questions during IEC 60601-1 testing, with serious consequences such as delayed market access and unexpected extra costs.

The central challenge therefore lies in clearly identifying critical components and documenting compliance with the requirements of IEC 60601-1 in a traceable and comprehensible manner.

Here are the steps to solve this problem.

Step 1: Understanding critical components from a functional perspective

IEC 60601-1 defines critical components in Clause 4.8 as:

“All components, including wiring, the failure of which could result in a hazardous situation […].”

This implies that:

  • Not every component is automatically considered critical.
  • Criticality results from the function of the component within the ME equipment.
  • A component is considered critical if its failure leads to an unacceptable risk.

The examples below illustrate identical components that may or may not be classified as critical depending on their intended use.

Component: Plastic enclosure

  • Critical if… it provides protection against hazardous electrical voltage through solid insulation or compliance with creepage distances and air clearances (IEC 60601-1, cl. 8.8).
  • Not critical if… it is used solely for decorative purposes.

Component: On/off switch

  • Critical if… the switch serves as a means of disconnecting the mains voltage (disconnecting device: IEC 60601-1, cl. 8.11.1).
  • Not critical if… the switch is used purely as a functional component and not as a disconnecting device; the medical device can be disconnected from the mains via the power cord.

Step 2: Identification of critical components within the risk management process

The identification of critical components is best performed as part of the risk management process described in Clause 4.2 of IEC 60601-1.

A structured approach to identify critical components includes the following activities:

  1. Hazard identification
    Identification of potential hazards associated with the medical device (e.g., electrical, mechanical, acoustic, thermal hazards, data loss).
  2. Risk evaluation
    Assessment of whether the identified risks are acceptable.
    If not, appropriate risk control measures must be implemented.
  3. Risk control measures
    Measures implemented to reduce the severity and/or probability of a risk.
    → If a component forms part of such a measure, the component is classified as a critical component.
  4. Evidence of effectiveness
    Demonstration of the protective function and compliance through:
    • Test reports
    • VDE certificates
    • UL numbers
    • UL Yellow Cards

The corresponding evidence must be referenced and added to the list of critical components.
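The four activities above can be sketched as a small data model: a component is classified as critical exactly when it forms part of a risk control measure, and every critical component must reference its evidence. The class and field names below are illustrative assumptions, not a prescribed format for the critical components list.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Component:
    name: str
    # IDs of risk control measures this component implements (may be empty)
    risk_control_measures: List[str] = field(default_factory=list)
    # References to evidence, e.g. "IEC 60127 test report", "UL file number"
    evidence: List[str] = field(default_factory=list)

    @property
    def is_critical(self) -> bool:
        # Clause 4.8 logic: part of a risk control measure -> critical
        return bool(self.risk_control_measures)

def critical_components_list(components):
    """Build the list of critical components and flag missing evidence."""
    rows, gaps = [], []
    for c in components:
        if c.is_critical:
            rows.append((c.name, c.risk_control_measures, c.evidence))
            if not c.evidence:
                gaps.append(c.name)   # critical but no referenced evidence
    return rows, gaps
```

A fuse implementing an overload-protection measure lands on the list; a purely decorative cover does not, mirroring the examples in Step 1.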

Practical examples

Example 1: Fuse in the primary circuit

A fuse prevents overheating or fire in the event of a fault. Failure of the fuse may result in a fire hazard.

Assessment & documentation:

  • Function: Overload or short-circuit protection; in the event of a fault (operation at maximum current), it interrupts the mains voltage
  • Risk: Fire hazard
  • Standard: IEC 60127; IEC 60601-1: Clauses 8.11.5 & 15.4.3.5
  • Evidence: Test report according to IEC 60127 or UL number
  • Critical? Yes, or justification in the risk management analysis for omitting the fuse.

Example 2: Plastic material for a fire enclosure

The enclosure ensures that an electrically initiated fire cannot escape from the enclosure.

Assessment & documentation:

  • Function: Containment of fire; prevents an internal fire from spreading outside the enclosure
  • Risk: Mechanical and thermal injury to people caused by fire
  • Standard: IEC 60695-11-10; IEC 60601-1: Clause 11.3 (constructional requirements for fire enclosures)
  • Evidence: Test report according to IEC 60695-11-10 or Yellow Card (materials certified by UL)
  • Critical? Yes, if a fire can occur within the medical device.


Step 3: Demonstration of protective function and normative conformity

Once the critical components have been identified, select or source component implementations whose protective functions can be demonstrated to comply with the applicable standards.

IEC 60601-1 clause 4.8 requires that critical components (unless a justified exception is documented, either in the standard or within the risk management process) are tested in accordance with relevant standards or evaluated against defined requirements of IEC 60601-1.

Appropriate forms of evidence may include:

  • Test reports (e.g., CB reports, accredited test reports)
  • Certificates (VDE, UL)
  • UL Yellow Cards for plastic materials
  • ASCA-accredited test reports (FDA context)

It is strongly recommended to obtain and secure quality assurance statements or agreements from suppliers of critical components that guarantee that the protective function remains unchanged over a defined period and that design or manufacturing changes are performed in a controlled manner.

For critical components without a direct technical standard reference, testing in accordance with IEC 60601-1 may be used to experimentally demonstrate the protective function. However, this approach involves increased uncertainty and should only be applied when no normative alternatives are available.

It is important to note that critical components are only to be replaced by components that meet the same requirements and provide equivalent documented evidence.

Conclusion

In this article, we have addressed three common problems manufacturers experience in conjunction with managing Critical Components:

  • Understanding what critical components are and how they can be identified, thus avoiding that too few or no components are listed as "critical"
  • The importance of identifying them as early as possible, since assessments and evidence collection take time
  • Ensuring sufficient evidence for critical components, and knowing where and how to collect it

Critical components are not merely technical key elements but central elements of normative safety in accordance with IEC 60601-1. Their identification must be risk-based, traceable, and compliant with applicable standards. Failing to achieve this can jeopardize the market release of the medical device, with high and unexpected costs as a consequence.

Best practices show that an early and well-structured selection and documentation of appropriate evidence reduces regulatory uncertainty, facilitates conformity assessment, and contributes significantly to the overall safety of the medical device.

About the Author

Sarah Lippert is a test engineer at the DAkkS-accredited testing laboratory of KEYMKR GmbH. She studied Medical Engineering at South Westphalia University of Applied Sciences and Biomedical Engineering at Münster University of Applied Sciences. Since 2022, she has been testing active medical devices at the KEYMKR laboratory in accordance with the relevant safety standards, primarily the IEC 60601-1 and IEC 61010-1 series.



In a research paper by Falk and Björlin at KTH in Sweden published in 2025, a major medical device manufacturer using Word and Excel for managing design and risk traceability was analysed for compliance and efficiency.

Despite significant manual effort, the researchers observed:

  • inconsistent use of terms and phrasing in the different documents, negatively affecting searching
  • varying trace approaches being used by different departments
  • significant probability of human errors when manually handling the traceability data
  • excessive manual work copying redundant information between the systems

The resulting traceability error rate was approximately 6.6 percent.

This finding is not an academic anomaly. It reflects a systemic weakness that persists across the medical device industry. In modern regulated medical device development, a design control management approach that does not include tool-supported, data-driven risk assessment functionality is an inherent liability.

Risk management and design controls are structurally interdependent. Risk drives safe design decisions, and design changes continuously reshape the risk profile of a device. Manually handling this ever-changing, interconnected data may work for small devices but will quickly become untenable when complex design is required.

The question is therefore not whether to integrate risk into design controls, but how to structure a tool-driven approach that makes this integration reliable, scalable, and auditable.

Let’s explore the steps to apply a tool-driven integrated risk assessment approach.

Step 1: Define a Unified Design Control Object Model

The foundation of tool-driven, integrated risk assessments lies in the underlying structure of the design control system. Risk must exist as a first-class design control object, not as a static document. Hazards, hazardous situations, harms, risk estimations, risk acceptability decisions, and risk control measures must be managed as structured, versioned data elements within the same environment that manages design inputs, design outputs, and verification activities.

This structure directly supports the regulatory expectation that risk drives design. ISO 14971 requires that hazards are identified early, risks are estimated and evaluated, and risk control measures are defined before design solutions are finalized. When risk objects are embedded into the design control system, they can directly inform functional requirements, safety mechanisms, materials, software architecture, and user interaction concepts.


Step 2: Establish mandatory structural relationships between Risk and Design

Once risk exists as structured data, the next critical requirement is enforced traceability, both incoming and outgoing.  

Risk control measures shall trace to the corresponding implementation of those controls. Where a measure addresses the device design, these can be specifications, software functions, hardware features, alarms, safeguards, or user interface elements.

Such implementation items should then be verified for both correct implementation and effective risk reduction. This traceability is essential to demonstrate that identified risks are not only assessed but actively realized in the actual design.

But this is not the hard part. Equally important is the inverse relationship, where design elements serve as direct inputs into the risk assessment itself. Design inputs such as intended use, user needs, functional requirements, performance characteristics, materials, system architecture, and connectivity features are all potential carriers of risk and should serve as input for the risk identification process.

As the design evolves, new and changing inputs continuously feed back into the risk assessment to ensure that new hazards are identified and existing risks are reassessed accordingly. ISO 14971 explicitly requires this iterative relationship, and ISO 13485 mandates that risk management activities be embedded within design and development records.

From experience, it is this latter part (analysing design changes for new risks) that often gets neglected during the later stages of the design and development lifecycle.

Manual traceability approaches fail here because links are either neglected or lost during design development or inconsistently copied across documents. A tool would enforce such relationships and eliminate this fragility.
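As a minimal sketch of what such enforcement could look like, the rules from this step can be expressed as a check over the trace links: every risk control measure needs an implementation trace, and every implementation needs a verification trace. The function and the dictionary-based link structure are assumptions for illustration only.

```python
def enforce_mandatory_traces(measures, implements, verifies):
    """Report violations of two mandatory trace rules.

    measures:   list of risk control measure IDs
    implements: {measure_id: [implementation_item_ids]}
    verifies:   {implementation_item_id: [verification_ids]}
    """
    violations = []
    for m in measures:
        impls = implements.get(m, [])
        if not impls:
            violations.append(f"{m}: no implementation trace")
        for i in impls:
            if not verifies.get(i):
                violations.append(f"{i}: no verification trace")
    return violations
```

Run on every save or check-in, a rule like this surfaces the neglected and lost links immediately instead of at audit time.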

Step 3: Integrate non-traditional Design Controls into the risk structure

Risk assessment does not operate in isolation. A compliant design control system must integrate a wide range of regulatory and technical inputs that either feed into or result from risk decisions.

  • Regulatory requirements such as EU MDR General Safety and Performance Requirements must be traceable to both design controls and risk justifications.
  • Clinical evaluation findings influence risk probability estimates and residual risk acceptability.
  • Usability engineering activities conducted under IEC 62366 identify use related hazards that must be reflected in the risk analysis.
  • Cybersecurity analyses introduce additional hazard categories that directly affect design and verification strategy.

By representing these elements as structured objects within the same system, a tool-driven approach allows them to be connected directly to risk assessments. This ensures that safety, performance, usability, cybersecurity, and clinical evidence remain consistent across the technical documentation.

More importantly, it ensures that risk assessments are based on the full regulatory and clinical context rather than a narrow technical view.

Step 4: Enable change impact through relationship-based navigation

One of the most critical advantages of an integrated design and risk structure is reliable change impact analysis. Design changes inevitably occur, and each change has the potential to alter the risk profile of the device. ISO 14971 explicitly requires risk management to be iterative, and ISO 13485 requires that risk management activities are part of design and development records.

When a design input, specification, or software component changes, a well-designed system can immediately identify which hazards, risk control measures, verification activities, regulatory requirements, and clinical justifications are impacted. This prevents the common error observed in disconnected systems, where risk documentation becomes outdated and residual risk acceptability is no longer valid.
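At its core, this kind of change impact analysis is a transitive walk over the trace links. The sketch below assumes a simple adjacency structure mapping each artefact to the artefacts that directly depend on it; the IDs are hypothetical.

```python
def impacted_by(changed_id, links):
    """Collect every artefact transitively reachable from a changed element.

    links: {artefact_id: [ids of directly dependent artefacts]}
    """
    seen, stack = set(), [changed_id]
    while stack:
        for nxt in links.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen
```

A change to one design input then yields, in one query, the full set of hazards, controls, and tests whose validity must be re-confirmed.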

Step 5: Derive technical documentation directly from the structured data

The final step is ensuring that technical documentation is an output of the integrated system rather than a separately maintained artifact. The risk management file, design history file, traceability matrices, and MDR Annex II sections should be generated directly from the live data and relationships within the tool.

This approach eliminates the inconsistencies that arise when documentation is manually assembled from disconnected sources. It ensures that what is submitted to regulators accurately reflects the current design state, implemented risk controls, and verified evidence. It also provides audit readiness at any point in time, since traceability is inherent in the structure rather than reconstructed on demand.

Conclusion

The evidence is clear. Document-based and disconnected approaches to risk and design control traceability introduce measurable error rates, weaken safety justification, and expose organizations to failed submissions and delayed market access.

Risk drives design, design changes drive risk, and regulatory expectations require that this relationship is fully traceable, auditable, and continuously maintained.

A tool-driven, integrated design and risk management structure enables consistent decision making, reliable change impact analysis, and defensible technical documentation that is audit-ready at any time. Most importantly, it ensures that safety is actively engineered into the product.

About the Expert

Anders Emmerich is a co-founder of Aligned AG. With more than 20 years in the medical device industry, he has seen paradigms and trends come and go. What remains are tried and tested ideas on medical device development and medical device documentation strategies.



The reliability and performance of AI-enabled medical devices is only as robust as the data they are trained on. The type of data required for AI-enabled medical devices is dependent on the intended purpose and may include patient demographics, laboratory values, vital signs, clinical measurements, imaging (such as CT scans, radiographs, or MRI images), and time-series data (such as electrocardiogram signals or photoplethysmogram waveforms).

Data for AI-enabled medical devices serves different functions throughout the AI lifecycle:

  • Training data is used to teach the AI model to recognize patterns and make predictions.
  • Tuning data allows developers to tune model parameters and assess performance during development.
  • Test data provides an unbiased assessment of model performance on unseen data.
  • Real-world performance data collected post-market can inform ongoing model monitoring and identify potential safety issues.

The figure below shows different types of data required against the AI/ML lifecycle phases defined by the FDA.

Given the importance of data on the performance of AI-enabled medical devices, data management has become an emerging regulatory focus. Regulatory expectations have increased significantly in recent years. The EU AI Act dedicates Article 10 to data management of high-risk AI systems, while the FDA’s Jan 2025 draft guidance document on AI lifecycle management allocates 10% of its content to the topic. Data management will continue to become central to regulatory compliance as medical technology increasingly leverages AI to automate and improve clinical workflows.

Although the term is not defined in regulatory literature, international standards (such as ISO/IEC 22989 and ISO/IEC TR 24368) and regulatory guidance documents indicate that data management encompasses processes and tools involved in overseeing the entire data lifecycle – from its initial acquisition and preparation to its storage, maintenance, and disposal. This includes data collection, cleaning, annotation, quality checking, sampling, and augmentation. It also involves ensuring data representativeness, addressing bias, and maintaining version control and documentation to guarantee data integrity, suitability, and traceability.

In our experience, companies commonly make the mistake of developing AI models without considering a comprehensive end-to-end data management strategy. Given the current regulatory scrutiny on data management, failing to build a strong data management foundation at the start of design and development will likely lead to additional questions from regulatory authorities, expensive and time-consuming remediation, and delays to market access.

This article provides practical strategies for managing data in AI-enabled medical devices.

Integrate Within The QMS

Data management activities once occurred outside the medical device quality management system (QMS), with regulators expecting documentation only in submissions. This expectation is now shifting: data management must be embedded within the QMS. The EU AI Act (Article 17) exemplifies this, requiring data collection and preparation processes to be integrated with the AI QMS. Medical device companies should therefore define data management processes within their QMS to ensure consistent application across all AI-enabled devices throughout their lifecycle. These processes include procedures for managing data throughout its lifecycle, from sourcing and preparation to retirement.

Apart from regulatory non-compliance, failing to integrate data management within the QMS will likely lead to inconsistent application of data management practices across multiple products and an inability to demonstrate traceability and control over processes that directly impact device safety and effectiveness.

Expert’s Secret #1: Define data management processes that include data collection, data requirements, data labelling, data pre-processing, data version control, and documentation.


Plan For Data Management Early

Like other activities within design and development, it is crucial for data management activities to be planned early. This is consistent with how device companies approach early planning of design and development, risk management, software development, and usability engineering activities. When planned early during development (i.e., before design inputs), data management expectations can be aligned within cross-functional development teams. This also allows data management activities to be integrated with model training and validation activities. A data management plan should be defined during the planning stage of design and development and should include device-specific data management expectations.

Failing to plan data management activities early during design and development will likely require retroactive remediation to meet stringent regulatory expectations.

Expert’s Secret #2: Define a data management plan within the design and development file for each AI-enabled device.

Define Data Requirements Before Collection

As not all data can be used for all purposes, it is crucial to identify data requirements relating to each AI function prior to data collection. Consider data characteristics including validity, accuracy, completeness, consistency, traceability, uniformity, and representativeness to the intended patient population. This ensures that the data is fit for purpose for the intended algorithm.

Failing to define data requirements prior to collection may lead to data not being usable for ingestion, poor model performance, and bias and generalisability issues.

Expert’s Secret #3: Define data requirements as design inputs prior to data collection.
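To illustrate what "data requirements as design inputs" can look like in practice, here is a hedged sketch of a pre-ingestion check for completeness and class representativeness. The field names, the `label` key, and the 10% default threshold are assumptions chosen for the example, not regulatory values.

```python
def check_data_requirements(records, required_fields, min_class_fraction=0.1):
    """Check collected records against predefined data requirements.

    records:            list of dicts, one per data point
    required_fields:    fields every record must carry (completeness)
    min_class_fraction: minimum share any labelled class may hold
                        (a crude representativeness check)
    """
    issues = []
    # Completeness: every record carries all required fields
    for i, rec in enumerate(records):
        missing = [f for f in required_fields if rec.get(f) is None]
        if missing:
            issues.append(f"record {i}: missing {missing}")
    # Representativeness: no labelled class may be under-represented
    labels = [rec.get("label") for rec in records if rec.get("label") is not None]
    for cls in sorted(set(labels)):
        frac = labels.count(cls) / len(labels)
        if frac < min_class_fraction:
            issues.append(f"class {cls!r}: only {frac:.0%} of records")
    return issues
```

Running such a check before ingestion catches unusable records and bias risks while recollection is still cheap.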

Implement Robust Data Versioning & Sequestration

The importance of data in AI cannot be overstated. Therefore, it is crucial that data storage processes ensure the integrity of data throughout the lifecycle. Storage processes should support version control, as different dataset versions may result in different model performance. The figure below shows the relationship between data, dataset and ML model – elements that should be version controlled:

  • Data is the raw information collected from various sources, such as CT scans or ECG.
  • Datasets are curated collections of data organized for a specific purpose. A dataset combines multiple data points with pre-processing (cleaning, annotation, and labelling), ultimately creating a structured package ready for model training.
  • Models are the trained AI systems that learn patterns from datasets. A model is the output of training a specific algorithm on a specific dataset version.

A given model version is always trained on a specific dataset version, which is composed of specific versions of underlying data. Traceability between these elements is crucial to understand changes in model performance. Version control is also important to ensure that frozen models are deployed following regulatory authorisation. Finally, traceability between the version of the model and the dataset should be maintained in cases where rollback may be required.
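The traceability chain above can be made explicit in code. The following is a minimal Python sketch with illustrative names; in practice a model registry would manage these links, but the structure is the same:

```python
import hashlib
from dataclasses import dataclass

def fingerprint(raw: bytes) -> str:
    """Content hash that pins an exact raw data file."""
    return hashlib.sha256(raw).hexdigest()

@dataclass(frozen=True)
class DataRecord:
    source: str      # e.g. "CT scan, site A"
    sha256: str      # content hash of the raw file

@dataclass(frozen=True)
class DatasetVersion:
    name: str
    version: str
    records: tuple   # of DataRecord: the exact data this dataset contains

@dataclass(frozen=True)
class ModelVersion:
    name: str
    version: str
    trained_on: DatasetVersion   # explicit link for audit and rollback

scan = DataRecord("CT scan, site A", fingerprint(b"...raw pixel data..."))
dataset = DatasetVersion("lung-ct", "2.1.0", (scan,))
model = ModelVersion("nodule-detector", "1.4.0", trained_on=dataset)

# "Which data trained model 1.4.0?" stays answerable:
assert model.trained_on.version == "2.1.0"
assert model.trained_on.records[0].sha256 == scan.sha256
```

Because the links are explicit, a rollback to an earlier model version automatically identifies the dataset version (and raw data) it was trained on.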

Data storage controls should also ensure that training and test data are sequestered to prevent data leakage. The goal of model evaluation is to understand performance and generalisability on unseen data. When test and training datasets are not separated, the estimated model performance will be overly optimistic, and the model will perform far worse on truly unseen data in clinical practice. Training and test data sequestration can be implemented by:

  • Defining the data sequestration strategy early (in the Data Management Plan), for example by fixing the data split ratio before collection, such as 70% training, 15% validation, and 15% test.
  • Organizational or structural separation by storing training, validation, and test datasets in separate locations or directories, and by making it technically difficult (or impossible) to accidentally merge them. Technical controls available in tools like MLflow, Azure ML, and SageMaker should be used where possible to enforce sequestration.
  • Access and permission restrictions for development team members.
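One common way to implement the first two points is a deterministic, ID-based split: hashing a stable record identifier means an assignment never changes as new data arrives, so a record can never drift from the test set into training. A sketch, using the 70/15/15 ratio from the example above:

```python
import hashlib

SPLITS = (("train", 0.70), ("val", 0.15), ("test", 0.15))  # from the plan

def assign_split(record_id: str) -> str:
    """Deterministically assign a record to a split by hashing a stable ID.
    The assignment is a pure function of the ID, so it is stable across
    re-runs and as the dataset grows (no leakage between splits)."""
    bucket = int(hashlib.sha256(record_id.encode()).hexdigest(), 16) % 10_000 / 10_000
    cumulative = 0.0
    for name, share in SPLITS:
        cumulative += share
        if bucket < cumulative:
            return name
    return SPLITS[-1][0]  # guard against floating-point rounding

# The same ID always lands in the same split:
assert assign_split("patient-0042") == assign_split("patient-0042")
```

Records can then be written to split-specific storage locations, with access controls applied per location as described above.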

Expert’s Secret #4: Implement robust data versioning and sequestration controls for each AI function to maintain data integrity and ensure robust model performance.

Conclusion

Data management for AI-enabled medical devices is a foundational element of device development that regulators expect to see embedded within quality systems from the outset. The regulatory landscape has shifted dramatically in the last 5 years, and companies that establish robust data management practices today will be well-positioned to create safe and effective devices and demonstrate compliance confidently.

The practical data management strategies in this article will help ensure that AI-enabled devices are trustworthy and perform as intended throughout the device lifecycle. Companies that treat data management as a strategic priority rather than a compliance checkbox will build and maintain a competitive advantage in an increasingly complex regulated landscape.

About the Expert

Richie Christian is a Senior Medical Device Consultant at wega Informatik where he helps teams secure FDA clearances and build quality systems that work. He has 11+ years’ experience in medical device quality management systems and regulatory affairs and is driven by a simple belief: the MedTech community deserves clear, practical guidance without the fluff.

About the Author
[rocket]

Accelerate your journey to CE Mark and FDA approval

Try aligned elements 30 days for free!

Start Today

Usability engineering for medical devices can easily become either too heavy or too thin. Some teams over-engineer simple interfaces; others under-scope complex tasks and meet surprises late. The sweet spot lies in tailoring — scaling your usability effort to real risk and context.

Step 1 – Establish a Reliable Foundation

Before you decide how much usability work is enough, make sure the ground you stand on is solid. Tailoring decisions are only as good as the information that drives them. In practice, three key inputs form the backbone of any proportionate usability engineering approach:

  1. A complete User Interface Description – What users see, touch, and interact with. This document includes displays, controls, indicators, and workflow elements.
  2. A structured Use Specification – Describing who will use the device, in which environments, and under what conditions.
  3. A Use-Risk Analysis – Integrating both pre-market assessments and real post-market feedback to identify hazard-related use scenarios.

Without these inputs, tailoring becomes guesswork. Teams risk scaling activities incorrectly, and consequently investing heavily where it adds little value or overlooking areas where safety and comprehension depend on more thorough evaluation. When these three foundations are sound, every subsequent tailoring decision, i.e. what to test, how deeply, and why, can be made with confidence, traceability, and full alignment to IEC 62366-1 expectations.

Tip for practitioners: If any of these inputs are incomplete, treat it as an early signal. Fill the gaps first or, at minimum, record clear assumptions and follow-ups in your usability plan. This avoids surprises later and keeps your tailoring defensible in audits.

Step 2 – Apply Risk-Based Decision Criteria

Once your foundation is in place, the next step is to decide where to focus your effort.
A risk-based approach ensures that your usability engineering activities are proportionate, that is, neither too heavy nor too light.

In essence, tailoring means investing more where the consequences of error or misunderstanding are high, and scaling down where risk is limited and well-understood.  This mindset is deeply rooted in IEC 62366-1, which explicitly allows flexibility when justification is documented.

To make these decisions systematic, teams can define clear risk-related decision factors that help distinguish between high- and low-priority areas. In practice, two factors have proven particularly useful when evaluating usability risk:

  • Severity of harm – What is the potential impact if the user makes a mistake?
  • Magnitude of UI change – Does a new or modified function introduce new potential use errors?

By rating these two factors and combining the results, you gain a structured way to “grade” usability activities.
High-risk areas can then be addressed first and in more depth, while low-risk elements may require only streamlined formative evaluation or expert review.

Other influencing factors such as UI complexity, breadth of the use specification, or the presence of legacy elements also play a role in decision-making. However, these extend beyond the pure risk perspective and will be explored further in a separate article.

Tip for practitioners: Document your reasoning for each decision. Even a short note, stating why a task was considered low risk, or why a formative test was scaled up, helps maintain transparency and audit readiness.

The goal is efficiency without compromising patient safety, with a clean audit trail.

Step 3 – Define and Document Tailoring in a User Interface Evaluation Plan

Tailoring only becomes effective once it is made explicit.
Many teams intuitively scale their usability activities but fail to describe how and why those decisions were made. Without that traceability, flexibility can appear as inconsistency, and that is exactly what regulators want to avoid.

A well-structured User Interface Evaluation Plan (UIEP) turns tailoring into a transparent and defensible process.
It serves as the single point of reference connecting risk, rationale, and evidence.

When you create or update your UIEP, make sure it captures four essential elements:

  1. Scope and rationale - Which usability activities will be performed, to what extent, and why.
  2. Decision logic - How scaling choices follow from risk and context, for instance, why one task needs a full formative test while another requires only expert review.
  3. Coverage rules - How user groups, use environments, and hazard-related scenarios are represented and justified.
  4. Change control - How design updates trigger reassessment or retesting, and how decisions are recorded over time.

This documentation doesn’t need to be lengthy, but it must be structured and verifiable. A concise table mapping decisions to risk factors often works better than long prose.
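Such a mapping can also be kept as structured data rather than prose. A small sketch with hypothetical field names and entries, showing how each decision carries a checkable rationale:

```python
# Hypothetical UIEP decision log: each tailoring decision carries its risk
# rationale and planned evidence. Field names and entries are illustrative.
uiep_decisions = [
    {"task": "dose entry", "severity": "serious", "ui_change": "new",
     "activity": "full formative + summative", "rationale": "hazard H-12, new workflow"},
    {"task": "brightness setting", "severity": "negligible", "ui_change": "unchanged",
     "activity": "expert review", "rationale": "legacy element, no hazard link"},
]

# Every decision must carry a checkable rationale (the audit trail):
for decision in uiep_decisions:
    assert decision["rationale"], f"undocumented decision: {decision['task']}"
```

Kept in this form, the log is trivially filterable by hazard, task, or activity when an auditor asks why a particular scaling choice was made.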

Tip for practitioners: Treat the UIEP as a living document. Each tailoring decision should have a short, checkable rationale that can be traced back to the corresponding hazard or design change. That clarity builds confidence not only with auditors, but also within your own development team.

Step 4 – Execute Proportionate Usability Activities

Once the decision framework is in place, it’s time to put tailoring into action.
Execution is where the balance between efficiency and safety becomes visible. The key principle: apply the right amount of effort for the actual risk profile.

Not every user interface requires the same depth of testing. Low-risk or well-understood elements might be covered by streamlined formative evaluations or expert reviews, especially if previous data already confirm safe and intuitive use.

By contrast, novel, complex, or high-risk interactions demand deeper formative exploration and realistic summative testing to confirm that users can perform critical tasks safely and effectively.

This proportionate approach helps avoid redundant loops and late surprises. Instead of repeating the same evaluations “just in case,” every activity has a clear purpose tied to risk, context, and learning objectives. The result: a smoother, more predictable path toward usability validation.

Tip for practitioners: Define success criteria before you start testing. A well-formulated criterion, such as “critical tasks completed without use error in ≥ 90 % of attempts”, turns each evaluation into an evidence-producing step rather than an open-ended exploration.
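A pre-defined criterion like this can be evaluated mechanically, which keeps the pass/fail judgment objective. A minimal sketch using the 90% example:

```python
def meets_criterion(successes: int, attempts: int, threshold: float = 0.90) -> bool:
    """Check a pre-defined acceptance criterion, e.g. critical tasks
    completed without use error in at least 90% of attempts."""
    return attempts > 0 and successes / attempts >= threshold

assert meets_criterion(18, 20)      # 90% of attempts: criterion met
assert not meets_criterion(17, 20)  # 85%: below threshold
```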

Follow Through and Keep It Living

Tailoring usability engineering is not a one-time exercise: it’s a discipline.
Each decision about what to include, omit, or scale must remain traceable as the design evolves. When new hazards appear, workflows change, or new user groups are added, revisit your tailoring assumptions. Adjust them where necessary and keep the reasoning documented.

That continuous loop between risk, rationale, and evidence not only ensures compliance with IEC 62366-1, but also strengthens the maturity of your entire development process. Teams that maintain this traceable link tend to work more predictably, reach summative validation faster, and build stronger confidence with regulators and users alike.

Conclusion

Tailoring is about working smarter, not lighter.

It enables teams to focus effort where it counts most, achieving safety, efficiency, and audit readiness in one step.
When applied systematically, it transforms usability engineering from a regulatory obligation into a strategic advantage.

If you’d like to explore the practical side of tailoring, including checklists, examples, and ready-to-use templates, visit our detailed article on the USE-Ing. website.

About the Expert

Benedikt Janny, Senior Usability Engineer, Euro-Ergonomist, scientist, and founder, holds a PhD in engineering and has dedicated his career to advancing human-centered design in medical technology. During his research years, he focused on developing adaptive human–machine interfaces for gerontological and medical applications—work that sparked his passion for designing industrial medical products around users.

Since then, he has worked with both global corporations and MedTech startups across the fields of UX, Human Factors, and innovation, and has co-founded several ventures himself. Today, as Managing Partner of USE-Ing. GmbH, one of Europe’s leading Human Factors and Usability Engineering agencies for medical device development, he drives the integration of usability and human factors into cutting-edge MedTech innovation.

As an international speaker and university lecturer, Benedikt is deeply committed to sharing knowledge and expertise. He also contributes to standardization and professional communities—serving in the DKE committee for IEC 62366-1 and in the German UPA “Med&Health” working group—to strengthen the role of UX and Human Factors in research, education, and practice.

About the Author
[rocket]

Accelerate your journey to CE Mark and FDA approval

Try aligned elements 30 days for free!

Start Today

Continue reading

  • Written by

  • on

  • . Posted in

Plastics are among the most versatile and indispensable materials in modern medical device development.

From syringes and catheters to surgical instruments and implantable components, plastics enable engineers to translate clinical concepts into safe, effective, and reliable products. Their unique combination of properties (moldability, strength-to-weight ratio, biocompatibility, transparency, and sterilization resistance) makes them central not just to manufacturing efficiency, but to fulfilling the very purpose of medical devices.

However, turning high-level product requirements into specific, testable plastic requirements is not always straightforward. For plastics in particular, the stakes are high: design changes made late in the development process can be extremely costly, such as having to commission a new injection mold or re-commission biocompatibility or sterility tests. At the same time, we always want to keep an eye on the spectre of overengineering: specifying materials with unnecessary properties or tolerances that drive up costs without improving clinical outcomes.

Beyond cost, the way plastics are selected and applied has a significant impact on a device’s sustainability profile. In today’s climate-conscious markets, plastics are a major contributor to the CO₂ footprint of medical devices, so every decision, from material choice to part design, influences not only performance and safety, but also whether the device supports long-term environmental goals.

Plastic requirements in a medical device context

Not all product requirements can be directly translated into plastic properties. Although some neatly map onto known plastic characteristics, such as tensile strength or impact resistance, others are more qualitative in nature and consequently require analysis and interpretation.

Product requirements that relate to plastics describe how a device should perform in use or how it should feel to the user, for example, “a transparent housing for monitoring” or “a soft grip for comfort”. These must be carefully transformed into measurable, verifiable material specifications that can actually guide design and testing.

Challenge the requirements

Every requirement that flows down into material specifications carries consequences for cost, usability, regulatory compliance, and sustainability. If left unexamined, requirements risk becoming bloated with “nice-to-haves” that add little or no value to patients or clinicians. This kind of over-specification is particularly problematic in plastics, where excessive safety margins can easily translate into thicker parts, more complex materials, or additional processing steps, all of which drive up costs and carbon footprint.

Asking the right questions early prevents this. Instead of simply asking “Why is this required?”, which can lead to defensive justifications, it is more effective to ask “How do we know this is required?”.

That shift reframes the discussion around evidence and traceability — whether the requirement is grounded in a regulatory clause, a user need, a clinical claim, or solid engineering data. By questioning requirements in this way, teams can separate what is truly necessary from what is merely assumed.

This process not only reduces the risk of overengineering but also ensures that every plastic requirement serves a clear purpose: to support the clinical claim, safeguard the patient, or meet a regulatory expectation. Anything beyond that is cost, complexity, and CO₂ emissions without benefit.

Verification as the Compass

If you can’t verify it, you can’t require it.

How do we best formulate a plastic requirement given a product requirement? This takes us to the core of this discussion. Long experience in this domain has taught me a key principle for deriving plastic requirements: the verification method drives the formulation of the requirement.

When drafting a plastic requirement, always consider: How will this be verified?

Let’s walk through an example:

Step 1: Inspect the Product Requirement

Let’s say the product requirement states: “The handle has to be soft.”

Step 2: Consider Verification Options

There are several approaches to how softness can be verified:

User Evaluation with Material Samples

Provide different material samples to representative users and ask them to choose the one that best matches their expectation of “soft.”

Prototype Testing

Build early prototypes and let users handle them to assess perceived softness.

Defined Parameter Measurement

Translate softness into a measurable property, such as Shore hardness, and test against a specified range.

Step 3: Assess Verification Methods

Not all verification methods are created equal. Each verification option differs in cost, time, and effectiveness. Therefore, grade each verification method according to these three parameters, score them, and select the method with the best score.
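The grading step can be sketched as a simple scoring table. The 1 to 5 scores below (higher is better, so cheap and fast methods score high on cost and time) are illustrative assumptions for the softness example, not fixed benchmarks:

```python
# Illustrative 1-5 scores (higher = better); methods and values below
# are assumptions for the "soft handle" example, not fixed benchmarks.
methods = {
    "user evaluation with samples": {"cost": 2, "time": 2, "effectiveness": 4},
    "prototype testing":            {"cost": 1, "time": 1, "effectiveness": 5},
    "Shore hardness measurement":   {"cost": 5, "time": 5, "effectiveness": 4},
}

def total_score(scores: dict) -> int:
    """Unweighted sum; add weights if one parameter matters more."""
    return scores["cost"] + scores["time"] + scores["effectiveness"]

best = max(methods, key=lambda m: total_score(methods[m]))
# With these illustrative scores, the defined-parameter measurement wins,
# leading to a requirement like "Shore A hardness between 30 and 40".
```

A weighted sum is a natural refinement when, say, time-to-market dominates the decision.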

Step 4: Select and Formulate the Requirement

Once you have established your verification method of choice, go ahead and formulate the plastic requirement accordingly. If Shore hardness proves to be the most reliable and cost-effective method, the requirement should be stated as:
“The handle material must have a Shore A hardness between 30 and 40.”

This statement is clear, measurable, and directly verifiable.

Best Practices for Writing Plastic Requirements

Challenge Rationales – Especially when requirements are claimed to be “regulatory.” Always ask for the exact clause or reference.

Avoid Overengineering – Resist specifying “the best” when “good enough” meets user needs, patient safety, and clinical effectiveness. Remember that every added property has the potential to increase cost and environmental impact.

Tie Every Requirement to Verification – If you can’t verify it, you can’t require it.

Closing Thoughts

Plastics are unique and versatile, but they require thoughtful specification. By working backward from how a requirement will be verified, medical device engineers can ensure that plastic requirements are both practical and defensible. The result: safer devices, leaner development, and more sustainable products.

About the Expert

Lucas Pianegonda, founder of Gradical, is a globally recognized expert in sustainable plastics for medical technology. He supports MedTech companies in choosing the right plastic for their applications, combining deep technical expertise with a strong focus on sustainability.

Recently elected to the board of the European Medical Division of the Society of Plastics Engineers he will bring in his expertise to push the boundaries of sustainability in MedTech. Lucas also hosts the “Plastics in Medical Technology” podcast, where he explores plastics, MedTech innovation and sustainability.


For medical device manufacturers, the word 'audit' can trigger quite a reaction.

Start-up teams may sway between confusion and panic. Even seasoned manufacturers tense up. They know all too well what is at stake. Your processes, people, and product files are about to be examined in great detail. The danger of a poor audit is rarely one dramatic failure. It is rather death by a multitude of small findings.

Key team members may be unavailable, critical documentation may be outdated, incomplete or inconsistent, and the wrong people might be sitting in front of the auditor trying to defend processes they don’t fully own. A poorly handled audit does not just risk delaying certifications. It burdens your team with months of remediation, erodes trust and confidence, and costs you significant momentum.

On the flip side, well-prepared audits offer far more than just a clean report. They accelerate internal alignment, validate the quality system under real-world scrutiny, and consolidate your team. Your audit success is to a large extent decided by your preparations. As Camilla Messerli puts it, “A prepared audit team doesn’t just survive an audit - they profit from it.”

Getting to that point takes planning and practice. A successful audit begins not on the day itself, but the moment the audit date is set. Make the most of the time prior to the audit by preparing thoroughly and in a structured manner to increase your probability of passing the audit.

In this article, we will explore a structured audit preparation approach and practical insights shared by Camilla Messerli, a seasoned expert in audit situations. Her methods blend structure with adaptability, offering tangible advice that can help any team face audits with professionalism and clarity.

Inspect the Audit Scope

The audit scope may seem like a simple notification, a few lines in an email or a formal letter, but it carries essential insights for planning.

The scope defines the specific areas, processes, products, or systems that will be included in the audit. Importantly, it also (implicitly more often than explicitly) tells you what is excluded from the audit.

From the scope, you can start deducing which team members must be involved and which members need to be present during the audit. Check their schedules before confirming an audit date to make sure the necessary people have time to prepare and are available to defend their areas during the audit. There is nothing worse than having one or several of your key players unable to attend the audit.

Your audit success is to a large extent decided by your preparations. A prepared audit team doesn’t just survive an audit - they profit from it.

Inspect the Audit Plan

Closer to the actual audit date, you will receive a detailed audit plan. While the scope tells you what will be audited, the audit plan shows how and when it will happen. It includes the detailed agenda, time slots, auditor names, and often specific references to standards or processes.

Review the plan carefully to assign the right experts to the right sessions, avoid scheduling conflicts, and ensure no topic is left uncovered.

Check out the Auditor(s)

Along with the plan, you will also be notified who the auditors are. Do not forfeit the chance to take a closer look at them. A quick search on LinkedIn can reveal their areas of expertise, industry background, and even personal preferences.

Knowing whether an auditor has a deep background in software, manufacturing, or regulatory affairs helps you anticipate which topics they may scrutinize more thoroughly based on their expertise. It also allows you to brief your team accordingly and prepare stronger examples in those areas.

Prepare the Team

Even though it is documents, records, and physical artifacts that are most often audited, it is the people presenting them that are the key to your audit success.

Assign clear responsibilities early on to your team and follow up diligently that they conduct their individual preparation on time. Each section of the audit should have a designated process owner or subject matter expert presenting and defending it. These individuals should be involved early in the preparation and have full visibility into the audit agenda, so they can prepare their materials and anticipate potential questions.

Train your team in audit behavior. This is just as important as preparing the content. Audits have a rhythm and tone that can feel unnatural to those unfamiliar with them. Teach your team to stay calm, answer only what is asked, and avoid scrolling through unrelated documents when sharing screens. Internal audits are a powerful tool here. Use them as dry-runs to simulate real audit situations. Give your team feedback not only on the answers they give, but on their body language, clarity, and ability to find information quickly.

There are a few additional roles to consider for an audit. For example, make sure that you assign a person to take notes during the audit, including recommendations that the auditor makes, as these have a tendency not to end up in the final report. This person (or your QMR) should circulate and process these recommendations post-audit. Also make sure that you assign a “back-end” team that is available to prepare documents outside the audit room, if required. And lastly, make sure someone is in charge of the room preparations and food/drinks. There is nothing worse than hungry and unhappy auditors…

Clean up the content

Content preparation is where good audits are won. Start by reviewing the audit agenda and identify which processes, records, or product documentation will be discussed in each session. Ensure that all documents in these areas are up to date, completely filled out, signed, and easy to locate, both digitally and physically.

Have strong, representative examples of your work prepared and ready, and where possible, choose products or files you’ve recently reviewed or cleaned up. If you’ve deviated from a template or skipped a step in a process, be ready with a written justification.

Make sure all HR records are solid, that all trainings have been completed, that work contracts as well as evidence of competence such as education certificates are filed correctly for all employees.

Going through your Supplier Management and Quality Events is also a mandatory task for a thorough preparation.

Prepare the Room(s) for Audit Day Logistics

Since the auditor is supposed to stay in the audit room throughout the majority of the audit, a well-prepared audit room sets the tone for the entire inspection. Start by removing any unrelated documents, cleaning whiteboards, and ensuring that no sensitive or messy materials are visible (auditors notice everything).

Provide water, coffee, and light snacks to keep the atmosphere calm and focused, especially during long sessions. A word about lunch: prepare for a lunch that minimizes waiting time and is resilient to delays in the schedule. Sandwiches nearby are always a popular choice.

Always keep a printed copy of the audit agenda available for easy reference, along with enough seating and workspace for laptops and note-taking. Check power outlets in advance and provide adapters or converters, especially if the auditor is coming from abroad.

And finally, be ready to offer Wi-Fi access. If you have a guest network, have login credentials ready. If you do not have a guest network, take appropriate measures (you do not want to let the auditor into your company network unchecked) or inform the auditor accordingly.

Plan for Post-Audit Activities

It’s a classic mistake: once the audit is over and the tension lifts, teams move on, often too quickly. But the days immediately following the audit are critical. Now is when you should debrief internally, review your notes, and assign actions not only for formal findings, but also for informal observations or weak spots your team noticed during the audit, but the auditor didn’t flag. 

These overlooked details often reveal the most valuable opportunities for improvement. Make sure to allocate time and resources for this phase in your audit plan, or risk losing insights that could strengthen your system before the next inspection.

Conclusion

Audits don’t have to be all about stress. By preparing early, understanding the scope and the auditor’s focus, and training your team to respond with clarity and professionalism, you can significantly increase your chances of not just passing the audit, but benefiting from it. Audits are not obstacles; they're opportunities. With the right mindset and structured preparation, your team can walk into any audit ready, composed, and in control.

About the Expert

After completing her Master's in Infection Biology at the Tropical Health Institute of Basel (University of Basel), Camilla Messerli began working at a Swiss IVD company called BÜHLMANN Laboratories. Her work there led her to appreciate the role of Quality Managers, both in product management and in quality systems.

Following a brief temporary contract with Roche Diagnostics in Rotkreuz, she moved on to Effectum Medical AG, where she has since been working as a Senior Quality Manager and Head of Quality Management and Regulatory Affairs—both for the products for which the company acts as legal manufacturer and as a consultant for other companies.
