
2.4 Choosing the sDHT

Identify a fit-for-purpose sDHT to operationalize your endpoint strategy

This step focuses on aligning adopter needs with a tool that’s reliable, user-friendly, and capable of delivering the precise data required. This ensures the chosen sDHT aligns with stakeholder needs, whether you’re refining a technology as a developer or integrating it into a trial as an adopter.

Vendor selection is often framed from the adopter perspective – but it’s really a joint effort, as we discuss below.

Leveraging the developer/adopter partnership: a co-development model

Selecting the right sDHT is rarely a matter of finding a “finished” product. More often, technologies are largely ready but require targeted optimization to be fit-for-purpose for a specific endpoint, patient population, and context of use. Both adopters and developers benefit from treating vendor qualification as a co-development process rather than a one-sided audit.

In practice, this means aligning on what is already proven – for example, 80% of the technical requirements – and then identifying the remaining ~20% that must be refined or validated. This approach preserves speed and agility by focusing resources on what matters most for trial success.

Co-development also creates an opportunity to share the work of building the final evidence package. Developers establish that the sDHT can reliably capture the signal of interest, while adopters can collaborate with developers on trial-specific clinical validation within their own context of use. This ensures the technology is not only technically sound but also proven in the intended study setting. For developers, it reduces the burden of carrying all evidence generation alone; for adopters, it avoids building a tool from scratch while still producing evidence directly relevant to their regulatory submission. In this way, co-development balances costs and risks while creating shared ownership of the final evidence package.

Structured due diligence still matters. Public resources such as the Digital Health Vendor Assessment toolkit (see the Industry spotlight below) outline key categories (e.g., product functionality, data quality, patient safety, technical support, cybersecurity, regulatory experience, and cost) to help adopters satisfy internal and external review requirements, while developers can use them to anticipate how their technology will be evaluated. Used thoughtfully – alongside collaborative discussions about fit-for-purpose needs – these tools complement, rather than replace, the shared work of co-development.

And of course, practical considerations remain essential: both sides should confirm the vendor’s track record (e.g., prior use in trials, regulatory interactions), customer support infrastructure (including field troubleshooting and data management support), and financial stability (sustained partnership across the trial). Logistics such as supply chain reliability, training materials for sites, and warranty or maintenance terms should also be factored in (see Section 5.2: Pre-trial).

Vendor selection

Adopters: Vendor selection considerations

As an adopter, focus on how the sDHT integrates into your trial ecosystem without adding undue complexity. Factor in scalability for your study size, compliance with privacy standards like HIPAA, and support for diverse populations to promote inclusivity. Budget for ongoing services, like troubleshooting or updates, and review case studies from similar trials to gauge reliability. By prioritizing these, you ensure the sDHT enhances your operations rather than complicating them.

  • Start with the clinical need.
    What you need to measure – your clinical concept and context of use – drives the choice of technology, not the other way around. Keeping the endpoint and patient priorities front and center ensures that adoption is motivated by scientific and patient value rather than by the availability of a particular tool.
  • Review existing V3+ evidence (see Section 4: Your validation strategy).
    Examine what verification and validation evidence already exists, but avoid treating this as a siloed box-checking exercise. A perfunctory checklist rarely captures the performance characteristics that matter most for the endpoint. Instead, engage developers directly to understand how verification was conducted, under what conditions, and how results map to the clinical context of use. 

    Evaluate the usability of the technology for the participant group. An sDHT that is too burdensome or complex can undermine data quality due to non-adherence. Key questions include: Is the sDHT comfortable and easy to wear or use as instructed? Are there potential barriers like short battery life or the need for continuous internet connectivity in the participant’s environment? Applying human factors engineering principles can greatly improve usability and should be considered by developers and adopters. See Section 4.2: Usability validation for a usability deep dive.

    If a digital measure is novel or not yet validated for your use case, plan early validation studies to establish that the sDHT’s output correlates with the clinical outcome of interest (see Section 4.4: Clinical validation).
  • Plan for data integrity and privacy.
    Data must be captured, stored, and transferred securely, with audit trails and privacy protections appropriate for a clinical setting. Adopters should verify compatibility with their own data systems and clarify data ownership up front. While regulations such as FDA 21 CFR Part 11 or EU’s General Data Protection Regulation (GDPR) may not apply to every study, core Good Clinical Practices are important, and reputable vendors should have transparent, credible policies in place (see Section 5.2: Pre-trial).
Developers: Contributions to vendor selection and early validation

When adopters are weighing whether to move forward with a particular sDHT, developers can make this decision easier by clearly communicating where the technology stands and how it will perform in a trial environment. The emphasis is on enabling informed decision-making, setting shared expectations, and reducing downstream surprises.

  • Readiness snapshot
    Prepare a concise “dossier” that summarizes the technology’s readiness against the V3+ framework (see Section 4: Your validation strategy). This snapshot does not need to be exhaustive but should highlight what has already been demonstrated and what is planned next. Presenting this summary up front allows adopters to quickly assess whether the technology is suitable for their intended context of use and what evidence gaps might need to be addressed before trial launch.
  • Operational readiness
    Providing deployment guides, outlining expected site support, and identifying potential sources of missing data (e.g., battery recharge schedules, connectivity requirements) allows adopters to build realistic protocols and site workflows. Transparency about telemetry and adherence monitoring tools also helps adopters plan for data quality management.
  • Change management
    Describe your approach to managing updates to hardware, firmware, apps, or algorithms during a trial. Adopters need to know how updates will be communicated, what versioning information will be available, and how changes might affect data interpretation. If the developer is pursuing medical device status, sharing whether a Predetermined Change Control Plan (PCCP) is envisioned can help adopters anticipate regulatory interactions.
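
The readiness snapshot described above could be captured as a small structured record that both parties review together. Here is a minimal sketch in Python; the domain names, status labels, and fields are illustrative assumptions, not a standard V3+ schema:

```python
# Illustrative sketch of a V3+ readiness snapshot for an sDHT.
# Domain names, status labels, and fields are assumptions for
# illustration only.
from dataclasses import dataclass, field

@dataclass
class DomainStatus:
    status: str          # e.g., "demonstrated", "in_progress", "planned"
    evidence: str = ""   # e.g., a report or study reference
    context: str = ""    # conditions under which the evidence was generated

@dataclass
class ReadinessSnapshot:
    technology: str
    intended_use: str
    domains: dict = field(default_factory=dict)  # domain name -> DomainStatus

    def gaps(self):
        """Domains not yet demonstrated -- the ~20% to co-develop."""
        return [name for name, d in self.domains.items()
                if d.status != "demonstrated"]

snapshot = ReadinessSnapshot(
    technology="Example wrist-worn sensor",
    intended_use="Weekly physical activity endpoint, Phase 2",
    domains={
        "verification": DomainStatus("demonstrated", evidence="bench report"),
        "analytical_validation": DomainStatus("demonstrated",
                                              context="healthy adults"),
        "clinical_validation": DomainStatus("in_progress"),
        "usability_validation": DomainStatus("planned"),
    },
)

# The gap list frames the joint discussion about remaining evidence.
print(snapshot.gaps())
```

A record like this keeps the "what is already proven vs. what remains" conversation concrete: the adopter sees at a glance which evidence applies to their context of use, and the developer sees where targeted optimization is still needed.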

Making the selection decision together

To stress-test whether an sDHT is the right fit for a trial, convene a joint working session where adopters and developers review evidence and assumptions side by side. Use the following steps to anchor the discussion:

  1. Confirm that the candidate endpoint remains clearly linked to the meaningful aspects of health and concepts of interest that matter most to patients and clinicians by jointly reviewing how the proposed digital measure reflects the patient-informed concepts identified earlier.

    In practice, this may include walking through the endpoint definition together, discussing what change in the sDHT output would be interpreted as meaningful in the trial, and ensuring that both adopters and developers share a common understanding of how the measure connects to patient experience and clinical decision-making, as discussed in Section 2.1: Patient-informed endpoints.
  2. Walk through the technical pathway from raw signals to endpoints by convening a joint adopter–developer review of the end-to-end data pipeline. Teams should examine how sample-level data are processed into intermediate features and ultimately summarized as endpoints, identify assumptions that affect interpretability, and confirm that the approach is feasible for the intended context of use, including expected adherence patterns, data loss, and site-level implementation constraints (Section 2.2: Context of use).
  3. Examine the validation snapshot (Section 4: Your validation strategy). Developers should present where the sDHT stands in the V3+ framework: what has been verified, what analytical or clinical validation is in progress, and where usability testing is planned.
  4. Test operational fit. Discuss how the technology will function in practice, including usability evidence, site support requirements, and change-management plans for hardware, firmware, or algorithms.
  5. Check organizational alignment. Overlay these findings with the business case and stakeholder priorities defined in Section 1.2: Your business case and Section 2.3: Stakeholder alignment. Do feasibility inputs, timelines, and resource needs align with what the organization is prepared to support?

By working through these steps together, adopters and developers can identify gaps early, reduce downstream surprises, and move forward with a shared understanding of readiness and risks. The outcome is a rational, co-developed plan for bringing the right sDHT into the trial, in a way that is scientifically credible, technically feasible, and organizationally supported.


“Fit-for-Purpose: A conclusion that the level of validation associated with a medical product development tool is sufficient to support its context of use.”

– Appendix 2 (Glossary), p. 37

Why maturity matters: Tailoring regulatory engagement for adopters vs. developers

A maturity or gap assessment is a critical step for understanding whether a digital endpoint or sDHT is ready for regulatory engagement – and what kind of engagement is most appropriate. In this roadmap, maturity assessment refers to a structured evaluation of a digital endpoint or sDHT against the V3+ framework (Section 4: Your validation strategy), combined with practical feasibility considerations relevant to the intended trial context.

How to conduct a maturity assessment

Stakeholder considerations

Adopters: Is it ready to use?

Adopters rely on maturity assessments to evaluate whether a candidate digital endpoint or technology is ready for their intended use. Gaps in validation (e.g., insufficient clinical relevance or missing usability data) can increase regulatory risk and delay trial execution.

  • If the tool is mature, adopters may proceed with confidence – or request informal confirmation from FDA that existing evidence is sufficient for their trial context.
  • If gaps are identified, early engagement becomes a chance to ask whether the endpoint can still be used as exploratory or what supplemental evidence is required for it to support a primary outcome.

Use your maturity assessment to frame targeted questions during early regulatory outreach:

“Given the existing analytical and usability data, would this endpoint be acceptable as an exploratory endpoint in our Phase 2 trial?”

Developers: What needs to be built?

Developers use maturity assessments to map out which evidence domains still require investment. This clarity helps them design a validation strategy that aligns with regulatory expectations and minimizes future rework.

If multiple gaps exist, engaging early with FDA (see Section 3: Engage regulators) helps validate the validation plan itself – ensuring the studies they intend to run will be considered appropriate and sufficient.

Use early dialogue to reduce uncertainty before committing resources:

“We plan to use X study design to generate clinical validation evidence. Will that meet expectations for this context of use?”
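
The gap mapping described in the adopter and developer sections above can be sketched as a simple decision aid. The domain names, the sufficiency flags, and the decision rule below are illustrative assumptions, not regulatory criteria:

```python
# Illustrative sketch of a maturity/gap assessment against a V3+-style
# checklist. Domains and the decision rule are assumptions for
# illustration, not regulatory criteria.

V3_DOMAINS = ["verification", "analytical_validation",
              "clinical_validation", "usability_validation"]

def assess_maturity(evidence: dict) -> dict:
    """`evidence` maps each domain to True (sufficient for the intended
    context of use) or False (a gap). Returns the gaps plus a suggested
    framing for early regulatory outreach."""
    gaps = [d for d in V3_DOMAINS if not evidence.get(d, False)]
    if not gaps:
        next_step = "Confirm with regulators that existing evidence suffices."
    elif "clinical_validation" in gaps:
        next_step = ("Consider exploratory use; discuss what supplemental "
                     "evidence a primary endpoint would require.")
    else:
        next_step = "Plan targeted studies to close the remaining gaps."
    return {"gaps": gaps, "next_step": next_step}

result = assess_maturity({
    "verification": True,
    "analytical_validation": True,
    "clinical_validation": False,
    "usability_validation": True,
})
print(result["gaps"], "->", result["next_step"])
```

The point is not the code but the discipline: making each domain's status explicit turns "is it ready?" into a short list of named gaps, which in turn yields the kind of targeted regulatory questions quoted above.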

Choosing an sDHT is a strategic decision that translates endpoint intent into operational reality. A fit-for-purpose selection depends on early collaboration between adopters and developers, clear understanding of the clinical concept and context of use, and transparent assessment of technical, clinical, and operational readiness. Treating selection as a co-development process helps teams focus evidence generation where it matters most while reducing downstream risk and rework.

  • Anchor sDHT selection in the clinical concept, patient priorities, and context of use, rather than starting from available technologies.
  • Approach vendor selection as a collaborative, co-development effort that clarifies what evidence already exists and what gaps must be addressed for the intended trial.
  • Review existing V3+ evidence with attention to how and where it was generated, and assess its relevance to the planned endpoint and study setting.
  • Evaluate usability, data integrity, privacy protections, and operational fit to ensure the technology can be implemented without undermining data quality or participant adherence.
  • Convene joint adopter and developer discussions to review readiness, traceability, validation plans, and organizational constraints before final selection decisions are made.

Library resources to guide you

The sDHT roadmap library gathers 200+ external resources to support the adoption of sensor-based digital health technologies. To help you apply the concepts in this section, we’ve curated specific spotlights that provide direct access to critical guidance and real-world examples, helping you move from strategy to implementation.

Regulatory spotlight

Features essential guidance, publications, and communications from regulatory bodies relevant to this section. Use these resources to inform your regulatory strategy and ensure compliance.

Industry spotlight

Gathers real-world examples, case studies, best practices, and lessons learned from peers and leaders in the field relevant to this section. Use these insights to accelerate your work and avoid common pitfalls.
