
Welcome to the sDHT Adoption Library, featuring NaVi
NaVi is a closed-environment AI research assistant that leverages a carefully curated library of more than 300 vetted documents, including FDA guidance and industry best practices. NaVi helps you search and explore content across the sDHT Adoption Library and Roadmap using natural language questions.
The Library is intended to serve as a living resource. Content is added periodically as new guidance, standards, and peer-reviewed research are released.
Library scope and selection
To ensure high-quality, relevant results, the Library follows a predefined scoping approach:
- Inclusions: Publicly available FDA guidance, non-commercial standards and guidance, and peer-reviewed research (2018–Present) focused on sDHTs used as measurement tools for medical products in U.S.-based clinical trials.
- Exclusions: Materials from single commercial entities, non-U.S. regulatory bodies (except select EMA guidances with direct U.S. cross-relevance), conference proceedings, and studies conducted exclusively outside the United States.
Inclusion in the Library does not imply endorsement, completeness, or regulatory acceptability.
Last updated 2026: Library content is reviewed and updated periodically as new eligible materials become available.
Using Artificial Intelligence & Machine Learning in the Development of Drug & Biological Products: Discussion Paper and Request for Feedback, 2025 (FDA)
Artificial Intelligence (AI) and Machine Learning (ML) are being applied to a broad range of drug development activities, with the potential to accelerate the process and make clinical trials safer and more efficient. The inclusion of AI/ML is most common in the clinical development/research phase of regulatory submissions. Concerns exist that AI/ML algorithms could amplify errors and preexisting biases in underlying data sources, which raises issues related to generalizability and ethical considerations. Other challenges include limited explainability due to model complexity and proprietary reasons, as well as managing risks related to data quality, reliability, and representativeness. The FDA recognizes that a careful, risk-based assessment of the specific context of use (COU) is needed when evaluating AI/ML.
Recommendations
Stakeholders should adhere to practices in three key areas: human-led governance, accountability, and transparency; quality, reliability, and representativeness of data; and model development, performance, monitoring, and validation. A risk management plan should be applied to identify and mitigate risks based on the COU, guiding the level of documentation and transparency. Practices are needed to ensure the integrity of AI/ML and address issues like bias and missing data. For models, developers should use pre-specification steps and clear documentation for development and assessment criteria. Models must be monitored over time for reliability and consistency, and Real-World Data (RWD) performance can provide valuable feedback, including for potential re-training.
Regulatory Considerations
The FDA encourages early engagement through mechanisms like the Critical Path Innovation Meetings (CPIM), ISTAND Pilot Program, and Emerging Technology Program to discuss relevant AI/ML methodologies or technologies. The Verification and Validation (V&V 40) risk-informed credibility assessment framework and the principles for Good Machine Learning Practices (GMLP), while not specific to drug development, are helpful guides for evaluating models. The industry is exploring the use of a Predetermined Change Control Plan (PCCP) mechanism for AI/ML-based devices to proactively specify and manage modifications, enhancing adaptability. In general, a risk-based approach should guide the level of evidence and record keeping needed for the verification and validation of AI/ML models for a specific COU.
Some summaries are generated with the help of a large language model; always consult the linked primary source of any resource you are interested in.
V3+ extends the V3 framework to ensure user-centricity and scalability of sensor-based digital health technologies
While verification, analytical validation, and clinical validation have been well-established, usability validation has not been systematically incorporated into digital health technology evaluation.
Variability in device designs, patient populations, and regulatory environments creates barriers to widespread adoption of sensor-based digital health technologies.
Usability problems, such as poor user interfaces and technical errors, can lead to significant data loss in clinical trials and real-world applications.
While some guidance exists for usability in medical devices, there is no unified global standard for assessing usability in digital health products, leading to inconsistencies in implementation.
Stakeholders, including regulators, industry leaders, and researchers, recognize the need for usability validation to ensure the real-world effectiveness of digital health technologies.
Recommendations
Adopt the V3+ framework as a standardized method to ensure that usability is rigorously tested alongside verification, analytical validation, and clinical validation.
Establish clear protocols for usability testing, including use specification development, risk analysis, iterative formative evaluations, and summative evaluations.
Bring together regulators, technology developers, clinicians, and patients to create guidelines that ensure fit-for-purpose digital health solutions.
Work with regulatory agencies such as FDA, EMA, and MHRA to establish harmonized global standards for usability validation.
Encourage the publication of usability study results, including negative findings, to facilitate transparency and continuous improvement in digital health technologies.
Regulatory Considerations
Agencies like FDA and EMA increasingly require usability data for digital health technologies, but standardized methodologies are still evolving.
Usability validation should align with regulatory requirements for medical devices and digital biomarkers, ensuring clinical relevance and data integrity.
Digital health technologies must adhere to HIPAA, GDPR, and other data protection regulations while ensuring seamless usability.
Poor usability can lead to missing or unreliable data, which affects regulatory submissions and real-world evidence generation.
A consistent approach to usability evaluation is needed to support regulatory decision-making and digital health product approvals globally.
A practical guide for selecting continuous monitoring wearable devices for community-dwelling adults
Existing guidelines lack pragmatic application and a systematic approach to device selection.
Device choice is dependent on measurement objectives, user population, and available resources.
Current frameworks do not systematically consider verification, validation, feasibility, and protocol design.
Rapid obsolescence of digital devices due to technological advancements.
Need to incorporate social/psychological factors into device selection.
Recommendations
Develop a practical guide with a systematic approach for selecting wearable devices.
Use five core criteria: continuous monitoring capability, device suitability and availability, technical performance, feasibility of use, and cost evaluation.
Prioritize feasibility of use to ensure user needs are incorporated into the selection process.
Adapt guide criteria to accommodate novel innovations.
Foster clarity and transparency in decision-making among researchers, HCPs, and device users.
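To make the decision process above concrete, the sketch below applies the guide's five core criteria as a transparent screening step. The criteria names come from the guide, but the pass/fail structure and the candidate devices are hypothetical illustrations, not part of the source.

```python
# Hypothetical screening sketch using the five core criteria named above.
# Candidate devices and their attributes are invented for illustration.

CRITERIA = [
    "continuous_monitoring",        # continuous monitoring capability
    "suitability_and_availability", # device suitability and availability
    "technical_performance",        # verification/validation evidence
    "feasibility_of_use",           # user needs and acceptability
    "within_budget",                # cost evaluation
]

candidates = {
    "Device A": {"continuous_monitoring": True, "suitability_and_availability": True,
                 "technical_performance": True, "feasibility_of_use": True,
                 "within_budget": True},
    "Device B": {"continuous_monitoring": True, "suitability_and_availability": True,
                 "technical_performance": True, "feasibility_of_use": False,
                 "within_budget": True},
}

def screen(device_attrs):
    """Return the criteria a candidate fails, keeping the decision auditable."""
    return [c for c in CRITERIA if not device_attrs.get(c, False)]

for name, attrs in candidates.items():
    failed = screen(attrs)
    print(name, "PASS" if not failed else f"FAIL on {failed}")
```

Recording which criterion a rejected device failed (rather than a bare yes/no) supports the clarity and transparency among researchers, HCPs, and device users that the guide calls for.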
Regulatory Considerations
Follow FDA guidance for digital health technology usage in clinical investigations.
Consider CTTI recommendations for improving clinical trial quality and efficiency.
Use ePRO Consortium's factors for device suitability in regulatory trials.
Apply international guidelines for specific measurements when available.
Artificial Intelligence and Machine Learning in Software as a Medical Device
AI/ML technologies offer dynamic learning capabilities but require careful regulation to ensure safety and effectiveness.
The FDA recognizes that traditional regulatory paradigms may not align with the adaptive nature of AI/ML and is developing frameworks to address this.
Guidance documents, such as the AI/ML SaMD Action Plan and predetermined change control plan (PCCP) recommendations, provide a structured approach for handling software updates.
Collaboration across FDA centers (CDRH, CBER, CDER) facilitates consistent regulatory practices for AI/ML across medical products.
Transparency and real-world data integration are key focuses in regulating AI/ML technologies.
Recommendations
Manufacturers should use FDA's premarket pathways (510(k), De Novo, or PMA) for AI/ML-enabled SaMD.
Apply Good Machine Learning Practices (GMLP) during development to ensure algorithm reliability, transparency, and patient safety.
Include a predetermined change control plan (PCCP) in submissions to allow for iterative updates without requiring resubmissions.
Follow lifecycle management practices to maintain AI/ML system performance after deployment.
Engage with FDA early in development to align on appropriate regulatory strategies for novel AI/ML implementations.
Regulatory Considerations
AI/ML-driven SaMD updates may require premarket review, depending on the significance of changes and associated risks.
The FDA has outlined principles for transparency, including clear labeling and documentation of AI/ML system capabilities and limitations.
Guidance documents like the "Good Machine Learning Practice" and "Marketing Submission Recommendations for PCCP" should be followed for compliance.
Collaboration between FDA centers ensures alignment on the use of AI in combination products and broader healthcare applications.
Lifecycle management strategies must account for real-world data to ensure continuous learning and safe AI/ML system updates.
Assessing the net financial benefits of employing digital endpoints in clinical trials
The use of digital endpoints provides substantial financial value to drug developers, with significant positive changes in expected net present value (eNPV) and high returns on investment (ROI). These benefits are primarily driven by shorter clinical trial durations and smaller participant enrollment sizes. The financial gains are considerably larger in Phase III trials compared to Phase II, which is attributed to the higher probability of a drug successfully reaching the market from the later stage. While the upfront investment for implementation is significant, the financial returns justify the cost across the therapeutic areas analyzed.
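The mechanism described above can be sketched as a simple expected net present value (eNPV) comparison: shorter trial durations bring revenues forward, smaller enrollments cut costs, and both effects are weighted by the probability of reaching market. All figures and the discounting model below are hypothetical illustrations, not the paper's actual model or results.

```python
# Hypothetical sketch of the eNPV mechanism described above.
# All figures are illustrative and not taken from the paper.

def enpv(cost, peak_revenue, years_to_market, prob_success,
         discount_rate=0.10, revenue_years=10):
    """Expected NPV: discounted future revenues weighted by the probability
    of reaching market, minus the trial cost."""
    revenues = sum(
        peak_revenue / (1 + discount_rate) ** (years_to_market + t)
        for t in range(revenue_years)
    )
    return prob_success * revenues - cost

# Conventional Phase III trial vs. one using a digital endpoint:
# smaller enrollment lowers cost; shorter duration means earlier market entry.
baseline = enpv(cost=60e6, peak_revenue=300e6, years_to_market=4, prob_success=0.55)
digital = enpv(cost=50e6, peak_revenue=300e6, years_to_market=3, prob_success=0.55)

delta_enpv = digital - baseline
roi = delta_enpv / 10e6  # vs. a hypothetical $10M upfront digital-measure investment
print(f"Change in eNPV: ${delta_enpv / 1e6:.0f}M; ROI: {roi:.1f}x")
```

Because revenues in later phases are both larger and more probable, the same duration and enrollment savings yield a bigger eNPV change in Phase III than in Phase II, which is the pattern the paper reports.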
Recommendations
Sponsors should develop cross-portfolio strategies for digital measures to optimize and scale the value captured across their development programs. Engaging in precompetitive collaborations is encouraged to share the risks and costs of development, harmonize new measures across the industry, and increase overall returns. Organizations should continue to invest in these capabilities, as their widespread adoption can transform the drug development process and, ultimately, deliver safe and effective treatments to patients sooner.
Regulatory Considerations
While a deep analysis of the regulatory environment is outside the paper's scope, it acknowledges that the evolving regulatory landscape is critical for fostering innovation in clinical development. To support broader adoption and understanding, the authors suggest that clinical trial registries should expand their data collection to include specific details on the use and outcomes of digital endpoint strategies. This would improve transparency and help build the evidence base for the impact of these novel measures on clinical research.
At-a-Glance: Incorporating Human-Centered Design Into sDHT Development
The goal of sDHT design is to create tools that are functional, intuitive, accessible, and enjoyable to use, moving beyond merely minimizing use-errors. Human-centered design (HCD) is the preferred term over user-centered design, emphasizing the impact on many user groups beyond just the end-users. "Users" encompass end-users (patients/participants), carepartners, clinicians, investigators, and administrators.
Recommendations
Developers of sDHTs should adhere to the following HCD principles:
Empathetic: Take time to deeply understand users' needs, behaviors, and emotions, capturing this in the use specification.
Holistic: Consider the entire end-to-end user journey, including hardware, software, accessories, packaging, instructions for use, and training.
Iterative: Employ an iterative approach to designing, prototyping, testing, and refining, using formative evaluations to identify use-errors and gather usability data, capturing this in the use-related risk analysis.
User-centric: Improve usability by capturing user feedback in real-world settings, gradually recruiting larger, more diverse samples that represent the intended use population.
Inclusive: Collaborate with individuals representing all user groups by hiring them as consultants or creating user advisory panels to influence design decisions (co-design).
Multidisciplinary: Ensure the development team includes colleagues from various disciplines to bring diverse perspectives and innovative solutions.
Regulatory Considerations
The document ties the HCD process to risk management and eventual validation by recommending that findings from formative evaluations (used to identify use-errors) be captured in a use-related risk analysis. The approach aligns with the principles of the overarching V3+ framework.
At-a-Glance: Selecting Metrics for Evaluating sDHT Usability
Usability is a multi-domain concept that requires a combination of methods for evaluation. Evaluations fall into two types: formative (for design modification of prototypes) and summative (for demonstrating usability of the final product to a representative user sample). The user experience metrics fall into several domains, including: Satisfaction, Usefulness, Ease of use, Learnability, Efficiency, Memorability, Understandability, Actionability, Readability, and Use-errors. Metrics related to Satisfaction and Usefulness are always subjectively reported by users.
Recommendations
Developers should select metrics based on the specific usability-related domain being evaluated.
Subjective Data (e.g., Satisfaction, Usefulness): Capture through qualitative surveys, quantitative surveys (scales), interviews, focus groups, and think-aloud evaluations.
Objective Data (e.g., Ease of use, Use-errors): Capture through direct or indirect observation (e.g., counting steps/attempts, timing task completion), or by using data generated by the sDHT (e.g., error reports, timestamps, page load times).
Time-based Metrics: Evaluate Learnability (ease of first use), Efficiency (ease with experience), and Memorability (ease after non-use) by measuring ease of use at different points in time.
Information Presentation: If the sDHT presents clinical data or written information (instructions, warnings), evaluate Understandability, Actionability, and Readability.
Use-errors: Objectively capture the number, type, and recoverability of use-errors (actions, or lack thereof, that may result in harm) via observation and sDHT data, noting that "use-error" is preferred to "user-error".
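The time-based metrics above (Learnability, Efficiency, Memorability) amount to comparing task-completion times captured at different points. The sketch below is a hypothetical illustration of that comparison; the sample times and the memorability ratio are assumptions for demonstration, not definitions from the guide.

```python
# Illustrative sketch: deriving time-based usability metrics from
# task-completion times (in seconds) logged at three points in time.
# Data and the memorability ratio are hypothetical.
from statistics import mean

first_use = [92, 105, 88, 120]    # each participant's first attempt
experienced = [41, 50, 38, 55]    # after repeated use
after_break = [48, 60, 45, 63]    # after a period of non-use

learnability = mean(first_use)    # ease of first use
efficiency = mean(experienced)    # ease with experience
# Retention after non-use: 1.0 would mean no loss of proficiency.
memorability = mean(experienced) / mean(after_break)

print(f"Learnability (mean first-use time): {learnability:.0f}s")
print(f"Efficiency (mean experienced time): {efficiency:.0f}s")
print(f"Memorability ratio: {memorability:.2f}")
```

In a real summative evaluation, these values would be captured from a representative user sample under intended use conditions, alongside the subjective and use-error metrics listed above.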
Regulatory Considerations
While this guide does not reference regulatory bodies like the FDA, it is part of the V3+ framework and recommends that researchers prioritize essential documents like the use specification and use-related risk analysis before designing a usability study. Summative evaluations demonstrating usability against a representative user sample under intended use conditions are the standard for demonstrating product fitness.
Best Practices and Recommendations for Sites Utilizing Connected Devices
Sites must establish effective data privacy and security plans, especially considering regional and global regulations like GDPR.
Risk mitigation is critical, including plans to address unanticipated issues and potential patient disengagement due to technology challenges.
Budgeting and contracting often involve additional considerations, such as storage, training, and technical support requirements for connected devices.
Sites require adequate training to ensure staff and patients are prepared to use connected devices efficiently.
Companion applications or services often play an essential role in device functionality and data transmission.
Recommendations
Develop a clear plan for data pathways, including storage, security, and regulatory compliance.
Establish detailed risk mitigation and management strategies to handle unexpected challenges.
Ensure comprehensive training programs for site staff and patients to enhance device usability.
Incorporate device storage and resource allocation into budgeting and contracting processes.
Facilitate effective communication with sponsors and vendors to resolve operational and technical issues promptly.
Regulatory Considerations
Ensure connected devices comply with 21 CFR Part 11 and other relevant data collection and transmission regulations.
Understand and adhere to local and regional data privacy laws, such as GDPR, when managing patient data.
Verify that appropriate licenses and regulatory approvals are in place for device data transmission and storage.
Assess and address shipping and handling regulations for devices, ensuring safe and compliant transportation.
Building Fit-for-Purpose Sensor-based Digital Health Technologies: A Crash Course
Usability gaps in sDHTs remain a barrier to adoption, with many technologies failing to prioritize ease of use, accessibility, and diverse user needs.
Human-centered design is critical for ensuring that digital health solutions are intuitive, functional, and scalable across different healthcare environments.
Standardized usability metrics for evaluating digital health technologies are lacking, leading to inconsistent reporting and validation of usability outcomes.
Use-related risk analysis is essential to identifying and mitigating risks associated with user errors, ensuring the safety and effectiveness of sDHTs.
The V3+ framework provides a structured approach to integrating usability validation into digital health technology development, aligning with global regulatory expectations.
Recommendations
Developers should incorporate human-centered design principles from the outset, ensuring that usability, accessibility, and user needs are central to sDHT development.
Usability validation should be standardized, with clear methodologies for measuring usability, including satisfaction, ease of use, efficiency, and error mitigation.
Regulatory and clinical stakeholders should collaborate on defining best practices for usability evaluation, ensuring that digital endpoints are both meaningful and scalable.
Risk analysis should be iterative, with developers continuously refining their technologies based on real-world user feedback and testing.
The usability validation component of V3+ should be widely adopted to ensure that digital clinical measures meet patient-centered, regulatory, and technical expectations.
Regulatory Considerations
Regulators are emphasizing the need for usability validation to ensure that digital endpoints are both clinically relevant and patient-friendly.
sDHTs must comply with human factors engineering guidelines, aligning with global regulatory frameworks such as ISO 9241-210 and FDA usability requirements.
Data security, privacy, and interoperability must be ensured, particularly as sDHTs become integrated into remote monitoring and decentralized clinical trials.
Real-world evidence (RWE) should support usability validation, helping to bridge the gap between regulatory approval and real-world adoption.
Regulatory bodies should work toward standardizing usability testing methodologies, ensuring consistency across clinical research, digital endpoints, and medical device evaluations.
Checklist: Essential Questions for DHT Vendor Selection (Core measures of sleep)
Different Digital Health Technologies (DHTs) estimate sleep staging using data from various sensor-based sources (e.g., EEG, actigraphy, ballistocardiography), each with different properties impacting the estimation. Sleep staging algorithms are often proprietary. DHTs interpret sleep staging at different time intervals, or epochs (e.g., polysomnography uses 30-second epochs). DHT vendors transmit data at different levels, ranging from epoch-level data to pre-calculated summary data (e.g., "total sleep time").
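To illustrate the distinction above between epoch-level and pre-calculated summary data, the sketch below aggregates hypothetical 30-second epoch annotations into a "total sleep time" summary. The stage labels, data layout, and sample night are assumptions for illustration; actual vendor formats vary.

```python
# Hypothetical sketch: aggregating 30-second epoch-level sleep-stage
# annotations (as a vendor might transmit them) into a summary measure.
# Stage labels and data are illustrative only.

EPOCH_SECONDS = 30  # the polysomnography epoch convention noted above

# One stage annotation per epoch, for a short slice of a night.
epochs = ["wake", "wake", "N1", "N2", "N2", "N3", "N3", "REM", "REM", "wake", "N2"]

SLEEP_STAGES = {"N1", "N2", "N3", "REM"}

# Total sleep time: count epochs scored as any sleep stage, convert to minutes.
total_sleep_time_min = (
    sum(1 for e in epochs if e in SLEEP_STAGES) * EPOCH_SECONDS / 60
)
print(f"Total sleep time: {total_sleep_time_min:.1f} min from {len(epochs)} epochs")
```

A vendor that supplies only the final summary value hides every step of this derivation (epoch length, stage labels, onset/offset rules), which is why the checklist prefers epoch-level data and asks for a detailed description of the estimation process when summaries are all that is offered.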
Recommendations
Method and Signals: Ask the vendor about their method of sleep monitoring and which signals are being recorded and used, and understand the strengths and limitations of the technology.
Granularity and Epochs: Inquire about the granularity of sleep data estimated (coarse to fine grain) and the epoch length used for sleep annotations, as this informs interpretation and comparability to other research.
Thresholds and Rules: Ask what rules and thresholds are set for confirming events like sleep onset and offset to ensure certainty in the data and inform future interpretation of results.
Data Level: To align with the Core Digital Measures of Sleep, epoch-level data is preferred for further analysis and comparison between measurement systems. If only summary data is offered, ask for a detailed description of the estimation process.
Algorithms and Evidence: Ask for evidence to support the validity and reliability of the estimated sleep stages, which may include peer-reviewed manuscripts, technical documentation, and conference abstracts.
Regulatory Considerations
While not a regulatory document, the recommendations emphasize the need for vendors to provide evidence for the validity and reliability of their proprietary sleep staging algorithms. This evidence, which may be found in peer-reviewed literature or technical documentation, is crucial for establishing confidence in results generated by the technology and can be included in regulatory documents, for example.
Checklist: Essential Questions for DHT Vendor Selection (V3+)
For an sDHT to be considered fit-for-purpose, a researcher or healthcare provider must understand the alignment between the sDHT's intended use (what it does, who uses it, where/when/how) and their own context of use. Key information for this assessment comes from the developer's Use Specification (detailing hardware, software, accessories, training) and Use-Related Risk Analysis (detailing warnings, harms from use-errors, and risk avoidance). Usability validation evidence should cover study objectives, protocols, participant characteristics, metrics, and collection methods.
Recommendations
Researchers/providers should use the checklist to:
The Basics: Compare the sDHT's intended use to their context of use; if there is substantial overlap, existing evidence may be sufficient.
Use Specification/Risk Analysis: Gather detailed descriptions of the sDHT's hardware, software, accessories, written materials, training, cautions, warnings, and potential harms from use-errors to update their own Use Specification and Use-Related Risk Analysis.
Existing Evidence: Access existing usability validation study results (objectives, methods, participant characteristics, metrics, etc.) to determine its applicability and generalizability to their context of use.
Collaboration: Consider establishing a collaborative relationship with the developer to provide feedback for next-generation sDHTs, ensure version control, and potentially collaborate on future usability validation studies.
Regulatory Considerations
The document notes that if the sDHT is a regulated medical device, the intended use statement should already capture the answers to the basic questions. The entire checklist is framed around the V3+ framework, which is designed to ensure the rigor necessary for a product to be considered fit-for-purpose by all stakeholders.
Conducting Clinical Trials With Decentralized Elements
Coordination challenges with multiple locations in DCTs.
Variability in data collection across decentralized locations and remote tools.
Challenges in implementing certain statistical approaches in DCTs.
Need for DHTs to be accessible and suitable for all trial participants.
Ensuring compliance with local laws and regulations.
Recommendations
Develop clear protocols for integrating decentralized elements into clinical trials, specifying remote and in-person activities.
Use digital health technologies (DHTs) and electronic systems to streamline data acquisition, informed consent, and investigational product tracking.
Provide training for all stakeholders, including trial personnel, local health care providers, and participants, on decentralized processes.
Implement robust safety monitoring plans to address adverse events in decentralized settings.
Ensure compliance with local and international laws governing telehealth, data privacy, and investigational product use.
Regulatory Considerations
Maintain compliance with FDA requirements under 21 CFR parts 312 and 812 for drug and device trials, respectively.
Document all trial activities and data flows in trial protocols and data management plans, ensuring traceability and integrity.
Ensure informed consent processes meet FDA standards and provide clear communication to participants about decentralized trial activities and data handling.
Address investigational product accountability by documenting IP distribution, storage, and return or disposal.
Design electronic systems for decentralized trials to comply with 21 CFR part 11 requirements for data reliability, security, and confidentiality.