
Criteria

Helping You Choose with Confidence

The APA Labs Digital Badge program reviews digital mental and behavioral health tools across six critical domains that reflect key areas of importance in mental health care and digital health delivery. This comprehensive evaluation approach, which includes rigorous risk identification and assessment of AI components, supports informed decision-making for both clinicians and users seeking mental and behavioral health support.

6 Critical Domains

Scene Setters
Provides context: product description, target users, claims made, and developer transparency. We assess whether users are clearly informed about what the product is, and isn’t.
Scientific Principles
We evaluate whether clinical or behavioral claims are supported by studies, analytics, or real-world KPIs.
Regulation & Safety
Covers whether a product is classified correctly, complies with local laws, and includes emergency contact info or risk disclaimers.
Data Protection & Privacy
We examine data flows, consent, legal basis, location handling, and whether users can control their data.
Technical Security & Stability
Looks at encryption standards, access control, testing practices, patching, and reliability.
Usability & Accessibility
Accessibility isn’t optional. We assess conformance to WCAG standards, readability, and user-centered design.

AI Evaluation Framework
Our approach to evaluating AI products and components is product-specific: the risks present in any given product depend substantially on the type of AI used and what it is asked to do, and the evaluation reflects this.

Domain 1: Scene Setters

The Scene Setters domain offers a deeper, more tailored front-end assessment for psychological and behavioral products. It separates out key scoping elements — including product purpose, mental health focus, target population, AI use, and funding model — to ensure clarity before detailed scoring begins.

Scene Setters includes special emphasis on:

  • Target audiences of products

  • Inclusivity and underserved populations

  • Clinical framing of product functions in the context of behavioral health

  • Expanded AI functionality criteria and ethical AI use

This domain sets a strong foundation for downstream assessment, making it easier to determine scope, risk, and relevance to APA’s standards.

Criteria Categories
  • Mental Health Relevance & Intended Use

  • Target Population & Priority Groups

  • Data Collection & Sharing Overview

  • Use of Algorithms & AI

  • Device Claims or Classifications

  • Communication Modes & User Interaction

  • Funding Model & Claimed Benefits

Domain 2: Scientific Principles

The Scientific Principles domain captures a nuanced and clinically relevant understanding of evidence quality, setting, and impact — particularly for psychological health technologies.

It includes discrete questions to differentiate:

  • RCTs vs. observational studies

  • Clinical vs. real-world settings

  • Number of study arms (e.g., single, dual, multi-arm)

  • Whether AI model outputs are tested and validated, with questions focused on transparency in methods and accuracy

Criteria Categories
  • Evidence Type, Quantity & Study Design

  • Claimed Benefits & Alignment with Supporting Evidence

  • Source Credibility & Publication Format

  • Use of Behavior Change Models or Theoretical Frameworks

  • AI Model Testing & Output Accuracy

Domain 3: Regulation & Safety

The Regulation & Safety domain focuses on transparently capturing who is involved, what their qualifications are, and what role they play in the product’s development, delivery, or oversight.

This includes capturing:

  • If a qualified health professional is involved, and if so, their discipline, role, and level of involvement

  • Whether professionals are actively contributing (e.g., through testing, validation, content design, or live services) or listed in name only

  • If a robust safety management system exists — covering not just named oversight but also formal risk identification, mitigation, and reporting pathways

The domain also includes a set of criteria focused on clinical risk management documentation.

Criteria Categories
  • Health Professional Involvement & Role Transparency

  • Nature & Depth of Professional Contribution

  • U.S. FDA Regulatory Status & Device Classification

  • Safety in Peer Support & Communication Features

  • Risk Disclosure, Intended Use & Claim Accuracy

  • Clinical Safety Oversight & Governance Pathways

  • AI Governance & Model Assurance

Domain 4: Data & Privacy

The Data & Privacy domain evaluates whether products follow responsible and transparent data practices, with an emphasis on privacy rights, clarity, and protections — especially for users in psychological or vulnerable contexts.

It includes a refined approach to assessing HIPAA applicability, determining whether products fall under U.S. or equivalent legal obligations, and checking whether this is clearly communicated to users.

The domain also focuses on making privacy documentation more inclusive, addressing whether policies are:

  • Written in plain language

  • Accessible to a general reading level

  • Publicly available across key platforms

It also adds emphasis on:

  • Children’s data protection and parental consent

  • Secure user authentication and product-level access controls

  • Clear, visible pathways for raising privacy concerns or complaints

These criteria ensure that data practices are not only legally compliant but also understandable, fair, and protective — reinforcing user trust and psychological safety.
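The plain-language expectations above can be approximated with a standard readability metric. As an illustrative sketch only (the badge criteria do not prescribe a specific formula), the Flesch Reading Ease score is one widely used proxy for whether a policy is written at a general reading level; the word, sentence, and syllable counts below are hypothetical:

```python
# Hedged sketch: Flesch Reading Ease as one common proxy for "general
# reading level". Higher scores mean easier text; 60-70 roughly
# corresponds to plain English.

def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Standard Flesch Reading Ease formula."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# Hypothetical privacy policy: 1000 words, 50 sentences, 1500 syllables.
score = flesch_reading_ease(1000, 50, 1500)
print(round(score, 1))  # 59.6 -- just below the "plain English" band
```

Longer sentences and more syllables per word both lower the score, which is why dense legal phrasing in privacy policies tends to fail a general-reading-level check.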

Criteria Categories
  • Refined HIPAA Applicability & Covered Entity Status

  • Transparency of Privacy Documentation

  • Clarity on Data Use, Collection & Sharing

  • User Rights

  • Children’s Data Protection & Consent Mechanisms

  • User-Level Authentication & Access Controls

  • Raising Concerns & Contact Pathways

Domain 5: Technical Security & Stability

The Technical Security & Stability domain evaluates how well a product protects user data, maintains uptime, and responds to faults — ensuring safe and reliable performance in real-world settings.

It covers:

  • Data handling and connectivity, including internet use, rollback capacity, and device-level storage

  • Operational stability, such as version control, issue resolution, monitoring, and long-term maintenance

  • Disaster recovery and continuity plans, including backup systems and decommissioning protocols

  • Testing and validation, including penetration, vulnerability, and load testing

  • Platform architecture, deployment type, and adherence to secure development practices (e.g., OWASP)

  • Risk management and compliance, aligned with frameworks like ISO 27001, SOC 2, and NIST

This domain helps assess whether technical safeguards are not only in place but actively maintained and evidence-based, reducing risk to both users and clinical workflows.

Criteria Categories
  • Connectivity, Data Access & Device Storage

  • Version Control, Rollback & Maintenance Plans

  • Operational Monitoring & Issue Resolution

  • Disaster Recovery & Business Continuity

  • Security Testing & Vulnerability Detection

  • Architecture, Access Control & Deployment

  • Security Risk Management & Compliance Frameworks

Domain 6: Usability & Accessibility

The Usability & Accessibility domain assesses whether digital health products are inclusive, understandable, and practically usable across a wide range of user groups — with an emphasis on equity, flexibility, and responsiveness.

It includes criteria to evaluate:

  • Whether the product has been co-designed or tested with diverse populations, including people with lived experience, different cultural backgrounds, disabilities, and low digital literacy

  • The quality, clarity, and transparency of accessibility statements, including whether assistive technologies are supported and compliance with standards like WCAG and ADA best practice guidelines

  • Reading level and explanation of technical terms, to ensure clinical or digital content is accessible to those with cognitive impairments or limited literacy

  • Quality and responsiveness of user support — including the presence of clear contact methods and a commitment to resolving reported bugs, support requests, or clinical concerns
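Parts of WCAG conformance can be checked mechanically. As an illustrative sketch (not a description of APA Labs' own assessment tooling), the WCAG 2.x contrast-ratio calculation is one such automatable check:

```python
# Hedged sketch: the WCAG 2.x contrast-ratio arithmetic, one automatable
# piece of an accessibility-standards review.

def relative_luminance(rgb):
    """Relative luminance per WCAG 2.x; rgb as 0-255 integers."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio (L1 + 0.05) / (L2 + 0.05), lighter color over darker."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background gives the maximum contrast, 21:1.
ratio = contrast_ratio((0, 0, 0), (255, 255, 255))
print(round(ratio, 1))  # 21.0
print(ratio >= 4.5)     # True: meets the WCAG AA minimum for normal text
```

WCAG AA requires a ratio of at least 4.5:1 for normal text (3:1 for large text), so a check like this can flag low-contrast color pairs before manual review.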

Criteria Categories
  • Inclusive Design, Co-Design & Demographic Representation

  • Accessibility Statement & Design Standards Compliance

  • Support Tools & Comprehension Aids

  • Font, Visual & Presentation Customization Options

  • Notification Preferences & Privacy Controls

  • AI Accessibility & Feedback Integration

  • Support Access & Developer Responsiveness

AI Evaluation Framework

Clinical Context: Understanding the scope of influence

AI Function: What the AI component specifically does

AI Model Evaluation: Understanding the technology to identify inherent risks

Safety Design: Structural safeguards embedded in the system

Governance & Oversight: Operational controls and monitoring arrangements

Evidence: Testing and validation data along two distinct streams (model performance and control effectiveness)

Expert Contributors

The APA Labs Digital Badge criteria were developed in collaboration with leading subject matter experts across psychology, digital health, clinical research, neuroscience, and mental health technology.

This multidisciplinary group brings together deep expertise from clinical practice, research, ethics, technology, and innovation to help ensure the criteria reflect the most important aspects of evaluating digital mental and behavioral health technologies.

For a full list of contributors, please see below.

  • Victoria Bangieva, PhD, is a licensed clinical psychologist who works at the intersection of clinical science and technology. She specializes in designing and disseminating digital measures and therapeutics to expand access to evidence-based tools and improve health outcomes. Her work is dedicated to driving the integration of digital health solutions into routine care and research.

  • Joe Braidwood

  • Alisa Breetz

  • Dr. Amber W. Childs is a nationally recognized expert in child and adolescent mental health, and founder/CEO of The Dr. Amber Childs Advisory, a venture dedicated to improving mental health outcomes for youth. Currently an Associate Professor of Psychiatry at Yale School of Medicine, Dr. Childs is a serial founder (GROW, YMBCC, M-Select), award-winning innovator, and frequent media contributor (New York Times, Washington Post, CNBC, Hartford Courant). Dr. Childs earned her PhD from the University of Tennessee. She lives in Connecticut with her husband and two children.

  • Dr. Lindsay Childress-Beatty is APA’s first Chief of Ethics, leading national and international conversations on psychological and organizational ethics. She has presented on ethics and AI at major venues, including the 2025 International Summit on Psychology and Global Health, CES 2024, and the APA Main Stage. A founding member of the Ethics Professionals Network, she previously served as APA’s Deputy General Counsel. Dr. Childress-Beatty is a licensed attorney with a PhD in Clinical Psychology from Columbia University, a JD from the University of Michigan, and an MPhil from the University of Cambridge.

  • David Cooper, PsyD, is a digital health expert who is currently the Executive Director of Therapists in Tech, the largest organization of clinicians in digital mental health. He has worked with organizations like the U.S. Department of Defense, the AMA and FDA, Teladoc, and many top hospitals in the U.S. on their digital health strategies and portfolios.

  • Dr. Karen Fortuna is an Assistant Professor of Community and Family Medicine at Dartmouth and Founder of the Patient Innovation Lab, where she partners with patients to co-design technology solutions that support a long, healthy lifespan.

  • Leanna Fortunato is a licensed clinical psychologist with an interest in finding creative ways to harness technology to make high-quality mental health care more accessible and equitable for all. Fortunato supports OHCI’s efforts to operationalize strategies that promote practice innovation in the realms of digital mental health and measurement-based care. She has experience as a clinical administrator, practitioner, and consultant across a variety of settings including university-based mental healthcare, private practice, and digital mental health. Fortunato holds a PhD in Clinical Psychology from Eastern Michigan University and is licensed in Illinois and Virginia.

  • Dr. Trina Histon is an internationally recognized expert in digital health, behavior change, and healthcare innovation. With over 20 years of experience at the intersection of clinical care, psychology, and technology, they bring academic rigor and real-world experience to partnerships focused on improving lives. Trina was previously VP of Clinical Product Strategy at Woebot Health and shaped national behavior change and digital health deployment efforts at Kaiser Permanente. They currently co-lead the UK’s Digital Adoption Workstream of the Mental Health Goals Program and serve several digital mental health companies.

  • Dr. Jessica Jackson has over 15 years of experience turning conversations around mental health into actionable ecosystems of care. Serving on leadership advisory boards of the nation’s top mental health associations, behavioral health startups, and venture capital firms, Dr. Jackson helps innovators drive the future of mental health in ways that are profitable while remaining firmly equitable, inclusive, and accessible to every individual that needs it most.

  • Marie M. Onakomaiya, PhD, MPH, is a neuroscientist, clinical epidemiologist, and founder of Metric Health, an AI-driven startup advancing brain injury assessment and monitoring. She is also a member of APA’s Mental Health Technology Advisory Committee. With a PhD from Dartmouth and an MPH from Columbia, Dr. Onakomaiya has built her career at the intersection of data, technology, and healthcare/public health innovation.

  • Stephen Schueller, PhD, is a Professor of Psychology and Informatics at the University of California, Irvine. He is a licensed clinical psychologist and a mental health services researcher. His work focuses on the development, evaluation, and implementation of technologies to improve mental health and mental health service delivery.

  • Dr. Jennifer Shannon is a practicing child psychiatrist at The Emily Program and a teaching faculty member at the University of Washington. She was previously a medical director at Cognoa, where she helped develop the first FDA-authorized diagnostic device for autism using machine learning. She is currently the co-founder and Chief Medical Officer of Glacis.

  • Dr. Hilary Weingarden is the Director of Clinical Research at HabitAware and a licensed psychologist providing evidence-based therapy for OCD, body dysmorphic disorder, and related conditions through a private practice in Massachusetts. She serves on the American Psychological Association’s Mental Health Technology Advisory Committee, and before her current role, she was Assistant Director of the Massachusetts General Hospital Center for Digital Mental Health, a psychologist in the MGH Center for OCD and Related Disorders, and an Assistant Professor at Harvard Medical School. Her research, which leverages technology to improve assessments and treatments for mental health conditions and investigates the harmful role of shame in obsessive-compulsive and related disorders, has been funded by the National Institute of Mental Health and Harvard Medical School.

  • Dr. Rachel Wood holds a PhD in cyberpsychology with expertise at the intersection of AI and mental health. She is a speaker, workshop facilitator, and advisor. As the founder of the AI Mental Health Collective, she fosters clinician awareness of the impacts of AI and cultivates cross-disciplinary dialogue on responsible AI innovation. Dr. Wood’s work has been featured in TIME, ABC, the APA Monitor, International Business Times, Behavioral Health Business, and more.

  • C. Vaile Wright is a licensed psychologist and researcher focusing on developing strategies to leverage technology and data to address issues within health care including increasing access, measuring care, and optimizing treatment delivery at both the individual and system levels. Wright has maintained an active line of research with peer-reviewed articles in journals including Professional Psychology: Research and Practice, Law and Human Behavior, and the Journal of Traumatic Stress. As a spokesperson for APA, she has been interviewed by television, radio, print, and online media including CNN, NBC News, the Today Show, MSNBC, The Washington Post, and NPR on a range of topics including stress, politics, discrimination and harassment, COVID-19, serious mental illness, telehealth and technology, and access to mental health care. Wright received her PhD in counseling psychology from the University of Illinois, Urbana-Champaign in 2007, and is licensed in the District of Columbia.

Want to stay informed as new technologies earn APA Labs Digital Badges?

Subscribe to the APA Labs newsletter for product releases, insights and updates on the evolving digital mental health landscape.

Download the APA Labs Digital Badge Criteria & AI Evaluation Framework

Download the APA Labs Digital Badge Criteria and AI Evaluation Framework to understand the rigorous standards applied to digital mental and behavioral health technologies.

APA Labs is a unit of American Psychological Association Services, Inc. (APASI), the 501(c)(6) companion of the American Psychological Association, a 501(c)(3) organization. APA Labs supports the work of APASI through creative, collaborative, psychology-centered projects, programs, and solutions.

The APA Labs Digital Badge Solutions Library was developed in partnership with ORCHA.