Lessons from CES 2026 for Wearable Health Tech
By James K. Bishop
January 9, 2026
As CES 2026 wraps up today in Las Vegas, the buzz around AI-integrated health wearables is impossible to ignore. From Oura’s enhanced smart rings tracking menstrual cycles and metabolic data to Mira’s hormone analyzers and Withings’ cardiovascular scales, these devices promise a revolution in personalized wellness. But amid the excitement, a sobering undercurrent dominated panels and exhibits: governance. With AI shifting from cloud to edge processing in gadgets that collect intimate biomarkers, questions of data privacy, biases, and “hallucinations” (those plausible but erroneous AI outputs) loomed large. As someone who’s tracked tech’s intersection with assurance for decades, I see CES as a wake-up call. It’s time to leverage established frameworks—like NIST’s AI Risk Management Framework (RMF), ISO/IEC 42001, and Common Criteria Protection Profiles (PPs)—to craft robust safeguards for these wearables. Done right, this isn’t just compliance; it’s a strategic enabler for trust and innovation.
The Governance Gap in Wearable AI
At CES, the Consumer Technology Association (CTA) spotlighted digital health’s explosive growth, projecting a $500 billion market by 2034. Yet, as experts like EFF’s Cindy Cohn pointed out in sessions like “Is Your Health Data Safe?”, consumer wearables fall outside HIPAA protections, leaving sensitive data vulnerable to misuse or breaches. Oura CEO Tom Hale took to X to address backlash over perceived data-sharing, insisting menstrual info is “off-limits,” but user skepticism persists. Add AI-specific risks—biases skewing insights for underrepresented groups or hallucinations fabricating glucose alerts—and the need for structured governance becomes clear.
Frameworks like NIST AI RMF and ISO 42001 provide the blueprint, while Common Criteria offers tactical, certifiable PPs to operationalize them. By aligning these, firms can create device-specific PPs that balance assurance with business agility, ensuring wearables like the Peri for perimenopause tracking or J-Style’s HRV monitors deliver reliable, ethical AI.
Leveraging NIST AI RMF for Risk-Centric PPs
NIST’s AI RMF 1.0, with its Govern-Map-Measure-Manage functions, is tailor-made for wearables’ lifecycle risks. Updated in 2024 with a Generative AI Profile, it emphasizes metrics for trustworthiness, like bias audits and performance monitoring—critical for devices analyzing urine samples or sleep patterns.
To build a PP for a wearable like Mira’s tracker:
- Map Risks: Identify threats in the PP’s Security Problem Definition, such as T.BIAS_AMPLIFICATION from skewed hormone data. RMF’s inventorying aligns with PP assumptions (e.g., pre-vetted models).
- Measure and Manage: Incorporate extended components like FAI_BDM (Bias Detection and Mitigation) for real-time audits, drawing from RMF’s testing subcategories. For hallucinations in AI chatbots like 0xmd, use FAI_HAL with confidence scoring; see the sketch after this list.
- Govern: Embed organizational security policies (OSPs) for ethical oversight, ensuring continual improvement via RMF’s feedback loops.
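To make that concrete, here is a minimal Python sketch of what an FAI_BDM-style bias audit and an FAI_HAL-style confidence gate could look like inside a wearable’s analytics pipeline. The function names, thresholds, and data shapes are my own illustrative assumptions, not any vendor’s API:

```python
# Illustrative sketch only: an FAI_BDM-style bias audit and an
# FAI_HAL-style confidence gate. Names, thresholds, and data
# shapes are assumptions, not a shipping SDK.
from collections import defaultdict

MAX_GROUP_GAP = 0.05   # assumed policy: tolerated error-rate gap across groups
MIN_CONFIDENCE = 0.85  # assumed policy: floor for surfacing an AI insight

def bias_audit(records):
    """FAI_BDM sketch: flag demographic groups whose prediction error
    rate diverges from the best-performing group by more than policy."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        errors[group] += predicted != actual
    rates = {g: errors[g] / totals[g] for g in totals}
    best = min(rates.values())
    return {g: r for g, r in rates.items() if r - best > MAX_GROUP_GAP}

def gate_insight(text, confidence):
    """FAI_HAL sketch: withhold low-confidence AI output rather than
    presenting a plausible-but-unverified claim as fact."""
    if confidence >= MIN_CONFIDENCE:
        return {"status": "surfaced", "text": text}
    return {"status": "withheld",
            "reason": f"confidence {confidence:.2f} < {MIN_CONFIDENCE}"}

# Example run with synthetic cycle-phase predictions.
flagged = bias_audit([("18-25", "luteal", "luteal"),
                      ("18-25", "follicular", "follicular"),
                      ("40-55", "luteal", "follicular"),
                      ("40-55", "luteal", "luteal")])
print(flagged)                                      # {'40-55': 0.5}
print(gate_insight("Glucose trending high", 0.62))  # withheld
```

The point isn’t the ten lines of logic; it’s that both behaviors become testable claims an evaluator can check against the PP, rather than marketing promises.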
This integration makes PPs dynamic: at EAL3 (methodically tested and checked), evaluation remains feasible for mid-tier wearables; EAL4 (methodically designed, tested, and reviewed) adds depth for high-risk ones, enabled by AI automation in evaluations.
Integrating ISO/IEC 42001 for Holistic Management
ISO/IEC 42001, the certifiable AI Management System standard from 2023, complements RMF by providing an overarching structure. Its clauses mirror a PDCA (Plan-Do-Check-Act) cycle, mandating AI impact assessments (AIIAs) and controls for risks like privacy invasions—directly relevant to CES demos where wearables blend AI with bodily data.
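As a thumbnail of what an AIIA captures, consider the sketch below. The field list is a plausible subset I’m assuming for illustration, not the standard’s normative schema:

```python
# Sketch of the fields an ISO/IEC 42001 AI impact assessment (AIIA)
# record might capture for a wearable feature. The field list is an
# assumed subset for illustration, not the standard's normative schema.
from dataclasses import dataclass, field

@dataclass
class ImpactAssessment:
    feature: str            # e.g., "cycle-phase prediction"
    data_categories: list   # biomarkers the model consumes
    affected_groups: list   # populations whose risk profile differs
    identified_risks: list  # privacy invasion, bias, hallucination...
    treatments: list = field(default_factory=list)  # feeds Clause 6 risk treatment

aiia = ImpactAssessment(
    feature="cycle-phase prediction",
    data_categories=["skin temperature", "HRV", "self-reported symptoms"],
    affected_groups=["perimenopausal users", "shift workers"],
    identified_risks=["bias across age groups", "re-identification"],
    treatments=["FAI_BDM audit", "on-device processing only"],
)
```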
For PP development:
- Planning and Operation: Use Clause 6 (planning) and Clause 8 (operation) for risk treatment in the PP’s threats section, incorporating controls like data provenance (FAI_PRO) to trace biomarker inputs; see the provenance sketch after this list.
- Leadership and Support: Align with PP objectives for secure deployment (O.SECURE_DEPLOYMENT), ensuring training on AI ethics to mitigate biases in women’s health tracking.
- Performance Evaluation: Leverage audits in Clause 9 to inform the PP’s security assurance requirements (SARs), like vulnerability analysis at EAL4, for ongoing flaw remediation.
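Here is one way an FAI_PRO-style provenance trail might work in practice: a hash-chained, append-only log tying each biomarker reading and derived insight back to its source. The record format is an assumption for illustration, not something the standard spells out:

```python
# Sketch of an FAI_PRO-style provenance trail: each reading carries a
# hash-chained record of where it came from and what touched it, so an
# evaluator (or a user) can trace any AI insight back to its inputs.
# The record format is an illustrative assumption.
import hashlib, json, time

def provenance_entry(prev_hash: str, source: str, payload: dict) -> dict:
    """Append-only record linking a reading to its source and to the
    previous entry, making after-the-fact tampering detectable."""
    body = {"ts": time.time(), "source": source,
            "payload": payload, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

# A raw sensor reading, then a model output derived from it.
chain = [provenance_entry("genesis", "ppg_sensor", {"hr": 61})]
chain.append(provenance_entry(chain[-1]["hash"], "hrv_model_v3",
                              {"rmssd": 42.0, "derived_from": "hr"}))
for entry in chain:
    print(entry["source"], entry["hash"][:12])
```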
Certification under ISO 42001 can validate a PP-compliant system, turning governance into a market differentiator. Imagine Samsung or LG certifying their AI companions—showcased at CES—against this, boosting consumer confidence amid deregulation talks.
From Frameworks to Action: Crafting PPs for Wearables
Synthesizing these, a tailored PP for health wearables might start at EAL3 for feasibility, scaling to EAL4 where AI risks elevate (e.g., in metabolic analyzers). Extended components address wearable-specific risks: FAI_BDM for demographic fairness in cycle tracking, FAI_HAL for validating AI insights against medical-grade data.
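Pulled together, a toy PP outline for a metabolic wearable might read like the sketch below. T.BIAS_AMPLIFICATION, O.SECURE_DEPLOYMENT, and the FAI_* components echo the discussion above; the remaining identifiers and the structure itself are my own simplification, not Common Criteria boilerplate:

```python
# Toy Protection Profile skeleton for a metabolic wearable, expressed
# as plain data. T.BIAS_AMPLIFICATION, O.SECURE_DEPLOYMENT, and the
# FAI_* components come from the article; the other identifiers and
# the overall structure are assumed simplifications.
WEARABLE_PP = {
    "toe": "AI metabolic analyzer (wearable)",
    "security_problem_definition": {
        "threats": ["T.BIAS_AMPLIFICATION",      # skewed demographic data
                    "T.HALLUCINATED_ALERT",      # fabricated AI output
                    "T.DATA_EXFILTRATION"],      # intimate biomarker leakage
        "assumptions": ["A.PRE_VETTED_MODEL"],   # per RMF Map inventorying
        "osps": ["P.ETHICAL_OVERSIGHT"],         # per RMF Govern
    },
    "objectives": ["O.SECURE_DEPLOYMENT", "O.FAIR_INSIGHTS"],
    "functional_requirements": ["FAI_BDM.1", "FAI_HAL.1", "FAI_PRO.1"],
    "assurance_level": "EAL3",  # escalate to EAL4 for high-risk devices
}
```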
Businesses should:
- Assess via RMF’s Map to scope device risks.
- Build the PP with ISO 42001’s controls for ethics and compliance.
- Certify at appropriate EALs, using AI tools to streamline testing—flipping the traditional “EAL3 ceiling” for strategic gains.
CES 2026 underscored that without this, wearables risk becoming privacy pitfalls. But with these frameworks, we can forge PPs that empower users, not exploit them. As Hale noted, privacy is the “third rail”—let’s electrify governance to keep the train on track.
James K. Bishop is a tech columnist and X user @James_K_Bishop, focusing on AI assurance. Views are his own.

