AI Governance & Algorithmic Accountability Expert Witness
AI disputes are product development disputes. The same lifecycle governance standards apply.
AI Governance Is Technology Governance
AI governance disputes are technology, product development, and software engineering disputes — with the added dimension of algorithmic accountability. The same lifecycle governance standards apply: Was the AI system designed with foreseeable risks in mind? Were its features tested against their stated objectives? Were deployment decisions made in accordance with AI governance frameworks?
Bruce Weiner has been retained as a testifying expert in a large-scale federal MDL involving gig economy platform technology, AI-weighted algorithmic systems, and product development methodology; his testimony survived a Daubert challenge on qualification, methodology, and relevance (N.D. Cal., January 6, 2026).
A speaker at Sibos 2025 in Frankfurt (October 2, 2025) on transformation and governance in AI-adjacent financial technology, and an attendee of the Gartner IT Symposium 2024 sessions on AI governance and CIO strategy, Bruce brings both practitioner and litigation experience to AI accountability matters.
Disputes Addressed
- AI-weighted model governance and foreseeable risk analysis
- Safety-critical AI feature deployment and testing standards
- Algorithmic bias and fairness claims
- AI feature prioritization and product development methodology disputes
- Machine learning model validation and post-deployment monitoring
- AI-driven routing, matching, and pricing system disputes
AI Governance Framework
Identify the AI System’s Stated Objectives
Define what the AI system was designed to do, how success was defined, and what governance obligations that purpose creates under applicable standards.
Evaluate the Development Lifecycle
Design, training, testing, and validation — was the AI system built in accordance with a documented, risk-aware product development methodology?
Benchmark Against AI Governance Standards
Measure the development and deployment process against ISO/IEC 42001 (AI management systems), NIST AI Risk Management Framework, and IEEE P7000 series.
Analyze Post-Deployment Monitoring
Was the AI system monitored against its stated objectives after deployment? Were foreseeable failure modes identified and mitigated?
Assess Causation Through Governance Decisions
Trace the governance decisions — or their absence — that permitted the harm at issue. Link evidence to reproducible, standards-grounded opinions.
Standards Applied
| STANDARD | APPLICATION |
|---|---|
| ISO/IEC 42001 | AI management systems — lifecycle governance for AI products |
| NIST AI RMF | AI Risk Management Framework — govern, map, measure, manage |
| IEEE P7000 Series | Ethical AI design standards — transparency, accountability, harm mitigation |
| EU AI Act | Risk classification framework for AI systems — high-risk category requirements |
| ISO/IEC 25010:2023 | Software quality model — applied to AI feature quality and reliability |
| ISO 31000:2018 | Risk management — foreseeable risk governance in AI product development |
Relevant Credentials & Experience
- Testifying expert in large-scale federal MDL involving AI-weighted algorithmic systems — testimony admitted over Daubert challenge (N.D. Cal., Jan. 6, 2026)
- Speaker, Sibos 2025, Frankfurt — Transformation and Governance in AI-Adjacent Financial Technology (Oct. 2, 2025)
- Attendee, Gartner IT Symposium 2024 — AI Governance and CIO Strategy track
- VP and Chief Product Owner overseeing AI-adjacent product development at globally recognized financial institution
- IEEE member since 2018 · ACM member since 2018
Ready to Discuss Your Matter?
Confidential. No obligation. Responses within 24 hours.