Responsible AI through Data, Evaluation & Applied Systems
One-to-one session — 1.5h
This one-to-one session approaches Responsible AI as a framework of explicit technical decisions and clearly defined system boundaries across the entire project lifecycle, rather than as a series of abstract concepts.
Together we explore Responsible AI in medical imaging through several practical lenses: data readiness, algorithm and pipeline choices, evaluation boundaries, and the defensibility of models in real-world deployment.
Through real R&D examples, we analyze how seemingly small decisions, such as what qualifies as a valid sample or how evaluation datasets are constructed, can fundamentally shape the reliability and defensibility of AI systems.
While the examples are rooted in medical imaging, where reliability constraints are particularly strict, the underlying reasoning framework is domain-agnostic and applicable to any high-stakes AI system.
Practical information
Format: one-to-one online interactive session
Duration: 1.5 hours
Language: English
The session is scheduled after booking: the date and time are agreed together after purchase.
Access details are sent by email after booking.
The session focuses on
- Responsibility as an upstream project concern
- Transparency in evaluation and reporting
- Trustworthiness through clear scope and defined constraints
- Defensible technical decision-making over time
Who this session is designed for
- AI / ML practitioners working in medical imaging
- Technical leads and researchers
- Professionals involved in validation, deployment, or oversight of AI systems in healthcare
This session assumes prior familiarity with AI.
Participants receive
- Access to the live session
- Certificate of completion
Refund policy
Sessions are non-refundable.
If you are unable to attend, your session may be rescheduled.