Speaker

Speaker Info

Name
Wiley Finch
Organization
Inholland University of Applied Sciences
Country
The Netherlands
Biography
Wiley Finch (MSc Cyber Security, University of Essex) began his IT career in 2016 and has since worked across educational and medical games, cybersecurity and applied AI. That cross-domain background lets him translate complex legal requirements into pragmatic engineering and product decisions. His research produced the Least Responsible AI Controls Framework (LRAICF) and accompanying RegTech tooling: a lightweight, auditable approach that makes compliance with the EU AI Act achievable and proportionate for low-capacity actors such as SMEs, small studios and public-sector teams. He collaborates with product, design and QA teams to map features to risk tiers and turn obligations into concrete tasks: classification flows and decision trees, logging and versioning rules, explainability snippets and audit-ready evidence. In this session Wiley combines academic rigour with hands-on engineering, demonstrating tooling-driven decision flows and short, practical exercises that help teams build auditable, realistic AI systems.

Photo

Presentation Info

Title
Embedding the European AI Act the Practical Way for Developing Medical Games/Gamification
Summary
Who this is for: game developers, product owners and designers building medical or health-adjacent games for European users. The EU AI Act changes how you should design, implement and deploy AI systems in software and products. This session is not a comprehensive legal review of the Act; instead, it focuses on policy in practice (which approaches work and fit your specific needs) and gives you concrete steps you can apply immediately.

What we’ll do in the session
• Classify: a fast flow to decide whether a feature is an “AI system” under the Act and whether it is low/medium/high risk.
• Translate obligations: turn legal requirements into engineering and design tasks (logging & versioning, performance tests, explainability snippets, human oversight, minimised data collection).
• Deliverables you can reuse: Actor Classifier, Risk Classifier, Actor x Risk obligation mapping and an audit checklist listing the artefacts auditors will look for (data sheets, test results, risk assessments).
• Non-EU studios: what it means to bring your AI-based health system to Europe; practical options (appoint an EU representative, use an EU distributor) and what those choices mean operationally.

During the presentation, we will work through hands-on cases drawn from real projects we've encountered in our research (e.g., a personalised rehabilitation coach or cognitive training). The goal is to map obligations straight to tasks for design, backend and quality assurance. Alongside practical tools you can use directly, there will be an extended Q&A session.
Keynote
Presentation
GFHEU Year
2026

Info