IN PARTNERSHIP WITH CGI AND THE AI INNOVATION CENTER
May 27th, 2026 | AI INNOVATION CENTER - HIGH TECH CAMPUS EINDHOVEN
An Executive Roundtable on How to Govern AI at Scale
From Regulatory Compliance to Responsible Agentic AI
Why this matters now
The 'Grace Period' ends August 2nd, 2026.
The EU AI Act is no longer on the horizon: its obligations are in force. In high-end manufacturing, several AI use cases already active in production environments — including quality assurance, supply chain orchestration, and workforce planning — fall within the EU AI Act's high-risk classifications. The key challenges have shifted from experimentation to accountability: How do you govern AI systems that act autonomously? Who bears responsibility when an AI agent makes a consequential decision? And how do you build the internal structures to ensure AI delivers business value without introducing unacceptable risk? This roundtable brings together senior leaders and practitioners to work through these questions with rigour: grounded in regulatory reality, informed by practical experience, and focused on what your organisation can act on.
This session will enable you to map obligations to your context, with reference to where agentic AI systems change the accountability picture and what that means for how you contract, document, and govern these systems going forward.
Who should attend
CEOs, board members, executives with AI accountability, legal counsel, product owners, heads of AI governance, and enterprise architects who are moving (or need to move) from AI pilots to governed, scalable deployment.
Format
This is a working session, not a presentation. You will be asked to contribute your organisation's most pressing AI governance challenges, and the output will be directly applicable to your context.
Key takeaways
A clearer view of where your AI use cases sit within the EU AI Act's risk classification
A governance blueprint, with reference to CGI's responsible AI framework
Concrete examples and structural approaches to scaling AI responsibly, drawn from comparable organisations
The Curriculum
May 27th, 2026, from 9:00 to 14:00 | AI Innovation Center HTC5 | Active Working Session (Chatham House Rule)
09:00 - 09:30 | Opening: Grounding the Conversation (by R Harmeling & P Attallah)
Welcome, introduction and framing the 'shift':
How organisations in the Eindhoven region are transitioning from "Generative Tools" (chatbots) to "Agentic Systems": AI systems that act, decide, and operate with increasing autonomy. What has changed, and what that means for how you lead.
09:30 - 11:00 | Regulatory Landscape and Risk Mapping (by A Reuters / E Poort)
The objective is to connect legal liability to ethical responsibility.
The Ethical Spark: a personal, high-stakes scenario designed to test individual judgment before looking at the law.
The Regulatory Deep Dive: the latest developments around the EU AI Act and its practical obligations, AI liability considerations, and the broader regulatory context including NIS2, DORA, and sector-specific requirements.
ACTIVITY
Participants will be asked to identify their most significant AI use case and specific (governance) challenges blocking progress from experimentation to operational value.
11:15 - 12:15 | From Principles to Practice: AI Governance in Action
Drawing on CGI's work with organisations that have moved through this transition, this session presents concrete examples of AI governance frameworks in operation.
ACTIVITY
Participants will have the opportunity to pressure-test their own use cases and governance approaches against these examples, leaving with material they can use within their own organisations.
12:15 - 14:00 | Working / Networking Lunch
Over lunch, the conversation shifts to a question that sits behind much of the morning's discussion: as AI becomes embedded in core operational decisions, what does it mean to maintain genuine control over those systems? The lunch session is structured as an open exchange rather than a presentation.

About the experts
Anke Reuters is a member of CGI's CTO Office, with global responsibility for AI compliance and regulatory alignment. Her work sits at the intersection of technology strategy, legal accountability, and enterprise risk.
Eltjo Poort is a Senior Enterprise Architect whose practice focuses on the ethical and structural dimensions of AI adoption — including how organisations design for accountability, transparency, and long-term governance.
SECURE YOUR SEAT
Limited to 20 seats for Brainport Executives
The Shift Academy powered by Agentic Shift
High Tech Campus Eindhoven, The Netherlands
KvK: 98941267
©2026 Agentic Shift. All rights reserved.