Shared services leaders are moving quickly from AI pilots to AI-enabled delivery models, as generative and agentic AI become embedded in core platforms and workflows across the organization. But scaling AI across a global services footprint introduces a new “trust gap”: inconsistent outcomes, limited transparency, and unclear accountability can stall adoption or create costly incidents.
In this workshop, PwC will translate Responsible AI principles into an operational control stack across the AI lifecycle and share a practical approach for scaling AI in shared services with confidence—treating governance as an accelerator, not a brake. We’ll cover how leading organizations operationalize controls across intake and use-case triage, data and privacy governance, model testing and monitoring, security, third-party risk management, and independent assurance.