Key Points for GBS from the UK’s AI Safety Summit

Evan Beebe
11/07/2023


Last week, UK Prime Minister Rishi Sunak and his government hosted world leaders, computer scientists, and executives from the top AI companies, along with Elon Musk, at an AI safety summit. The summit gave the United States, European Union, China, and other countries an opportunity to collectively manage the risks of AI and chart a safe way forward for the rapidly evolving technology.

This article explores what spurred the summit in the first place and some of its key outcomes. It also discusses how the conference could affect shared services and GBS organizations' use of the latest AI tools. Service organizations are constantly looking to innovate and expand their scope of services, and they typically rely on new technologies to do so.

Concerns around AI advancements

Many prominent public figures have shared their fears about what the growth of AI could mean for society. In April, Elon Musk warned that AI could lead to “civilization destruction,” and this month, President Biden said, “It’s the most consequential technology of our time.”

A major reason the summit was held was the rapid advancement of AI over just the past year. In November 2022, ChatGPT took the world by storm when Microsoft-backed OpenAI brought the technology to the public. The tool uses natural language processing to create human-like dialogue, which has stoked fears among some AI leaders that machines could, in time, surpass human intelligence, with unpredictable and unintended consequences.

This shared concern across the global community prompted the summit, where for two days leaders discussed mitigating the risks of AI, while tech companies shared their fear of being weighed down by regulation before the technology can reach its full potential.

Action plan from the international community 

One of the most notable outcomes of the summit was the signing of the “Bletchley Declaration” by the 28 countries in attendance. The declaration states that countries need to work together and establish a common approach to AI oversight.

As one attendee told the Guardian, “What we need most of all from the international stage is a panel like the Intergovernmental Panel on Climate Change, which at least establishes a scientific consensus about what AI models are able to do.” The declaration is a step in the right direction toward establishing such an international AI body.

The declaration advocates for AI to be developed and used in a manner that prioritizes safety, human-centric design, trustworthiness, and responsibility. It also highlights the importance of inclusive AI and narrowing the digital divide, with an emphasis on supporting developing countries in AI capacity building.

The declaration concludes by calling for continued international dialogue and a commitment to reconvene the summit in 2024.

On top of the actions taken at the summit, last week also saw individual countries introduce additional oversight of the AI industry. In the U.S., President Biden signed a 63-page executive order that requires AI developers to share safety test results and other information with the government. Critics of the order, however, argue that it only improves transparency and does little to allow the U.S. government to take action if an AI model is deemed unsafe.

What GBS should take away from the summit 

While the AI safety summit was focused on gaining oversight of the solutions AI companies are creating, it can also serve as a reminder of the ethical and security-related issues tied to these technologies.

Many GBS organizations are taking the lead on digital transformation projects, and it is critical that GBS remain diligent in understanding and navigating these AI challenges. As the summit made clear, the international community is still in the early stages of creating a framework that protects businesses and individuals from the threats of AI.

For example, bias in AI is a legitimate concern for GBS organizations that are required to provide equitable and fair access to services across different groups and functions. If AI models are poorly trained or applied, they can perpetuate inequalities, for example by giving certain functions more assistance than others.

There is also the data privacy concern tied to these large language models. Employees who enter company information into a language model such as ChatGPT risk that information being reused by the model or, worse, exposed in the event of a data breach.

If the Bletchley Declaration does lead to actual regulation, it may pose an additional challenge for shared services with locations around the world. The declaration was signed by 28 countries, including GBS strongholds such as India, the US, Germany, and the Philippines; however, many other GBS hub locations have not yet committed.

In addition, with many shared services and GBS organizations taking the lead on ESG (Environmental, Social and Governance), they must be aware of the environmental cost of training AI models. The energy needed to run the latest processing hardware, servers, and data centers is at odds with most organizations' commitment to reducing energy consumption.

To address all these AI concerns within GBS, it is crucial to implement strategies that promote fairness, transparency, and accountability in AI development and deployment. It is also essential that GBS employees receive training on how to use these technologies in a way that ensures data security and inclusivity.


If you want to learn more about the potential risks and benefits of generative AI in shared services, be sure to register for “Hype vs Impact: Generative AI in GBS and Shared Services,” a webinar set to be hosted by SSON Research & Analytics on November 9.  

Additionally, the "CX and Service Management in Shared Services" virtual summit on December 12 will also dive into how advances in AI can be leveraged to improve the customer experience.

