The Challenge of Providing Excellent Service Globally

The Department of State provides over US$1bn of shared services in 28 different cost centers to the employees and family members of the 280 government organizations that work at our embassies, consulates and diplomatic posts abroad. Only two of the cost centers are mandatory, so customers dissatisfied with a service can seek alternatives. For that reason, customer satisfaction is a key indicator of future revenue. In designing and implementing a customer satisfaction survey to reach these customers, we in the Department learned several lessons. This article describes how we overcame the challenges of gauging the satisfaction of more than 79,500 overseas customers in over 250 locations around the globe.

The Department has been providing shared services to other American government agencies since the 1950s. In the early days, communication among embassies and with Washington was difficult, at best. A decentralized service delivery model was, under the circumstances, the only practical approach. Many elements of that decentralized delivery model live on. Starting in the 1990s, we required every diplomatic post to conduct an annual customer satisfaction survey. Shared services providers, however, conducted the surveys at different times, used different questions, and obtained variable levels of participation. Moreover, many service providers asserted that one could not compare levels of satisfaction from one location to another.

As a result, at the headquarters level, we had very little hard data about the level of customer satisfaction system-wide. This was a significant barrier to sound management. We also came under increasing pressure from the outside. President Bush’s management agenda includes a requirement to assess major programs and match performance with results. The Government Accountability Office (GAO), in a 2004 report on the Department’s provision of shared services, cited us for lacking any reliable, comprehensive data on customer satisfaction. Clearly, the Department needed to gauge the level of its customers’ satisfaction. To do so, we knew we needed to survey all our customers about all 28 cost centers, using the same questions and asking them in the same way at every location.

In 2001, the Department created a group, the Center for Administrative Innovation in the Bureau of Administration, to spearhead initiatives of this type. The Center found an excellent, paper-based customer satisfaction survey in use at our embassy in El Salvador. The Center worked with the survey originators in El Salvador and with a not-for-profit organization in Houston, the American Productivity and Quality Center (APQC), to develop an Internet-based survey covering the 28 different lines of business, or cost centers, in which the Department provides service. APQC, a leader in benchmarking and knowledge management, provided the Internet site, developed the database and analytical tools and, most importantly, protected the anonymity of the customers responding to the survey.

Over the summer of 2004, the Center worked with representatives from the various stakeholders, including customer agencies and service providers, to refine and validate the questionnaire and to develop a communication strategy for marketing the survey. The interagency executive board, at its June meeting, endorsed the survey and provided funding to cover worldwide implementation. We knew we would encounter some trepidation, skepticism and resistance. One of the major hurdles was anxiety related to the "unknown": none of the customers or service providers had ever participated in a global survey of the Department's shared services, so there were many questions about what the survey would be like and how it would be used.

One of our first decisions involved the definition of "customer." One could define a customer as the people who receive service, the customer agencies that pay for the service, the U.S. government agencies in Washington that depend on those services, the American taxpayer and so on. We, however, defined the "customer" as the people, employees and family members, receiving service at the overseas locations.

This meant we had to make the survey available to over 79,500 people. Many of them are employees hired in the host countries. Quite a few do not speak English. Subsequent feedback, however, suggested that many employees appreciated being included. One noted: "After working many years for the U.S. Government, this was the first time that my opinion had been sought. Thank you so much for the opportunity."

To reach this varied and dispersed customer base, we recruited allies from the interagency service center in Washington, customer agencies, service providers, the office that deals with foreign national employees, and the office that liaises with family members. We developed publicity material and provided it to all agencies and service providers. We also developed a website that described the survey, explained why it was important and answered frequently asked questions. We found, subsequently, that we succeeded in reaching employees, both American and foreign national, but had little success in attracting participation from family members. Increasing their rate of participation will be a challenge for 2005.

We decided, early on, to conduct a pilot. We wanted to make sure that the survey, the technology, and our communications strategy worked. We also wanted to give the partners and allies we’d recruited to market the survey an opportunity to practice on a small group to identify the techniques that worked and the groups that were hard to reach. We chose four posts - Brussels, Seoul, Tunis and Djibouti - for the beta test. The four posts gave us a range of big and small, different parts of the world and different environments. We chose Djibouti, in particular, because we knew that Internet connections and other local conditions in the Horn of Africa would be a challenge.

We sought and used feedback both before and during the pilot. Many early respondents reported survey fatigue: initially, we asked ten questions about each of the 28 cost centers, which made for a long survey. As a result, we cut the number of questions for each cost center in half. The five remaining questions asked customers whether they got what they wanted, whether they were kept informed, whether the service provider understood their needs, whether the service provider welcomed their feedback and whether, overall, they were satisfied with the service. Our willingness to use customer feedback prompted more input. The four beta sites provided us with tips on effective techniques for boosting participation. One site put a link to the survey on every computer desktop.
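To make the resulting structure concrete, here is a minimal sketch of how the shortened instrument could be represented: the same five statements, each rated on a five-point scale, repeated for every cost center. The question wording and cost-center names below are illustrative paraphrases of the article's description, not the actual survey text.

```python
# Hypothetical sketch of the shortened questionnaire: the same five
# statements, rated on a five-point scale, repeated for each cost center.
# Question wording and cost-center names are illustrative, not the actual text.

QUESTIONS = [
    "I got what I wanted from this service.",
    "I was kept informed about my request.",
    "The service provider understood my needs.",
    "The service provider welcomed my feedback.",
    "Overall, I am satisfied with this service.",
]

# Four of the 28 cost centers, named for illustration only.
COST_CENTERS = ["Cashiering", "Motor vehicle fleet management",
                "Procurement", "Leasing"]


def build_survey(cost_centers=COST_CENTERS, questions=QUESTIONS):
    """Return (cost_center, question) pairs in the order they would appear."""
    return [(cc, q) for cc in cost_centers for q in questions]


if __name__ == "__main__":
    items = build_survey()
    print(f"{len(items)} items: {len(QUESTIONS)} questions x {len(COST_CENTERS)} cost centers")
```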

We discovered three key factors, however, that led to high rates of participation:

  • strong support from the Ambassador
  • an active interest by the chief service provider at that location
  • daily feedback about participation rates so that each site knew how it was doing

At three of the four posts, those factors caused us to exceed our minimum goal of 25% participation.
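As a rough illustration of that third factor, the sketch below shows one way the daily participation feedback could be computed: count completed surveys per post and compare each post's rate against the 25% minimum goal. The eligible-customer counts and response tallies are made-up numbers for illustration only, not actual survey data.

```python
# A minimal sketch of the daily participation feedback described above:
# count completed surveys per post and compare each rate against the 25%
# minimum goal. Eligible-customer counts and responses are made-up numbers.

from collections import Counter

ELIGIBLE = {"Brussels": 1200, "Seoul": 900, "Tunis": 400, "Djibouti": 60}


def participation_report(responses, eligible=ELIGIBLE, goal=0.25):
    """Print each post's participation rate; `responses` holds one post name
    per completed survey."""
    counts = Counter(responses)
    for post, total in eligible.items():
        rate = counts[post] / total
        status = "met goal" if rate >= goal else "below goal"
        print(f"{post:10s} {counts[post]:4d}/{total:<5d} {rate:6.1%}  {status}")


if __name__ == "__main__":
    sample = (["Brussels"] * 400 + ["Seoul"] * 180
              + ["Tunis"] * 120 + ["Djibouti"] * 25)
    participation_report(sample)
```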

For the worldwide survey that ran from October 18 to November 11, 2004, we created marketing templates based on the beta test sites’ materials. We designed the templates to be readily customized for any group or location. The templates had specific sections which allowed the person marketing the survey to insert site-specific data. This gave us a consistent, yet flexible message.

With support from APQC, we used the same approach in reporting the results of the survey. Each location received a presentation template reporting its survey results by December 22, 2004. The report not only provided the scores of that location's customers, but also contrasted those scores with the regional and global averages and included customer comments. Individual locations also received reports that broke out customer scores by cost center, by customer agency, by employment category (e.g. American employee, foreign national employee, family member) and by length of time at that location. We had similar data at the global and regional levels.
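As a sketch of the kind of roll-up behind such a report (not the actual reporting tool, which APQC provided), the following shows how each post's mean score for a question could be contrasted with the regional and global averages. The records, region codes, posts and scores are invented for illustration.

```python
# A sketch, with invented records, of the roll-up behind the report template:
# each post's mean score per cost center shown next to the regional and
# global averages. Region codes, posts and scores are illustrative only.

import pandas as pd

df = pd.DataFrame([
    {"region": "EUR", "post": "Brussels", "cost_center": "Procurement", "score": 4},
    {"region": "EUR", "post": "Brussels", "cost_center": "Cashiering",  "score": 5},
    {"region": "EAP", "post": "Seoul",    "cost_center": "Procurement", "score": 3},
    {"region": "EAP", "post": "Seoul",    "cost_center": "Cashiering",  "score": 4},
    {"region": "AF",  "post": "Djibouti", "cost_center": "Procurement", "score": 4},
])

global_avg = df["score"].mean()                       # worldwide average
regional_avg = df.groupby("region")["score"].mean()   # average per region
post_by_cc = df.groupby(["post", "cost_center"])["score"].mean()

# One row per post/cost-center pair, with the comparison columns attached.
report = post_by_cc.reset_index().merge(
    df[["post", "region"]].drop_duplicates(), on="post")
report["regional_avg"] = report["region"].map(regional_avg)
report["global_avg"] = global_avg
print(report)
```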

So how did we do?

We asked customers for feedback on a five-point scale ranging from 5.0 (strongly agree) to 1.0 (strongly disagree). To the surprise of some, most customers responded with a 4.0 ("agree") out of 5.0 when answering the question, "Overall, I am satisfied with this service." The vast majority of the 28 cost centers had a global mean grouped around the worldwide average of 4.0. Two cost centers, cashiering and motor vehicle fleet management, scored highest at 4.3. Two other cost centers, procurement and leasing, showed the most room for improvement, scoring 3.8. Naturally, there were locations with above-average scores and locations where we need to provide additional support. It was surprising that some of our tougher locations, such as Minsk, Ulaanbaatar, Phnom Penh and Beirut, had rates of customer satisfaction and participation superior to those of established locations in the developed world. We published all of the data at the global, regional and post levels on the interagency service center's intranet site, so that all locations could contrast their performance with that of their peers. We hope this will stimulate conversations among locations and transfer proven practices from above-average performers to those below average.

Customers need to see that the organization is serious, is committed to customer satisfaction and will act on their feedback. The interagency executive board already has approved conducting an annual survey and has funded the 2005 edition. This will not only provide us with data about customer satisfaction, it will also provide information about progress against the 2004 baseline. The Center for Administrative Innovation will conduct additional, in-depth analysis of the results to identify leaders in each line of business, research the practices that made them effective and promote those practices for use at other locations. We are also encouraging service providers and customer councils at each location to develop an action plan to achieve improvement in their high priority areas so that they can show progress in 2005.

Perhaps the most important outcome from the 2004 survey, however, was the simple fact that the organization conducted the survey successfully. There will continue to be fierce debates about whether one can compare customer service in Paris with customer service in Phnom Penh. People will disagree about the weight to ascribe to customer service in comparison with cost containment or operational efficiency. The organization, however, will no longer have to deal with the "unknown". That fear is now behind us.


Lessons Learned

  • Top-level support, both global and in each business unit, is critical
  • Willingness to receive, and use, feedback builds credibility with customers and providers
  • One needs a strong marketing campaign to ensure strong participation
  • Identify and target "tough to reach" groups early; recruit partners that can help reach these groups
  • Test the survey, technology, marketing tools and partnerships on a small group first. No matter how much you prepare, you'll still learn things
  • Regular and frequent feedback on how business units are doing boosts participation
  • Transparent and public reporting of the results boosts credibility and encourages healthy competition
  • An early and public commitment to permanence builds "buy-in" or acceptance by customers and service providers
  • Customers don't grade as tough as some service providers fear they might


The views and opinions expressed in this article belong to the authors alone and do not represent the official opinion of the Department of State or of the U.S. Government.

 


About the Authors

Matt Burns
Director
Center for Administrative Innovation
Department Of State
Email: burnsmj2@state.gov

Matt is the Director of the Center for Administrative Innovation at the State Department and oversaw the 2004 customer satisfaction survey. Matt created the Center in 2001 after almost 25 years of experience delivering shared services for the Department of State. He joined the Department in 1978 and served in Nicaragua, Trinidad and Tobago, Cuba, the Soviet Union, Italy and Israel in addition to several tours at headquarters in Washington, D.C. Just prior to setting up the Center for Administrative Innovation, Matt was Senior Advisor to the Department’s Chief Financial Officer and represented the Department on the interagency working group, developing policy, allocating resources and overseeing implementation for the US$1bn shared service system.

Sarah J. Field, J.D.
Program Analyst
Center for Administrative Innovation
Department Of State
Email: fieldsj@state.gov

Sarah is a Program Analyst with the Center for Administrative Innovation and led the 2004 customer satisfaction survey. A Presidential Management Fellow, Sarah joined the Center in August 2004 after an assignment with the Centers for Disease Control & Prevention's Office of the Director, where she led numerous performance assignments: the Program Assessment Rating Tool (PART), Government Performance and Results Act (GPRA) reports, Healthy People 2010, and the President's Management Agenda Budget and Performance Integration Initiative.
