Member-Centric Models of Benchmarking and Their Relevance to Shared Services Centers


"If I have seen further it is by standing on the shoulders of giants."
Sir Isaac Newton, 1676

My experience with benchmarking has taught me that its true power lies in the open collaboration of organizations pursuing process improvement through sharing their practical experience and learnings with one another. The overriding goal of the group is the continuous improvement of all its members. Each new learning, each increment in knowledge, benefits all the group's members.

If the power of benchmarking lies in the collaborative process, then the best way to effect this practically is to ensure open channels of communication between benchmarking group members. The benchmarking group or consortium becomes a quasi-organization, separate from its member organizations. Each consortium member represents the interests of his or her organization within the consortium.

I have coined the term "member-centric" for such models of benchmarking. The purpose of this article is to explore benchmarking and its relevance to shared services centers (SSCs), emphasize the benefits of the member-centric model, and conclude with a description of my own experience in setting up such an organization.

The Relevance to Shared Services Is Simple

The relevance to shared services is simple – the whole purpose of creating an SSC is to add value to your organization by continuously improving the efficiency and effectiveness of the services that you perform. The only way to improve continuously is to create an environment in which your people constantly question the way things are done. Benchmarking provides that incentive by challenging your people's perception of what is possible.

The Member-Centric Approach Contrasted with the Consultant-Centric Approach 

The Consultant-Centric Model (traditional structure): a useful metaphor for the "traditional" benchmarking study structure is a solar system, with planets orbiting the sun. The sun, in this metaphor, is the consultant, and the planets are the member companies participating in the study.

I believe that intent drives structure. The traditional structure sets in motion a particular dynamic, playing on people's fears about confidentiality and secrecy – all of which, in my opinion, work against a collaborative learning approach in which the growth of all benchmarking participants should be the objective. In a consultant-centric model, the data flows into the consultant via a rigorous process and is turned into glossy reports, which then flow back out to the participants.

The Member-Centric Model (collaborative structure): how then does the member-centric approach differ from the consultant-centric or traditional approach? Well, on a number of levels, actually, but fundamentally at the level of intent. That is: the intent of the member-centric approach is all about ongoing commitment to the process of improvement for all members of the consortium. It is about the learning and growth of the members through collaboration, openness and honesty.

The members of the group are in a supportive relationship, not in competition. Most importantly, they are in a direct relationship with each other, and not via the consultant. I understand that the idea of breaching company confidentiality and sharing information with the "enemy" may seem radical; however, with goodwill on all sides this issue can be addressed successfully.

If we return to the "planets" metaphor, it is the consortium and its members that are now the center of the universe. Real power is thus unleashed through the direct relationships between members of the consortium.

My Experience with a "Member-Centric" Benchmarking Consortium

My personal experience with such consortia began back in 2001, when I was a member of the senior management team of the Australia Post SSC. We had become frustrated with traditional models of benchmarking, and with our inability to understand the wide discrepancies in results between study participants.

Driven by our desire to derive meaningful decision-making information from benchmarking data, we approached an Australian-based SSC, which had participated with us in a traditional benchmarking exercise, with a request for more detailed data. In retrospect, this first step was instrumental in leading to the creation of the first Australian-based shared services member-centric forum.

Pre-Planning 

We set out four key objectives as the basis of our benchmarking "charter":

  • exchange sufficiently detailed information to ensure meaningful comparison and ranking of performance ("apples to apples");
  • identify a hypothetical "best practice" specification for each process studied;
  • generate value-added learning for the benefit of all members;
  • identify process improvement opportunities.

As a first step to setting up the benchmarking consortium, it was important for us to build internal support within our own organization. To do so, we formed an internal steering committee made up of SSC management and a senior member of the corporate finance team, whose role was to ensure the validity and credibility of the results.

Initial Meeting with Consortium Members 

We sent invitations to other major Australian-based SSCs, in both the public and private sectors, to attend a forum at our SSC to discuss the possibility of forming a benchmarking consortium. At that forum we put forward our case for a "member-centric" group and shared our data collection template with the invitees. From that initial meeting in May 2001, we gained agreement from 10 other shared services managers to become part of a member-led shared services forum.

Structure of the Formal "Member-Centric" Organization 

The role of the organizational steering committee was to ensure that the benchmarking project met the four "charter" objectives that we, at Australia Post, had set down as a pre-condition for our involvement in the consortium.

Within the benchmarking consortium itself there was a steering committee made up of one senior person from each company taking part in the study. The role of this body was to make key decisions in relation to direction, and to provide guidance on benchmarking projects. The consortium itself was underpinned by a confidentiality agreement signed by each of the member companies.

A project manager/coordinator was appointed (and financed) by the consortium, charged with managing the overall project and ensuring that timeframes and objectives were met. Other resources committed were: a project consultant, who provided guidance and advice to the consortium and presented the final, independent report; and core groups, one per benchmarking study, each made up of one process expert from each company (in the case of the payables study our representative was our payables team leader). Each core group representative was responsible, within his or her own organization, for collecting and cleansing the data required by the study questionnaires.

Study Specifics 

Once the consortium steering committee had agreed that two studies were to be commenced ("payables" and "employee benefits"), the project coordinator, in consultation with the consultant and members of the core groups, created an exact definition of the processes that were in scope for the study.

The studies themselves were intensive 12-week exercises (both the payables and the employee benefits studies ran in parallel) which included the development of data collection tools, data collection, data collation, validation, analysis and report compilation.

There were essentially two elements to the surveys: a quantitative study focusing on process metrics; and a qualitative study focusing on the environmental conditions within the centers.

There were two key criteria for the quantitative results: cost (ranked from highest to lowest); and performance, based on 10 features (e.g., number of duplicate payments, turnaround time for invoice processing), with each company's actual performance compared against that of the other participants.

For the qualitative part of the study there was a 104-question survey, with four key criteria: Process; People; Systems; and Customer Service Measures (basically a balanced-scorecard view of the organization).

A set of performance profiles was developed for each criterion. Each profile contained a set of features rated on a scale of 5 (excellent), 3 (average) or 1 (poor). The data collection method employed for the qualitative survey was a mix of interviews and documentary evidence presented by the core group member.
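
To make the profile mechanics concrete, here is a minimal sketch of how feature ratings on the 5/3/1 scale could be rolled up into a score per criterion. The criterion names are taken from the study; the feature names, the ratings and the simple averaging approach are hypothetical assumptions for illustration only, not the consortium's actual method or tooling.

```python
# Illustrative sketch only: rolling up hypothetical 5/3/1 feature ratings
# into per-criterion scores for one SSC's qualitative profile.
from statistics import mean

# Hypothetical ratings, keyed by criterion and then by feature, using the
# 5 = excellent, 3 = average, 1 = poor scale described above.
ratings = {
    "Process": {"invoice matching": 5, "exception handling": 3},
    "People": {"training coverage": 3, "staff turnover": 1},
    "Systems": {"workflow automation": 3, "reporting tools": 5},
    "Customer Service Measures": {"query turnaround": 3, "customer surveys": 1},
}

def criterion_scores(profile: dict[str, dict[str, int]]) -> dict[str, float]:
    """Average the feature ratings within each criterion (assumed aggregation)."""
    return {criterion: mean(features.values()) for criterion, features in profile.items()}

if __name__ == "__main__":
    for criterion, score in criterion_scores(ratings).items():
        print(f"{criterion}: {score:.1f}")
```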

As to the rigor of this process, we relied upon honest feedback. Whilst that may be open to subjectivity, it is important to view this in terms of the consortium structure and its emphasis upon collaboration and learning, rather than a clinical audit approach.

Key Challenges for the Consortium 

Key challenges that we successfully overcame:

  • agreeing on precise definitions/scope (three iterations of the model);
  • ensuring all members applied equal diligence in submitting accurate data on a timely basis;
  • role of the consultant – this issue took the members a significant time to resolve. It was finally agreed that a consultant would be employed by the consortium to assist the project coordinator with the collection, processing and analysis of the data;
  • confidentiality – of all the challenges this was the most significant. The fact that it took over three months, with 13 revisions of the original agreement, is testament to the sensitivity of the issue. In the end, as part of the practical resolution to members' confidentiality concerns, it was decided that "raw" results could only be shared between members of the SSCs, and not with other members of the individual organizations.

Consortium Outcomes 

The studies were completed in early December 2001, and the final reports were issued to members in January 2002. It was agreed that the consortium itself had delivered a quality result that justified its continuation. The process itself had been thoroughly tested and now provided an infrastructure that would support future studies.

Each member organization received two final reports on each of the studies undertaken. First, an "A" report, which identified each of the SSCs and was to be available to members of the SSC only; and second, a "B" report, which masked the identities of the other participating organizations and was available for wider distribution within the organization.

The reports ranked performance across six criteria, and profiled a hypothetical best practice organization. The main issues uncovered by the study were discussed in a "high level findings" paper for each member company.

In this study, unlike traditional studies, the participants had a strong understanding of the "what", the "how", and the "why" of the results. For the member-centric consortium, the report is seen as the beginning of the learning exercise, rather than the end of the journey.

Postscript 

Although I am no longer part of Australia Post Shared Services and have not been directly involved with the consortium since late 2002, I am pleased to report that it is still going strong.

I found the experience of setting up and being a part of such a consortium very rewarding personally, and would gladly undertake it again. If you have any specific questions regarding member-centric consortia, I would be more than happy to discuss them with you directly by email.

About the Author

At the time of his involvement in the benchmarking consortium, Richard was the Manager of Accounting Operations, Australia Post Shared Services Division. He is now working in the Logistics Division of Australia Post and completing a Six Sigma Black Belt qualification. He retains a keen interest in the business of shared services and current developments in Business Process Outsourcing. rlangley@ozemail.com.au
