How to Leverage Coronavirus for Positive Change in GBS

Rob Bradford
03/05/2020

In the wake of the recent Coronavirus scare that is engulfing our daily lives, it is important to step back and see how this can be used as a lesson and a springboard for positive change. Call me a hopeless optimist, but I really do believe in the adage that “anything that doesn’t kill you should make you stronger”. I also believe in the infamous Rahm Emanuel quote, “Never let a serious crisis go to waste.” 

So, what can Coronavirus do to support the GBS case and bolster investment in positive change? Well, looking back on all the disasters I have had to deal with in the past and applying the above principles, here is a quick list of things you should be thinking about as a GBS leader, while the travel restrictions are in place and investments in the normal day-to-day are cut off.

1. Update Safety and Security Plans

Scares like these make you realize how frail we all are, and they should remind you how important our people are to us, both personally and professionally. Now is a good chance to go beyond the safety posters and the safety briefings at the beginning of meetings, and to lay out your policies and procedures for what to do when a large safety concern looks imminent.

Some of the things we have instituted in the past are simple and low-cost, but effective. Start by creating or updating your phone trees or automated employee messaging systems. Create a buddy system at work, so everyone has someone looking in on them. Teach managers and supervisors what to do if someone seems extremely ill. Make sure you have plenty of safety supplies. Your people are the most important asset you have. Think like a parent who wants to protect them.
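If you want to go a step further, the phone-tree logic itself is simple enough to prototype. Here is a minimal sketch in Python; the roles, the names, and the notify() transport are placeholders invented for illustration, not a prescription for any particular messaging product:

```python
# A minimal phone-tree sketch. The tree, the names, and the notify()
# transport (SMS, email, or a real phone call) are hypothetical
# placeholders -- swap in your own HR roster and messaging provider.

PHONE_TREE = {
    "gbs_lead":   ["ap_manager", "ar_manager", "it_manager"],
    "ap_manager": ["ap_clerk_1", "ap_clerk_2"],
    "ar_manager": ["ar_clerk_1"],
    "it_manager": [],
}

def notify(person: str, message: str) -> bool:
    """Stub: send the alert via your messaging system; True if acknowledged."""
    print(f"Notifying {person}: {message}")
    return True

def cascade(person: str, message: str, reached=None) -> set:
    """Walk the tree top-down, recording everyone who acknowledged."""
    if reached is None:
        reached = set()
    if notify(person, message):
        reached.add(person)
    for contact in PHONE_TREE.get(person, []):
        cascade(contact, message, reached)
    return reached

if __name__ == "__main__":
    everyone = set(PHONE_TREE) | {p for v in PHONE_TREE.values() for p in v}
    reached = cascade("gbs_lead", "Office closed tomorrow; work from home.")
    print("Follow up by phone with:", everyone - reached)
```

The payoff is the last line: anyone the cascade did not reach gets a personal follow-up, which is exactly what the buddy system is for.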




2. Test Your Cross-Training Plans

Now is the perfect time to make sure you are 3- to 4-deep in every critical role. Do you have backups if key people go out? Does everyone know their backups, and their secondary and tertiary roles? Do you have up-to-date training for non-primary roles? Do you have segregation-of-duties (SOD) and system access roles laid out for people shifting roles? Lots of people start with a chart that shows cross-trained resources and backups, but the devil is always in the detail.
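One way to get past the chart is to audit your depth mechanically. Below is a rough Python sketch with a made-up roster; in a real center, the data would come from your HR and training systems, and you would layer the SOD and access checks on top:

```python
# A quick cross-training audit sketch. The roster is a made-up example;
# pull the real data from your HR or training system.

MIN_DEPTH = 3  # aim for 3- to 4-deep, per the questions above

# role -> people trained and access-provisioned to perform it
roster = {
    "invoice_processing": ["ana", "ben", "chloe", "dev"],
    "cash_application":   ["ben", "chloe"],
    "master_data":        ["dev"],
    "payroll":            ["ana", "eva", "frank"],
}

# flag roles that are not deep enough
shallow = {role: people for role, people in roster.items()
           if len(people) < MIN_DEPTH}

# people covering only one role have no tested secondary assignment
coverage = {}
for role, people in roster.items():
    for person in people:
        coverage.setdefault(person, []).append(role)
single_role = [p for p, roles in coverage.items() if len(roles) == 1]

print("Roles below minimum depth:", shallow)
print("People with no secondary role:", single_role)
```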

In the past, I would often mandate that people swap primary roles every few months, just so everyone got used to working out the kinks and the supporting processes got fully tested.

3. Test and Update Your Fail-over and Continuity Plans

Just as with cross-training, I find a lot of people going through the motions of having failovers in place, but failing to work out the complex details or really think about what would happen in extreme cases. I can’t tell you how many times I have heard friends and colleagues brag about their ability to fail over to a hot site, only to find out that they have (for instance) a site in Noida backing up a site in Gurgaon. That is great if the disaster you are planning for is a highly localized one, like the power going out in your building for a few days, but disasters and pandemics tend to be regional in their impacts, so you need to be able to fail over to facilities that are much further away.

This poses a lot of other logistical problems, like comms links, language support, etc. The good news is that there are all kinds of services today to mitigate these issues, but you need to build those capabilities and engage those providers ahead of time. This is a good time to reinvigorate the passing of work between regional delivery centers and your follow-the-sun transaction flows.
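To make the follow-the-sun idea concrete, here is a toy routing sketch: work flows to the first center that is both on shift and outside any affected region. The center names, regions, and hours are invented for the example:

```python
# A follow-the-sun / regional failover sketch. Centers, regions, and
# working hours (in UTC) are illustrative assumptions. The point is
# that the failover target sits in a *different region*, not the
# building next door.

from datetime import datetime, timezone

CENTERS = [  # in order of routing preference
    {"name": "gurgaon",  "region": "apac", "open": 3,  "close": 12},
    {"name": "krakow",   "region": "emea", "open": 7,  "close": 16},
    {"name": "san_jose", "region": "amer", "open": 14, "close": 23},
]

def is_available(center, down_regions, hour):
    """A center can take work if its region is up and it is on shift."""
    return (center["region"] not in down_regions
            and center["open"] <= hour < center["close"])

def route(down_regions=frozenset(), now=None):
    """Return the first available center, or None if nobody can take work."""
    hour = (now or datetime.now(timezone.utc)).hour
    for center in CENTERS:
        if is_available(center, down_regions, hour):
            return center["name"]
    return None  # nobody is up: fall back to the manual plan in section 8

# e.g. a regional outage takes all of APAC offline:
print(route(down_regions={"apac"}))
```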

4. Make Sure your Backup Systems and Processes Actually Work

A funny story: years ago, we invested quite a large amount of money in a huge backup generator for a critical service facility in our GBS footprint. This state-of-the-art generator could supply the facility with full power for up to 3 days on its fuel tank, and we could refill the diesel tank easily and carry on practically forever off-grid.

Every time we had a storm, we rested easy, confident that our facility would never go down. Indeed, a large storm hit the city a year later, and a large portion of the city went dark, but our generator kicked in and saved the day. Three days later, almost to the second, all of our systems and lights went dark, in the middle of a perfectly sunny day. After confirming there were no rolling brownouts or other electrical grid problems, we were left wondering why we had lost power at all, and why the generator hadn’t kicked in. We sent a maintenance man outside to look at it, and he opened the fuel tank to find it completely empty. The generator had worked so well, so quietly, and so seamlessly that no one even realized we had been running on its power ever since the storm three days before. The only indicator that we were on backup power was a little red lamp installed in the mail room, and nobody ever wondered what the light was for. Since the generator required someone to manually cut back over to city power, nothing ever changed, and it simply ran itself dry.

The moral of the story: you may have backup systems and processes, but you should regularly test them and run drills so that people know what to do before, during, and after an incident. Every few months, we ran tests, just like fire drills, to shut off power, simulate unsafe work environments, etc., so people always knew exactly what to do.
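In software terms, the mail-room lamp becomes an alert nobody can miss. The sketch below assumes a hypothetical on_generator_power() signal from a building-management system; most generator controllers expose an equivalent transfer-switch state, but every name and number here is a placeholder:

```python
# A monitoring sketch inspired by the generator story. The sensor read
# and the alert channel are hypothetical stubs; wire them up to your
# building-management system and ops paging tool.

import time

GENERATOR_RUNTIME_HOURS = 72   # a full tank lasts roughly 3 days
CHECK_INTERVAL_HOURS = 1

def on_generator_power() -> bool:
    """Stub: read the transfer-switch state from your BMS or controller."""
    return False  # replace with a real sensor read

def alert(message: str) -> None:
    """Stub: page facilities, post to the ops channel, light the red lamp."""
    print(f"ALERT: {message}")

def watch() -> None:
    """Alert every hour we run on backup power, with a rough fuel estimate."""
    hours_on_generator = 0
    while True:
        if on_generator_power():
            hours_on_generator += CHECK_INTERVAL_HOURS
            remaining = GENERATOR_RUNTIME_HOURS - hours_on_generator
            alert(f"On generator power for {hours_on_generator}h; "
                  f"roughly {remaining}h of fuel left. Investigate and refuel.")
        else:
            hours_on_generator = 0  # back on city power
        time.sleep(CHECK_INTERVAL_HOURS * 3600)
```

Had something like this been in place, the empty-tank surprise would have been a stream of hourly pages instead of a dark building on a sunny day.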

5. Laptops, Internet, and Future of Work

One important decision we always make early on is to install laptops with docking stations in our centers, instead of desktop PCs. These days laptops are competitively priced and outfitted, so there is very little premium on making this investment. The laptop has huge advantages, however: its battery pack acts as a built-in UPS, and it can be taken home by employees so that work can continue in a distributed fashion outside the office. Today, especially in the Americas and Europe, most employees have access to high-speed internet at home. And with modern IP phone systems, VPNs, and cloud applications, it is entirely possible to have an office-like experience working from home. Managers and directors have been doing it for years, but there has been some trepidation about opening this policy up to all associates.



Now is the time. The new way of working will be in the home and in alternative locations, and you will need a future-of-work model to attract talent. Embrace it, because it also comes with disaster- and pandemic-mitigating power. But be careful: just like the cautionary tales above, you want to actively and repeatedly test this capability. In the past I have put in place mandatory rotations, where all employees are required to work 2-3 home days per month, so they get used to doing it and the infrastructure gets tested. Make it part of daily life in the centers, and it won’t feel strange at all when a real disaster strikes.
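Scheduling those rotations is trivial to automate. Here is a small sketch that gives every employee a couple of random weekday home-days each month; the team names and the numbers are illustrative:

```python
# A simple work-from-home rotation sketch: every employee gets a few
# home days each month so the remote setup is tested continuously.
# The team and the parameters are made up for illustration.

import calendar
import random

def schedule_wfh(employees, year, month, days_per_person=2, seed=None):
    """Assign each employee `days_per_person` random weekdays at home."""
    rng = random.Random(seed)
    weekdays = [d for d in range(1, calendar.monthrange(year, month)[1] + 1)
                if calendar.weekday(year, month, d) < 5]  # Mon-Fri only
    return {emp: sorted(rng.sample(weekdays, days_per_person))
            for emp in employees}

team = ["ana", "ben", "chloe", "dev", "eva"]
for emp, days in schedule_wfh(team, 2020, 4, seed=42).items():
    print(f"{emp}: home on April {days}")
```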

6. Contracts, Providers, and Force Majeure

Now would be a great time to go back and take a look at your contracts with major providers, and to engage them with questions about their continuity planning. Over the years, I have been involved with dozens of major XPO contract negotiations, and in almost every one there is a little catch-all clause (usually not explained in much detail) that releases your provider from financial accountability and/or SLA performance in the case of “Acts of God” or force majeure. You probably have the same clauses in your company’s contracts with your customers. Certainly, it is understandable that one cannot be held to account for an unforeseeable catastrophe; however, like all other things, the devil is in the details.

You should have frank conversations with your providers and dig under the surface: What constitutes a disaster of this nature? Do they have backup plans? How long will it take them to recover? How can you collaborate on your own continuity in the interim? Today, most companies have huge exposure and risk within GBS, because so much of our modern delivery capability lies in developing countries.

What constitutes an unavoidable disaster in India is likely held to a different standard than in Poland or the US. You need to fully understand your risk, exposure, and alternatives, and your business continuity plans should extend into your service-partner supply chain. One hint: you can gain a lot of ground quickly by merging your transition, insourcing, and continuity plans into a unified approach. Don’t start from scratch!

7. SLA, OLA, and Contractual Performance in Captive Centers

Just like #6 above, your in-house SSCs have exposure and risk as well, and it is rarely laid out in the kind of contractual detail used with external providers. And as we see from #6, even that isn't a great safeguard. In truth, good internal SLA and OLA documents should follow the same rigor as external contracts, and the idea of force majeure should be evaluated internally as well.

Now is a great time to make sure your SLAs and OLAs are not simply a collection of basic KPIs, but are upgraded to commercial-grade documents that cover things like service levels during a global or regional disaster. They should also cover backup and manual alternative processing plans agreed with your customer base.
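As a thought-starter, here is what an upgraded SLA record might look like as structured data, with explicit degraded-mode tiers; every field name and number below is invented for illustration:

```python
# A sketch of an SLA record that goes beyond a bare KPI list: normal
# targets, a degraded tier for a regional disaster, and a pointer to
# the manual fallback. All names, numbers, and document IDs are
# hypothetical examples.

invoice_processing_sla = {
    "service": "PO invoice processing",
    "normal": {"cycle_time_days": 3, "accuracy_pct": 99.5},
    "regional_disaster": {            # one delivery center down
        "cycle_time_days": 5,
        "accuracy_pct": 99.0,
        "activated_by": "BCP committee declaration",
    },
    "global_disaster": {              # pandemic / multi-region event
        "cycle_time_days": 10,
        "scope": "critical vendors only",
        "fallback": "manual processing plan MP-07",  # hypothetical doc ID
    },
}

print(invoice_processing_sla["regional_disaster"])
```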

8. Stone Age Fallbacks

In a worst-case scenario, it is critical that you have some kind of primitive fallback plan that does not rely too much on modern infrastructure and technology. As distasteful as it seems, it is good to keep a “break glass in case of emergency” manual continuity plan that skirts the need for ERP systems, workflows, internet, cloud computing, etc., at least for your most critical processes. In a large-scale disaster, you may be called upon to keep going even when modern tools are unavailable for an extended period. In 30 years of building and running GBS and Shared Services organizations, I have needed this only once, and I’m happy to say that we were at least somewhat prepared when it happened, during a disastrous cutover to a new ERP system. The cutover was botched so badly that the IT professionals were not even able to get the legacy system back up and running. For almost 2 days, we were without any ERP system at all.

Before the cutover, I had asked the head of our order entry department to print out catalogs with our most recent product pricing, phone books with all our largest customer contacts, and to mock up some manual, hand-written order forms that our people could fill out in a disaster scenario. As it turns out, this crazy idea ended up saving the day (or two days), as we were still able to process a large percentage of our orders by phone, and later to key them back into the new ERP system with help from temps.

What could have been a 100% loss of orders for 2 days ended up being only a 15-20% loss, due to the manual inefficiencies. So, have a plan B, a plan C, and a plan D, and be prepared to use paper and stone tablets if you must.

9. Know your Peers in the Supply Chain

One of the best things you can do is get to know the GBS leaders in your largest customers and your most critical direct-material vendors.  I can’t tell you how many times I have gotten some relief by reaching out to peers with whom I have developed a strong working relationship and negotiated a path through hard times. This works both ways, too. We have created GBS-to-GBS helplines to get our customers through tough scenarios they were facing.

For years I’ve heard people mentally relegate GBS to a dark corner of the company, to be seen and not heard, but we can be so much more important and essential than that. GBS is a small world, and leaders who reach out and create bonds in their own supply chains can have major impacts on external customer relationships and satisfaction.   

We can also play a vital role in business continuity for our entire organization, not just GBS processes. Now is a perfect time to get on LinkedIn, find out who your mirror is within your largest vendors and customers, and introduce yourself. I’ll bet they would love to share pandemic ideas and exchange best practices with you, too.

10. Don’t Panic

In my 30 years of GBS, Outsourcing, and Shared Services, I have seen floods, hurricanes, political unrest, currency collapses, multiple earthquakes, multiple tsunamis, a nuclear disaster, SARS, MERS, Ebola, swine flu, bird flu, and now the latest Coronavirus. This too shall pass. Be safe, be smart, listen to the experts, tune out the political opinion shows, hunker down, think about how these things might really affect operations, and use the time, while the whole rest of the world is panicked and frozen, to quietly, cautiously, and intelligently improve your preparedness. I’ve done it several times, and it really does help and gives you some peace at night.


Robert Bradford is a 30-year veteran of GBS and Shared Services. He is one of a very small group of people who have worked in all parts of the industry over their careers: as a strategy consultant in GBS fields, as a service provider, as an outsourcing contract negotiator, and as the primary executive building and operating three GBS/SSC organizations from a greenfield. His unique combination of perspectives allows him to see problems from many angles and work with diverse groups of people to quickly organize and execute.

In July of 2019, Robert retired as the SVP of GBS for AkzoNobel, a €15 billion multinational maker of decorative paints and industrial coatings. Since his retirement, he has been working with large-scale enterprises to reinvigorate their GBS organizations and expand into the next wave of value, while perfecting customer and employee experiences.

Connect with him on LinkedIn.

