"Dark Towers" – and What Happens if you Get Complacent

Governance & Controls Matter!

Barbara Hodge
02/24/2020

I’m deeply into a new book, David Enrich’s Dark Towers, the depressing story of the extraordinary transformation of Deutsche Bank from a respected [I'm starting from the basis of 1980, as otherwise there would be a lot more to say], albeit regionalized, institution into a pseudo-global juggernaut. I say pseudo because, as the book explains in great detail, its globalized success was more illusory than real.

I find myself unable to stop thinking about what transpired at DB over two dark decades – the lost reputation, the degraded service, the decimated stock price. It’s the all too human story of a Faustian pact with the devil: financial greed running rampant and destroying an otherwise solid foundation. [I should, for the sake of full disclosure, state that my first job was at Deutsche Bank Capital Markets in London, pretty much around the time this drama started unfolding.]

The point is not, of course, that transformation is bad. Netflix would never have become the success story it is if it hadn’t switched gears. But I find myself constantly reminded that transparency, checks and balances, and centralized governance are not just important but critical – especially in globalized environments where so much data can so easily get lost.

As I find myself horrified at just how badly things can go wrong, I’m also heartened by the fact that centralized models exist which, given sufficient access, can identify and monitor critical metrics ranging from cost to volume to risk. It's a vote for Shared Services. [Of course, if these risk metrics are ignored, that’s a different story…]

While thus occupied, another email, courtesy of Mike Sturm's Woolgathering, brought the Einstellung effect to my attention. For this we have American psychologist Abraham Luchins to thank, who, by way of his water jug experiment, highlighted the problem of developing a mechanized state of mind through acquired expertise in a given process – which leads to bias, and thus to mistakes.

It’s a kind of functional fixedness: a predisposition to solve a given problem in a specific manner, ignoring the fact that better or more appropriate solutions might exist. Subjects apply the same practiced solution to similar problems and overlook simpler, more direct approaches.

There is a danger, in other words, of developing an 'expert' way of solving a problem. The danger is that we don’t take a fresh look at the issue at hand and seek the best solution through blue-sky thinking. Instead, we deploy a tried and tested approach.
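For the curious, here's a minimal Python sketch of the idea, using jug capacities and targets in the spirit of Luchins' classic problems (the numbers and the toy solver are illustrative, not a reproduction of his experiment). Because the practiced formula b − a − 2c solves the early problems, a solver that always reaches for it first reports it even when a simpler answer exists – and on the last problem it fails outright:

```python
# A toy illustration of the Einstellung effect via Luchins-style water jug problems.
# Each problem gives three jug capacities (a, b, c) and a target volume.

problems = [
    {"jugs": (21, 127, 3),  "target": 100},  # only b - a - 2c works: trains the habit
    {"jugs": (14, 163, 25), "target": 99},   # b - a - 2c again: reinforces it
    {"jugs": (23, 49, 3),   "target": 20},   # b - a - 2c works, but so does the simpler a - c
    {"jugs": (28, 76, 3),   "target": 25},   # b - a - 2c FAILS here; only a - c works
]

def einstellung_solver(a, b, c, target):
    """Tries the practiced formula first; simpler formulas only as a fallback."""
    if b - a - 2 * c == target:
        return "b - a - 2c"   # the mechanized, 'expert' move
    if a - c == target:
        return "a - c"        # the simpler solution it tends to overlook
    if a + c == target:
        return "a + c"
    return "no solution found"

for p in problems:
    a, b, c = p["jugs"]
    print(p["jugs"], "target", p["target"], "->", einstellung_solver(a, b, c, p["target"]))
```

The point isn't the arithmetic; it's the ordering. On the third problem the solver reports the complex formula even though a − c would do, and only the fallback rescues it on the fourth – much as Luchins' trained subjects stumbled when their rehearsed method stopped working.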


Seth Adler's SSON podcast with Michael Xiao of Blue Cross Blue Shield recently referenced "Group Think" and the need for diversity of thought – a cool concept. It's a fun and zany chat with excellent references to how work is approached. Listen here.


I made the connection with Centers of Expertise, which we have been talking about a lot recently. They are emerging as the SSO equivalent of US Navy SEALs – a special operations task force that can solve pressing and unique problems (their most common deployment, according to SSON’s 2020 survey, is scaling RPA across an enterprise).

The question thus presents itself: Is there a danger of getting stuck on a 'best' way to fix a problem with automation? Are we training people to take the same 'expert' approach to selecting a process candidate and deploying automation? One that might ignore an alternative, obvious solution:

  • doing away with the process completely;
  • reengineering it;
  • or using simple, existing in-house tools to fix it?

I’ve already heard a number of practitioners complain about exactly this.

So, while we remain fans of CoEs and absolutely believe in complementing standardized transactional automation with the kind of expert resolution CoEs deliver, the takeaway here is to keep pushing innovation, keep an open mind, and not get stuck thinking we already know how to do things.

One of the cautions about RPA is that if you choose the wrong candidate, you risk automating (hard-wiring) an inefficient process. We might say the same about the methodology we apply to solving problems.

We’ve been warned.
