Cracking the Code: Data Readiness for AI

Mauro Portela | 02/06/2026

Last month, our team did something unexpected. Instead of the usual team-building exercises, we were handed mystery briefcases. Eight teams, eight separate puzzles, one goal: crack the Da Vinci Code.

Each team worked furiously on their assigned enigma. Codes to decipher. Patterns to recognize. Locks to open. The energy was incredible. One by one, teams cracked their codes. Success!

Then came the finale.

All the pieces we'd unlocked individually had to assemble into a single working catapult. That's when it got interesting. Suddenly, cracking your individual puzzle wasn't enough. You needed to understand how your piece connected to everyone else's. The real code wasn't in the individual briefcases—it was in the assembly instructions we never received.

Sound familiar?

The GBS Assembly Challenge

We've spent years cracking individual codes in Global Business Services. We've built service lines. We've mastered near-shore and offshore delivery. We've implemented sophisticated software stacks. We've added AI capabilities to our tools. Each piece works. Each puzzle solved.

But here's what we're not talking about: Do we actually understand how all these pieces connect?

Every tool in your stack has AI features now. Your invoice processing has AI. Your customer service platform has AI. Your reporting suite has AI. Each vendor cracked their code. Each solution works independently.

But the catapult doesn't launch.

Why? Because we never cracked the real code: Data.

What the Code Actually Is

Here's the truth most of us avoid: We don't deeply understand what data flows into our processes, what data flows out, and what data is created within each process step and supporting tool.

Think about that for a moment.

You've implemented digital twins. You've deployed automation. You've enabled AI capabilities. But can you map the actual data journey through your operation?

- What data enters your Quote-to-Cash process?

- What data exits to your ERP?

- What data is created by your OCR tool, transformed by your workflow system, enriched by your validation layer?

- Where does master data get consumed? Where does it get created? Where does it get corrupted?

This isn't theoretical. This is the assembly instruction manual we never wrote.

And without it, AI can't build the catapult. It just generates pieces that don't fit together.
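A data flow map like the one sketched in those questions can start as plain structured records: for each process step, what data comes in, what goes out, and what is created there. The following is a minimal, illustrative sketch only; the step names and data objects are hypothetical, not a real inventory:

```python
# Minimal sketch of a data flow map for one process (illustrative;
# step names and data objects are hypothetical, not a real inventory).
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    inputs: list = field(default_factory=list)    # data consumed by the step
    outputs: list = field(default_factory=list)   # data passed downstream
    created: list = field(default_factory=list)   # data born inside the step

quote_to_cash = [
    Step("OCR intake", inputs=["invoice PDF"],
         outputs=["raw invoice fields"], created=["raw invoice fields"]),
    Step("Workflow validation", inputs=["raw invoice fields", "vendor master"],
         outputs=["validated invoice"], created=["validation flags"]),
    Step("ERP posting", inputs=["validated invoice"],
         outputs=["ledger entry"], created=["ledger entry"]),
]

def unmapped_inputs(steps):
    """Inputs that no earlier step produces: data entering from outside
    the process, or gaps in the map you haven't documented yet."""
    produced, gaps = set(), []
    for step in steps:
        gaps += [(step.name, i) for i in step.inputs if i not in produced]
        produced.update(step.outputs + step.created)
    return gaps

print(unmapped_inputs(quote_to_cash))
# → [('OCR intake', 'invoice PDF'), ('Workflow validation', 'vendor master')]
```

Even a toy map like this surfaces the key question immediately: "vendor master" enters the flow from outside, and nothing in the process owns it.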

Master Data: The Cipher Key

For years, we treated master data as an administrative task. Someone inputs vendor information so transactions can run. Boring. Necessary. Forgettable.

But here's what we missed: Master data is the cipher key. It's the encrypted alphabet that allows everything else to be decoded and connected.

Think about cracking a code. You can have all the encrypted messages you want, but without the cipher key—the legend that translates the symbols—you're just staring at gibberish. That's master data. It's the common language that translates "customer" across your CRM, your billing system, your fulfillment platform, and your analytics suite.

In a fully digital, AI-enabled operation, master data isn't administrative—it's what makes the code readable.

Here's why this matters now more than ever: In the old transactional world, bad data made transactions fail. Visible problem. Quick fix. But in an AI-enabled world, bad master data makes AI invent its own cipher key.

Your AI doesn't reject a poorly formatted vendor name. It predicts what it should be. It fills in missing tax IDs. It assumes payment terms. It hallucinates connections between data points that don't actually exist.

Invisible catastrophe.
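One defensive pattern is to gate master data behind explicit validation, so incomplete records fail loudly up front instead of being silently filled in downstream. This is a sketch under assumed field names, not anyone's production design:

```python
# Sketch of a fail-fast gate for vendor master records (illustrative;
# the required fields are hypothetical). Incomplete records are rejected
# explicitly rather than left for a downstream model to guess at.
REQUIRED_FIELDS = ("vendor_id", "legal_name", "tax_id", "payment_terms")

def validate_vendor(record: dict) -> list:
    """Return a list of problems; an empty list means the record passes."""
    return [f"missing {f}" for f in REQUIRED_FIELDS if not record.get(f)]

clean = {"vendor_id": "V-001", "legal_name": "Acme GmbH",
         "tax_id": "DE123456789", "payment_terms": "NET30"}
broken = {"vendor_id": "V-002", "legal_name": "Globex"}

assert validate_vendor(clean) == []
assert validate_vendor(broken) == ["missing tax_id", "missing payment_terms"]
```

The point is the contract, not the code: a missing tax ID becomes a visible rejection again, the way a failed transaction used to be.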

This connects directly to the Virtuoso Dynamic Model—the harmonization of data quality, process excellence, and technology. You can't achieve process excellence with a corrupted cipher key. You can't leverage technology when your translation mechanism is broken.

And you can't skip this work.

Three Questions You Need to Answer

Before you implement one more AI capability, ask yourself:

1. Can you draw the data flow map of your core processes? Not the process flow. The data flow. In your Order-to-Cash or Make-to-Sell operations—what comes in, what goes out, what transforms in between.

2. Who actually owns data quality in your operation? Not on the org chart. In reality. When master data is wrong, who fixes it? Who prevents it? Who even knows it's wrong?

3. Do your AI tools talk to each other, or just to you? Each tool has AI. But do they share a common data language? Or are they speaking different dialects of "vendor," "customer," "transaction"?

If you can't answer these clearly, you haven't cracked the code yet.
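The third question can be probed mechanically: pull the same entity from two systems and see whether the records agree. A minimal sketch, with hypothetical system extracts and keys:

```python
# Sketch: detect "dialect" mismatches for the same vendor across two
# systems (all records hypothetical). With a shared cipher key, the
# join key and core attributes agree; anything else gets flagged.
crm = {"V-001": {"name": "ACME GMBH", "tax_id": "DE123456789"},
       "V-002": {"name": "Globex Corp", "tax_id": "US99-1234567"}}

billing = {"V-001": {"name": "Acme GmbH", "tax_id": "DE123456789"},
           "V-003": {"name": "Initech", "tax_id": "US11-7654321"}}

def reconcile(a: dict, b: dict) -> dict:
    """Compare overlapping records field by field (case-insensitively);
    report disagreements and records that exist in only one system."""
    issues = {}
    for key in a.keys() | b.keys():
        if key not in a or key not in b:
            issues[key] = ["present in only one system"]
            continue
        diffs = [f for f in a[key]
                 if a[key][f].casefold() != b[key].get(f, "").casefold()]
        if diffs:
            issues[key] = diffs
    return issues

print(reconcile(crm, billing))
# V-001 matches once case is ignored, so only V-002 and V-003 are
# flagged as single-system records.
```

Run against real extracts, a report like this is the fastest way to find out whether your tools share a common data language or merely share a screen with you.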

Why There Are No Shortcuts

I know what you're thinking: "Can't we leapfrog this? Can't the AI fix the data as it learns?"

No. Well, at least not yet.

And we're seeing the evidence now. Gartner's latest Hype Cycle research shows GenAI entering what they call the "Trough of Disillusionment"—that inevitable phase where initial excitement meets operational reality.[1] Less than 30% of AI leaders report their CEOs are happy with AI investment returns. The reason? 57% of organizations admit their data simply isn't AI-ready.[2]

This is foundation work. Data governance at enterprise scale—creating a common language, a shared cipher key across your company. It's not sexy. It won't make your next board presentation. But without it, every AI implementation you attempt will be trying to crack a code with the wrong key.

The companies that crack this code first won't just have better AI outcomes. They'll have a competitive advantage that's nearly impossible to replicate. Because while everyone else is debugging their AI hallucinations, they'll be unlocking actual intelligence amplification.

I've seen this firsthand. At Danone, we spent years building data readiness foundations around bank account and vendor master data. Unglamorous work. The kind that doesn't generate headlines. But that foundation enabled us to develop Vault, a multi-layer fraud prevention security model that has become an industry-recognized demonstration of what data readiness delivers.

Vault didn't succeed because we had better AI tools than anyone else. It succeeded because we cracked the data code first. Clean master data. Mapped data flows. Clear ownership. The cipher key that made everything else readable.

That's the unlock. Not the technology. The foundation.

The Real Unlock

Remember the catapult? The individual puzzles were clever. But the moment of real achievement was when all the pieces came together and something actually launched.

That's what cracking the data code enables.

Not individual AI features that sort of work in isolated pockets. But an integrated, intelligent operation where data flows cleanly, AI amplifies genuinely, and the whole system becomes more than the sum of its parts.

The code is data. The cipher key is master data. The work may not always be in the spotlight, but the payoff is transformational.

The question is: Are you ready to crack it?

What's your experience with data readiness in your AI implementations? I'd value hearing your perspective.


References

[1] Gartner, Inc. "The 2025 Hype Cycle for Artificial Intelligence Goes Beyond GenAI." September 12, 2025. https://www.gartner.com/en/articles/hype-cycle-for-artificial-intelligence

[2] Gartner, Inc. "The 2025 Hype Cycle for Artificial Intelligence Goes Beyond GenAI." September 12, 2025. "Despite an average spend of $1.9 million on GenAI initiatives in 2024, less than 30% of AI leaders report their CEOs are happy with AI investment return... 57% of organizations estimate their data is not AI-ready."
