The Better Agents We Need: Reclaiming Human Agency in the Age of AI

Why reclaiming human agency matters more than ever in the age of intelligent systems

Edesio Santana
08/07/2025

AI Agents

The Turning Point for AI-Human Collaboration

Flying home after a conference in San Francisco, I find myself returning to the larger themes we’ve been exploring across these articles: leadership through stamina and curiosity, the transformation of identity through technology, adaptability as a success metric, empathy as a core capability, and the shift from efficiency to meaning in enterprise services.

This time, we must talk about agency: the human ability to take action, to choose, and to shape systems rather than be shaped by them. The metaphor of the “agent” has long captivated popular culture, but in real life we now need better agents, people and systems alike, that act not only efficiently but ethically and intentionally. With artificial intelligence stepping into roles once considered distinctly human, the question is not whether we can automate tasks; it is whether we can still claim responsibility, especially as our tools gain the power to make decisions.

And so my reflection between airports led me to watch a randomly chosen movie, which turned out to be a bad idea. Instead of winding down, it kept my mind racing, re-examining how we define action, power, and accountability in today’s business environment. It turns out even leisure can become a kind of mirror, reflecting back the challenges we haven’t fully articulated.

The real challenge ahead is not fictional; there are no lasers to dodge or secret messages to decode. It is about rethinking agency in an era when systems write their own instructions, and leaders are asked not only to drive transformation but to safeguard what makes it human.

The Development of AI Agents in Shared Services

From Machines of War to Engines of Meaning

It’s easy to romanticize the agent archetype. From the Cold War era to Hollywood blockbusters, writers and directors have given us the campy spy who acts with confidence, follows ambiguous orders with unwavering resolve, often operates outside the constraints of conventional rules, and delivers results with collateral damage. Dame Judi Dench was immortalized on screen as a boss grilling one of her secret agents as a misogynist by-product of the Cold War era. The scene resonates less because the chain of command creates business culture, and more because both leaders and workers in the field need to be held accountable for their actions.

The tools we work with today are not spy gadgets or explosive devices. They are intelligent systems that recommend, predict, optimize, and adapt. They don’t wear suits or carry passports, but they do carry risk. What once required human judgment, interpreting context and responding to nuance, is now being modeled in code. The systems are fast, scalable, and self-improving, but do they reflect human-centered needs?

Agentic AI, as it is increasingly called, refers to systems capable of setting and pursuing their own goals: breaking down tasks into subroutines, evaluating feedback loops, and modifying behavior over time. They no longer require exact instructions; they can infer intent. That power is both astonishing and dangerous. Not because AI might someday become sentient, but because critical human roles must retain the ethical checks and emotional intelligence that guide wise decisions, and country leaders and business leaders alike sometimes miss that point entirely.
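To make that loop concrete, here is a minimal, hypothetical sketch in plain Python (no real agent framework; every name and the goal string are illustrative assumptions) of what "setting a goal, decomposing it into sub-tasks, and adjusting against feedback" can look like:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A toy 'agentic' loop: plan, act, evaluate feedback, stop when done."""
    goal: str
    plan: list = field(default_factory=list)
    log: list = field(default_factory=list)

    def decompose(self):
        # Break the goal into sub-tasks (stubbed as three generic steps).
        self.plan = [f"{self.goal}: step {i}" for i in range(1, 4)]

    def act(self, task):
        # Execute one sub-task and return a crude progress signal (0.0 - 1.0).
        self.log.append(task)
        return len(self.log) / len(self.plan)

    def run(self):
        self.decompose()
        for task in list(self.plan):
            progress = self.act(task)
            # Feedback loop: the agent, not a human, decides when to stop.
            if progress >= 1.0:
                break
        return self.log

agent = Agent(goal="close monthly books")
completed = agent.run()
```

The point of the sketch is the shape of the loop: once the goal is set, the system plans and acts on its own, which is exactly why a human checkpoint, an ethics review or an approval gate, belongs inside that loop rather than after it.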

In business, we have long valued process optimization. We build systems to minimize waste, maximize output, and scale operations across geographies. But there is a tipping point where efficiency without empathy becomes exploitation, and where automation without alignment becomes abdication. This is the central challenge of our time: how do we integrate AI with the people who provide its inputs?

That’s a challenge deeply relevant to shared services and outsourcing. These functions are increasingly infused with AI-driven tools, from chatbots to workflow engines to advanced data analytics. But success here isn’t just about better SLAs or reduced turnaround time. It’s about how those tools are designed, governed, and contextualized. The best systems are not just fast; they are fair, they reflect human intent, and they consider what human purpose looks like in machine-dominated and ethically ambiguous environments alike.

The Evolving Role of Design-Thinking

The Return to Design and the Rediscovery of Self

At a recent gathering of thinkers, designers, and technologists in San Francisco, I found myself transported not just by the content, but by the space itself. Every detail, from a classic 1960 Aermacchi Harley-Davidson motorcycle off to the side to the paintings and books, reminded me that design is not aesthetic decoration. It is applied empathy: shaping how people experience systems, making things work and making them matter. Those things include products, services, and the way corporations and global process leaders manage their end-to-end handshakes.

Back in the 1990s, I was a web designer in São Paulo, viewing design as an artistic craft. Later, in corporate life, I traded that identity for a suit and a couple of titles. That was the time of Lean Six Sigma and Change Management, of process re-design and transformation programs in global consulting roles. Yet something always remained: a desire to solve problems not just through spreadsheets, but through a different kind of data point. That desire resurfaced through design thinking, when I joined IBM and reconnected with practitioners around the world to analyze stories.

Design has evolved from product-centric to people-centric, and now to system-centric. It is no longer just the domain of artists or UX professionals; it has already become a strategic language for leaders. In shared services, this is especially true: we design operating models, stakeholder journeys, technology stacks, and governance frameworks. The more complex the environment, the more useful it becomes to think outside of pre-defined conventions, like a designer.

Understanding Broader Consequences of AI

At the same time, emerging technology continues to move faster than our ability to fully comprehend its broader implications. As I walked the streets of San Francisco before the event, just blocks away from driverless cars and high-tech labs, I also saw a parallel city: people sleeping on sidewalks, struggling with addiction, largely invisible to the systems designed to optimize the future. That contrast is not a glitch; it is the result of human choices.

Technology reflects values: if inclusion cannot become part of the design brief, then exclusion will eventually come knocking at the back door. Artificial intelligence doesn’t decide whom it benefits; it doesn’t stop copyright infringement and theft, or shield a workforce impacted by poorly communicated change. People drive all of that, and the same goes for processes in shared services. If success metrics focus simply on cost, then perhaps care will vanish. If we measure success only in response time, then who knows? Maybe even reflection will disappear.

Leadership today requires more than agility: the capacity to see unintended consequences and anticipate second-order effects while remaining accountable. That is the real challenge.

Reclaiming Agency & Reframing Leadership

The Ethical Implications of AI

Much of the current debate about AI is driven by fear: of job loss, of state surveillance, of deepfakes and biased systems. History shows us that fear often accompanies progress, as it did with the printing press, radio, and television. Change arrives before our understanding. The more pressing danger is not technology, but the misuse of machines by people in power. When algorithms embedded with bias are presented as objective, trust erodes. When facial recognition systems are deployed without transparency, rights are violated. When smart cities are built without inclusive participation, they become spaces of control rather than empowerment.

For enterprise services, the path forward is not to resist AI, but to resist the abdication of responsibility. We must design systems with context in mind. We must equip teams to ask not just whether something can be done, but whether it should. And we must reclaim the role of human agency, not as an obsolete relic but as the defining advantage.

This begins with better questions. Answers emerge across global operations, with signs of this shift already taking place. Teams are embedding ethics reviews into automation pipelines, and leaders are including human-centric facilitators in system rollouts. Centers of Excellence are evolving from rule enforcers to learning networks, and new talent models are placing value on expertise, empathy, narrative fluency, and systems thinking.

This is the kind of agency we need: quiet, committed, and collective. The type that is present, mindful, and dares to ask the inconvenient questions.

What Leaders Must Carry Forward

As we close this article, one question remains: what makes an agent “better”? The answer is not found in processing speed or technical mastery, but in the alignment between human action and values, between systems and users, between what we build and whom we serve.

Leaders in shared services have a unique opportunity to reach out across functions, regions, and technologies. We are the link between business needs and operational capability, between strategy and execution, which means we are uniquely positioned to embed better agency into every layer of our operations, in every conversation with our stakeholders.

The conversation shifted more than two decades ago, from governance frameworks and compliance checklists to nurturing a culture where people feel empowered to challenge assumptions, question defaults, and advocate for those who are not in the room. It is all about designing with care, delivering with purpose, and choosing meaning over metrics.

Each of us is a leader, hearing a higher call to reclaim our role as designers of context, builders of trust, and agents of the future we want to live in. Systems don’t have to shape people; it is our choice to shape them, every time we weigh the right choices and become agents of our own destiny.

For more insight into navigating the future of automation, download SSON's recent report: The State of Intelligent Automation 2025. 

