How do you see your clients adopting AI and grappling with the rapid changes it is bringing?
CEOs have identified that AI is simple to try and hard to scale, and that’s why they come to Accenture. And you can see that in the explosive growth of our advanced AI practice over the past couple of years.
How are you seeing these changes around AI impacting workforces?
AI changes the work, it changes the workforce, and it changes the workbench. The tools that you’re going to use, whatever your job is, are different. In the age of AI, you’re going to be augmented. Accenture is helping clients rewire how the work gets done to take advantage of the technology. And then for the workforce itself, you have to have different skills. You’ll really see two things happening: an upskilling agenda—and that’s why Accenture created LearnVantage, to enable companies to invest in their people—and you’ll see a talent rotation, because not everyone is going to make the journey when you’re transforming your workforce.
How are you thinking about training and upskilling?
When you think about people development, it’s important to have two things in mind. First: clarity and the ability to update the skills needed as the technology is changing. And second, a broad view of what skills are needed. For example, while we need to have the leaders understand the technology, it is as important that we have leaders who are good communicators and know how to drive change. Because fundamentally, the age of AI is about changing the ways we work, having new mindsets, and reimagining not only every part of the enterprise, but every part of the product.
How are you thinking about AI risk and responsible AI?
Trust is the foundation for the use of AI. Without trust, companies will hesitate to move beyond pilots; with it, innovation will blossom. Responsible AI is critical for the success of any enterprise using AI, because it’s the foundation for scaling AI. Accenture had a responsible AI program before anybody knew the term responsible AI. We embed responsible AI in all of the work that we do, whether we’re delivering for a client or helping them use AI.
Can you give an example?
We’ve created a new product that ensures that when a company makes changes to its compliance policies, for example, all of the AI it’s using gets retrained to comply. That’s a hard thing to do. The product allows that to be done automatically across your entire enterprise, with all AI, to ensure that there’s never a disconnect between the behaviors you need, the humans who have them, and the digital agents that need to also comply with them. That’s a real, tangible example that applies to absolutely every use of an AI agent. And we created that product because companies’ HR departments today are not set up to ensure that AI agents acting for the company have these embedded policies and behaviors.
As a leader, what lessons have you learned during this period of transformation?
The human experience has to stay at the center of all design, because the technology doesn’t replace human ingenuity or humans themselves. [That] ensures that you have business value and not [just] an example of what AI can do. Those are two different things. You can show cool things with AI and not have the business value.