Episode 269: A Smarter Framework for Human Centered Decisions in HR (with Kate O’Neill)
AI, layoffs, reskilling - everyone's reacting to the same headlines. The question is: are they making the right decisions in response to them?
In this episode of the Digital HR Leaders podcast, David Green is joined by Kate O’Neill, Tech Humanist, keynote speaker, and author of What Matters Next, to explore how leaders can make more deliberate, context-aware decisions in a landscape shaped by constant change.
Join them as they discuss:
Why reacting to headlines can lead to poor decision-making
How to take a more structured, long-term view of change
The ultimate approach to prioritising in complex environments
What meaningful upskilling looks like in the age of AI
Why prompting reflects a deeper shift in how work gets done
How to build trust and communicate change more effectively
This episode is sponsored by Visier.
Visier Workforce AI is your GPS for workforce decisions. Spot attrition risk, uncover pay gaps, measure leadership impact, and track skills shortages before they slow growth. Then act. Align talent to real business outcomes.
Across industries, HR and business leaders are using Visier Workforce AI to navigate the biggest workforce shifts of our time. Move from knowing to doing, faster.
See it in action at visier.com
Also, make sure to explore Visier’s latest research on strategic workforce planning in the AI era.
Resources:
What Matters Next: A Leader's Guide to Making Human-Friendly Tech Decisions
This episode of the Digital HR Leaders podcast is brought to you by Visier.
[0:00:08] David Green: Today, we have more data, more tools, and more frameworks for navigating change than at any point in history. And yet, I tend to find that as HR and people leaders, we have never felt less certain about the workforce decisions we are making, what to prioritise, what to ignore, when to move fast, when to hold back, and perhaps most pressingly, how to bring our people with us through change that isn't slowing down anytime soon. That's exactly what today's conversation is about. Joining me today is Kate O'Neill, author of the recently published book, What Matters Next: A Leader's Guide to Making Human-Friendly Tech Decisions in a World That's Moving Too Fast. Known as the Tech Humanist, Kate, who is also on the prestigious Thinkers50 list, is a strategist who helps organisations to improve the human experience at scale.
In our conversation today, Kate and I will discuss why trying to have one single narrative about the future of work might actually be part of the problem; how to approach decision-making when you don't have all the information; how to transform your organisation with technology through a human-centric lens; and why following the headlines, especially around AI and layoffs, can sometimes do more harm than good. We also get into where to start with upskilling; why becoming accomplished at prompting is more important than many think, and what that tells us about how work is changing more broadly. So, if you've been trying to get clear on what really deserves your attention, this conversation should help. Without further ado, let's get into it.
Kate, welcome to the Digital HR Leaders podcast. Can we start by finding out a little bit more about you and your work, who is Kate O'Neill, and what's the journey that's led you to where you are today? And I know you do lots of different things, so I'm particularly looking forward to this.
[0:02:06] Kate O'Neill: Yeah, thank you, David. Thanks for having me on the show. This is a great honour. I know you have a lot of really incredible people on here, so it's an honour to be here. I am a technologist, a strategist, a writer, a thinker, and mostly I think I've been known these last 10 or 15 years or so as the Tech Humanist. And that feels like a great encapsulation of the 30-some years of my career: I've always been fascinated with technology since childhood. I've also always been fascinated with some of the basic building blocks of humanity, like language and communication and connection with one another and things like that. So, those never felt like they were in tension with one another to me. It always felt like, how are we going to use technology to help us better communicate, help us better connect? And how are we going to use our human presence with one another, in the presence of technology, to facilitate our interactions better?
So, that's been fascinating to me through all of the work I've done, as one of the first 100 employees at Netflix, which is always fun to go back and talk about. But I've also worked in healthcare and media and music and entertainment, and all kinds of different fields, lots of B2B services. So, it's always that intersection of how we're using technology and how we're really bringing out the best of humanity that really makes me tick.
[0:03:37] David Green: So, you were one of the first 100 employees at Netflix. Tell us a little bit about that, because obviously the company's changed a lot over the years, hasn't it?
[0:03:43] Kate O'Neill: Yeah, it has for sure. It's funny whenever I meet someone now and then who's a former Netflixer, but they were there in the 2010s or something. They feel like it's been forever since they were there. I'm like, "Oh, well, it's really been forever since I was there". But it was the early days. Obviously, we were still renting DVDs and I got to be really kind of hands in the dirt in a lot of different things. So, I was the first content manager that the company had had, overseeing the content database, but doing it from a very human-centric place, kind of thinking about what does it mean to create better experiences for people so that they can find the movie that they want to watch, so that they can recommend the service? How do we make sure that people are getting that kind of experience? And so, getting to think about the completeness of the database as a metric that actually impacts user experience. I think it was a really formative experience for me in trying to blend a lot of these kinds of operational ideas and analytics with how that actually shows up on the user side for people.
[0:05:00] David Green: And obviously, we don't want to make the whole episode about Netflix, but I think one thing that really resonates in the workplace is that Netflix is always used, as is Amazon, as a great example of using data to personalise the customer experience. And what we're seeing in the workplace now, increasingly with AI, is the ability to personalise the employee experience as well, whether it's serving up learning opportunities, career pathways, potential mentors within the organisation, etc. And it's one of the ways that technology actually does have a human face, in that it does that personalisation. I'd love to hear your thoughts about that before we move on, but I guess we'll probably draw on that quite a lot in our conversation.
[0:05:47] Kate O'Neill: Yeah, it seems like such a rich mine to go down. There was a lot already at Netflix, even in those early days, where the building blocks were there. Obviously, the company was not yet as sophisticated with the kind of implementation of data-rich employee experience as you're describing, but I think the instinct was there. Of course, there's the very famous Culture Deck that came out of early Netflix, from Reed Hastings and Patty McCord and some of the other folks who were leading the company at the time. That it went viral in the early 2000s and shaped a lot of the Silicon Valley culture that followed was kind of no coincidence, because I think there was a lot of very intentional culture-building, and a lot of very real thought and intentionality that went into thinking about how we create the kind of experience for people where they show up fully, they bring their best thinking, and they collaborate in a rich way, in a way that's not afraid of a little conflict, but that ultimately results in the best experience and the best outcomes for the company, and the best contribution to the industry. That, I think, shows over the course of the years after.
[0:07:06] David Green: Brilliant. Well, let's bring that forward. So, as you've just explained, you've spent a lot of time looking at how technology shapes the human experience, not just at work, but from a customer perspective and across society more broadly as well. When you look at the workplace today, what do you think feels most misunderstood about the moment we're in?
[0:07:26] Kate O'Neill: The thing that I think about a lot, as it relates to work and jobs in the moment we're in, is how much we talk about the future of work. As a professional keynote speaker, I get a lot of enquiries that have to do with it, companies and organisations and associations saying, "We're doing a whole thing about the future of work". And I'm glad that people are having that conversation, but I think it's a very complex conversation to nest within the term 'future of work'. Because, as I like to point out, it feels to me like a nested Russian doll of topics. Within the future of work, you've actually got the future of jobs, the future of the workplace, the future of workflow, productivity, tasks, job roles, titles, labour relations. All these different topics have some bearing on what this kind of collective, aggregate future is likely to be. But I think we don't often enough unpack it into its component parts and think about which of those we're trying to affect in any given moment with any given activity or policy or initiative, or whatever. I think it would do us a lot of good to bring more rigour and more of a complete taxonomy to the way we approach it.
[0:08:51] David Green: And do you think it's a problem that we're having what seems to be one conversation, whether it's about the future of work or about AI, when there are actually multiple different shifts all happening at the same time?
[0:09:02] Kate O'Neill: It inevitably intersects with AI and with technology. And at the point of intersection on any one of those topics, it starts to feel like the whole topic. Like, if you're thinking about how intelligent automation can bring greater productivity, well that's a very different conversation from how intelligent automation intersects with job roles and titles. We need to separate those conversations very much so. And in order to think about that in a healthy, human-centric way, we also need to recognise that there are some constructs that we're allowing to bleed over that don't belong in those other conversations. Productivity is a metric we should be using to measure systems and tools, not people. When we think about people, we can think about their effectiveness or we can think about how well they're communicating and how well they're producing useful output, and what their outcomes are. But productivity is not a particularly useful metric for people.
So, it's one of those things where I think we've allowed the Taylorist workflow modelling of work to bleed across the entire discourse. And where it's really kind of showing up right now is where agentic AI and a lot of these other modalities of AI are looking like they cross the entire span. And I don't think that they do in exactly the way people are assuming they do, just because they're allowing one of those pieces of the conversation to bleed over across the entire discussion. So, if there's nothing else you take away from this discussion, I hope it will be that when you read about AI and the future of work, you immediately ask yourself: which part of AI, which part of the future of work, what are we actually measuring, and how is it changing at that particular level for that particular application?
[0:11:01] David Green: Yeah. As we said, it's not just about implementing tools. It's, where can we apply AI to help us with a process that is cumbersome, annoying, takes up lots of time, and that maybe AI could do better and more swiftly, freeing people up to do more impactful work and taking them away from the humdrum of some tasks? Or, and probably it's an 'and', what are the business challenges or business priorities that we've got at the moment that AI can really support us with? And then it's investing there, rather than just trying to apply it everywhere.
[0:11:43] Kate O'Neill: Yeah, exactly. And I think, too, that very few people who are in a position of decision-making authority for organisations have a very rich and robust understanding of the world of AI beyond large language models as they've experienced them since the ChatGPT moment began for the public in November 2022. So, those of us who have been working around various forms of AI and machine learning for many, many years, we take for granted that when you talk about automation, you are talking about some of these other forms of big data and machine learning, things that can happen at much more scripted, much more finite, discrete levels of application. And you can solve really specific problems in ways that agentic AI is looking poised to do for us, if we can apply it in the right ways at the right levels. That's where I think the trick comes in: we're skipping right past this kind of primer of an education about the robustness of AI's history and why we should understand it, and jumping right into prompt-based, large-language-model-inspired, agentic application, and not thinking about how we use the power of these tools in ways that actually support our organisation best and allow people to show up most effectively, doing what they do best and serving the organisation and its customers.
[0:13:16] David Green: This episode of the Digital HR Leaders podcast is sponsored by Visier. When top talent leaves and skills gaps appear, how do you find your way? Visier Workforce AI is your GPS for workforce decisions. Spot attrition risk, uncover pay gaps, measure leadership impact, and track skills shortages before they slow growth, then act. Align talent to real business outcomes. Across industries, HR and business leaders are using Visier Workforce AI to navigate the biggest workforce shifts of our time. Move from knowing to doing faster. See it in action at visier.com/demo.
So, in What Matters Next, you make the point that the pace of change today isn't just faster, it's fundamentally different, both in terms of how quickly things are evolving and the scale of the impact decisions can have. It feels like leaders, and I'm sure many HR leaders listening to this can resonate with this, are being asked to make decisions with incomplete information, under pressure, and with much bigger consequences than perhaps we've faced in the past. What's your guidance, Kate, around how leaders should be approaching decision-making in this kind of environment, which I don't think is going to change in the foreseeable future?
[0:15:02] Kate O'Neill: No. Every time I ask an audience to raise their hands if they think that the future is looking very uncertain and that everything is moving so fast, almost every hand goes up; and when I then ask them to raise their hand if they think that's going to be any different or better in the years to come, no hands go up. Everybody is sort of in agreement on this. I think we all sort of anticipate a very continuous sense of acceleration. And the problem with that is, as you say, what we're accelerating toward, and what's causing that acceleration, is this increasingly powerful set of technologies that bring capacity and scale and consequence to the outcomes of those decisions. So, the way we use them matters, and we need to be thoughtful about how we implement them, not just because we're missing out on a lot of their capability if we don't implement them well, although that's true too; but because if we implement them poorly, there will be suffering on the other end of that, whether it's because of human job loss, or because the people in our user and customer communities won't get ideal outcomes from what we are trying to serve them, and so on; and sometimes people who are second- and third-degree removed from these decisions, in larger communities, are affected by these decisions as well. So, it's no small matter.
So, what I like to do is first of all to say, we have to kind of change the way we're thinking about the future, because the future seems like it's this black box, we have absolutely no idea what's coming, it all feels too uncertain and unclear. And so, I think that the tool that I introduce in What Matters Next to deal with that sense of uncertainty is called the Now-Next Continuum. It is really just kind of a glorified timeline, but it is a way of showing the continuity from the past to the present into the future, and this sense that nothing is sort of magically transforming in any one of those time periods. Things have a flow. So, the decisions you've already made in the past are showing up in various ways today, and the decisions you make today will show up in various ways tomorrow. So, there's nothing super-mysterious about that. And I think it helps people feel more of a sense of grounding, also more of a sense of agency over the decisions that they're making today and how those play out in the future.
There's also this piece around how we accelerate, how we go ahead and move faster in this world with a sense of ethics and responsibility, kind of carrying with us what we know matters into this moment. And that's where I introduce the Questions-Insights-Foresights model. This is truly just a different way of assessing what we know and understand, in order to be able to take on the speed of questions and decision-making in a more agile way, with a lot more human wisdom at our disposal. It's an iterative process of knowing that we have to ask better questions, and that we have to look at the answers we get not as final answers or decisions, but as partial answers, as a set of information from which we can extract meaningful insights. And insights are really the tool here. They're the nugget that helps us make better decisions using the clarity of the lens that insights offer us, and then be able to decide for today. The most powerful thing, I think, in the entire book is this idea of bankable foresights: the idea that as we go through this process, we not only accrue meaningful insights, but we also set aside things that we know with some confidence are likely to matter in the future, but that we just don't need to deal with quite yet. And the more of those we accrue, the more they sort of triangulate an obvious direction for how we're steering the ship.
So, it becomes not a matter of this mysterious black box of the future, but a lot of pieces that we have already laid in place, that we already understand as milestones and goalposts we're working toward; we have some confidence about what's coming along with those decisions, and we can move with a lot more clarity and confidence toward them.
[0:19:25] David Green: Really good. And I think one of the key things you said there is that the insights support better decision-making. And obviously, as a Tech Humanist, I'm sure you would agree with this: we don't want technology to make the decisions for us. We want it to help us, as humans, make better decisions.
[0:19:40] Kate O'Neill: Yeah, this is definitely one of those areas that the techno-utopists are very much about: the idea that automating and letting the computer make the decisions is going to free us from having to make decisions. And I think that sounds horrifying on most levels. Obviously, there are some low-level decisions where you could potentially build robust enough algorithms to handle most cases, and they have little consequence for human life, so you can deal with that. There are small choices that can work that way. But once you get to the point where you're actually deciding at a business strategy level, at a truly operational level, at levels that impact what your next few moves are going to be and how that's going to play out for people, then you do need this more human-in-the-loop review process. You need AI tools and technology to be making informed recommendations to you and providing you with all of the reasoning that went behind them, all of the tools for you to be able to sanity-check and build a better decision-making process for yourself, as well as codifying those decision-making processes further into algorithms and AI.
So, it's a really robust process that I think we can easily develop within any organisation. It's really just about making sure that you're not sort of defaulting into the, "Let's just lay a layer of AI over top of everything and let it run the show. And oh, by the way, we're going to cut 15,000 people from the call support side because we're going to let AI handle all of the customer support". There are just so many ways in which those are not the right ways to proceed. So, I think there's this kind of more measured approach, more stepwise approach, more intentional approach, and more thoroughly, integratively-designed approach to how it works.
[0:21:41] David Green: And what you're just saying there, Kate, about humans making decisions: if we think about most of the people listening to this show, working in HR, supporting their organisations to make decisions about people, who they hire, who they promote, pay increases, who's going to be put on the accelerated leadership development programme, etc, it's so important that those decisions are made by humans.
[0:22:05] Kate O'Neill: Yeah, absolutely. And I think there's a lot of facility that we get with different kinds of tools, even large language models, but certainly different sets of tools within the AI umbrella as well, that can help us lift the data out of data sets and recognise opportunities we might not otherwise have recognised, and review them. Like, if you're looking at who performed the best across a bunch of different metrics, it might be nice to have technology help you crunch the numbers on databases, on spreadsheets, and then be able to review those. And you bring the context to it, you bring the emotional intelligence. You know that maybe someone's name floats to the top of the list, but that person is not management material by any stretch of the imagination. Maybe that means, though, that that person needs to be offered opportunities to become management material, to take management training and to take some educational opportunities that may help them grow and cultivate as a manager.
But it's that kind of intuition that you have as someone who works in HR, who has spent your life, your career, looking out for the sort of magical ways that companies and organisations operate, and for what it is about the dynamics of human behaviour that makes that right, that makes that real. And AI is just not going to get that. It's not going to get it without you understanding that and bringing that wisdom to bear on decision-making.
[0:23:40] David Green: Very good. And you mentioned the bankable foresights piece, which I really liked the sound of, because obviously one of the challenges is that everything's happening so fast, you're almost having to do things in the moment. And what I interpreted from listening to you there, Kate, is that you're saying, okay, this is going to be important, but maybe not now, maybe in six months, maybe in a year's time. So, it's actually helping you with your prioritisation in the near future, rather than just having to make decisions in the flow.
[0:24:08] Kate O'Neill: Yeah, exactly. A lot of times, people think that we're moving too fast these days, so I ask what's the longest time horizon on which they're planning. Ten years ago, if I asked that question, ten years was not an unusual time horizon to hear back from companies. These days, it's very rare. Only the largest organisations will talk about anything over five years. Most organisations have really consolidated and shrunk their time horizons to three years or shorter. And I do think that contributes to the sense people have that everything feels kind of urgent and now, and they don't have a longer horizon of how they're thinking and planning. Even if those plans are going to change over time, it helps our human minds to have the clarity of, "Well, we think we're moving toward this, that's our North Star". And then, the North Star can always be nudged in a direction or adjusted as time goes on and we learn more information. That's the goal of learning more information. But I think it really helps to have a sense that there's something we're building toward, that there's a goal, that there's a vision.
There's this great piece of the equation that I jokingly call my million-dollar question in any room, because it truly can be. In almost any of my consultation and advisory sessions, if I ask this set of questions, I will find an opportunity to make or save at least a million dollars. The opportunity comes once you've identified a question, and you're looking into the future on the Now-Next Continuum and thinking about what is likely to matter. What happens is, as you think about the multiple possible futures, the multiple possible scenarios that could come out of this question that you're asking, there is, if you're intellectually honest about it, going to be one answer that seems most likely, that all the external forces of the world are kind of converging upon, and it seems like things are going to go that way. And there is probably one answer, which is not the same as that one, that you would most like to have happen: something where you become the market leader in this space, you set a new model, a new standard for how things are done. And those two are not going to be the same answer. But the gap between them is extremely illustrative, and it tells you exactly what you need to know about the actions you need to take and the decisions you need to make in order to close the gap between the most likely outcome and your most preferred future.
What you want, obviously, is to move the most likely outcome toward your most preferred future, not the other way around. You're not trying to compromise and settle toward the most likely outcome. You want to change the world. But the way you change the world is not going to be through big, sweeping kinds of motion. It's going to be one decision at a time, every single day, incremental moves toward this vision that you have and the clarity that you gain by constantly looking at the now and at the next, and bringing the clarity of that view together.
[0:27:27] David Green: I want to take a short break from this episode to introduce the Insight222 People Analytics Program, designed for senior leaders to connect, grow, and lead in the evolving world of people analytics. The programme brings together top HR professionals with extensive experience from global companies, offering a unique platform to expand your influence, gain invaluable industry insight and tackle real-world business challenges. As a member, you'll gain access to over 40 in-person and virtual events a year, advisory sessions with seasoned practitioners, as well as insights, ideas and learning to stay up-to-date with best practices and new thinking. Every connection made brings new possibilities to elevate your impact and drive meaningful change. To learn more, head over to insight222.com/program and join our group of global leaders.
Closing that gap between what you maybe expect to happen and what you really desire to happen as an organisation: part of that gap is going to be linked to your workforce capability. Not everything, but a big way in which you can close that gap is through your workforce capability. And obviously, there's been a lot of conversation, we've had it on this podcast on several episodes, about upskilling and reskilling in the context of AI. But it can still feel a little bit abstract, I think, for some organisations. If you were advising a leadership team today, and maybe some of the CHROs listening to this episode, where would you suggest they start?
[0:29:18] Kate O'Neill: So, I've been talking a lot lately about the idea of minimum viable skilling. Upskilling and reskilling are terms that, yes, as you say, we can get bogged down in. They're very, very important; I think we need to be thinking deeply about them. I've seen some really incredible programmes happening at the regional sort of public-private partnership level with different organisations and states in the US, or regions in different countries. Those kinds of programmes are incredibly powerful. But in the meantime, what I think needs to happen inside truly every organisation is that minimum viable skilling. And I think of that as being prompt skilling. I have coined this term, 'prompt skilling'. It means that, in addition to the up- and re-skilling we need to think about, everybody pretty much needs to know how to write a good prompt. The current generation of large language models and the coming generation of most of the agentic models we see in front of us are driven by prompts. There's a lot of prompting, and I don't see this trend going away anytime soon. We're going to see an awful lot of prompt-driven AI interactions.
So, what that means is everybody should have some level of familiarity and some level of comfort with how they articulate what it is they would like to have done, what the success conditions are, and what they don't want to have happen, you know, "Don't include this, do include this, make it a list that includes five different options, act as a whatever kind of role so that you're giving me the best possible input". And what's really interesting about this is that, with the exception of, "Act as a whatever kind of role", most of the characteristics of good prompting are also the characteristics of good communication and delegation. So, we talked earlier about that person who shows up high in the rankings of a spreadsheet analysis of who's performing really well in the organisation, but maybe isn't management material yet. Well, this is a way that they can become that management material. Let them become better at articulating what it is they're looking for, and prompting is a way to do that. It also sets up everyone in the organisation with a great deal more fluency and familiarity with the tools that are right in front of them; in most cases, many of the organisations I'm consulting with are using Copilot or some OpenAI set of tools. And so, this is an opportunity.
A lot of organisations are like, "We've got the tools in front of you. You're not using them. Why aren't you using them?" And a lot of employees are saying, "Well, I just don't really know how. I don't know how to get good results. I tried and I didn't like what I got. So, it's just a waste of time". And honestly, they're not wrong. The first few attempts at using a lot of these tools are not going to be very fruitful. But it takes learning the skill of actually getting the kind of output you're looking for. And once we do that, once we have that facility, I think a lot of things free up. We stop making it shameful that people are using the tools. We start trying to figure out what the boundary conditions of appropriate use of large language models and other AI inside the organisation are, how we disclose that, how we review it, how we make sure that we're vetting the inputs into the organisation and that we're not allowing hallucinations to be part of the reports we generate to customers, and that sort of thing. So, a lot of really good byproducts come out of this type of programme. So, this is very much my call to CHROs the world over: implement a prompt-skilling programme.
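For illustration, here is a minimal sketch of the kind of prompt Kate describes, covering a role, the task itself, success conditions and exclusions; the scenario and wording are hypothetical examples, not a template from the episode:

"Act as an experienced HR business partner. Draft a one-page briefing on our new hybrid-working policy for line managers. Include a list of five talking points they can use in team meetings. Do not include legal advice or salary details. Keep it under 400 words, in plain, practical language."

Each element maps onto the characteristics above: the role framing, a clear statement of the desired output, explicit success conditions, and what to leave out, which is also what good delegation to a person looks like.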
[0:33:01] David Green: And obviously, with prompting, it's not just a technical skill. It's about clarity of thinking, it's thinking about outcomes, as you just said, communication, storytelling perhaps, judgment. It's being able to be quite specific. And then, if you look at the World Economic Forum's biennial Future of Jobs Report, I think last year's one, which came out in January, had analytical thinking as the top skill required in 2025. And there are other creative and soft skills on there, as well as some hard skills such as technological literacy. Are you seeing this as part of a broader shift in what good work actually looks like?
[0:33:44] Kate O'Neill: Yeah, I think it is a shift that way. One of the other things I see is that, like I was mentioning, a lot of these AI programmes within companies suffer from low adoption. And one of the reasons I think people don't participate in AI programmes at companies, which is part of the case for putting a programme like prompt skilling in place, is that they feel like they're training their successor, right? "I'm making the robot smarter that's going to replace me and I don't want to do that. I don't want to have any role in it, I don't want to be complicit in this whatsoever". And I'm sympathetic to that, because I think there's a lot of uncertainty and a lot of existential-level angst that people feel about this, and I think it's important that we all feel some empathy for that.
I think that one of the very real pieces of persuasion is that these are skills that are marketable. Being able to interact with these tools is a way to ensure that you are still employable, even if you lose this particular job. Even if it does happen to turn out the way that you're fearing, that the company decides to cut your department, your job, your role, then you find yourself out in the market and suddenly, you can't find work, because every job listing is asking for AI skills and you never took advantage of the opportunity to learn them. They're actually priced in now. The last number I saw was that there's a 28% markup for job listings that have AI skills listed among them versus those that don't. So, you're really asking for an awful lot more marketability and a lot more of a premium on your skills if you are willing to learn these skills on the job and show up more effectively at the same time. So, I think that's a really compelling way to sell those programmes internally, one that's aligned with human values and incentives.
[0:35:52] David Green: So, there have obviously been a lot of headlines, usually about the Big Tech companies and Big Tech layoffs, the likes of Meta, Amazon and, actually, the week we're recording this, Oracle. And from there, you invariably see it trickle down to other organisations as well. What's your take, though, on how HR leaders should be thinking about the stories they read in the news, versus simply following suit? And I guess as an HR leader, sometimes you've got to be strong and actually challenge your CEO and CFO on the need for job cuts.
[0:36:24] Kate O'Neill: Yeah, I think all of the reasons you just mentioned are completely real and valid. They are driving the decisions; I don't think it's cynical to acknowledge them at all. It's at least the story that businesses like Amazon are telling about their job losses: "Oh, we just over-hired during the pandemic and this is just course correction. It's not actually layoffs due to AI". But then, of course, you do get the Googles and a lot of other companies that have cut especially tech roles, and they will acknowledge that the efficiencies they're gaining due to AI have something to do with why they're cutting these jobs.
I think a couple of things are true at the same time here. One of those things is that we do see efficiencies with good use of AI, and that not every job opening remains necessary with those efficiencies. And so, we do have to acknowledge that there's some compression, some shrinkage that happens at some level in some areas of the company due to the implementation of AI. The story for a long time has been that you will also see creation of new jobs and sort of replacement of jobs; like, even though you don't have this job, you need this job now. But that doesn't necessarily happen at a one-to-one level, and it doesn't necessarily involve the exact same skillset, so there are some difficulties with that model. Still, it remains true that there's some moving around and some new opportunity creation as a result of the implementation of AI.
[0:38:06] David Green: Yeah, it's interesting, because history teaches us a lot. Every previous industrial revolution has created more jobs than it displaced, but obviously it's a very disruptive time. As you said, certain elements of roles or certain tasks will be automated and certain new tasks will be created that humans need to do, and obviously then there's the whole upskilling and reskilling piece that we talked about earlier as well. And we're very early in this particular industrial revolution. I guess it's happening faster and it's happening globally, which maybe makes it more challenging. And it's having an impact on white-collar workers, whereas maybe previous revolutions had a lesser effect on white-collar workers, which means it's probably more newsworthy. But yeah, it's a really interesting time. Obviously, you help companies with strategic foresight, effectively. I'd love to hear your thoughts on that. We don't really know at the moment, do we? You've got the World Economic Forum, and other studies out there, predicting that AI will be a net creator of jobs rather than a net destroyer. But at the same time, we've got all this geopolitical and economic uncertainty going on. Trust in government, in organisations and in other institutions is falling as well. So, there's a lot going on at the same time, isn't there?
[0:39:27] Kate O'Neill: That's right. A lot going on at the same time is the story of this moment. I think it's just kind of true in general. If I were to write a sort of side book to What Matters Next, it would be A Lot Going on Everywhere. Everything, everywhere, all at once; I've heard that somewhere. So, there are a couple of things that are concerning. One is, I think in a lot of the larger organisations, one of the things that gets cut in these moments is the ethics and responsibility teams around AI, and that is incredibly needed at a time like this. A lot of peers, a lot of friends of mine are placed among these hyperscale, huge organisations in those types of roles, and they are often in flux, often moving from one to the other, because one decides that it's their moment to cut those teams, and another one decides that it's their moment to invest. It's just a very strange reality in those roles.
But as someone who is doing this work more externally, as a consultant and an advisor, one of the things that I see, which you alluded to, is that from some of the smaller companies, and I don't work with very small companies, to the very, very large ones whose names you all know, I have yet to see a situation where AI can be implemented without an awful lot of customisation, without really thinking about the problems that we're bringing it in to solve, and without really thinking at a human level about workflow and office politics and communication and hierarchy, and all of these kinds of behavioural things. This is one of the things that I talk a lot about in What Matters Next, as well as in several books prior: digital transformation is this kind of catch-all umbrella term for a whole set of transformations that we don't often give a name to, like behavioural transformation and workflow transformation, and sort of the emotional and psychological transformation that comes along with the situation where Barry used to submit approvals for workflow to Sally, and then Sally would send them on to Tom; and now with automation, Barry's approvals go directly to Tom, Sally's cut out of the deal, and Sally's having a lot of existential angst about whether she even has a job anymore. That's real. That's real for so many people in such palpable ways.
I think if we don't address that and really think about this in a very real, boots-on-the-ground, nuts-and-bolts kind of way, if we always talk in broad strokes and abstractions, it's very easy to imagine that we're just going to replace everything with AI and the company's going to be more efficient, and so on. But when we actually get down to it, I conduct these day-long and sometimes two-day-long sessions at organisations, and the things that come out are inevitably human things, right? They are inevitably human things that you are not going to fix with AI. They're still going to be there, even down to the language that's used: you cannot effectively implement company-wide AI across an organisation that uses the word 'batch', or 'design', or 'order', or something like that, differently in one function versus another. Now you have to course-correct for the vocabulary that's used between different people. It's just fundamental things like that, that always, always surface. And I think people overestimate how easy it's going to be to make sweeping gains in productivity and efficiency, and they underestimate how much it's going to take at an empathetic, human, trust-building level to really bring the organisation along.
[0:43:37] David Green: Very good. And a lot of this is the responsibility of leaders. We talked a little bit about trust. What does good leadership look like when it comes to building trust for successful AI adoption, implementation, and then business outcomes?
[0:43:51] Kate O'Neill: Well, I think it's a couple of things. One is, I always come back to a clear purpose statement. That's something that I think is table stakes for good leadership. Articulating a purpose statement that is three to five words, you know, not wordy, not one of these kind of un-language, un-speak mission statements or vision statements that's a paragraph long, but just something that really succinctly articulates why you're in business, what it is you're trying to do, and trying to do at scale, so that people can all rally around it and so that AI can work to it. When you actually get down to it, the reason that people are effective in their roles, whenever they're effective, is because they feel a sense of greater connection to something that feels meaningful. And purpose is the shape that meaning takes in business. And when you get right down to it, when machines are most effective, they are effective because you can give them clear and succinct instructions toward an outcome, and a purpose is a really great shorthand for, "This is the outcome that we're trying to achieve". So, that's one way that I think leadership can show up and be very helpful in this moment: just making sure that that purpose is articulated and well understood, and is felt organisationally and culturally throughout.
Another thing is just communication. It really comes down to making sure that, as AI tools are rolled out, there is clear and transparent communication about what the intent is, what the management hopes to achieve; and if this is intended not to replace people's jobs, then say so, because people are definitely making the assumption that it's going to replace people's jobs. I think that's something that very often happens, where managers and leaders think, "Well, people don't think that here. We've always been a very employee-centric organisation, it's always been very human-centric". People, meanwhile, are sitting there nervously chewing their nails, waiting for leaders to communicate about this. So, those are two things: that very transparent, forward-leaning kind of communication, and really getting that purpose statement well understood.
[0:46:06] David Green: Very good. And let's shift to, obviously, you're known as a Tech Humanist, which is fantastic for our listeners, I think, in particular. If you were to guide our audience, particularly CHROs and other people leaders, on how to make human-friendly technology decisions, what would be your key pieces of advice around that?
[0:46:29] Kate O'Neill: Well, one of the things is, as I just mentioned about purpose, that understanding of purpose being the way that meaning shows up in business is important to kind of back out, to reverse engineer. Really understanding the sense of meaning is key to understanding humanity. I really think meaning is the secret code of humanity; it is what makes us fundamentally human. My TEDx talk that I gave a few months ago is called, "We Cannot Leave Meaning Up to Machines". And it's really at this core level: I think it's such an important part of this large-language-model-driven moment that we understand that what LLMs do incredibly well is spew out lots of compelling and convincing-sounding language. And we as humans are hardwired to receive language as if it is true and human, because it's only ever been other humans who use language. So, it's natural that we sort of ascribe humanity to the entity that we're interacting with.
But I think the takeaway from a leadership standpoint, from a CHRO standpoint, is that people are constantly craving meaning. They're constantly craving a sense of what gives them purpose, what gives them a sense of significance and relevance, what makes them feel like they're contributing to something larger than themselves. And so, it's about really bringing that full circle: making sure purpose is articulated, but also, when we're rolling out AI tools, thinking about how these can contribute to purpose, and maybe encouraging people to bring that insight back to the organisation. Like, "Here's something that I did with AI, and it's amazing how it helped me understand how to achieve what this organisation is supposed to be all about". I mean, you should be celebrating moments like that, really elevating those employees who are going above and beyond to make those authentic leaps of logic, to connect what they perceive as their purpose with what the organisation perceives as its purpose, using the best technology at their disposal.
[0:48:35] David Green: Kate, I can't believe we've almost got to the end. We've got the question of the series before I ask you how people can stay in touch with you. So, this is the question we're asking everyone in this series of the Digital HR Leaders podcast. We've covered a little bit of it today, so please feel free to just summarise some of it. But specifically, again connected to HR, how can HR move fast, and you might think fast isn't necessarily the right word, how can HR move fast with AI without losing trust, fairness and governance?
[0:49:03] Kate O'Neill: Yeah, I think you said it there at the end. The word 'governance' is a tricky word. I think people often hear it and they think, "Uh-oh, we're putting rules in place now, it's going to be so unpopular". And there's this kind of way in which governance is thought of as brakes, and it's stopping us and it's making it impossible for us to go fast and achieve things. But I think the most powerful reframe of that for me is thinking about brakes, in that metaphor, not as stopping you or even necessarily slowing you down, but as providing traction. If you're driving around a curvy road on the side of a mountain, you do not not want brakes; you absolutely want brakes. And it's not because you want to go at a crawl or you don't want to go at all. It's because you want to make sure that you're going in a very intentional way, that you're going the way you mean to go, at the speed you can go safely and with control.
I think that's the great metaphor to apply here: just think about what you need to do, what kind of understanding you need to set in place around the tools and technologies in your organisation, around the communication that flows in the organisation, around the approvals and the decision-making, in order to make sure that you're going as fast as you safely can. And that's going to be the best way to think about building the right kind of future-ready organisation in this moment.
[0:50:40] David Green: Very good. Nice way to end the conversation. And I definitely recommend What Matters Next to our listeners and viewers, because obviously I've just held the book up. Kate, it's been a fascinating conversation, I've learned a lot, and I've loved listening to some of your thoughts and ideas. Where can people find out more about your work, what's the best way to follow your thinking, and, you mentioned the TEDx talk there, how can they find that and find out more about What Matters Next as well?
[0:51:11] Kate O'Neill: Yeah, thank you, David. My company, KO Insights, does this work around consulting and advising, and you can find out about us at koinsights.com. But it is also the best place to find out about my speaking, my books, and I think even the TEDx talk is linked there. But yes, if you look up Kate O'Neill, Tech Humanist, in Google, I will definitely be the person who comes up, and there's plenty of information there.
[0:51:36] David Green: Yeah, you do come up first with that search.
[0:51:39] Kate O'Neill: Yeah. Well, Kate O'Neill is a very common name. There are a lot of Kate O'Neills in the world, unfortunately, or maybe fortunately, maybe it's just because we're so awesome and there are so many Kate O'Neills to be awesome. But I think what happens is when people search for Kate O'Neill, they throw in 'tech', there might also be Kate O'Neills in tech. But Tech Humanist is definitely my thing. So, feel free to use that as a way to find me. But again, koinsights.com will get you right to my company and the book, What Matters Next. You can, of course, find it on Amazon or any great book reseller, and I hope that you do read it. I hope you reach out too. If you're listening to this and you had some great aha insight listening to it, I would love to know what it is. If you have questions, please feel free to reach out. I love to hear from people.
[0:52:26] David Green: Kate, thank you very much. And you think it's bad being called Kate O'Neill? You try David Green. I even did a podcast recording once as a guest. And then, when it was published, it had a picture of a different David Green on the thing. So, we won't make that mistake, Kate, so don't worry. That's a little funny anecdote to finish there. Thanks very much for being a guest on the show.
[0:52:48] Kate O'Neill: Thank you, David.
[0:52:51] David Green: Thank you again, Kate, for joining me today. I really enjoyed our conversation. And I highly recommend Kate's book, What Matters Next, to listeners. Those of you listening, I'm curious. What stood out for you the most from today's episode with Kate? I'd love to hear your thoughts. So, please head over to LinkedIn, find my post about this episode, and let me know what resonated with you. I always read the comments and love learning about the different perspectives in the field. And if this conversation got you thinking, please subscribe to the podcast and share it with a colleague or friend who might benefit from hearing it too. It really does help us bring more of these conversations to HR professionals across the world. For those who would like to stay in the loop of what we're working on at Insight222, follow us on LinkedIn or head to insight222.com. You can also sign up for our bi-weekly newsletter at myHRfuture.com to get the latest thinking on HR, people analytics and everything shaping our field.
Right, that's it for the day. Thanks for listening and we'll be back next week with another episode of the Digital HR Leaders podcast. Until then, take care and stay well.