Bonus Episode: How Human Intelligence Can Guide Responsible AI in the Workplace (with Kevin Heinzelman)

 
 

With AI tools becoming more common in HR and people functions, HR leaders across the globe are asking the same question: how do we use AI without compromising empathy, ethics, and culture?

So, in this very special bonus episode of the Digital HR Leaders Podcast, host David Green welcomes Kevin Heinzelman, SVP of Product at Workhuman, to discuss this critical topic.

David and Kevin share a core belief: that technology should support people, not replace them, and in this conversation, they explore what that means in practice. So tune in as they discuss:

  • Why now is a critical moment for HR to lead with a human-first mindset

  • How HR can retain control and oversight over AI-driven processes

  • The unique value of human intelligence, and how it complements AI

  • How recognition can support skills-based transformation and company culture during times of radical change

  • What ethical, responsible AI looks like in day-to-day HR practice

  • How to avoid common pitfalls like bias and data misuse

  • Practical ways to integrate AI without losing sight of culture and care

Whether you're early in your AI journey or looking to scale responsibly, this episode, sponsored by Workhuman, offers clear, grounded insight to help HR lead the way - with purpose and with people in mind.

Workhuman is on a mission to help organisations build more human-centred workplaces through the power of recognition, connection, and Human Intelligence.

 By combining AI with the rich data from their #1 rated employee recognition platform, Workhuman delivers the insights HR leaders need to drive engagement, culture, and meaningful change at scale.

 To learn more, visit Workhuman.com and discover how Human Intelligence can help your organisation lead with purpose.

[0:00:00] David Green: There's no doubt that AI is transforming the world of work.  But while the technology advances at pace, many HR leaders are grappling with this critical question, "How do we embrace the benefits of AI while ensuring the human element of HR isn't lost in the process?"  Because as powerful as AI can be, it still needs human intelligence to make it work meaningfully, ethically, and in ways that support a thriving workplace culture.  I'm David Green, and today, on a special episode of the Digital HR Leaders podcast, I'm delighted to discuss this critically important topic with someone who is at the forefront of shaping how AI and human intelligence can coexist and thrive in the workplace, Kevin Heinzelman, Senior Vice President of Product at Workhuman. 

Kevin and I, just like I'm confident many of you listening, share the core belief that technology should elevate people, not replace them, which is why, in our discussion today, we explore how HR leaders can retain control over AI-driven processes, embed ethical practices, and tap into the unique strengths of human intelligence to drive more impactful, inclusive, and values-driven outcomes.  So, whether you're experimenting with AI or looking to optimise your existing approach, this episode is packed with insights to help you build a more human-centric future of work.  With that, let's get the conversation with Kevin started. 

Hi, Kevin, welcome to the show.  Before we get into today's conversation, could you briefly introduce yourself and Workhuman and the mission that you're on to our listeners, please?

[0:01:53] Kevin Heinzelman: Yeah, absolutely.  Thanks for having me, David, I'm really excited to be here.  And actually, we were just catching up before the show here, it was great being able to see you speak at our Workhuman Live Forum event in London, all about the data-driven storytelling narrative.  It was the ending keynote of our forum and it was a phenomenal, phenomenal keynote, so thank you for doing that.  So, I'm glad to be able to be here to repay the favour a bit. 

So, yeah, my name's Kevin Heinzelman.  I'm the Head of Product here at Workhuman, so I oversee all of our product management and product design, data science, linguists, people analytics and business analytics teams.  But Workhuman's mission is simple.  It's actually in our name: it's to make work human.  Workhuman is a social recognition SaaS platform, and we work with the largest companies in the world to harness the power of their most valuable resource, which is their people.  We do that through partnering with them to build winning cultures, all through building strategic recognition programmes where employees feel valued and respected and seen, but that all results in a more engaged employee base, lower turnover, increased productivity, driving towards business outcomes.  And we call all of that recognition done right.  It's really the essence of everything that we do.  And really, what it means is we get deep in with our clients to learn what outcomes they're trying to drive towards, what matters to them and what we can build with them to ultimately create the culture that they're after.

[0:03:28] David Green: Well, firstly, Kevin, thank you very much for the kind words about the keynote at the forum in London recently.  I must admit, I was super, super-impressed with the whole forum.  Your team put on a fabulous event and I know they have a reputation for doing that as well.  Amy Edmondson, I think, was the opening keynote.  So, I was very glad I didn't have to follow her directly.  But your CEO, Eric Mosley, did, and he did a fantastic job in talking about how you're bringing AI into the platform as well.  I know we're going to talk about that quite a bit in our conversation today.  So, let's start there, let's start with AI.  There's no doubt that AI is rapidly changing the world of work, and we're probably only in the early stages of it at the moment.  But as HR increasingly adopts AI-driven tools, why do you believe maintaining a human-centric focus in HR is especially critical?

[0:04:26] Kevin Heinzelman: Yeah, I mean, AI is transforming HR in powerful ways, right, from automating administrative tasks to uncovering deep workforce insights, and we'll get into some of that today.  But you know, as that technology gets more and more advanced, our responsibility to stay human-centric actually becomes way more important, not less important, and that's kind of the throughline of this narrative.  And the why on that is because work is fundamentally human.  It's about relationships, purpose and belonging.  And so, AI can help us see patterns and predict behaviour.  But it can't feel what the employee feels.  So, it can't feel what it's like to be left out of a meeting, it can't feel what it feels like to receive a great recognition moment, something that can really change your day, it can't feel what it feels like to send a recognition moment, which is kind of like giving somebody a gift on their birthday.  You feel really good about that, it's really empowering.

But if we rely too heavily on algorithms without considering the human experience of it all, we risk making workplaces more efficient, certainly, but far less empathetic.  And that's why the real opportunity here is to blend the best of AI with the best of humanity.  Let AI surface the signals, but let people lead the response, and use AI to elevate and deepen that human connection, but never to replace it.  I think, especially in times of uncertainty, so something that we're in right now, when people are worried about job security or burnout or all that's happening in the world right now, showing that we still see value in the human being behind the data isn't just a nice-to-have, it's absolutely essential.  And so, I think in everything that we do, you'll see that come through, especially from the Workhuman side, but I think just HR in general, I think staying human-centric is absolutely essential.

[0:06:19] David Green: Okay, so most of the people that listen to the podcast are working in HR or supporting HR from a technology or a consulting perspective.  How can HR gain more control over AI-driven processes?

[0:06:35] Kevin Heinzelman: Yeah, that's a great question.  I think, if I talk about Workhuman just for a second here and about the Human Intelligence '25 release, which we've put out into the market and done a bunch of press around, it's the biggest release in our company's history.  But so much of that is because we're putting the power of AI into the HR leaders' hands, ultimately to help them build the cultures that they're trying to build, but also to help them make more data-backed decisions.  Now you'll notice I didn't say make the decision for them.  Again, this is allowing AI to surface a signal, but allow the people to lead the response. 

But it really always comes down to the data that you have, and we'll get into this a bunch, but recognition data is some of the most valuable data that we have about our employees.  It's incredibly unique, it's of the highest quality, it's really authentic.  What it is, is it's the little notes that employees pass back and forth to each other every day.  They're celebrating wins and achievements, but most importantly, they're speaking the truth, because it's different than other data that we analyse.  Other data is great data, but there's often slants to that data.  It's out there for a different purpose.  And you analyse the data, and it can mislead you occasionally.  But recognition data is somebody saying thanks, whether that's thanks because, "Hey, we achieved something great together", or, "I was in a really tough spot and you bailed me out and you didn't have to, but thank you so much".  When people are effusive in the way that they speak, whether that's praise or thanks or gratitude, or what have you, there's a ton that comes out of that.  And so, it's this gold mine of data. 

So, what we've done, we've been studying this for years.  We've had data scientists in the company for over a decade, studying this data and building models around it and analysing it.  And Human Intelligence is us bringing all of that into the product.  From all the work that we have been doing with our customers over the years, we developed a really intentional ecosystem of features to give you the power of AI without losing control.  Ultimately, we have two goals.  We want to deepen human connection and have people sending really meaningful recognition moments; and we want to use AI to unlock employee and organisational insights to help people make decisions.

So, just to walk through a couple of the component pieces of it, it starts with what we call our Culture Hub, which is our homepage.  So, this is where employees log in to see, amongst other things, the newsfeed of recognition.  They're in there, and a ton of their culture is happening there.  It's a personalised view for everybody.  And they're in there interacting, commenting, liking the awards.  Again, that's kind of voting as you go, so there are really good data signals there.  But under the hood there, what we've done is we've built a number of models that score these recognition messages: they're looking at how impactful the message is, how specific it is, how emotional it is, whether it's going viral, and we use that to drive the algorithm behind the newsfeed.

So, what is that doing?  It's showcasing the best of the best in recognition, which ultimately is the best of the best work that is happening in your culture.  So, your values are being shown at the top in a really engaging way.  That could be work things, that could be new babies and puppies and buying new houses.  You learn so much about people in a recognition programme.  But it's also showcasing to the company, who are your cultural energisers, who are the people that are really bought in to what you're doing, and others see that, and they see what good looks like.  And ultimately, we all want to be the people that are doing good and doing well.  And so, as those folks go to create a recognition message, we have what we call our Recognition Advisor inside of our nomination flow where you create your messages. 

What that does is, it takes some of those same models, and it coaches employees on how to write great messages.  I said 'how' to write them, not write them for you.  So, it's not that I say, "Hey, great job, David", hit a button, and it writes four sentences for me.  Back to everything that we spent the first couple of minutes talking about here, you've lost the human, and you've lost all the authenticity of the moment.  It's going to hurt your culture, certainly going to hurt the data.  And so, what this tooling does is it brings you along the journey, with tips on the kinds of things that you could say.  What we're trying to do is get people to be a bit more verbose, a bit more effusive again.  It's great for your culture.  People who are seeing those messages are going to be really happy.  When they start writing more, they're telling you more of what actually happened.  And the more you get of what happened, the more data you have to do some really incredible analytics with.  That's where our suite of tools, inside the application that we call Workhuman IQ, sits, and there's a whole ton in there.

I could spend the next hour talking about all that's baked into Workhuman IQ, but the most notable thing in there is our AI assistant, which we'll go through; but in this area of our product, we're parsing through all of that recognition data.  And the goal there is to put the power of that recognition, and all that recognition collectively, into our HR leaders' hands, and allow them to make decisions based on really strong, leading indicators.  I'd go right to calling them facts, but we'll call them leading indicators rather than assumptions.  This is where they can go to see things like what kind of skills we're extracting from these messages.  So, as you go in and read a message, there are all kinds of hints in there about the skills that somebody used, because they're talking more about what was being done, and we can extract that.  So, if you're building programmes around things like upskilling employees, or you're trying to build more well-rounded teams, or if you're doing things around workforce planning, it goes on and on.  But there's so much there that you can do with those skills.

You also can see the impact it's having on something like retention.  So, you can see that at the company level, at the department level, at the team level.  And ultimately, there's a very strong, science-backed correlation between strong recognition moments and retention.  But you also can see the flip side, the warning signs.  You have a strong performer, all of a sudden it's kind of going off a cliff, they're not getting a lot of recognition, maybe there's a chance to intervene here.  So, again, kind of surfacing some of those signals.

Ultimately, what we find every time that we launch a new customer on this, or when we're using it ourselves, is that these tools will tell you some things you already know.  Some of those things come out and you go, "Well, yeah, I knew that".  But the real power is the thing that you didn't know, because every day your employee base is voting unknowingly about people and initiatives and projects.  And so, what you end up with is this sum total of knowledge of your organisation, right at your fingertips, and it allows you to query down into it for all the different insights, for all the different reasons that you could want to.  So, it's really designed to give you control.  And so, you take the power of all that AI and that data, and what you can do with it puts you an order of magnitude further than what was possible before.
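To make the scoring-and-ranking idea Kevin describes a little more concrete, here is a minimal, hypothetical Python sketch.  It is not Workhuman's implementation: the dimensions, weights and heuristics below are illustrative assumptions standing in for the proprietary models he mentions.

```python
# Hypothetical illustration only: score each recognition message on a few
# dimensions and use the combined score to rank a newsfeed. Every heuristic
# and weight below is an assumption made up for illustration, not Workhuman's.
from dataclasses import dataclass

@dataclass
class RecognitionMessage:
    author: str
    recipient: str
    text: str
    reactions: int  # likes and comments received so far

EMOTION_WORDS = {"thank", "grateful", "amazing", "proud", "appreciate"}

def specificity_score(msg: RecognitionMessage) -> float:
    # Longer, more detailed messages tend to describe what actually happened.
    return min(len(msg.text.split()) / 50.0, 1.0)

def emotion_score(msg: RecognitionMessage) -> float:
    words = {w.strip(".,!").lower() for w in msg.text.split()}
    return min(len(words & EMOTION_WORDS) / 3.0, 1.0)

def virality_score(msg: RecognitionMessage) -> float:
    return min(msg.reactions / 20.0, 1.0)

def feed_score(msg: RecognitionMessage) -> float:
    # Illustrative weights; a real system would learn these from data.
    return 0.4 * specificity_score(msg) + 0.3 * emotion_score(msg) + 0.3 * virality_score(msg)

def rank_feed(messages: list[RecognitionMessage]) -> list[RecognitionMessage]:
    # Highest-scoring messages surface at the top of the feed.
    return sorted(messages, key=feed_score, reverse=True)

if __name__ == "__main__":
    feed = [
        RecognitionMessage("Ana", "Raj", "Thank you for stepping in on the launch review.", 5),
        RecognitionMessage("Lee", "Mia", "Great job", 1),
    ]
    for m in rank_feed(feed):
        print(f"{m.author} -> {m.recipient}: {feed_score(m):.2f}")
```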

[0:13:53] David Green: Here's a good question for HR and indeed leaders everywhere.  In the two years since AI went mainstream, how much has it empowered you to actually improve employee engagement?  How about wellbeing?  Or to achieve cultural transformation?  The truth is you need a smarter kind of AI that answers your most critical needs.  Enter Human Intelligence from Workhuman.  By combining AI with the rich data of their number-one-rated employee recognition platform, Human Intelligence unlocks insights and capabilities that redefine talent management, cultural transformation, and employee engagement.  Human Intelligence answers questions like: who in marketing is a flight risk; where are our talent or skills gaps; who are our next generation of leaders; or, how do we build engagement?  Learn about Human Intelligence at workhuman.com and join their force for good. 

So, let's dive a little bit more into two topics.  First, let's look at how Human Intelligence can potentially elevate organisational culture, and then we'll dive into the skills one afterwards.  So, again, in what ways does incorporating Human Intelligence elevate HR practices and organisational culture in ways where traditional methods might fall short?  And you might have some examples that you can provide on this as well.

[0:15:34] Kevin Heinzelman: Yeah, absolutely, I'd love to.  I think the thing you have to remember about this whole space is that the AI is only as good as the data that it's trained on.  So, that's the foundation of all these AI tools.  So, you've got your data, which trains your models, whether those are LLMs or SLMs or machine learning models, what have you.  And then, that's what everything sits on top of.  It's kind of a pyramid.  You get to the top of the pyramid and that's where your AI assistants, your grab-and-go insights and the different things that you can do with all that information sit.  But if you have bad data and bad training data for these models, then you really don't have much to work with, and that's where we see AI fail.  We've seen some of the stats out in the industry that project 80% of AI initiatives will fail, and that always routes back to good or bad training data.  And again, that's the beauty of Human Intelligence.  We've got the human at the centre of this, and we've surrounded them with these tools that are designed to build culture and get great data out of it that's authentic and robust, to ensure that you're getting really good usage out of it.  But we can talk around that all day; what does it actually mean?  So, what's a good example of an everyday use case that we see really commonly?

So, let's say you're a leader, you've identified somebody on your team that you want to match with a mentor.  And in this case, we'll say you want to find them a mentor for public speaking.  And I always find this is a really good example to use, and we'll get into why.  Because that's an area that they're looking to upskill in, maybe for their job, maybe for their career, or maybe it's something that they have anxiety about, they just want to get better at it so that they can do it when they need to.  Historically, what would you do?  And there's all kinds of programmes that have been run over the years about this, but you'd go around to your leaders in the company and you would say, "Hey, I've got someone I want to train up on public speaking.  Do you have anybody?"  Or, you've got a bevy of people that you go to for things like this, because you know that they're good at it. 

There are success stories all over from this model.  It's not that it doesn't work, but there's a lot of missed opportunity in this.  You're going probably to the same department or the same leaders all the time, you're using a lot of the same people for it.  And there are people inside of the company that have the skill that you just may not know about, back to the 'sum total of knowledge' comment.  But they also may not know they have that skill.  They may have done a bunch of things over time, and when you approach them and say, "Hey, we're seeing this AI assistant saying that you're a great public speaker.  Do you do a lot of public speaking?"  And they'll be like, "Oh, actually I do do a lot of stuff".  And you speak to their manager and you start to get that conversation going. 

So, with Human Intelligence and the AI assistant, you have the ability to go in and ask who would be a good mentor for public speaking.  So, the AI assistant will scour the employee base, parse through all the recognition and the skills that we've put in there, and it will come back with a list of people and say, "Hey, here are some candidates that we believe would be a really good set to look through to be a mentor for public speaking".  What an incredible unlock even just that is.  Now you've got something that has data behind it, that's going to augment what you already knew and complement your knowledge very well, and it's going to grow a mentoring network, quite frankly, as you continue to do this, because the people in your organisation have voted on that over time, all the time; it's not just certain leaders, it isn't the loudest voice in the room. 

But now you go back and say, "Okay, tell me why you recommend them.  Give me the why".  And it will come back with a synopsis that draws on all the recognition moments where we've identified that skill, and it calls out some examples.  And so, think about what we've just done there.  We've given a list of potential mentors for a mentee.  And then let's just say we pick somebody.  Now I've got a list of topics to talk to them about.  I'm not going to show up to that meeting and say, "Hey, I heard you're a good public speaker, I want to be a good public speaker, how do we get there?"  You say, "Hey, tell me more about this, what was the situation, what spot were you in, what work did you do along the way?  And ultimately, what did you do for a presentation, or for however you summed that up, that I can learn from?"  We've probably just collapsed two or three meetings of getting in and building rapport.  We're going to have much more meaningful conversations, we're going to have stronger interpersonal relationships much faster, and that's what this is all about: helping people upskill in those areas. 

So, it's a really good example, and one that we use here at Workhuman all the time; our customers use it with great success to help set up things like mentor networks, things that we do every day.  And sometimes you don't even think that AI could help you with it, but that's the beauty of the recognition data. 

[0:20:31] David Green: Yeah, I like that because it helps you get more precise on the conversation.  Because public speaking, there's a lot of elements to public speaking.  It could be that you're very good at projecting your voice, you're very good at delivering key messages, you're very good at summarising into three points, or something like that, as I hear a lot of public speakers do.  Or it might be that you're very good at using visuals.  So, depending on what you want your mentor for, it might be that, "Actually, I'm quite confident in speaking, but I need help with my visuals to make those better and more impactful to support what I'm saying".  So, I love that example.  I think it's a really good one that everyone can relate to. 

So, what about when building a skills-based organisation?  So, this is something we've focused a lot on, on the Digital HR Leaders podcast over the years.  We all know that many organisations are moving to be more skills-based, whether it's through their hiring or to try and support internal mobility, knowing that it's harder to find great people in the marketplace.  And actually, data shows it's probably more efficient to reskill people within the organisation as well.  How can employee recognition, enriched by the Human Intelligence that you've spoken about, Kevin, help organisations navigate this shift effectively?

[0:21:46] Kevin Heinzelman: Yeah, I mean, you're absolutely right.  I'd argue almost every organisation in the world is going through this, trying to become a skills-based organisation.  And the truth is, while we're making some progress on it, there's a flaw in how we're going about it.  And I think that's why, when I talk to company after company about this, as we describe and explain what we do, the question I always ask is, "Where are you on this journey?"  And they're like, "We're trying, we're trying so hard, we care so much.  It's really important to us.  We know the data and the research".  But I think you've got to go back to, how do we compile those skills today?  So, there are certainly the technical skills, those hard skills I talked about a bit earlier.  You've got a degree in something, you are certified in something, you've written 2 million lines of code in React, you can do those things.  But how do we get to those soft skills, things like public speaking, being creative, being innovative, who's efficient, who's known for leadership?

What I think we're seeing today, as companies are going through and filling out these skills clouds, is those are either self-promoted or recommended by other leaders, which is a great start.  Often, if you say you are good at something, you've got a lot of confidence in practising it and you're probably pretty good at it, that's a great start.  But it misses huge chunks of an organisation.  You might have folks that aren't putting the time in to fill out those skills clouds.  Again, back to the point I made earlier, they may not know that they're great at that thing, but they're learning over time, they're building up confidence in that skill.  But recognition data is, again, this beautiful source of truth.  So, using the proprietary models that we have underneath that, we parse through every recognition message to extract the skills.  Now, it's very rare that somebody goes in and says, "Hey, you're a great public speaker".  That would be keyword matching and very easy to do.  But they say things around that.  They're telling stories in these recognition moments, over just a couple of sentences. 

But with the right set of modelling, which we've been tweaking for years now as we've been embarking on this journey, you start to build this profile of skills over time.  And it's not that somebody was recognised for it once, right?  That's a great start, but hardly statistically significant enough to put out any recommendation based on it.  Over time, these employees have voted on it, and they don't know they're voting on it.  They're saying great things about their colleagues, the peers that they're working with, and we're able to extract that out and start to see this incredibly powerful top-down perspective of what skills a department has, what skills a company has.  You can start to see, "Oh, we may need to upskill in this area, because I can see maybe we're not getting as much there". 

But I always think about this example of, like, you're putting together a tiger team on something, whatever the initiative is.  It's cross-functional, it's multi-person, and you've got to do it, and you've got to do it quickly.  There's a lot of risk as we do that, and we've all done that as leaders.  But if you knew that it was going to take deep research, some creative thinking, some type of presentation, and what have you, just think about any kind of standard skills taxonomy, you can start to build a pretty well-rounded team that sets you up for early success.  Now, that's great in my mind in three ways.  It's great for the people: they're being kind of plucked, this is a huge opportunity for them, they are developing in their career, they're developing new skills.  Ultimately, hopefully, we're going to lead to success in the initiative.  The leader who's leading it has had a set of tools where they were able to handpick people with certain skills to set up that initiative for whatever type of success it needs to have.  And if it's successful, that's great for that leader; they're able to do something fast.  And the company wins because the company needed this thing done.  And there was tooling available that cascades all the way down to the people doing the work, who are able to grow and get this thing done fast and drive towards whatever outcome you're after. 

So, when you think about the 25 years of data Workhuman has running recognition programmes, the expertise that you build about how to extract those skills, how to map them and how to use them, again, it's this incredible goldmine of data that you're sitting on, and I think most folks don't think about recognition data that way.  And so, we're on this journey to educate and instruct.  And look, there is certainly data out there about it.  But when you show them even just a simple slide of a standard recognition message and what we can pull out of it, that educates them.  And then remember, our tool is built to have people write like this, so that you're getting that out of it.  And so, it really powers a flywheel there, and it just speaks to the power of the recognition data and everything about becoming a skills-based organisation.
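As a rough illustration of the aggregation idea Kevin describes, where no single message drives a recommendation and a skill only surfaces once many messages point to it, here is a hypothetical Python sketch.  The phrase-to-skill mapping is deliberately naive keyword matching; as Kevin notes, the real models go well beyond that.

```python
# Hypothetical sketch: aggregate skill signals across many recognition
# messages so that a skill is only attributed after repeated, independent
# mentions. The hint table is a made-up stand-in for real skill extraction.
from collections import Counter, defaultdict

SKILL_HINTS = {
    "presented": "public speaking",
    "presentation": "public speaking",
    "mentored": "mentoring",
    "creative": "creativity",
    "organised": "project management",
}

def extract_skills(text: str) -> set[str]:
    words = {w.strip(".,!").lower() for w in text.split()}
    return {skill for hint, skill in SKILL_HINTS.items() if hint in words}

def skill_profiles(messages: list[tuple[str, str]], min_mentions: int = 3) -> dict[str, dict[str, int]]:
    """messages are (recipient, text) pairs; keep only skills seen >= min_mentions times."""
    counts: dict[str, Counter] = defaultdict(Counter)
    for recipient, text in messages:
        for skill in extract_skills(text):
            counts[recipient][skill] += 1
    return {
        person: {skill: n for skill, n in c.items() if n >= min_mentions}
        for person, c in counts.items()
    }
```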

[0:26:45] David Green: I want to take a short break from this episode to introduce the Insight222 People Analytics Programme, designed for senior leaders to connect, grow, and lead in the evolving world of people analytics.  The programme brings together top HR professionals with extensive experience from global companies, offering a unique platform to expand your influence, gain invaluable industry insight and tackle real-world business challenges.  As a member, you'll gain access to over 40 in-person and virtual events a year, advisory sessions with seasoned practitioners, as well as insights, ideas and learning to stay up-to-date with best practices and new thinking.  Every connection made brings new possibilities to elevate your impact and drive meaningful change.  To learn more, head over to insight222.com/programme and join our group of global leaders.

That leads on to the next question.  Obviously, one of the things that everyone talks about around AI and using data more to inform decisions is around ethics and responsible AI.  It's interesting, when we set up our firm, Insight222, in 2017, which is about a year or less than a year before the GDPR came in, we were working with a lot of people analytics teams and big companies then to help them co-create an ethics charter, for example, for the ethical use of people data.  And many organisations have applied that within their companies, but it's almost turbo-charged now with AI.  How can HR protect their intelligence data, the data that you've spoken about, from biases, inaccuracies, or unintended negative consequences?

[0:28:47] Kevin Heinzelman: Yeah, this is always the million-dollar question.  I mean, when you bring out new AI tools, specifically generative AI tools, this is the question you get.  And the first answer to this question is probably a bit startling, but the reality is humans are biased by nature.  So, even though the data in a recognition programme, as we've talked about, is incredibly authentic and unique and of high quality, not every individual message is 100% correct, nor is it unbiased.  And so, we tackle this inside of Human Intelligence in a number of ways.  I think the first thing that we do is right back when you're writing the message.  I talked before about our Recognition Advisor.  We also have a tool called the Inclusion Advisor, which is a tool where, once you've written your message, you can click it to scan the message, and it'll look for those biases, those microaggressions, and there are little mini coaching moments along the way.  You've said, "Hey guys"; maybe say, "Hey all".  But it goes much deeper than that.  We have this really large taxonomy of bias studies, and it picks up on these things basically in real time.  And so, there's the in-the-moment coaching. 

What we actually see is a lot of our customers take that data out, because back in our Workhuman IQ application, we group these things into the biggest types of biases that we're seeing.  And then, they're able to take that data out and see what's happening, and then they're running programmes internally to attack it on a more global, broad scale.  And then, you watch those numbers and the size of the bubbles change, because there are different things that start to come into play.  So, one thing you try to do is catch it on the way in; it's never going to be perfect. 

We also have a tool here that we don't talk about a whole lot in the context of ethical AI, but I think it actually plays a really strong governance role; we call it Approvals.  So, you can configure Approvals in your system so that when one of your direct reports gets a recognition message, it comes to you first and you can approve it.  And there's a myriad of positives about that outside of just the governance of it.  I mean, talk about being able to see what your team is working on in real time and who they're working with.  And as you hit approve, we'll show you the skills in that message, even throughout our approvals flow there.  But it gives the manager the ability to read it and say, "Hey, something here is not right", or, "I don't like this language", or there may be too much giving going on between two people, so maybe there's something happening there.  Or, "Why is my top performer getting nothing, but this person who is not …"  There are all of those moments that are indicators, so again, we're trying to stop it on the way in.

But also, once it gets through that, you start to see the newsfeed interaction.  And so, you start to see, again, it's this voting machine.  People are in there commenting and interacting with the post, and that's very unlikely to happen if you've got some kind of flaw in that message.  A couple of other ways that we think about this: I already mentioned we don't make recommendations on single messages or data points.  That's just wildly flawed in a number of ways.  It's the collective of messages that weeds out the inaccuracies and drives us towards higher accuracy.  We have a number of very strict guardrails in place within all of our AI tooling.  Everything about our system is meant to highlight the positive, and what people can do next.  We'd never look at something like termination recommendations or anything like that.  It is geared towards identifying people that are exhibiting positive things that are useful to the organisation and to other people, to drive towards the outcomes that you're driving towards. 

But the last thing I would say goes all the way back to our first question, which is that this is a very well-informed input into the decisions that you're looking to make, but you should almost always confirm the findings.  So, if we go back to the mentor example, when you pick the person you think you want to be the mentor, well, your next best action is probably to reach out to the people around them, their peers or manager if you don't know them, and validate the finding.  It's not a replacement for human decision-making; it's the ultimate complement to help you get there faster and more efficiently and more accurately.  The term 'human in the loop' is really big within AI: reviewing all the outputs.  But I think about it this way as well: the number one thing is that you've got to be taking those outputs out and looking at them, not just trusting that AI is going to solve all your problems for you.  It's not. 

[0:33:19] David Green: And I quite like the inclusion engine or inclusion tool that you described there, Kevin, because it's almost like you're coaching the person giving recognition if they are using maybe the wrong words.  I mean, that's not a great way to describe it, but guiding them to use words that are maybe less gender-oriented, for example.  Because there's enough research that shows that people that are completing performance management reviews for females versus males, they will potentially use different language.  They don't really necessarily realise they're using it, so anything that can coach you to do that better, I guess, is a good thing.  And then, the other thing I was going to say is presumably, one of the things you've got -- because people use different words for similar things.  So, you might have certain skills in the organisation that you're looking for when you're looking at your recognition data.  Maybe people will use different words to describe that.  But presumably you've got some sort of modelling in the back end that will say, "Okay, if they're using these words, they're referring to these skills", basically?

[0:34:22] Kevin Heinzelman: Yeah, that's right.  Yeah, I mean, there's an entire taxonomy behind the scenes around the Inclusion Advisor; again, it's not keyword-matching, because that would miss nearly everything.  It's understanding the intent of what we're trying to get to.  And I would say, as a user of it, obviously I use most things in the product, but it has changed the way that I talk about things.  And I see it changing the way I write emails, the way I refer to things, the way I do it in my personal life.  And then, you made some comments there about gender inequality and race inequality in recognition.  We have the Workhuman IQ tooling.  We also have the Workhuman IQ team, which is all of our data scientists and people analytics teams that are working with customers to bring all those insights forward as well.  And so, you really get a full picture and understand the distribution of how things are flowing, and it allows you to intervene.

So, we try to attack it in all of these ways with technology.  But again, even with us, we've got humans involved in this.  Because again, the thing I said at the beginning: we are deeply partnered here, trying to understand what outcomes you're trying to drive.  We want to be the strategic lever that gets you there.  And so, we are surrounding our clients and all of their people both with technology and with humans to help on that journey. 

[0:35:39] David Green: So, on the ethical AI piece again, Kevin, so again, thinking about HR leaders listening to this episode, what are the key practices that you would advise that they adopt when looking to deploy ethical AI?

[0:35:56] Kevin Heinzelman: Yeah, I'd recommend a few things here, because there are a lot of AI promises out in the world right now.  Some we believe are really good; some you can turn your head at and get a little bit sceptical about.  I think first and foremost, it's incumbent upon all of us who are choosing AI tools to upskill ourselves in this area, to understand what the AI is doing, how it is doing it, what data it is using, and what intended outcomes it is trying to drive towards.  This area is evolving very, very fast, and so I look at it from a Workhuman standpoint.  I always welcome as many questions from our customers as possible, because we've built it in a very thoughtful and considerate way; I went through some of what our guardrails are.  But if a vendor is not comfortable answering those questions, then my red flags start going up.  We want to answer those questions because we believe, if nothing else, that in a competitive situation nobody can answer them as well as us, because we're so deep there. 

I'd also recommend enquiring about the company's AI ethics policies.  Do they have an AI ethics committee or do they have safety measures in place?  We have these in place at Workhuman.  We put a ton of rigour into ensuring that we're following all of the rules and regulations to the strictest standard, as it pertains to AI ethics.  And what I find is, it all changes so fast.  I'm on that committee.  I feel like we're meeting every week just to make sure that we're okay with every little product release.  We're reviewing all of it.  But those kinds of governance policies put you in a place where you can't turn a blind eye, you can't feign ignorance to it.  You're on that committee, you have a responsibility, which is a very large one.  And so, I think even just knowing about what kind of rigour behind the scenes is happening there is really important. 

But the net is, like, it's such a new area for everybody, there's no replacement for that research.  You've got to ask all the questions along the way.  And what's important to me is that the person that is launching the AI tool is comfortable, and they've put in the due diligence and asked the questions and gotten themselves comfortable to say, "Okay, maybe I don't know all of it, it'd be impossible to understand every little bit of it.  But I mostly understand and agree with how we're doing this.  I've checked with the experts on my side, so I can feel comfortable that we're rolling out something ethical."  But it's an everyday battle.  It's one that you certainly can't fight once and then just assume everything is good. 

[0:38:22] David Green: I totally agree.  I think you highlighted a couple of really important points there about upskilling ourselves.  I think HR naturally will take on the role of upskilling the organisation when it comes to using AI.  Sometimes we forget about ourselves a little bit, and we can't do that.  It's important for HR as a function that we are AI- and data-literate as well, as more of this technology comes into our everyday work.  And I think the other thing you mentioned there was about getting involved if there's an organisational AI ethics council.  I might even push it further than that.  I think HR should not just be involved; I think they arguably should be leading it, certainly in combination with IT.  And we see that in many of the organisations that we work with, people analytics leaders particularly getting involved in that.  I think it's really important, because obviously all data is sensitive, but arguably employee data is the most sensitive data that any organisation collects.

Yeah, and the other thing I was going to mention, Kevin, obviously with some of this new technology that's coming in, obviously your Human Intelligence tool is very powerful in terms of what it can do for organisations; are your customers able to pilot it in a small subset of the organisation first, and just make sure they're comfortable with it before they potentially roll it out? 

[0:39:44] Kevin Heinzelman: Yeah, absolutely.  So, I think they have full control over where these things go.  And there's a whole spectrum of AI; as I mentioned earlier with the AI revolution, AI is not new to Workhuman, nor is it new to our customers, because we've been doing a lot of machine learning and different kinds of models over the years.  I think there are just many sides to it.  I look at the algorithmic-type things that we're doing; you can explain those, so there's more comfort.  We can certainly turn things on and off in populations; our entire product is configurable that way.  For me, it's especially when you get to the generative AI piece.  So, the big part of that for us is the AI assistant, and they have complete control over who has access to it.  We certainly have recommendations.  And then, even if they give everybody access to the tooling, what's available inside of it is configurable.  And so, there's all kinds of that.  And we tend to lean on the side of bringing people along for the journey and showing them what it can do, and showing it with their data, once we're through all of the necessary agreements that need to be in place. 

But yeah, it's nearly impossible to say, "Well, you just blindly turn this thing on".  And so, you've got to bring people along for that journey.  And like I said before, we love that conversation, because we want to show how deep, how ethical and how rigorous we've been, so that they have confidence in us as a strategic partner.  But then, that's how you can get the confidence for them to start to use it and expand it to more people.  And ultimately, back to our mission, we're trying to make work human, and we believe that this tooling can do a lot of that.  And so, for us, it's an exciting conversation.

[0:41:30] David Green: Yeah, it's interesting what you said about generative AI, because I've heard others say, and I tend to agree with this, that generative AI is almost democratising AI to non-technical users, particularly through those conversational interfaces that you mentioned there.  So, that needs more care when it comes to responsible AI, but it gives you more scale and ultimately more value if you get it right, as you said earlier, as long as you've got the right data foundations in place.  And I almost see that AI could act as a bit of a Trojan horse, because we've been talking about the importance of having good data and strong data foundations for a long time.  I think that's getting further up the organisation hierarchy now, and they understand that.  They get that because they see that in functions outside HR.  So, hopefully, we'll get more investment in HR to make that data richer and provide that right foundation so we can get the benefit from AI. 

Two more questions to go, Kevin.  So, from a practical standpoint, again, for HR leaders listening, how can those HR leaders effectively integrate Human Intelligence into their existing talent management strategies?

[0:42:42] Kevin Heinzelman: Yeah, I mean, we talked a bit about the mentor piece earlier.  I think, as you look at Human Intelligence and think about things like trying to find somebody for a new job, so let's say you want to open up a Chief Product Officer job.  And I mean, "Okay, we know we need a Chief Product Officer.  What skills should I be looking for?  Help me write a job description, give me a job profile", a lot of that you can get pretty much anywhere.  That's not Human-Intelligence-specific, but it's certainly in our product.  Then the key to it is, "Okay, who in this company exhibits those skills?  And so, who are people I should be talking to?"  And so, now you're talking about scouring everything, looking at not only recognition data, but the hierarchy within your chain.  So, it's unlikely you're going to recommend an associate to be the Chief Product Officer.  Maybe, that would be the kind of hiring that we all want to do.  But you start to look at it for things like that. 

Again, it goes back to, we're not saying, "Hire this person".  We're saying, "This person strongly exhibits the skills that we believe to be really strong for this type of role.  It feels like someone you should be talking to".  And I kid you not, when we built the original versions of this in the lab, and we were sitting in a conference room with the POC playing with it, the names started coming up, and we were just putting in fake roles, I mean roles we didn't even have open, just trying to see what would come up.  And you'd be like, "Oh yeah, that person and that person".  And you'd be like, we're not an enterprise-level customer, we're 1,100 people; you'd better know that person.  And so you're like, "Well, we've got to prove this thing out".  And you start calling over and you go, "Look, this is what we're hearing.  What can you tell us about this person?" and they start raving.

What you find in those moments is those are the people that maybe have been missed.  They're not as seen or as heard, they may not be the loudest voice in the room, but they're incredibly talented, and they're in the middle of everything that you do.  And often, when we look at Human Intelligence and then all the different kinds of reporting that we have, we start to have these concepts of, you've got a department over here and a department over there, and you go, "Okay, who's connecting these departments?" and you start to build that recognition map.  You start to find people in there and you go, "I thought that person was great, they've always been good.  I had no idea that if we were ever to lose that person, we'd have no connection point between these departments".  And that sounds like I'm being dramatic.  I'm not.  We can show you the visuals that exist in our data and in our customers' data and all the case studies that we've done. 

So, it's this incredibly versatile tool, and building strong, strategic recognition programmes is a huge part of it.  Every recognition moment can't be, "Good job, Kevin".  There have to be some thoughtful messages in it.  The opportunities of it are endless.  And you think about all that you can do within talent management and how you're building career paths for people, succession plans.  You start looking at maybe the middle manager, and you start to work on who could be the next C-level in the company.  So, this person, potentially, but here's what a career path might look like for them.  Let's start discussing what that -- I mean, you start thinking about the possibilities, and those are a lot of things that we're working on, and a bunch that's already in the AI assistant.  And so, really everything that surrounds talent management has application through Human Intelligence.

[0:46:30] David Green: Yeah, very good.  This was going to be the last question, but I'm going to ask you one small one, which will be a little quick one at the end as well, Kevin.  So, firstly, and this might give you an opportunity to summarise maybe some of the key points that you'd like people to take away with them, what's your key piece of advice for HR leaders aiming to blend technological innovation on one hand with maintaining a meaningful human-centric workplace culture on the other?

[0:46:56] Kevin Heinzelman: Yeah, I think my biggest piece of advice is to embrace the change.  It's scary, it's moving fast, we're all learning so much.  But I truly believe, or Workhuman truly believes, I think a lot of us that are in the HR community truly believe that it has the potential to help us solve some of our biggest challenges.  And the only way to get there is to be part of that movement.  But it comes back to the first part of our conversation, which if the human's not at the centre of it, what's the point?  I would probably end with this, that the companies that have won in the past and the companies that will continue to win in the future won't necessarily be the ones with the best technology and processes.  It's the ones that have the best people.  And we need to use all these advances in technology to help us design workplaces that empower and support and elevate human potential, because that's where it is all bred out of.

So, I always think about embracing it, but for those right reasons of we're trying to build great cultures, and cultures in work are innately human.  And that's what always will be at the centre of it.

[0:48:03] David Green: Perfect.  I love the way you've encapsulated that, Kevin.  Two quickfire questions, very much related.  What excites you most about the rapid advances that are happening in AI; and what concerns you the most?

[0:48:18] Kevin Heinzelman: Great question.  I think what excites me the most is the potential.  I think, as much as we talk about the AI revolution, what's come and all of that, we're in the early days here.  And I think the amount of forecasting that's going on right now and what it's going to be in 2027 and 2028, there'll be some truth in there, we'll have some misses in there.  But there is no doubt that it will fundamentally change the way that we work, the way that we live, frankly.  I mean, we've all started to use this tooling in new ways and you find efficiency.  I find in areas that I need to jumpstart some brainstorming or something, it's really helpful to go in there and throw a prompt in there, and it's like, "Oh yeah, okay, all right, now I'm flowing.  I didn't have anyone to talk to, now I do". 

You think about all kinds of new things starting to come, but what I'm excited about is the pace of change.  Fear is just the other side of that coin.  With rapid innovation, there is always a downside and there is always a risk.  And it's why some of the things that we talked about today are so important.  It is being comfortable before you're using these things, asking the right set of questions, making sure that the companies that you're working with have the right ethics policies in place, the safety measures in place.  There are going to be some bad actors out there, and that's the scary part of it.  But I think the good will significantly outweigh the bad.  And that's where, as HR leaders and as a technology company like Workhuman, we get really excited about the art of the possible and bringing that forward for good.  And ultimately, back to the mission, making work human.  And so, the possibilities are endless with it, and we're just trying to keep up every day. 

[0:50:04] David Green: Perfect.  What a great way to end our conversation, Kevin.  How can listeners get in touch with you and find out about all the great work you and the team are doing at Workhuman?

[0:50:15] Kevin Heinzelman: Yeah, I mean certainly we post a ton of what we're doing on LinkedIn.  Obviously, workhuman.com is updated all the time with our newest white papers, our newest innovations.  If you go there right now, it's an entire takeover on Human Intelligence.  We just had the Workhuman Live Forum in London.  There is another one of those in Paris later in the year.  We have our large Workhuman Live event in Denver in early May.  And then, we have a bunch of different conferences and things around.  So, I would say LinkedIn is the best place to reach me.  You can learn a ton about what we're doing there, and on our site.  And, as you said, we have a reputation for great productions.  We're incredibly proud of all that we do at Workhuman Live.  It is the centrepiece of our year.  Every year, I look forward to it like Woodstock.  And so, there's a number of touchpoints there.  But I would love to hear from anybody that wants to learn more.  We're always here to talk about it.

[0:51:13] David Green: Perfect.  And anyone considering going to a Workhuman event, I highly recommend that you do.  Kevin, thank you so much for being a guest on the show.  I really enjoyed the conversation and look forward to seeing you maybe at a Workhuman event in the near future.

[0:51:29] Kevin Heinzelman: Oh yeah, that would be great.  Thank you so much for having me on.  This was fantastic.

[0:51:33] David Green: That's it for today's episode, and what a discussion it's been.  Thank you again, Kevin, for joining me and sharing how technology and humanity can move forward together in the workplace.  And of course, thank you to all the listeners for listening and being part of the growing community each week.  Here at Insight222, we are on a mission to help as many HR and people analytics professionals and leaders build the skills, strategies, and confidence needed to drive real business value.  So, if you enjoyed today's episode, please do subscribe, leave a review, and share it with your network so we can help more people drive meaningful business transformation.  And as always, if you'd like to dive deeper and learn more about us here at Insight222, follow us on LinkedIn, explore our resources at insight222.com, or subscribe to our bi-weekly newsletter at myHRfuture.com.  That's all for now.  Thank you for tuning in and we'll be back next week with another episode of the Digital HR Leaders podcast.  Until then, take care and stay well.