Episode 141: How AI Can Unlock Human Potential and Make Work More Meaningful (Interview with Tomas Chamorro-Premuzic)

Artificial Intelligence (AI) is everywhere! From the way we communicate, to the way we work – AI has become a staple in our everyday lives. But what is its impact on our society and our work?

This episode will discuss just that, as David is joined by Tomas Chamorro-Premuzic, Professor of Business Psychology at Columbia University, Visiting Professor at Harvard University, Chief Innovation Officer at Manpower Group and the author of the fantastic book ‘I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique’.

During this conversation, expect to learn more about:

  • Why AI is bringing out the worst in humanity

  • How we can reclaim the qualities that make us special as humans

  • AI through the lens of recruitment

  • How AI may not be as unbiased as we think

  • The ethical considerations of using AI in the HR and People Analytics Space

  • The skills that HR need to thrive in the age of AI

Listen to the full episode below. Enjoy!

Support for this podcast comes from Workday. You can learn more by visiting: workday.com

David Green: I'm sure that many of you listening have had a little play around with ChatGPT, the AI chatbot that was released recently by OpenAI.  It's fair to say that the tool has created quite a big splash, strong views both for and against, and a slew of sensationalist headlines.  If we put the hyperbole to one side for the moment, it's undeniable that AI is becoming increasingly pervasive in our lives.  AI is also giving rise to a lot of questions and debates around its impact on our society and our work.

To help us dive deeper into the questions around AI, automation and how to protect our human uniqueness, I am joined by Tomas Chamorro-Premuzic, Professor of Business Psychology at University College London and Columbia University, Chief Innovation Officer at Manpower Group and the author of a fantastic new book, I, Human: AI, Automation and the Quest to Reclaim What Makes us Unique, which will be published at the end of February 2023.  I've had the pleasure of knowing Tomas for quite a few years and I always look forward to talking with him on topics related to the world of work.

Today, I'm particularly excited about our conversation as we delve into the implications of AI, automation and their potential disruptiveness to the workforce, as well as how we can ensure that human values remain at the core of our work environment as technology advances.  So, join me and Tomas as we explore the fascinating world of AI.

Tomas, welcome back to the Digital HR Leaders podcast.  I think the last time we spoke was back in July 2020, during those first uncertain few months of the pandemic.  We're in 2023 now and the majority of restrictions have been lifted in most countries, I think probably pretty much all countries actually, and hopefully we're starting to see the back end of the pandemic, although we obviously recognise that it's still affecting probably some of the people listening to this call at the moment.

Tomas, for those listeners who may not have had a chance yet to listen to our previous conversation, could you please share a little bit about yourself?

Tomas Chamorro-Premuzic: Yes, of course, and it's great to be back, David.  Thank you for inviting me.  So, I'm Tomas, I was born and raised in Argentina.  My background is in organisational psychology and I really study two things: leadership, and the intersection between human and artificial intelligence.  I do a lot of work in this area, writing, doing research, and trying to apply some of the science to the real world, mostly in my job as Chief Innovation Officer at Manpower Group, where we try to bring the science to life to help organisations find better talent and help talent thrive wherever they go.

David Green: Tomas, we've known each other for a number of years and as I said, I also enjoy reading your work.  And actually, I've been reading your new book over the Christmas holidays, which is published I think on 28 February.  So, the book is I, Human: AI, Automation, and the Quest to Reclaim What Makes us Unique.  Tomas, as you're the author, can you tell listeners in your own words a little bit about the book?

Tomas Chamorro-Premuzic: Yeah.  So, in a nutshell, it's a book that examines the impact that artificial intelligence is having on human behaviour.  It is not a book about the future, it's a book about the present, and I know that you interview a lot of futurologists and people who make predictions.  I wanted to focus on what we know, which is happening now, and I wanted to do a book that is a book on AI, but not really on the machine or computer aspects of it, but the behavioural or human aspects of it, because I think we've mostly, by and large, neglected the human factor, which is what I'm interested in as a psychologist.

So, it examines some of the consequences, behavioural consequences that living in the AI age has had on human behaviour, and mostly I focus on some of the dark side traits or tendencies that are being unleashed by our ubiquitous and omnipresent immersion in this sea of predictive algorithms and data and machine learning tools that actually have done a very good job influencing us; things like the fact that we're becoming very impatient, very impulsive and unable to attend to something for more than 5 seconds.  I'm sure in the process of saying this, I lost some of our listeners already; the fact that we're losing self-control and we're easily distracted, the fact that we have become almost very narcissistic in a cultural sense, behaviours that were seen as obnoxious or undesirable in the analogue world, are now commonplace. 

We're all spending so much time self-promoting and talking about ourselves, and we have lost any sense of what constitutes inappropriate self-disclosure, any self-censorship, when it comes to sharing what we do.  And I think the worrying part is that we have actually become far more boring and predictable, and are exercising far fewer of the faculties that actually made us the most adaptable species on Earth.  We're using our creativity and curiosity less, and when we outsource thinking, we obviously end up thinking less as well.

So, that's probably the negative part.  The book finishes on a more positive note, with a call to action to reclaim some of the things that actually make us unique.  And I think we're living at a very interesting point in time.  Obviously now, with ChatGPT, a lot of this has been put into the context of one tool, but we can redefine our humanity by focusing on the things that machines are not able to do: display empathy, kindness, consideration and general curiosity; and actually interact with others on a humane level.  And I think that will be the USP for humanity in the years to come.

David Green: Great summary, and obviously most of the people listening to this podcast are working as HR professionals at varying levels in organisations around the world, so I think it's going to be interesting to touch on those behavioural and human aspects, rather than the actual technology itself.  Well, you'll lose me if we talk about the technology too much!  So, we'll touch on some of those topics, I think, as we move forwards.

So, coming back to all those negatives that you highlighted there, Tomas, and obviously I think all of us will recognise some of those to a greater or lesser degree, maybe in ourselves, but also in some of the people that we interact with, families, colleagues, etc: if AI is being created to supposedly enhance our lives, then why is it that it also brings out the worst in us?

Tomas Chamorro-Premuzic: Well, it's a great question, and I think primarily because the definition of enhance means, "To optimise a world for efficiency, productivity, pace", and those are no doubt important aspects of progress and critical engines of capitalism and for-profit work and corporations.  But they do squeeze out our humanity and actually have this paradoxical effect whereby they almost make us act like machines.

I find this really interesting at a philosophical level, which is that we spent much of the last decade worrying about how AI might automate jobs and work, and how we would need to reskill and upskill ourselves, which are really important questions; but in the process, we sort of automated ourselves, because even within the knowledge economy, what most people like you and me spend an average day doing is really being glued to our screens or devices, and engaging in a flurry of repetitive activities that involve classifying X as something, or coding and training the algorithms so they can predict us better. 

In the process, we've become much more predictable ourselves, because if all everybody does is hit keys in response to algorithmic nudges, then anybody who isn't human, who arrived in this world from another planet or galaxy and observed our behaviours, would perceive very little in the form of creativity, imagination, etc.  So actually, again, when you create machines that do the thinking, and even some of the creative production for us, then the question is, what is left for us to do?

I mean, when you create a dishwasher, it makes sense that you spend less time washing the dishes, and maybe you can go for a walk or think or write poetry.  But when you can create technologies that can do all these creative things and you end up just training it to become better, you actually become more predictable and less interesting as a species.

David Green: How can we ensure that we reclaim the qualities that make us special as humans, instead of diluting ourselves and making ourselves more predictable as a result of AI?

Tomas Chamorro-Premuzic: You know, I think it really starts with being self-aware, or aware of the situation we're in.  This really is probably my main objective with the book: if AI is like a mirror we can put in front of humanity to see how it's reflected in it, I wanted to do a book that reflects that reflection, in a way.  So, it isn't going to go away.  Like any technology, it's neither good nor bad, nor is it neutral, as somebody said, so it's up to us to use it in the best way.  I think an awareness of the problematic behaviours that have arisen from it is the starting point.

Next, I would say really to understand that we can be in the driver's seat and be more agentic, managing ourselves and our time in a way that doesn't fall into the trap of this delusion that we can do all these things at once, that we can be much more productive if we multitask, that we can actually have three or four screens in front of us and be simultaneously exchanging emojis or memes with our friends, reacting to certain news, or what happened in our favourite sports match, while talking to our colleagues, while being on a Zoom, etc, because our attention is finite and our cognitive resources are finite.

If we kid ourselves or deceive ourselves into thinking that we can be productivity machines, we really are squeezing ourselves out of our humanity and out of the things that…  So, I think in a way, and obviously I don't have a prescriptive solution, there is no silver bullet, but I think we can learn and get inspired by certain analogies.  The one that I think of is that this is the intellectual or cognitive equivalent of what happened over hundreds of years, when the Industrial Revolution was successfully applied to optimise the food supply. 

You could think of a scenario, and in some ways the fact that obesity is a widespread epidemic in the world actually confirms this, whereby we evolve our cultures, or drive progress, in a way that goes from food scarcity to the ubiquity of food, or a food surplus, and we optimise our lives to be less mobile, stuck in our offices, sitting in our chairs and looking at our screens.  It's no wonder then that, unless you have self-control, you become morbidly obese and suffer from all these health problems.

The same goes, in a way, with the AI age and information, the ubiquity of information, by which I mean every meme you get from your friends, every bit or bitesize of fake news, and all of these very, very smartly and cleverly engineered algorithms that are competing for your attention lead to an information glut that doesn't translate into a nutritious meal for your hungry mind; it actually diminishes your hungry mind.

So in a way, just like in Italy they created the slow-food movement as an antidote to the fast-food movement, I think we need the intellectual equivalent of it, and that includes organisations and managers.  This is why I am somewhat optimistic about the recent rise of the learning and development function and the fact that it is now far more serious and influential than it was five or ten years ago.  But it still has to change how it operates, because we really need to provide the conditions, the incentives, the freedom and the space for people to develop and grow intellectually.  And that's in direct competition and tension with what all the technologies out there are doing; they want to see us using less and less of our creativity, our thinking and our imaginations, so that we just react to these influencing machines, tools and other technologies, which really lead us to act like automata, not like humans.

David Green: What's your synopsis on what AI means for the present, and of course maybe the future of recruitment as well?

Tomas Chamorro-Premuzic: Yeah, so I think first, in the space of human capital or HR, it's important to demystify AI, because it's very far from being robots or cyborgs or anything dystopian that we may have seen even 40 years ago in Hollywood or the movies; mostly it's something that happens to the data; it mostly still is confined to data science.  And mostly what we mean by AI for HR or human capital, including people analytics, which of course is a field that you've been very active in and have influenced in a very important direction, is really something that translates data into insights, hopefully actionable insights.

So, whether that is machine learning models or natural language processing, it's something that happens to data, with maybe the only peculiarity that these algorithms have the capacity to teach themselves after getting minimal instructions, and they continue to evolve and get better.  So again, ChatGPT, which is not a very catchy name, is a very good example of that, because a couple of years ago it was very basic and now it really feels like you're speaking to a human, or somebody like Scarlett Johansson in the movie Her, although it's not Scarlett Johansson, nor as advanced, which is a good disclaimer for those who might think there is a similarity there.

Now, I think recruitment is one of the areas in which this technology or this data science has been applied the most, primarily because there is still a lot of transactional, repetitive and standardised activity in talent acquisition, talent management and the identification part as well.  So, if it's about attracting, processing, sourcing and assessing, we no longer need a human recruiter to spend 3 minutes, which really was down to 30 seconds, examining whether a resumé has the right keywords that match a job description; resumé parsing technology can do that very well.

Equally, in the future, because we're not there today, we might not need a human interviewer to sit in front of a candidate and match their answers to a very well-defined template of competencies or a grade of potential.  Today really, in most areas, a combination of artificial intelligence and human intelligence or expertise will provide superior results than one without the other.  But there is no question that there is low-hanging fruit and room for improvement, because we're still living in a world where most organisations complain that they can't find the right talent, and most employees complain that they can't find the right job, career or employer.

If there is a formula for matching people's skills, potentials and talents to the right environment and to the right culture and to the right career and there is a logic, there is no question that a machine will be able to identify that logic, if not better than humans, in a more consistent and predictable way; because you might be a great interviewer, but perhaps today you're off, you're hungover, you're sad, you're depressed, you broke up with your partner; or maybe Argentina won and you're so happy that you're going to offer me a promotion or a job even though I'm rubbish.

So, I think it's certainly a technology that is really potentially disruptive and worth exploring.  Today, we find that it's best utilised for some of the phases of recruitment rather than the whole thing.  And I think we're probably going to stay in this space for the next five years.  The bet we're making, for example, at Manpower Group is that there will be recruiters in the future, but they will have to act much more like career managers or talent coaches, understanding what clients want, understanding what candidates and job seekers want, and really filling the gaps and the blanks that machines cannot fill and don't know how to decode because they're very linear. 

Once I know that you're a cybersecurity analyst, how do I know what you really want and how can I actually get a holistic and really detailed and accurate picture of your soul and the person you are?  A machine cannot do that.  They're getting better.  Humans sometimes cannot do that either, but we need to reskill and upskill humans more and more as machines continue to upgrade themselves.

David Green: There's a lot more dystopian stories around the impact that AI's going to have on jobs, and you will have read as many articles as I that it's going to replace X million jobs by 2025 or 2030.  But as you write in the book and as we've talked about previously, it creates new jobs as well.  So, as humans, we're all susceptible to biases, and ultimately it's humans arguably that are training a lot of machines to do a lot of this work.  So, AI might well be designed to reduce bias and recruitment and performance management. 

 

As we said, those designing and training those algorithms may be incorporating their own biases, or hardcoding their biases into the technology, which obviously means the technology isn't neutral after all.

What are your thoughts on this?  I know it's something you're particularly passionate about, you've written a lot about; and more importantly, what steps can we take to avoid this?

Tomas Chamorro-Premuzic: It's a timely question, because I spent many hours last night actually chatting to ChatGPT about its biases and asking it to tell me how it's different from, and similar to, humans, and at multiple points it told me that it can be biased, because the data it utilises is human opinion and information.  Obviously, that's the big one.  It's not that the humans behind it, by which I mean a bunch of young, white dudes in Silicon Valley with their hoodies, are deliberately trying to take down humanity or advance chauvinism or racism in the world; and it's not simply because AI requires human coding that the data will be biased or polluted or contaminated.  It's because we're training AI to decode very subjective human judgments and interactions.

So, when we use humans to train AI to understand whether the objects in a map are trees or traffic lights or other humans or cars, it's relatively easy to do that, because any human can tell if something is a car or a tree or a traffic light; although, mind you, I confess I have failed the cybersecurity test myself!  But when we ask humans to tell AI whether somebody is a good performer or a great leader or a good manager, you can see there's a lot of subjectivity there.  We've known for hundreds of years that the reliability of performance ratings is very low and that there's a lot of politics and nepotism and bias that gets absorbed there, even if you have well-meaning and open-minded humans who have undergone a lot of unconscious bias training. 

I mean, when they say, "This is the best person in my team" or, "This is the best candidate" in a job interview, they don't even know whether that's because they really mean it, they're very good, or because that person supports Manchester United, or happens to have a similar accent to them, or happens to not be from Germany or Mexico.  So, it's inevitable that the data that feeds the algorithms will be contaminated and polluted, but I think this is a great opportunity to actually use AI to expose these biases.

The thing is, AI is a diagnostic tool.  So for example, yes, it is a bit of a horror story when we read that Amazon or Microsoft have tried to use AI to improve their recruitment or selection and they ended up with a surplus of middle-aged, white, male engineers.  That's not because of the algorithm, that's because of the culture.  And if AI can tell us that in those cultures too many middle-aged, white, male engineers get promoted, and science tells us that's not meritocratic, then AI has given us the opportunity to actually advise the organisation.

So, I think it's a great example of how machines and data, or algorithms and AI on the one hand, and well-meaning, smart and open-minded humans on the other, can collaborate to advance diversity.  And I think that in the next few years we will see a lot of progress in what I think of as the areas of inclusion analytics or diversity analytics.  And so, if AI is something that happens to the data, we are in the driving seat when it comes to deciding what insights we want to extract and how we can utilise those insights to improve fairness; not just productivity, but also equity, inclusion and fairness.

David Green: What are the ethical considerations you think that those HR and people analytics professionals that are listening should keep in mind when using AI in HR and people analytics in particular?

Tomas Chamorro-Premuzic: I think I would boil it down to four or five really simple, non-technical, almost common-sense principles.  The first is accuracy.  I mean, if a model or an algorithm, or any technology really, and this was the same with assessments before AI arrived, is not accurate, it cannot be ethical.  If it's creating the illusion of accuracy, as unfortunately happens too often now that we have a proliferation of dashboards in companies and everybody has one or two dashboards per person and it looks great, but the data and the models that feed those dashboards are erroneous and inaccurate, it's a big problem; you're going to end up making worse decisions than before.  So, accuracy is the first one.

The second one is transparency.  I think there is absolutely no need to hide the purpose and the goal from consumers or employees or leaders or managers, whoever they are, the key stakeholders or the key beneficiaries or participants of the process.  So, make it very clear why you're doing something and what you're hoping to achieve.

The third is consent.  People should be able to opt in to whatever it is.  So, even if I tell you, "Hey, David, I'm going to have tons of algorithms mining all of your emails to work out whether you're worthy of promotion or not", which might be good if you're very hard-working and you feel that you've been disadvantaged by a prejudiced manager in the past, and bad if you enjoy slacking and you enjoy the performative aspect of job performance more than the actual performance itself, I should give you the choice; you should be able to opt in, and that should be based on transparency and accuracy.

The fourth, I think, is probably the one that subsumes all the others: there should be a benefit for the user.  You should actually benefit, so if I'm rolling out a new AI-based recruitment system or promotion system, or even a virtual, digital, automated coach, there should be a benefit, by which we mean not that it's perfect, but it has to be better than what you had before.  So again, here it might be that a video interview technology is still quite inaccurate, but if it's far more accurate than a human interviewer, we should roll it out.

The fifth one is perhaps more aspirational and longer term, although it's addressed a little today when we talk about explainable AI being one of the pillars; but for me, AI, like any new technology in this space, needs to actually improve our understanding; it needs to boost self-awareness; it needs to provide people with feedback.  Now, it might be that I'm using algorithms to promote the right person, and the organisation thrives and people thrive and I have a more meritocratic culture, but that's not enough for me.  I also want to help people understand themselves and democratise some of the information that algorithms are already extracting.

I mean, wouldn't it be wonderful if all the data you have given away to Uber, Netflix, Amazon, Spotify, etc, could be synthesised and tell you something about you that makes you a better person?  Right now, you don't know.  You don't know what it means and Big Tech companies might not know either, but the information is there to enhance our understanding.  And with that, I think we can start to address some of the creepy dynamics of algorithms influencing us in a way that is almost spooky.

I think somebody once said, "AI is either creepy or crappy.  It's crappy when it doesn't work and it's creepy when it works too well and we don't know how".  But I think some of this understanding that it can have about consumers should be shared so that we increase our rationality, our maturity and we don't take shortcuts influencing people, based on their ignorance rather than their self-understanding or self-awareness.

David Green: I loved the last chapter of I, Human, where you put a more positive spin on it.  I think you actually wrote, "We have a significant chance to evolve as a species if we can capitalise on the AI revolution, to make work more meaningful, unlock our potential, boost our understanding of ourselves and others, and create a less biased and more rational and meaningful world".  That's just a quote, it's not the whole paragraph, people, so do get the book so you can read the rest of it. 

I think that leads on to the next discussion that I'd like to move on to; really curious to learn your thoughts on this.  Obviously, the book is focused on the current, so I'm now going to ask you to look to the future a little bit.  How do you think the use of AI in HR will evolve over the next few years; and appreciate you don't have a crystal ball, so it's just your views, so I won't hold you to them?

Tomas Chamorro-Premuzic: In a way, I confess, I can't remember who once said it, I think it was Niels Bohr, or somebody from the quantum physics movement, that, "The best way to predict the future is to create it".  I cannot obviously create it, but some of the stuff that I write and say that is more future-oriented is intended to be prescriptive rather than predictive.  So, I'm hoping that if we say, "Okay, this will happen", then people think that it should happen and then they actually do it; sort of like a self-fulfilling prophecy.

But look, even though I'm also a critic of it, I am fundamentally a believer in the value of HR as a function in organisations.  And I think the past 10 or 15 years have only confirmed that all organisations have people problems, and the answers to the questions about people are not going to come from IT, the legal department, the marketing department or the sales department; they're going to come from HR.  And with that, we have seen an impressive acceleration in sophistication and subject matter expertise within the HR function, which started really as scientific management 100 years ago, and has a bad rep because we associate it with assembly lines.  But actually, a lot of that is still present when organisations try to turn their companies into an HR laboratory, and they have an experimental mindset to test what works and what doesn't, because they know they don't have all the answers.

Then HR went into the kind of dark phases: the bureaucratic or legal phase, which still gives it a bad reputation, and the administrative phase, if you want, or the Soviet phase, if we might say so.  Then it arrived at this kind of spiritual or philosophical phase, which was all about talent, ever since McKinsey talked about the importance of The War for Talent and really created that phase.  Now we're in the people analytics phase, which you write about and talk about all the time, and which I'm sure our listeners are very active in.

The AI phase is the people analytics phase 2.0, but progress in HR will come from successfully integrating all of these phases, and from remembering that it's really about connecting the science, or bringing the science to life, in organisations.  I'm very happy when people read my books and my articles, and I'm sure you are as well when you create that incredible list of contributions every month; but ultimately, smart HR leaders need to find the answers by themselves and create the conditions where they can test what works in their companies and what doesn't, which of course is covered in a lot of the articles you share.

Also, we need to remember the legal boundaries and constraints.  Also, we need to remember the philosophical and psychological aspects, because these questions, "What is talent?  What is leadership?  What is high potential?  What is engagement, or how do I create a good and effective culture?" they're not going to come from algorithms, they're fundamental human questions.  And I think if we can really use AI to augment the things we've always known have worked about leadership and management and performance and employees, then we can create more meritocratic organisations, which I think really is the direction that we should be going with organisations, to increase meritocracies. 

This may sound very philosophical, but we have made a lot of progress in the last 100 years in reducing nepotism and reducing bias and privilege, but with progress comes an increase in expectations as well.  And as we achieve certain things, expectations rise in turn.  So, I think that's the agenda.  There has never been a more complex, difficult and challenging time to be a CHRO or a talent management head, but there has never been a more impactful time to be in this role.  And despite all of the technical details, nuances and granular aspects, ultimately if a CHRO doesn't have the buy-in of their CEO and of the C-suite, they will continue to be on the menu, rather than having a seat at the table.

David Green: With this evolution to people analytics 2.0, as you called it, with AI and incorporating some of these tools more into day-to-day HR, how would you say this changes the role of the traditional HR professional?  I ask because many of them are listening, so I think they'll be interested.  And what skills do you think HR professionals increasingly need to acquire in this new age?

Tomas Chamorro-Premuzic: So, I think the fundamental change is that the HR professional, who really was under pressure to become a generalist in the last five or ten years, now has to be a connector more than a generalist, or a generalist and a connector.  Each of the different parts of the puzzle is evolving, progressing and advancing.  There is the data side, there is the strategy side, the technology side, and even the psychological or content side of talent management.  And of course there is the business side: your P&L, profit, sales, etc.

I think strong HR professionals will have almost an accurate peripheral view of this and the ability to connect all these things in a way that nobody else within the organisation does.  And obviously, people are at the centre of this, but really understanding how knowledge of people can connect the different parts and how to enable these different parts to talk to each other and be deeply intertwined is what great HR professionals need to do today and will continue to have to do in the future.  And with that comes the importance of curiosity. 

You talked about the competencies; curiosity is really at the centre; social skills, of course, even down to very basic things like listening, and empathy to understand what other people are thinking and to put yourself in their shoes; and obviously leadership and management skills to manage yourself first and then manage and broker all these relationships.  So, I think it has become a very sociable role, a very humble and curious role, and if people follow this script or pattern, I don't think they need to fear automation, because they will be there in the service of resisting automation, ensuring that people are reskilled and upskilled for the future, and ensuring that organisations are in a strong position to be accelerated by AI, as opposed to disrupted or made obsolete by it.

David Green: Yeah, I agree with all of that.  It's an exciting time to be an HR professional; similarly to CHROs, it's challenging as well, and there's a lot to take on board.  And as you talked about earlier, yes, you've got to acquire some of the technical skills, be able to interpret data, visualise data; but actually, as you said, a lot of it is classic HR skills: the ability to influence people, ask the right questions, and talk the language of the business to drive decisions, because ultimately analytics is about decision support and driving better outcomes for the business and employees.

So, unfortunately Tomas, we've reached the end.  This is a question that we're asking everyone in this series, and in fact the last couple of series as well, and you will have touched on some of this today, but we can think broader now than technology.  What do you think HR leaders need to be thinking about most in the next, say, 12 to 24 months?  And related to that, what would be your biggest concern; and what do you see as the biggest opportunity?

Tomas Chamorro-Premuzic: I'm going to recycle my answer from when you kindly asked me, I think at the end of last year, while you were compiling the big trends for this year.  I'm going to recycle it because it's the one that keeps me up at night, and would certainly keep me up at night if I were a CHRO or an HR leader, which is really how to humanise the workplace; how to not just understand the impact, both positive and negative, of all these technologies, which are growing, spreading and accelerating, if not exponentially then incrementally, which is still enough; and how to cultivate and create environments in which we don't see people as high-performing machines, but as humans. 

So, injecting a much-needed dose of humanity into the workplace, while of course not sacrificing or compromising productivity and all the things that businesses need to do, especially at a time when they're under pressure.  And the potential risk or challenge here is that, much like anybody else in an organisation, HR professionals are being regarded as productivity machines themselves; they have scorecards and they have KPIs, and I'm sure most of those are not about humanising work or creating environments where we treat people as human beings. 

But I think fundamentally, the trend we have seen over the past 10 or 15 years to focus less on the what and more on the how is now being extended, to really understand that there is a tension between the two, and that the best way to focus on the how is to humanise work and create humane relationships with people.  If you do that, then the what will take care of itself.  And if you see a tension, well, I think we've both met managers who've said, "I've reached my results but my engagement is not high", or, "My turnover is still high", etc.  The two are not incompatible, and if you focus on the how, the what takes care of itself.

So, I really think that's what HR needs to be redefined as as well: instead of human resources, almost humane resources, or resources for a more humane workforce.  And each one of our listeners, if they are in this space and if they are HR professionals, should be thinking more deliberately and more seriously about this, especially if it isn't part of their scorecard.

David Green: Yeah.  And maybe with all the tools and data that HR professionals increasingly have at their disposal, if we can start to show leaders that creating a more humane workplace actually leads to better business results and a healthier workforce, then why wouldn't we do it?  So, I guess that's a wish.  I think we do see it in pockets of organisations, but it could be a responsibility for us to take on as a profession: to create, as you said, those healthier, more humane workplaces, and to show that this leads to better business results as well.

Tomas Chamorro-Premuzic: Yeah, it's no different in a way from when we read about the happiest nations in the world.  You need reliable metrics, KPIs and measures to actually make that comparison.  But the drivers, the enablers and the contributors of higher levels of happiness, whether that's Singapore, Denmark or Costa Rica, have nothing to do with data and metrics; they have to do with how people behave, how they treat each other and how they interact.  So, the same goes for organisations.

The AI, the analytics and the data are there to tell us what is going on, and whether we're heading in the right direction or not; but the decisions, the behaviours, the motives and, fundamentally, the relationships have to do with human factors, not with technology or data.

David Green: Tomas, thank you so much for being a guest again on the Digital HR Leaders podcast, I always enjoy our conversations.  Can you let listeners know how they can find you on social media, learn more about your work, and about the new book as well?

Tomas Chamorro-Premuzic: Yes, so the easiest way is to go to my website, which is drtomas.com, and they can find everything there.

David Green: Perfect.  Thanks very much for your time, Tomas.  I hope to see you in person again.  I think last time we saw each other in person was in Stockholm at a dinner that Katerina Burke had kindly organised, so look forward to doing that again hopefully at some point in 2023.  And, yeah, thank you very much for being on the show.

Tomas Chamorro-Premuzic: Thanks again for the invite.