Episode 13: How AI and Behavioural Science can Reduce Bias in Recruiting (Interview with Frida Polli, CEO at pymetrics)

Unconscious bias is a huge problem in the workplace, especially in areas like recruitment, promotion and performance management, as well as being a major barrier in efforts to improve diversity and inclusion. Whilst training and awareness can help, as humans we have inherent unconscious biases. So how can technology, data and science help? And what steps do you need to take to ensure that technologies like AI minimise bias rather than perpetuate it?

That's the topic for this week's podcast, where my guest is Frida Polli, co-founder and CEO of pymetrics, and we discuss how AI and behavioural science can reduce bias in recruiting.

You can listen below or by visiting the podcast website here.

In our conversation Frida and I discuss:

  • Why recruiting is broken and how AI and behavioural science can help to fix it

  • The ethical issues we need to think about when using AI to support hiring and people decisions

  • The bold initiative pymetrics has taken to open-source its code, as well as the challenges involved in being a female CEO and founder of a successful HR tech company

  • Finally, as with all our guests, we look into the crystal ball and ponder what the role of HR will be in 2025

This episode is a must-listen for people working or interested in recruiting, HR technology, behavioural science and people analytics as well as diversity and inclusion.

Support for this podcast is brought to you by pymetrics. To learn more, visit pymetrics.com.

Interview Transcript

David Green: I'm delighted to welcome Frida Polli, CEO and co-founder of pymetrics, to the Digital HR Leaders podcast. Frida, thank you for joining us.

Frida Polli: Thank you for having me, David.

David Green: And also we saw each other earlier in the week in London.

Frida Polli: That's right.

David Green: And here we are in New York, welcome to the show. Can you give us a quick introduction to yourself and pymetrics?

Frida Polli: Yeah, absolutely. So I'm Frida Polli, I am the CEO and co-founder of pymetrics. I spent 10 years as an academic neuroscientist at Harvard and MIT, loved the science we were doing but wanted it to feel more applied, transitioned out of academia through an MBA program at Harvard, and started pymetrics, which uses behavioural science and artificial intelligence to help hiring be more accurate and more fair.

David Green: Brilliant. Well, we're gonna talk a little bit about what you're doing at pymetrics later on. I've heard you describe hiring as being broken, which is a great line. What do you mean by that?

Frida Polli: Well, I think for most practitioners of talent matching, and it's not just hiring, it's anywhere you're trying to understand the fit of a person to a role, it's really not working very well. And I say that because of all of the statistics out there, put out by SHRM and others: 250 CVs being submitted for one open role, 50% of first-year hires failing. There's a clear, documented disadvantage that women, minorities, people of different socioeconomic backgrounds and older people face when people are looking at their resumes. And it's quite frankly a pretty bad candidate experience: almost half never hear back after dropping their resume into the resume black hole, and over 80 percent rate it as a very poor experience. So I don't really think that any part of the talent matching process is working all that well. And so that's what we mean.

David Green: And one wonders what it does for a company's brand. Particularly if they're a B2C organisation.

Frida Polli: Yeah, for sure. Well, the way that I like to equate it is to how movie selection used to be done pre-Netflix. So you remember the days of Blockbuster, right? Where you would go into a store and there was a very limited selection of movies. You would select the movie in a manual process using the movie resume, also known as the blurb. And in my case it was almost a 50 percent failure rate, because I would hate one of the two movies. And I thought that that was the way movies would be selected from now until I died. Along comes Netflix, and using better data and automation we are now able to open the top of funnel to millions, tens of millions of movies. It's an automated process based on your own preferences. You still obviously then choose this movie versus that movie; it's just a decision-making aid. And the outcome is a lot better, in terms of I'm usually much happier with the movies I've chosen, and also I get to discover way more diverse and nontraditional movies than I otherwise would have. So I think that that's really the model that we want to move towards in the talent matching space. And I think that it's about moving from an analog process which is breaking under the weight of technology. I mean, why do we have 250 applicants? Because LinkedIn and Indeed send us 250 applicants. That wasn't the case 20 years ago. And so on and so forth. So I think that we need to now have our talent matching move into the 21st century, just like our talent sourcing moved there, say, a decade ago.

David Green: Yeah, and we all know of course, if you look at Amazon, their recommendation engine drives something like 30 percent of their revenue. So it's clearly worthwhile doing. And obviously we can't translate it quite as much into dollars in the hiring space. So in terms of the hiring area, how can artificial intelligence and behavioural science help?

Frida Polli: Sure. Well, it's very analogous to the example I just gave you. So what did Netflix do that transformed the way that we think of movies? Instead of using the blurb, they got rid of it. They don't actually use that in their matching algorithm. What they do is they had people basically rate movies on their fundamental traits. And that's sort of what we do with the pymetrics system with people: instead of looking at their resume, we look at them in a much more fundamental way, looking at their cognitive abilities, their social skills, their emotional aptitudes and so on. So the first key area is getting better data on whatever it is that you're trying to understand. The resume is very superficial, and quite frankly it's also really, really hard to un-bias even if you remove the name, because men and women play different sports, people of different ethnic backgrounds participate in different extracurricular activities. So it's very hard to remove that bias. So that's the first thing we can do with behavioural data, similar to what Netflix did with their data.

And then the AI piece just comes in because, instead of using a manual process, just like Netflix has you say I like these movies, and then they determine what are the fundamental attributes of those movies that make you like them, which you may not even know yourself. I mean, I'm sure none of us actually know what the traits of the movies that we like are. And that's kind of the really cool part, the discovery part. And it's the same with what we do. We go into a company and say "Hey, give us people that you think are successful, people that you think work well, they like the culture, they're good culture fits and so on" and we'll take those people and we'll tell you what their fundamental attributes are. You may be surprised sometimes by what they are, and that's where the AI piece comes in. It's helping companies discover something about their workforce that they may not have known, and then you get an algorithm that can essentially be used to discover talent anywhere. I mean, I think that's the beauty of it all, is that you're no longer reliant on "Oh, I have to know this." You can point it to any talent source, whether it's in the West, Africa, some place that you've never gone before, and it will work just as well as the places that you're more familiar with, in a way that a manual process just can't compete with.
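
To make the mechanics concrete, here is a minimal sketch of the kind of incumbent-based model Frida describes: train an interpretable classifier on behavioural trait measurements of successful employees versus a baseline population, then score candidates from any talent pool on the same traits. The trait features, data and model choice below are illustrative assumptions, not pymetrics' actual implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical behavioural trait features (illustrative only), e.g.
# columns = [attention, risk tolerance, planning, effort, fairness].
rng = np.random.default_rng(0)
incumbents = rng.normal(0.6, 0.15, size=(40, 5))   # top performers in the role
baseline   = rng.normal(0.5, 0.15, size=(200, 5))  # general working population

X = np.vstack([incumbents, baseline])
y = np.array([1] * len(incumbents) + [0] * len(baseline))

# An interpretable ("glass box") model: the coefficients show which traits
# distinguish the successful incumbents from the baseline group.
model = LogisticRegression().fit(X, y)
print("trait weights:", model.coef_.round(2))

# Score a new candidate -- from any talent pool -- on the same traits.
candidate = rng.normal(0.55, 0.15, size=(1, 5))
print("fit score:", model.predict_proba(candidate)[0, 1].round(2))
```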

David Green: It widens the funnel...

Frida Polli: Yeah. It widens the funnel dramatically, just like we did with other recommendation engine technology, whatever your favourite example is, whether it's movies, books, whatever it is, in a way that just really allows all people, in this case, to be evaluated fairly and put on the same playing field, essentially. Rather than what we do now in recruiting, which is "Oh, I need to narrow my talent search because I don't have the bandwidth to go everywhere." Well, this gives you the bandwidth to go everywhere and then evaluate everyone in the same way. So that's a huge piece of it, but then it's also helping companies discover things that they may not know about what makes people successful in a role.

Obviously we work with them. They give us the job's knowledge, skills and abilities. We do a job analysis questionnaire. We work with traditional industrial-organisational or occupational psychology constructs, so that we're not throwing that out, but we're also saying "Hey, let us tell you something about your organisation, or that role, that you may not actually know." So I think it's a nice hybrid process.

David Green: And this technology is still quite new for HR professionals in particular. What are some of the misconceptions that you face?

Frida Polli: Well, I think the biggest misconception that people have about AI in general, and it doesn't matter if it's applied to hiring or something else, is: what is it? I think a lot of people conflate AI with "Oh, I'm gonna scrape stuff from the internet and nobody knows that I'm doing that." Well, no, that's not what we do. We are actively collecting data from the person; that's different. That's data scraping. We don't do that. We don't do passive data collection on people. Some people do, but that's not what we believe in. Another thing that people believe about AI is that it's black box, right? Yes, some forms of AI are black box, but again, that's not us. We are glass box, transparent AI, however you want to call it. Another aspect of AI that people often assume is that by default it will have the biases of its human creators, because at the end of the day artificial intelligence is simply a machine copying a human, right? And again, yes, that can definitely be true: if you're not careful, if you don't check your algorithms, they can replicate all the human biases, and because they're so powerful, do it with a force-multiplier effect. But that also doesn't need to be the case.

So I think the biggest thing that I always say is, and I don't think I was the person who coined it, artificial intelligence is an enabling layer, just like electricity. And electricity can be a hugely powerful force for good, and it can also unfortunately be used in a harmful way. Any technology is neutral; AI is neutral. It's really the design that the technologists have in mind when they create the artificial intelligence that matters at the end of the day.

David Green: So interesting talking about how you check the bias and validate against it. How do you do that?

Frida Polli: Sure. So the way that our platform works is that we build local job models for every company that we work with; in the U.S. it's called a local job validation study. Which means that we don't assume we have a sales profile that will work anywhere. We'll go into a company, we'll have your top-performing salespeople go through our platform, we'll compare their traits against the baseline, and we'll say these are the traits that make someone successful. We will also use a job analysis process to understand how those traits map to the actual job that they're performing, 'cause obviously people want to know that. And so that's a big part of the validation process, for us in any case. It's a concurrent validation process that we then follow with a predictive validation process. So after we've had the algorithm live for a while, we'll then collect performance data, retention data and so on to validate it in a predictive way. So that's the validation piece. We also have construct validity and other types of validity that are important in the occupational psychology or IO world. And we have a whole team of Occ Psychs or IOs that have helped us really be buttoned up in that.
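
As a toy illustration of that predictive step, and assuming the validity evidence is summarised as a simple correlation between fit scores at hire time and performance ratings gathered later (real validation studies involve far more than this), the check might look like the following; all numbers are invented.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical follow-up data: model fit scores recorded at hire time and
# manager performance ratings collected months later (all values invented).
fit_scores  = np.array([0.81, 0.62, 0.90, 0.55, 0.73, 0.68, 0.85, 0.49])
performance = np.array([4.2,  3.1,  4.6,  2.8,  3.9,  3.4,  4.4,  2.5])

# A positive, significant correlation is one piece of predictive validity.
r, p = pearsonr(fit_scores, performance)
print(f"predictive validity: r = {r:.2f} (p = {p:.3f})")
```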

The de-biasing piece is actually something that is fairly unique to our platform, or certainly we're the only ones who've open-sourced how we do this. So early on in the history of pymetrics we realised, look, some companies, no matter how hard they try, don't have a representative sample at the moment of people in a particular job. Meaning it's overly Caucasian, it's overly male. And so again, if you just take that group of people, even though your data may be unbiased, you will potentially pick up, because of Simpson's paradox, bias in that sample, and your algorithm will then be biased. And we all know this, it's a known fact. In fact Amazon got into trouble with that recently, it was in the news. And as you know, hiring tools can actually have adverse impact and be legal so long as they're job related. I think that's unfortunate. We actually take the position that even though that's true, we don't subscribe to it, and we really believe that that comes from a bias in the training set. So what we did is we created an audit process; essentially it's just like any audit process. You check the outcomes. You say, okay, this algorithm, I'm gonna run it on a test group of real people that have given us pymetrics data. And we ask: between men and women, are they getting equal pass scores? Are people of different ethnic backgrounds getting equal pass scores? And if the answer is no, because we're a white box or a glass box, we can go back and say, oh, this feature is causing it. Okay, let's remove it or deweight it and then run it again. And when we say run a model, we're running hundreds of models. So if one is showing that, we can go and get one that's equally good and doesn't have that biased effect. It's a pre-deployment audit process that we've open-sourced on GitHub, so anyone can go and check it out and see what we're doing. And that's what we give to a client: we won't release an algorithm unless it's passed our de-biasing process.
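
In outline, the pre-deployment audit Frida describes checks pass-rate parity across demographic groups. Here is a minimal sketch, assuming pass rates are compared using the four-fifths rule commonly applied for adverse impact; the threshold, column names and data are illustrative assumptions, and the open-sourced package itself does considerably more (statistical significance testing, for instance).

```python
import pandas as pd

def pass_rate_audit(scores: pd.DataFrame, threshold: float, group_col: str,
                    min_ratio: float = 0.8) -> bool:
    """Check demographic parity of pass rates (the 'four-fifths rule').

    `scores` has one row per test-group member, with a model score and a
    group label. Returns True if every group's pass rate is at least
    `min_ratio` of the highest group's pass rate.
    """
    passed = scores.assign(passed=scores["score"] >= threshold)
    rates = passed.groupby(group_col)["passed"].mean()
    return bool(rates.min() / rates.max() >= min_ratio)

# Illustrative use: audit one candidate model before deployment.
test_group = pd.DataFrame({
    "score":  [0.90, 0.70, 0.60, 0.85, 0.40, 0.80, 0.75, 0.30],
    "gender": ["F",  "F",  "F",  "F",  "M",  "M",  "M",  "M"],
})
if not pass_rate_audit(test_group, threshold=0.65, group_col="gender"):
    # Because the model is "glass box", inspect which feature drives the
    # disparity, remove or deweight it, retrain and audit again -- or pick
    # another of the hundreds of candidate models that performs equally
    # well without the biased effect.
    print("Adverse impact detected: rework the model before release.")
```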

David Green: Which is interesting, 'cause I was at the Wharton People Analytics conference a few weeks ago, and Adam Grant got on stage and said it's time that HR tech vendors open-source their code. You're doing it.

Frida Polli: We're doing that, yes. Absolutely, yes. And we feel very strongly about this. We feel very, very strongly about this, because you wanna show what you're doing. You wanna be able to show... And look, we're also engaging in research studies with professors at MIT and other places so that we can have those peer-reviewed articles. Look, I'm an academic scientist, I've got a lot of peer-reviewed articles, I have no problem in doing that. As you know, it takes a while to get through the peer-review process. So while we're doing that, let's help people understand what we're doing, because I think at the end of the day, if you have designed your technology to follow all of these principles, it's good to show that. I think people need more examples of AI that is following certain ethical design principles and is actually working towards creating a situation that's not only more accurate but also more fair.

David Green: And it's funny, because some of the companies like yours that are more at the cutting edge of the technology, I'm thinking about Ben Waber at Humanyze, Kieran Snyder at Textio. You seem to spend as much time trying to educate the market as you do trying to win new customers.

Frida Polli: Well, I mean, look, I knew Ben actually from my MIT days, so we've kind of grown up with this together. And a big part of what we did as academic scientists is tell people about our science. I mean, Ben has written a book. He's a little bit of a superstar in Japan apparently, it had a big audience in Japan, I don't know why! But anyway, the point is that I think it is about education, and I have no problem with that, and I think that it's important to educate people. Look, we're lucky to be part of the World Economic Forum Technology Pioneers. As part of that event we were spoken to as a group by a woman who had written a book about trust and technology, and how technology could bridge the trust gap. And she gave this great example of how, 150 years back when steam engines were first invented, people were afraid of them because women had never travelled at such high speeds; they thought their uteruses were gonna fly out of their bodies. Now we laugh and we think that's silly. But I think that there is always that fear when a new technology comes online. I'll never forget when I was a brain imaging scientist and people started talking about "Oh, we're gonna be able to image the brain and then predict stuff about people's mental health and all the rest of it." And as scientists we were all like "Oh my God," do you know how hard it is to predict anything? People were fearing that it was gonna become this Gattaca-like future, where you take a scan of a newborn's brain and you're gonna be able to predict all these things about them. And as scientists we all know that predictions are extremely hard. It's not that we would ever want to get to that type of future; it's just not in our grasp even if we did want it.

So the point is, I think people always fear these dystopian futures, and we have to say to them, okay, hold on a minute, yes, there is potential for that, but let's look at the reality and let's put in guardrails. I mean, essentially that's what everyone is talking about now with AI: developing ethical standards, developing guardrails, developing standards like IEEE's that govern all sorts of technology. And it's people like Stuart Russell and others that are leaders in the field of AI, not practitioners like me; it's people that have literally written the textbook on AI, coming up with beneficial AI, and all of these councils that are getting stood up to really coalesce around what the ethical principles of AI are going to be.

So I think that education is a critical component of alleviating some of the fear and mistrust that people have. And I think that that's a good thing, because at the end of the day I do believe this technology has immense opportunity for improving the current state of hiring and talent matching. So we need to educate people. And then I think the other thing to note is that when people fear these dystopian futures, I always say to people, well, have you looked at the present? It's not that utopian. I mean, there's a lot of socioeconomic inequality. There's a lot of people that are left out of the workforce. There's a huge dislocation of people from these types of jobs to those types of jobs. And AI, done well, has the potential to actually alleviate a lot of these things. So rather than thinking the way that things work now is fantastic, because unfortunately for a lot of people it's not, let's think of how this technology could actually be helpful.

David Green: So make things better and, as you said, put the guardrails in place...

Frida Polli: It needs guardrails. I mean, right now I would equate it to when cars first came on the road and there were no seat belts, there were no airbags, there was no consumer report. I mean, everyone's just kind of developing stuff, and that's normal when something first comes online, that's what's gonna happen. And we should be worried; I mean, if cars still didn't have seat belts and airbags, then yes, driving would be a lot more dangerous. I strongly believe that... not everyone's well-intentioned, but the people developing AI are coalescing around the idea that we need guardrails.

David Green: It's interesting, as we work at Insight222 with a lot of People Analytics leaders, and actually the big problem they raised with us last year is: can you help us put an ethics charter together? So I think the people actually doing the work realise that. It's maybe people with less knowledge about it who get too far into the dystopian side of things.

Frida Polli: Yeah, completely. And again, I just recently listened to a podcast with Stuart Russell where he was talking about exactly that, right? That look, it's not just... There are AI experts, so to speak, who are kind of blindly going along and thinking, oh, you know, I don't need to be thinking about these things, the objective function is not important, my objective function can be whatever. And Stuart Russell is basically saying no, no, no, we gotta roll back the tape here and really think more carefully about this.

So I wouldn't just say it's the public at large that is not realising that these guardrails are necessary. I think there are also AI practitioners that are not thinking about it at all. They're just thinking, what's the coolest new technique that I can use, and I don't care if it's black box, and I don't care if it's this and that. So I think it's writ large: some people are sounding the alarm and saying no, we definitely need guardrails, and then others need to be educated as to why that's important. And, this is totally off-topic, but I think things like the Facebook data privacy scandal and then all of the echo chamber algorithm stuff that happened is a wake-up call to a lot of people, even though it's in a completely different field. And I think that you kind of need those public wake-up calls to say "hey, this stuff can really go wrong if we don't keep an eye on it."

David Green: But I think it's had an impact, because if you look at the work that Accenture published at Davos on trust and using workforce data, 92 percent of employees said actually we're quite happy for our organisation to have data about us, as long as we get benefit from it. So I think it's that trade-off.

Frida Polli: Absolutely, and Accenture is a great example, in full disclosure they're a client of ours, but they've done some really fantastic thinking around this. They've created a tool inside Accenture that says these jobs are at greatest risk of automation, and what they're trying to do proactively is say, okay, how do we help people in those professions find new roles at Accenture? So Ellyn Shook, the CHRO, has made this pledge that instead of mass layoffs and everything else, it's really about taking people that have been successful in a role that, for no fault of theirs, is going away, and helping them be successful in different roles. And we see that across the board with other clients as well; I mean, JP Morgan is another one where re-skilling is a big part of their future work plan. And so pymetrics isn't just about recruiting, it's also about internal mobility and re-skilling; a match can be anywhere, it can be internal. It's that talent matching of "okay, you used to be in this role, or you're in a current role like a call centre, and now you wanna be up-skilled or re-skilled, how can we help you find your most optimal fit?" Because at the end of the day, and there's lots of research to show this, yes, people are malleable, everyone's malleable, but I think we're also born with inherent predispositions to be better at certain things than others. And that's fine, because there's everything from ballerinas to engineers to everything in between. And at the end of the day it's helping people understand: these are the things I'm more likely to be well suited for. It's not deterministic. You can still say nope, I want to be a ballerina even though I'm built like a football player, and vice versa. It's just gonna be a harder road, but if you're willing to accept that, go at it. You know what I mean? It's just about helping people understand the probabilities of a place where they may be better suited than not.

David Green: So that's interesting, you were talking about using assessments outside the hiring process, and you talked about misconceptions. I think there is this misconception that companies should only use assessments, or do only use assessments, in the hiring process. And you were giving the example there of using it for internal mobility and identifying skills, potentially for retraining and things like that. Can you talk a little bit more, maybe give some examples, of where that's happening?

Frida Polli: Sure. And again, just to be clear, I think some of these newer technologies that are coming online like pymetrics can act as an assessment, but it really is a talent matching system that is a one-to-many mapping, meaning that it has a broader capability. So to answer your question around re-skilling, I think that is actually one of my favourite use cases, because there's a ton of very accurate discussion of how the current workforce is being disrupted and people are being dislocated. I mean, if you go to Davos, anywhere, it's just the prime topic of conversation. And it's correct, I mean, it's happening. And it's not only happening to truck drivers, it's happening to physicians. It's happening everywhere.

So how can artificial intelligence, a system like pymetrics, be helpful? What we can be helpful with is, again, that it is much easier to train someone with the right aptitude to have the skill than it is to take someone who has the skill but doesn't have the aptitude. I mean, I can think of many careers where I don't have the aptitude, and I would say I would not be very trainable... So what we do is we work with companies to say, okay, these types of roles we know are at threat of automation, so how can we help assess the people in those roles for aptitude for roles that may be coming online? Roles that they may never have considered, that may not have even existed five to 10 years ago. And there are multiple ways of doing that; there are other platforms that look at it from a skills perspective. There's a very interesting piece of research I read recently that said truck drivers, who at some point may be completely dislocated by automated vehicles, actually make great drone pilots. So that's super interesting... They looked at that from a skills perspective, and it makes perfect sense. It's hand-eye coordination and all the rest of it.

So we're not looking at it from the skills that you've learned, we're looking at it from sort of the cognitive, emotional and social characteristics that you exhibit, and a two-pronged approach can be helpful with this. But at the end of the day, I think understanding your fit for something before you spend a month, six months, a year training and so on is really critical. And we see lots and lots of large companies, like Accenture, like JP Morgan and many others, engaging in these types of initiatives.

David Green: And I guess almost everyone's a winner out of it, because organisations are struggling to find people for the new jobs they're creating. Do they go outside and recruit? Or can they actually, as you said, re-train, which seems the more socially responsible thing to do?

Frida Polli: Well, it's... well, so...

David Green: And cheaper...

Frida Polli: Well, you said it. I dunno if it's cheaper or if it's on par, but the point is... So first of all, those people don't exist outside your organisation. I mean, to go try to find a data scientist, good luck to you, you're gonna be fighting with everyone under the sun for that very rare unicorn. And then two, back to your point, if you already know that somebody is a good fit within your organisation, that they're suited to the values of your organisation, that's a big part. I mean, there's been a lot written by Boris Groysberg and others that it's not just about the role, it's about the values fit to the organisation. Are you a culture fit, values fit, however you want to describe it? And different organisations have different cultures or values. So if you already know that person works well within your organisation, and you can retrain them to be in a role that is very hard to fill from outside, it's a lot less risky a bet, in a way, than hiring somebody from the outside world whose values fit is maybe more in question. Not to mention that it's just a much more disruptive process.

David Green: Yeah, I mean I think that's where you can really see how this technology can support organisations with these big transformations that they're going through.

Frida Polli: Completely. And I think it's also helpful with what's called, or what I've heard called, the brick ceiling, right, where companies want to allow opportunity for blue collar workers or non-degree people to move into more professional roles. Because there has been some great, well, not great, somewhat depressing research showing that 50 years ago you could start as a secretary and move up to be the CEO. Now that doesn't happen, for a variety of different reasons, and I think there's a desire to go back to that time, where even if you start off in a blue collar occupation you have the possibility of rising through the ranks.

So it was actually interesting to hear the CEO of Walmart, because at Walmart everybody starts in one of these hourly roles. That's not common, and it makes sense that they would do that. But the point is, I think AI can also be helpful in understanding, if I start out in an hourly role, how can I be considered for a professional role, or something that has more of a career path? And again, looking at that matching or fit. So that's another use case that we come up against.

David Green: And rather sad, which is even though we've supposedly progressed as a race, we're actually... there's less mobility.

Frida Polli: Well, that's, I mean, that's the other thing. So I think one of the things that surprises people, but doesn't surprise us, is how much this technology can be used for socioeconomic inclusion. So we were speaking about the Unilever use case, where yes, they got efficiencies and cost reductions and better candidate experience, all those wonderful things, better accuracy, retention... But at the end of the day I think the thing that surprised them the most was just how incredible a socioeconomic-inclusion tool this was. It was unexpected to us and it was unexpected to them. And what did they do? They opened up their top of funnel: instead of going for the early careers program to just a handful of schools, they literally, in the U.S. alone, went to 2,500 schools. And because it's putting everyone on the same playing field, they then got a lot more people from the sort of schools they had never even thought about. And it was such a dramatic shift that they had to change their relocation policies, because people were accepting offers and then saying "thank you, but I don't actually have the resources to fly cross country and rent an apartment." And it was only at that moment that they really realised "Wow, our pool of candidates prior to this was pretty homogenous from a socioeconomic perspective." And if we want to really use technology to start to move the needle on socioeconomic inequality and socioeconomic inclusion, I think AI is the only way to go. Because again, I always say you can un-bias an algorithm, you cannot un-bias a human. It's been shown time and time again that unfortunately unconscious bias training doesn't work. So if you really want to expand the top of funnel, in a way that is only scalable through technology and not through human processes, and on top of that remove the bias, we have to look to AI that's created with these guardrails we've been talking about putting in place.

David Green: And you think of the business benefits of that, because a company like Unilever, which is one of the biggest consumer companies in the world, probably finds that their workforce ends up becoming more representative of their customers.

Frida Polli: Absolutely. And I think that's what's exciting to them about it. And I think that when you hear Leena Nair speak about it, that's exactly her thesis: look, we don't want our company to look completely different from the consumers that we're trying to serve. And I think it's the same with many companies at this point. Whether you're a consumer brand or not, I think people realise the value of having that diversity that represents the people that you are selling to, or buying your products, or whatever. And I think it's a very common theme that we see.

David Green: Yeah, and I think Leena is big on "we need to be more human in the digital world," and that is a good example of how that actually is effective.

Frida Polli: Yeah, absolutely. And I think the other thing that always is a misconception is that candidates don't like it. We hear this all the time. And I have two responses to that. Having transitioned out of academia through an MBA program, where I was part of that recruiting process on the job-seeking side, one of my responses is "you clearly haven't been through a typical recruiting process anytime in the last decade, because let me tell you, it's pretty terrible." And again, that's at Harvard; we're over-served, no one should feel too sorry for us, but it's not a fun experience. And if you take Unilever, they had their highest candidate rating ever after deploying this digital process. And why was that? Let's just think about the reasons why that was.

So one is that everyone got feedback. We give people feedback immediately, not like good or bad, just "Hey, this is you as a person." So you learn something about yourself. The second thing is it's quick. I mean, it's called a resume black hole for a reason, because you put your resume somewhere and then, poof, it's gone. Versus here, at least you're hearing back. And I think people wanna hear back; they don't wanna be strung along forever. So that's part number two. And then the third thing that pymetrics does specifically, and we're fairly unique in this, is we rematch people. I think your colleague Adam Grant called us the Harry Potter sorting hat, and that's pretty accurate. So if you're not a fit for the role you initially applied for, we can actually help you rematch to another role at that company. And then if you get taken out of that company's process altogether, we can actually rematch you across other companies that are using pymetrics.

So at the end of the day, you are much more likely to get a job using this AI-based system than you are otherwise. And I had a gentleman ask me recently, "Wow, if we're using these games and my competitor's taking people out to dinner, won't we look bad?" And I had two thoughts that I shared with him. I said, first of all, you can use games and also take people out to dinner; they're not mutually exclusive. We're not deciding anything for you, we're just helping you make that decision. And at the end of the day, a job seeker doesn't want a dinner, they want a job. It's like when you're single: you don't want to go on endless dates where people are taking you out to dinner, you want to find your mate. And so I think that's lost on people, that anything you can do to increase the likelihood that someone is going to match with their right role is the winning formula.

David Green: So it's almost like widening the funnel but improving the shortlist.

Frida Polli: Absolutely, and that works for both sides. Because we're shortlisting people for the company for specific roles, and then we're also telling candidates, we're putting them into that shortlist pool for things they may not have ever considered, which is another... Again, it's a discovery process on both sides, I think.

David Green: Great, so pymetrics is doing very well. And obviously you had your series B funding last September. I won't mention the number, but it's out there in the public domain. Where are you investing next?

Frida Polli: Sure. Well, I think the biggest place that we can make an investment is really in technology and the product. I think that behavioural science and artificial intelligence are really just in their infancy in terms of what they can do to help this matching problem. So that's where we're really doubling down and investing. And everything that we've done at pymetrics is homegrown. We've developed all of the technology ourselves, and we will continue to do that. Again, our aim is to be the most ubiquitous talent matching platform on the planet, a lofty aim, I know, but to do that I think you really have to continually hone in on: what are the best signals? Are they valid? Are they predictive? Are they fair? Do they comply with legal standards? And so on. It's just a constant evolution of the product.

David Green: And moving on to you and your role. It's a fast-growing company. HR technology is a pretty dynamic space, lots of competitors around. What are some of the challenges involved in running a company in this space?

Frida Polli: I mean, there's plenty of challenges, and there's plenty of really enjoyable moments. As a female and a mom, I've had two kids while running pymetrics, and I actually started pymetrics when I was a single parent. So balancing child rearing and running a company is always interesting. I mean, we were just talking about how I have a six-week-old right now, and when you're travelling, and we met in London a couple of days ago, so I had to fly from New York to London, and I'm still breastfeeding, you have to hide under a blanket while you're basically producing food for your little human at home, and storing it in bottles and doing all sorts of crazy things with it. So it's definitely a challenge. I'll give you another kind of funny example. I was in need of becoming a human food-production system on the plane, and I went to the stewardess and said, what should I do? And she's like, oh, you know, you can go to the bathroom. And I felt so bad, because I was literally occupying this airplane bathroom for a good half hour while all these people were probably like, "What is this person doing in there? Doesn't she know there's a line?" But it's definitely a challenge to balance all of these different moving parts. And again, I'm not complaining, 'cause I'm very fortunate to be a woman who's running a technology company, and I hope that it shows other women who are thinking "Can I balance this? Can I juggle this?" that you can do it. Although it does take a village, as they say. It takes a small village of people, both at work and at home, that can really help you balance all these things out. But yeah, I think that right now that's probably the challenge that I'm most faced with.

David Green: I can imagine. And so your co-founder is also a woman, and there's quite a lot of stuff out there about the challenges for female entrepreneurs to get investment. How have you found that?

Frida Polli: You know, people always ask me that, and I think it's a hard question to answer, because there have definitely been times that I strongly suspect investment didn't come our way because of my gender. I'll give you a perfect example from our last series B, where we ended up raising 40 million, so we were very excited with the outcome. We got a rejection letter from a VC firm that went something along the lines of: "You are amazing, your team is amazing, the growth of the company is amazing, the financials are spectacular, the market, but there's just something that we couldn't put our finger on that led us not to invest." And I was like, "Oh, let me give you a multiple choice: is it that I'm female, is it that I'm female?" And again, you don't know, maybe it was something else. You can never really put your finger on it. But obviously when you get things like that you start to think, oh, maybe that's what it was.

Other times... I'll never forget somebody said, oh, you know, Frida doesn't seem like... she didn't exude that confidence that we expect to see in entrepreneurs. And anyone that knew me, their jaw dropped, and they said, really? Have you spoken to Frida? So maybe it's just a different form of... I don't know, maybe a man goes in there and acts differently. So I do think that if you look at any of the research, MIT put out this study where men got funded with the exact same pitch at twice the rate women did, and so on, and there's lots of research to back it up. But in any particular instance, I have never been able to say, oh my goodness, I definitely know that it was because of that. And quite frankly, I think we've been very fortunate. I mean, I'm fortunate, and obviously we had a great product, and I also think I benefited from having domain expertise, because I had a whole career before this as an academic that I could lean on and say, look, I kinda know what I'm doing here. So I think all of those things played into it.

David Green: We're coming towards the end of the discussion unfortunately 'cause I think we could continue this for quite a while.

Frida Polli: In Paris next time...

David Green: In Paris next time...

Frida Polli: ...Hong Kong

David Green: Yeah, Hong Kong, why not. What do you see as the next evolution in the space?

Frida Polli: Yeah, I agree with you that we are kind of at the early stages of AI being adopted, so it's hard for me to really pinpoint what I think is the next thing that's gonna happen, especially because I think there is a lot of regulation in the space that we also have to take into consideration when we're building all these platforms. At least in the U.S., recent laws have come out around all sorts of things, like making sure that the candidate knows they're going through an assessment. There was a recent one making sure that if you're using a chatbot, people know that they're speaking to one. So it'll be a very interesting journey that we're all on, to see how this technology co-evolves with regulation, which is also evolving, and I think that regulation has a role. I personally don't think we should move to the China system, with social scoring and all of that, it is a bit frightening, and then obviously the opposite end of the spectrum is Europe and GDPR, and I think the U.S. is kinda somewhere in the muddy middle. So I don't know what the next evolution is going to be, because I think it's a little bit like the weather, the butterfly effect: there are just too many variables, and my limited mind doesn't have a clear picture of exactly where that evolution is gonna go. However, I do strongly, strongly believe that artificial intelligence will play a huge role, certainly with this talent matching problem. I think we're gonna be a big part of that, and I think there will be other platforms that will also play a huge role. And I think that the evolution of the space will hopefully be towards removing a lot of the inaccuracies, lack of scalability and biases that we see in the current process. That would be my hope.

David Green: And hopefully enabling things like social mobility.

Frida Polli: Absolutely, I mean that is a bias, right? And it's a bias we don't actually speak about a lot. Because yes, if you go to Harvard you tend to have better grades and all the rest of it; however, you also often tend to come from a much more socioeconomically privileged background. And in the U.S. there are 18 million college students; 0.4% go to elite universities, and yet those elite universities are the ones that get the vast preponderance of interest from employers. And a lot of people don't go to those elite universities because they can't afford to, because they didn't grow up in a situation where they had access to as good schooling, and so on and so forth.

And so if we can't use AI to correct some of those social inequalities that exist, then I think we're gonna just perpetuate and double down on them. And I think that it is a bias in our system that we have now, whereby we see certain markers of pedigree, whether it's the school you went to or a company you worked at, and unfortunately those often track socioeconomic status. So I don't think that there is a way for us to really get around that unless we start using some of these platforms that allow us to really remove some of those preconceived notions.

David Green: And I wonder if, in a few years' time, people will actually see things like GDPR as a good thing. I imagine it's a hell of a lot of hassle for a tech company and an organisation, but what it is forcing people to do is be a little bit more transparent.

Frida Polli: Absolutely.

David Green: And as you said, let people know they're actually going through an assessment. Let people know they're talking to a chatbot.

Frida Polli: Yeah, exactly. And I think... I heard Stephane, the CEO of Upwork, had a great quote. He asked: would you rather have a self-driving car with an explainable algorithm but a higher fatality rate, or one with a black box algorithm and a zero fatality rate? So it was interesting... He posed a great question, right?

David Green: That is a great question yeah...

Frida Polli: However, unlike self-driving cars, HR algorithms hopefully aren't killing anyone. There are no fatalities involved; hopefully we can say that with confidence. And so I do think that in cases like that, explainability matters, especially if we're concerned about things like bias, because again, the bias issue is hopefully less relevant to a self-driving car. When we are worried about things like bias, explainability becomes more important. And I'd agree with you on that, and on even just active data collection: letting somebody know that you're actively collecting data, as opposed to just passively scraping all this stuff and then, poof, you've built an algorithm and they don't even know it.

So I do think that it depends on the use case; you can't just make blanket statements. And so I agree that GDPR has been helpful in that capacity.

David Green: Okay, moving on to our last question, and it's kind of related to the last one, I suspect. This is something we ask all our guests on the show: what do you think the role of HR will be in 2025?

Frida Polli: I think it's finally going to be the strategic function that it really should be. I mean, HR should lead, not serve. That's kind of the way that I think, and if you think about an organisation, everybody believes that talent is their greatest asset, and yet the HR function has been completely divorced from that. I've heard CHROs describe it as order takers, right? And I agree to some extent that that has historically been the role, but I think they haven't been given the types of tools that really allow them to become strategic. So that's what I would hope to leave any listener or viewer with: the idea that look, all of this AI that's coming on board, yes, it's a little bit scary, it's a little bit frightening, we need to educate ourselves, but at the end of the day, what I firmly believe, and we've seen this over and over again in other verticals, is that it's going to allow you to stop doing all this manual busywork and actually become more of a strategic thinker. And my goodness, wouldn't anyone want that?

And that's what we see at all the companies that we work with: we don't see massive displacement of recruiters. We see them having their functions elevated to something that is more strategic to the company.

David Green: What a great way to end it. Frida thank you so very much for being a guest on the show. How can people stay in touch with you?

Frida Polli: Sure, it's easy: www.pymetrics.com, and I'm just Frida, F-R-I-D-A, it's like Friday without the Y, at pymetrics.com.

David Green: And do you do much on social media at all?

Frida Polli: You can find me on LinkedIn, you can find me on Twitter. I'm just Frida Polli: Friday without the Y, and then P-O-L-L-I. It means chickens in Italian. So, for those of you that want something to remember me by.

David Green: Friday chickens.

Frida Polli: Friday chickens. You can remember that.

David Green: That's right! Frida, thank you very much.

Frida Polli: Thank you, David.