myHRfuture

Episode 179: How to Overcome AI Adoption Challenges in HR (Interview with Eric Siegel)

Despite an abundance of research highlighting the importance of AI in enhancing HR processes, the vast majority of organisations are still at the earliest stages of adoption.

Even among those that have already implemented AI in HR, many are using it only for one or two niche applications. This is largely due to the perceived complexity and high costs associated with implementing AI technologies.

To tackle this issue, host David Green is joined by Eric Siegel, founder of Machine Learning Week and author of 'The AI Playbook: Mastering the Rare Art of Machine Learning Deployment', to discuss how organisations can overcome these challenges and effectively integrate AI into their HR functions. In this episode, listeners can expect to learn more about:

  • The current state of AI adoption in HR and the barriers to widespread implementation;

  • The potential benefits of using AI in HR processes, including improved efficiency and decision-making;

  • Practical advice for organisations looking to adopt AI in their HR strategies;

  • How HR professionals can prepare for the impact of AI on their roles and responsibilities;

  • The role of ethics in AI adoption and how organisations can ensure responsible and ethical use.

You can also expect to hear about real-world examples of successful AI adoption in HR, as well as insights on the future of AI and its potential impact on the evolution of the HR industry. So tune in to this episode to learn more about how AI can revolutionise HR and what steps your organisation can take to embrace this powerful technology.

Support for this podcast comes from ScreenCloud, the digital signage platform that helps HR teams around the globe elevate their digital employee experience, with 'screens that communicate'.

To learn how ScreenCloud can enable your organisation to increase employee engagement, drive productivity, and improve compliance, visit screencloud.com

[0:00:00] David Green: One of the highlights from BCG's recently published Creating People Advantage 2023 report was that while AI is gaining significant traction among people managers, the vast majority of organisations are still at the earlier stages of adoption.  Incorporating AI, analytics, and machine learning into our work is a huge opportunity for HR to elevate its impact as a strategic business partner, as well as transform and personalise the HR programmes we deliver.  Research by Mercer finds that 58% of employers plan to use generative AI in HR by June 2024, so this is a topic that is receiving a lot of investment and a lot of attention.  As such, it is important for HR professionals and leaders to understand the potential opportunities and challenges of AI and machine learning in relation to our work. 

That's why I'm delighted to have Eric Siegel, founder of Machine Learning Week and author of a new book, The AI Playbook: Mastering the Rare Art of Machine Learning Deployment, as our guest today.  Eric and I will discuss how to successfully deploy machine learning in organisations through the six-step model called bizML that is outlined in Eric's book, and how this can be applied in HR.  We'll also discuss some of the key opportunities for AI and ML for HR, guidance on overcoming common challenges, and talk about some of the ethical considerations associated with these technologies.  So, without further ado, let's get the conversation with Eric Siegel underway. 

Today I'm delighted to welcome Eric Siegel to the Digital HR Leaders podcast.  Eric, welcome to the show and congratulations on the publication of your book, which launches today, the same date, 6 February, that this episode goes live.  The book is called, for listeners, The AI Playbook: Mastering the Rare Art of Machine Learning Deployment.  Please could you start, Eric, by kindly providing a brief overview of yourself, your professional background, and also why you wrote the book?

[0:02:19] Eric Siegel: Great, thank you so much, David, for having me on the podcast.  Yeah, I'm really excited about the book because I think it addresses a problem that's been bothering me for about 30, well, at least 20 years.  So, I'm a former academic, I was a professor at Columbia University, I taught the graduate courses in AI and machine learning.  But that was kind of a lifetime ago, and I've actually been an independent consultant, providing services to deploy machine learning for business applications mostly, for 20 years, and I'm the founder, as part of my consulting practice, of a long-standing conference series called Machine Learning Week.  And that conference series focuses on the commercial deployment, the actual successful deployment, as it says in the title of the book, "deployment" of this technology, which although it's critical and it's the culmination of every machine learning project, it's often the step that's missing.  It's often that last inch, that last mile, you don't get there, and it's really becoming clear that this is a somewhat endemic problem across corporations in this industry. 

Now there's plenty of amazing successes, but the majority of new machine learning projects actually fail to deploy, and this comes from a bunch of industry research.  IBM recently came out with research showing that, according to executives, the average return on AI projects is lower than the cost of capital.  So, basically no returns, and I believe that's largely because they failed to deploy.  In industry research I've been involved in, where we surveyed data scientists from the technology side, they tell the same story.  So, they're commissioned to generate or develop a predictive model to be deployed, learn from data in order to predict, and in the deployment, you're using those predictions to improve all the large-scale operations, which could be who to hire, or which employee may be at risk of quitting and therefore warrants some retention offer, very much analogous to marketing where you try to retain customers or acquire new customers. 

That's the deployment piece where, after the number crunching, you've got this predictive model that can now output probabilities, that is, predictions, about each individual case as far as whether they're going to click, buy, lie, or die, commit an act of fraud, quit as an employee, any outcome or behaviour for which it may be valuable to predict.  So, what I do in my book, The AI Playbook, is I present a standardised business practice, called bizML, a business practice for running machine learning projects, that's been designed and intended for wide adoption.  And in fact, that's why I worked so hard on this new buzzword.  I've carefully constructed this buzzword with hopes that it will catch on, because what the industry needs is something to hang their hat on, to say two things.  One is, "Hey, many new data scientists haven't quite caught on to it yet, because the education process is just about the core technology, the number crunching".  And in general, business professionals don't know that there needs to be a customised, specialised business practice, let alone know about any one in particular, and there's none that's caught on in the business world. 

So, that's what I'm doing with bizML.  The book covers that six-step practice so that you actually successfully get a viable predictive model integrated into your operations, such as how to filter resumes for job applications, or how to look at your 300,000 employees and decide which ones are most likely to quit over the next three months.  Whatever it is, you need to make use of that discovery, those patterns, the mathematical formula in the model that's been learned from data; make use of it to actually improve some large-scale operation, so that there's a return on investment with these projects.

[0:06:05] David Green: So, most of our audience, Eric, are HR professionals and leaders and it's important for all of us in the field to be aware of the organisational opportunities for AI and ML, as well as what it means for HR programmes and professionals as well.  As such, what would you say are the key opportunities of AI and ML for organisations, and then perhaps specifically for HR as well?

[0:06:27] Eric Siegel: Well, there are basically two prongs of opportunity for HR.  The first is for the organisation as a whole in that, look, machine learning is the most important general-purpose technology.  It's so widely applicable for all large-scale operations, or virtually all of them, which basically break down into many kinds of micro-decisions, lots of individual decisions, "Should I interview this job candidate?  Should I audit this transaction as potentially fraudulent?", lots and lots of decisions.  Business is a numbers game, there's no getting away from that; can we tip the odds in our favour a little bit?  Well, the holy grail for improving decisions is prediction, and the only way to systematically apply science to that is to use machine learning.  When I say the only way, it's because that's basically the definition of machine learning.  If you want to apply science, that is, a data-driven method, to learn from historical cases and render these probabilities, then you're doing machine learning.  So, if you want to make use of this, it's so fantastic across the organisation, including HR functions. 

The other opportunity for HR, and to really become an important piece of a leading enterprise, is to upskill, to make sure that not just the data scientists, by any means, but the staff in general have a semi-technical understanding of what it means to run a machine learning project and to integrate it, to deploy it, what they're doing.  And that really, it's really quite simple, it just comes down to three things: what's predicted; what's done about it; and how well it's predicted.  This stuff is actually easier than high-school algebra at the core, to get what I'm referring to as a semi-technical understanding, and a heck of a lot more interesting because it's so pertinent; it speaks to what it means to improve everything we do as organisations, all the large-scale operations. 

[0:08:14] David Green: How do you think AI and machine learning will impact the way that these HR programmes are developed and delivered?

[0:08:20] Eric Siegel: Well, I think it's become pretty common for larger organisations to do things like predict employee retention.  In fact, in my first book, Predictive Analytics -- and Predictive Analytics, by the way, is basically a synonym for enterprise applications of machine learning -- I cover a case where HP generates what they call a flight risk score, the probability of an employee leaving their job, of self-terminating.  They render that calculation to calculate the flight risk score for all of their roughly 300,000 employees, and they presented that work at the 2009 edition of Machine Learning Week, which was previously called Predictive Analytics World.  And so I have some experience now in understanding which kinds of use cases and which kinds of organisations are willing to present publicly under their own name at a conference. 

There is a sense that HR applications along those lines tend to be a little quieter, though not the quietest across industry sectors and application areas.  And in fact, we even had Predictive Analytics World for Workforce, and you can see all the programmes.  If you go to the programme we had for the Machine Learning Week conference just last year, June 2023, and just look at the agenda and search for the word "Workforce", you'll see there are five or six use case presentations there as well.  So in general, it's not that people are that hush-hush, but there is a little bit of a tendency, because there's sort of a concern at least about the cosmetics, if not about ethics, because there is something to be said of, "Hey, look, if this thing makes a pretty confident calculation that you're really thinking about quitting your job and then it delivers that information to your direct supervisor, well, your boss might be the last person you want to know the information you've not yet disclosed to anybody".  So, a little bit of an ethical quandary there, but not the worst ethical quandary in the world. 

Not to mention that, in general again, these things are about putting probabilities.  For most cases, they don't have high confidence.  So it's not that it says, "Hey, look, for this particular strangely defined group, really high confidence, I know that this customer is going to buy, I know that this customer is going to commit fraud, I know this employee is going to quit their job".  Instead, you're putting probabilities, and probabilities help, right?  They're better than nothing; we don't have clairvoyance and we can't expect computers to be clairvoyant either, and that's the value proposition.  That's what it means to deploy machine learning for enterprises.  In general, it's not about high confidence.  It is in certain cases, "Is this a picture of a cat or a dog?  Is there a traffic light in this picture?"  But for most of these things where you're predicting the outcome, especially of human behaviour, whether they'll click, buy, lie or die, it's simply about predicting better than guessing, and putting odds that pay off over a large number of decisions much better, for example, than random guessing.
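The probability-first framing Eric describes here can be sketched in a few lines of Python. Everything below is illustrative: the feature names, the weights, and the employees are invented for the example, standing in for a model that would in practice be learned from historical data.

```python
import math

# Minimal sketch: turn a linear "flight risk" score into a probability and
# rank employees by it, acting on the top of the list rather than expecting
# certainty. Weights and feature names are illustrative assumptions, not a
# real model; in practice they would be learned from historical data.
WEIGHTS = {"tenure_years": -0.4, "months_since_raise": 0.15, "bias": -0.5}

def flight_risk(employee):
    """Logistic model: estimated P(quit in next 3 months) for one employee."""
    z = (WEIGHTS["bias"]
         + WEIGHTS["tenure_years"] * employee["tenure_years"]
         + WEIGHTS["months_since_raise"] * employee["months_since_raise"])
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid -> probability in (0, 1)

employees = [
    {"id": "A", "tenure_years": 8, "months_since_raise": 6},
    {"id": "B", "tenure_years": 1, "months_since_raise": 18},
    {"id": "C", "tenure_years": 3, "months_since_raise": 12},
]

# Rank by probability; retention offers go to the top of the list.
ranked = sorted(employees, key=flight_risk, reverse=True)
for e in ranked:
    print(e["id"], round(flight_risk(e), 3))
```

The point of the sketch is the shape of the output: no employee gets a yes/no verdict, just a probability, and the operational decision about who gets a retention offer comes from ranking those probabilities.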

[0:11:09] David Green: As AI and machine learning become more integral to business operations, and it's already happening, and obviously we're seeing it happening more and more, how do you see the role of HR evolving to support organisations through these changes?  You talked a little bit about upskilling.  Are there other areas where HR is going to have to support organisations through these changes? 

[0:12:24] Eric Siegel: Yeah, that's a great question.  I mean, I think upskilling is first and foremost the most important thing.  The thing that makes this technology successful is people, right, so that's sort of the irony.  To improve what's currently a relatively dismal track record of projects that actually lead to successful deployment, what's most needed is upskilling in people, not improvements in technology.  The core technology is there.  For most of these, there's a ceiling in exactly how well it can technically predict anyway; it's not going to be a magic crystal ball.  But the improvement of technology is always ongoing, and there's definitely a lot of infrastructure around how to deal with and prep the data, then use it moving forward, and get the model deployed into the existing operations.  There is a lot of technology there, but the business planning part of it is the main missing piece, and that's what I'm talking about with the six-step process; you could break it into eight steps or into five steps, people do it differently, and I've chosen six for a variety of reasons.  The important thing is an understanding that there needs to be a very particular, specialised practice.  So, I'm branding it as bizML and I'm putting that out there.  You need to do that in order to actually get the thing deployed.  So, the upskilling is crucial. 

The other thing that HR needs to be aware of is that there's a business role, with enough, at least, semi-technical understanding along the lines of what I'm describing, that can bridge that gap.  Right now, one way to conceive of the problem is that you have this gap between business and tech, right, between the quants, the data scientists, and their client, their stakeholder, the person who's running the operations that's meant to be improved with the predictions of a machine-learning model.  So, you've got that pair of people, and they're not speaking the same language and they're not getting involved at the level of detail that you need to plan in reverse exactly how this thing's going to deploy, exactly what's going to be predicted and exactly how operations are going to change according to those probabilities, because when people say prediction, they basically mean probabilities. 

So, that role could be key, but so long as people are taking the bull by the horns by following a practice like the one I'm putting out there, bizML, then they'll intrinsically be planning in detail from the beginning.  So, that HR upskilling is largely omitted in general.  There's no standardised curriculum, and that's the first thing that's going to really make a big difference.

[0:14:46] David Green: And two follow-ups there, Eric, we are going to come to the six steps if you want to talk about that in some detail, so two things.  Actually, I like that role you mentioned about the analytics liaison.  So, we see it in the people analytics work that we do; we just happen to badge them consultants, but effectively they've got that semi-technical understanding that they are the connection between the data scientists and the business, first of all identifying the problem that needs to be solved; secondly, in breaking that down to analytical questions or hypotheses that can be tested with analytics; and then third, in actually taking those findings, those insights back out, putting them in a language where action is going to be taken and then the deployment happens, I guess, as well. 

I've seen Tom Davenport and Tom Redman, there was an article that they published recently in MIT Sloan Management Review, they talked about these, called them connectors, but essentially it's people that can -- or translators, "translators" is another phrase that I see, but it's basically that link between the business and the technical data scientists and machine learning engineers, so that what we can achieve then is a common language.  And I guess that's a big need to get to that deployment level, isn't it? 

The second follow-up is really around upskilling.  In terms of upskilling, what are basically the skills that the business professionals need so we can be better at the deployment piece of this?

[0:16:12] Eric Siegel: That's a great question.  So, I love the term, "upskilling" because it sounds jazzy.  What I'm mostly alluding to though is know-how or knowledge, really.  The skills part comes after a couple of projects, actually putting this knowledge to use.  The knowledge is what I've been saying, what's predicted, what's done about it, and how well it's predicted, not necessarily in that order, but that concrete sense of what the project's going to do, in combination with also understanding what the six steps are, and I can step through those. 

Tom Davenport actually wrote the foreword to my first book, Predictive Analytics, and I had seen that article.  Yeah, "analytics translator" is another one of the terms being thrown around.  Absolutely critical to bridge that gap.

[0:16:54] David Green: So, let's get on to the six steps.  Obviously, we've talked about this, it's a core of the book really, isn't it?  You've got a chapter for each of the six steps.  But talk through the six steps for implementing machine learning in organisations, and walk us briefly through these steps and their significance, and I think we'll see how that resonates with HR and particularly people analytics leaders who are starting to bring some of these machine learning technologies into some of the work that they're doing as well.

[0:17:24] Eric Siegel: Sure.  So, the first three of the six steps are basically pre-production, and they involve those same three things I've been mentioning: what's predicted; how well; and what's done about it.  They're not quite in that order.  And then the next three are the actual production, where you're conducting the machine learning project.  And those three sort of always exist by definition of machine learning.  They've always been there since the 1960s, when people first started applying linear regression to target marketing and to do credit score risk assessment.  And those three are: prepare the data; train the model, that's the rocket science part, that's the analytics, that's the number crunching, so you're learning from that data and what you're learning is a predictive model, the patterns or formulas; and then deploy it, that's step six.

So, I culminate with step six, the way I formulate it.  Certainly after that, there's still monitoring and maintaining and managing.  Once you've actually launched a rocket, you've got to keep the astronauts up there alive, you've got to keep it working, right?  In the case of machine learning, you've launched it, now you've changed the way you're doing business and you've got to keep that new process going.

So, what metrics do apply?  Well, then it turns out there's a difference between technical metrics, which is how much better does this thing predict than guessing; that's called lift, and that can be really interesting, but it's only telling you the relative performance compared to a baseline like random guessing.  What you also need are the business metrics, the ROI, the profit, the number of dollars saved, the number of employees saved, the savings on cost of hiring new employees relative to the expense incurred to retain existing ones, which tends to be lower.  So, it's a good payoff, but you have to measure those.  So, translating what the model would deliver in those business terms, which are very concrete and any stakeholder can understand them, actually takes a little bit of tricky legwork.  They depend entirely on exactly the plan of how you're going to use the model, how operations are going to change.  You can't just say, "Hey this model is worth a million bucks".  It's only relative to exactly how you intend to use the model. 
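The difference Eric draws between technical and business metrics can be made concrete with a toy calculation. All the numbers below are illustrative assumptions (head counts, costs, success rates), not figures from the episode; the point is that the business value depends entirely on how the predictions are used.

```python
# Minimal sketch of the two kinds of metrics: a technical metric (lift) and
# a business metric (net savings). The cost figures are illustrative
# assumptions; in practice they come from the stakeholder, not the model.

# Suppose the model flags the top 10% of 1,000 employees, and of those 100
# flagged, 30 would actually have quit; the base rate of quitting is 10%.
flagged, quitters_in_flagged, base_rate = 100, 30, 0.10

precision = quitters_in_flagged / flagged        # 0.30
lift = precision / base_rate                     # 3x better than random

# Business translation: a retention offer costs $2,000; replacing a quitter
# costs $30,000; assume offers succeed for half the would-be quitters.
offer_cost, replacement_cost, save_rate = 2_000, 30_000, 0.5

spend = flagged * offer_cost
saved = quitters_in_flagged * save_rate * replacement_cost
net = saved - spend
print(f"lift={lift:.1f}x, spend=${spend:,}, saved=${saved:,.0f}, net=${net:,.0f}")
```

Note that the same model, with the same lift, would yield a different net figure under different offer costs or save rates, which is exactly why the business metrics have to be worked out with the stakeholder rather than read off the model.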

This is something industry hasn't quite wrapped their head around.  Data scientists are not trained to translate from technical to business metrics.  This leaves stakeholders in the dark, and that's a big piece of why they're ultimately going to get cold feet and be unwilling to change whatever this very critical, large-scale operation for the company is.  You know, sometimes it becomes a lot easier to kill the project and sweep the failure under the rug.  So, that's sort of the phenomenology we're seeing a lot of right now.

[0:19:59] David Green: So, those first three steps essentially are, "Let's define the problem properly that we're trying to solve", essentially, and obviously that involves not just a bunch of data scientists going off and doing that; that involves working with business stakeholders to properly define the problem.  The second one is, "How do we intend to tackle the problem and what are we trying to do?"  The third one is, "How are we going to measure success?  So, what are the business metrics and what are we going to do?"  And that will help us understand whether it's even worthwhile doing, isn't it?  We've got the problem, we know what we're trying to do, we know how we're going to measure success, and that then leads to the second three steps that you talked about in terms of preparing the data, building the model, and deploying it. 

As you said afterwards, obviously you want to check the efficacy of the model, because you're possibly going to have to update the model moving forward, and also measure the success and go back to say, "Okay, well, we've deployed it now.  Is it actually meeting the metrics that we defined in the third step?"

[0:20:59] Eric Siegel: Exactly.  Step three is the metrics: establishing how you define performance and what your goals are, a quantitative offline process; and at the same time, you're doing an evaluation and assessment of how good the model is, how well it predicts and what kind of value it could deliver, depending on the deployment.  And then after you deploy, yes indeed, you're continuing to monitor moving forward, and the same metrics tend to apply.  Now that step five, that's the core rocket science, that's where you're learning from data.  That's the exciting technology that we're all trying to leverage.  And as with most data scientists, given my origin, that's all I used to care about from the beginning.  I was like, "Wow, learning from data", it's like a nerd's dream, right?  Learning from data to ascertain patterns or discoveries that are true, that tend to hold over new, unseen cases never before witnessed, not included in the training data, that's really cool science.  And yes, lo and behold, it is potentially valuable to be able to predict.  Not just cool, but actually valuable, if you can get it to deploy. 

But because that technology is so cool and the data scientists get so excited about it, again this applied to me in early years for sure, and I think it applies to most data scientists, they just want to load the data and make a model.  But you can't just skip over those pre-production steps, because they define exactly what data you need by way of defining exactly what you need to predict, which is determined by exactly how you intend to use the prediction.  So, you're working backwards in both directions, right?  You have to go through the pains of this organisational process if your use of this amazing rocket science is actually going to turn out to be valuable.
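The property Eric highlights, patterns that hold on cases never seen during training, is exactly what a holdout evaluation checks. Here is a deliberately tiny sketch with synthetic data and a one-parameter "model" (a single threshold); a real project would use an actual learning algorithm, but the train-versus-holdout logic is the same.

```python
import random

# Synthetic data: feature x in [0, 1); outcome is 1 when x > 0.6,
# plus ~10% noise so the pattern is real but not perfect.
random.seed(42)
data = []
for _ in range(400):
    x = random.random()
    y = int(x > 0.6 or random.random() < 0.1)
    data.append((x, y))

# Split into training cases and held-out cases the "model" never sees.
random.shuffle(data)
train, test = data[:200], data[200:]

# "Training": choose the threshold that best separates outcomes on train.
best_t = max((t / 100 for t in range(100)),
             key=lambda t: sum((x > t) == bool(y) for x, y in train))

# The real test: does the learned pattern hold on unseen cases?
holdout_acc = sum((x > best_t) == bool(y) for x, y in test) / len(test)
print(round(best_t, 2), round(holdout_acc, 3))
```

High accuracy on the held-out half indicates the learned pattern generalises; a large gap between training and holdout accuracy would be the classic warning sign of overfitting.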

[0:22:45] David Green: In the book, Eric, you also talk about the innovator's paradox.  Can you explain this concept and how it applies to machine learning projects? 

[0:23:44] Eric Siegel: Right, so the innovator's paradox is that, "The more novel or radical an idea, the greater the struggle to gain support for it".  So, if you have a really potentially valuable idea, which is going to mean overhauling a major established operational process, the higher the stakes are, the better the potential win; at the same time, the higher the resistance you may face, because this is a change management issue.  And that's one of the main messages or themes of my book: you need to look at these projects as process improvement projects, which means you need to deal with change management.  That may sound super-obvious, and change management is a well-established discipline, but here's the thing; people aren't generally applying change management techniques for machine learning, and that's why they're failing, because they're so focused on the core technology, like, "This is the best technology, surely it's going to be valuable". 

But again, the purpose of the project isn't to do awesome, cool number-crunching, as much as I wish it were; it's to make use of that to actually improve an operation, which means change.  No matter how good the technology is and how well the model predicts, you need to then actually implement those predictions so that operations change.  So, for example, I tell a story in the book about UPS, and Jack Levis at UPS was trying to get them to improve 16 million package deliveries a day.  And the way he did it was by predicting where there would be deliveries tomorrow in order to make better, more optimal plans for loading the trucks, deciding which package gets assigned to which truck.  So, making these changes to the practice met a lot of resistance.  And resistance kind of makes sense, right?  I mean, when things start to work, then you very quickly need to ensure that you don't break them.  So, with large-scale operations, with existing enterprises, there's intrinsically resistance to change.  So, if your idea is good enough, you have to push quite hard, right?  The better the opportunity, the harder the battle that you may have to fight.

[0:25:54] David Green: So, moving on, we talked a little bit about ethics earlier, and obviously it's one of the big concerns; we hear a lot about initiatives around responsible AI and everything else.  We need that, and we see more legislation coming in or being talked about, whether that's in individual states in the US -- in New York, for example, around the use of AI in the hiring process -- and we're seeing, obviously, the EU bringing in regulation around AI as well.  What would you say are the ethical considerations that HR leaders, and frankly business leaders as well, should keep in mind when implementing AI and ML in people analytics, but in other areas as well?

[0:26:34] Eric Siegel: Well, these models can drive very consequential decisions as far as whether you get a job, whether you keep the job, whether you're approved for a loan, all these consequential decisions that really matter and affect individuals.  The ethical questions really start to arise when...  So, the fact is, the thing's going to make wrong choices.  It's going to suggest you not hire somebody who really should be hired; it's going to suggest you keep someone in prison longer who really is not going to commit crime again after release, but the model is wrong.  The overall idea is that, even for consequential decisions, it's very analogous to self-driving.  A machine's going to be better; there are going to be fewer errors than with humans in general.  That's the idea, that's the direction we're hopefully heading.  And when we do get to self-driving, which by the way will take a few decades at a large scale, if you put the hype aside; but when we get there, it'll save a lot of lives.  And in the same way, injustice could very much be fought. 

But right now, what we're seeing is that these unjust or unfair or detrimental decisions that are sometimes made, hopefully less often with maths than with a human, become a problem when they're made more for one protected group, for example one race or ethnicity, than another, and that's called machine bias.  So, it's one thing if a model is explicitly using a protected class, where based on your race or ethnicity, if you're part of a minority class, it could penalise you.  I'm very much against that and, by the way, I've written a bunch of op-eds on these very topics that try to spell it out and make it clear to all readers.  I've published in the Scientific American blog and San Francisco Chronicle and other places; you can see all my op-eds if you go to civilrightsdata.com.

In any case, that sort of explicit prejudice done by a model is a rarity, although it's still, unfortunately, an open debate.  Machine bias, on the other hand, where the wrong decisions fall more on one group, I believe should be rectified with something analogous to affirmative action, because the only way to balance...  So, the good news is that this analysis puts the problem and the state of the world on the table in very concrete, quantitative terms that you don't generally get when all the decisions are being left up to humans.  And it provides potentially the opportunity to adjust for them, to compensate for these problems, because we've got the maths and we've got a system where we've deployed the model.  We're actually using these predictions to inform or drive decisions.  So, we have a system where there's an opportunity to make adjustments that counter those injustices.  I wouldn't say that's done so much, and the only way ultimately to do that is to reintroduce the protected class, for example race, and then have it intentionally adjust so that you balance.  So, needless to say, it's not going to be an easy ethical question to resolve, but at least now we kind of have it on the table. 

As far as the first one though, where it's very clearly discriminatory and it's explicitly making decisions based on a protected class, there's a famous HR example where Amazon had a model, and you've probably heard of this one, and I imagine a lot of the listeners have, and it turned out that this resume-sorting model would involve gender.  It was meant to be gender-blind, but it found ways to ascertain the gender and was using them.  I would not call that machine bias in the sense of using a proxy; I would say it sort of found a way to reintroduce explicit gender.  And again, the fact that it made that discovery is not a surprise, that's what number crunching does, and I don't think Amazon was actually acting on those predictions.  So, it's just a nice example of, "Hey, look, this model made discoveries we don't want it to make.  We don't want it to drive decisions based explicitly, even in part, on gender.  Now, we noticed that.  Oh, we didn't think of the women's volleyball team and what have you", and now you can change that.

[0:30:47] David Green: Given the rapid advancements that's happening in machine learning and in AI generally, how can HR leaders, and again frankly business leaders, ensure that they are up to date with the latest trends to really successfully deploy these types of technologies in their organisations?

[0:31:05] Eric Siegel: Yeah, that's a great question, and I think the narrative needs to change.  So, when people say trends, they're usually talking about what's the best technology or what's the best analytical method.  But again, the message that I'm trying to bring to the world here is, "Look, the technology and the analytical methods are awesome.  That's where I'm coming from originally.  But now we need to look at this with business acumen".  So, the trend that is definitely coming, as people get disillusioned with the lack of returns on these projects, is, "Hey, let's put a business practice in place and make sure that we're planning, from a business perspective, exactly how this technology..."  This is not plug-and-play technology.  You can't just plug it in like, "Hey, I've got a better database solution and that's going to make my whole system run more quickly, so I can throw that requirement over the wall and my engineers can just basically take this new solution and integrate it".  No, this is a change to the way business is fundamentally conducted.  It's a change to operational decisions on a daily basis, hundreds of thousands, maybe millions, for many of these use cases.

[0:32:17] David Green: Yeah, and last question, Eric.  This is one we're asking everyone on this series of the Digital HR Leaders podcast.  And we've covered some of it already, but again, maybe it's an opportunity to summarise it.  How will AI, and I'm going to add machine learning to that as well, how will AI and machine learning transform the role of HR?

[0:32:36] Eric Siegel: I think that's a great question.  At least at large organisations, running all these major processes by the numbers is just going to continually become more of a trend.  And ultimately, the most actionable form of analytics is machine learning, because that's where you're rendering predictions on a per-case basis, so, "Which benefits package would this employee benefit from the most?"

[0:33:02] David Green: Great.  Eric, I've really enjoyed the conversation and so glad that we've been able to release this episode on the same day that the book is published.  Before we end, can you let listeners know how they can find you on social media, find out more about your work and find out more obviously about the book?

Eric Siegel: Sure, well, all of the above comes from the book's website, which is bizml.com.  So again, the name of the titular playbook of The AI Playbook, the book, is bizML; that's the six-step practice.  So, bizml.com gets you to the book's page, and then there's an author page, an about page, and you'll see my LinkedIn link at the bottom there.  So, any and all information about me and my work in general, you'll find one way or another by going to bizml.com. 

[0:33:48] David Green: Perfect, that's very simple for people and we'll put that in the show notes as well.  Eric, thank you very much for being a guest on the Digital HR Leaders podcast and helping us demystify machine learning, and certainly how we can get more to the deployment level and start really getting the value from it.  Thank you very much and I wish you well with the book.

[0:34:09] Eric Siegel: Thanks, David.  My pleasure.