Blog
Conversational AI
2 min read
April 15, 2024

Our CEO chatted with Josh Bersin about generative AI in TA — here's how it makes chatbots more sophisticated.

Josh Bersin recently sat down with Paradox CEO Adam Godson to talk about new AI developments, what sets Paradox apart, and more.


Industry analyst Josh Bersin has seen his fair share of HR technology. But rarely has there been a development that “radically changes the way we think about recruiting.”

What seems like hyperbole is merely the reality of 2024: AI changes everything. To make sense of things, Bersin recently sat down with Paradox CEO Adam Godson to talk about new AI developments, what sets Paradox apart, and more.

Listen to the full conversation here:

The following is a short snippet:

Josh Bersin: In the beginning, we kind of had AI, but it was not the AI we know today. So what are we working with today?

Adam Godson: In 2016 there was this early era of chatbots that used natural language processing, which, when you look at it, is a subset of AI. But today you wouldn't really recognize that as intelligent in many ways: it was sort of trained models that mimic human conversation. A lot of the time when you'd use one in customer service, you'd get a “does not compute” or “I can't answer that.”

The way the world has evolved over the last several years with technology, and particularly large language models, has really begun to change how conversational AI is done.

One of the core principles for us, though, is self-determination from companies. Some companies are very comfortable throwing the anchor forward and saying, "Let's use large language models and let's use generative AI to make all sorts of interesting advances." Many other companies have lots of lawyers and security or privacy folks that are going to want to run things through the wringer. 

And we welcome all types in that realm. Some companies are going to focus on really deterministic models: “We want it to always say this.” Others are going to use different methods that are able to generate answers.

JB: Right, the original model was more deterministic. It was a whole bunch of tables: If they ask this, then answer this. 

The newer one is actually generating answers from content?

AG: Yeah, we use a technique called retrieval-augmented generation to give structure to responses. To say: This is the answer, and if we have the answer, we will give that answer. And then lots of flourishes around that as well.
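
To make that concrete, here is a minimal sketch of the retrieval-augmented generation pattern Godson describes: retrieve a grounded passage first, and only answer when one exists. The knowledge base, keyword-overlap scoring, and fallback message are illustrative assumptions, not Paradox's implementation.

```python
# Illustrative RAG-style flow: retrieve a grounded passage, and only answer when one
# exists. The knowledge base, scoring, and fallback text are hypothetical stand-ins;
# a production system would use an embedding index and an LLM for the generation step.

KNOWLEDGE_BASE = {
    "time_off": "Full-time associates accrue 15 days of paid time off per year.",
    "insurance": "Medical, dental, and vision coverage begins on day one.",
    "schedule": "Most warehouse shifts run in four-day, ten-hour blocks.",
}


def retrieve(question: str, min_overlap: int = 2) -> str | None:
    """Return the best-matching passage, or None if nothing clears the threshold."""
    words = set(question.lower().split())
    best_passage, best_score = None, 0
    for passage in KNOWLEDGE_BASE.values():
        score = len(words & set(passage.lower().split()))
        if score > best_score:
            best_passage, best_score = passage, score
    return best_passage if best_score >= min_overlap else None


def answer(question: str) -> str:
    passage = retrieve(question)
    if passage is None:
        # No grounded answer: hand off rather than letting the model improvise.
        return "I'm not sure about that one, so let me connect you with a recruiter."
    # In a real pipeline the retrieved passage would be fed to an LLM as context;
    # a simple template stands in for that generation step here.
    return f"{passage} Is there anything else I can help with?"


print(answer("How much paid time off do I get?"))
print(answer("What's the dress code?"))
```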

So we’re always thinking about the context of the conversation: Is there a point to insert empathy? Can I use an emoji? 

Emojis are actually really hard because if you get them wrong, you get them really wrong. And so the quality has to be really high there. But it's an important thing for us to do as well because those add the feel to the conversation. And we measure our conversation quality in lots of ways.

Another way we measure that is whether people say “thank you.” You wouldn't say thank you if, subconsciously, you felt it was an automated conversation. The feeling we want to produce is being consciously aware you’re having an automated conversation, but subconsciously feeling like it's a real person.

JB: Given the fact that LLMs are sort of a commodity already — it’s very easy to get your hands on one, and you can ask it questions and it will answer reasonably well — what makes Paradox different from me taking all my recruiting docs, sticking them into ChatGPT and asking it a bunch of questions?

AG: It’s all about context, specialization, and the pre-training of our models. We use open-source large language models, and we put significant effort into pre-training them. We've been pre-training our models with tens of millions of conversations we've had over the last seven years, all in the context of recruiting, so we can use that data to form conversations specific to this context.

If someone asks, "What are the benefits?" we know they're probably talking about employment benefits related to a job, and we can break that down into 30 different subsections of follow-up questions, like, "Are you talking about insurance benefits or time-off benefits?" Whereas if you just ask ChatGPT that string of words, it may mean any number of things. So the pre-training of that model matters a lot.
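
As a rough illustration of that disambiguation step, the snippet below maps an ambiguous benefits question onto a handful of recruiting-specific sub-intents and asks a clarifying follow-up when the question is too broad. The intent names and cue words are assumptions made for the example, not Paradox's actual taxonomy.

```python
# Hypothetical sketch of domain-aware disambiguation: a vague question maps onto a
# small set of recruiting-specific intents, and the assistant asks a clarifying
# follow-up rather than guessing. Intent names and cue words are illustrative only.

BENEFIT_INTENTS = {
    "insurance": ["medical", "dental", "vision", "health"],
    "time off": ["pto", "vacation", "holiday", "sick"],
    "retirement": ["401k", "401(k)", "pension", "match"],
}


def route_benefits_question(question: str) -> str:
    q = question.lower()
    matches = [intent for intent, cues in BENEFIT_INTENTS.items()
               if any(cue in q for cue in cues)]
    if len(matches) == 1:
        return f"Routing to the '{matches[0]}' answer."
    # Ambiguous or too broad: narrow it down the way a recruiter would.
    options = " or ".join(BENEFIT_INTENTS)
    return f"Happy to help! Are you asking about {options} benefits?"


print(route_benefits_question("What are the benefits?"))
print(route_benefits_question("Do you offer dental and vision coverage?"))
```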

JB: The big key is searching and finding the right content to answer the question. In your case, you've gone from a candidate conversational bot to an ATS, essentially, and more. How do you manage all the data behind this? 

AG: The data is managed in our systems so that we can understand the context in which someone is asking a question.

So in my earlier example about benefits, it might matter where in the process you are. I might give you an answer with one level of specificity during the recruiting process, and once you're an employee, I might give you a different answer that says, "Here's the provider to call, because you're already in our other systems."

So it's about having the context of not only the previous conversations you've had, but also where this person is in the process. Have they scheduled an interview? Have they already had the interview? Have they already been hired? All of that matters to the type of conversation we can have.
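
Here is a small sketch of what that stage awareness might look like in practice: the same benefits question yields a different response depending on where the person sits in the hiring workflow. The stages and canned responses are hypothetical, not Paradox's configuration.

```python
# Illustrative sketch of stage-aware answering: the same question gets a different
# response depending on where the person is in the hiring workflow. The stages and
# responses are hypothetical examples.

from dataclasses import dataclass


@dataclass
class CandidateContext:
    name: str
    stage: str  # e.g. "applied", "interview_scheduled", "hired"


STAGE_ANSWERS = {
    "applied": "Benefits start on day one and include medical, dental, and paid time off.",
    "interview_scheduled": "Benefits start on day one; your recruiter can walk through "
                           "the details when you interview.",
    "hired": "You're already enrolled! Check the employee portal or call the benefits "
             "provider listed in your welcome packet.",
}


def benefits_answer(ctx: CandidateContext) -> str:
    # Fall back to the pre-hire answer if the workflow stage is unknown.
    return STAGE_ANSWERS.get(ctx.stage, STAGE_ANSWERS["applied"])


print(benefits_answer(CandidateContext(name="Sam", stage="hired")))
print(benefits_answer(CandidateContext(name="Riley", stage="applied")))
```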

JB: Okay. So there's essentially a workflow management system going on behind the scenes that's keeping track of what this person's been through. So in a sense, it's a very sophisticated, domain-knowledgeable chat system built on AI. It's not really a chatbot?

AG: That's right.

JB: It looks like a chatbot, but it's very sophisticated in this domain.

Written by Jack Dimond, Contributing Author

Every great hire starts with a conversation.

Demo Olivia now