November 6, 2023

An Executive Order recently followed NYC’s regulation of the use of AI in hiring — here's why that's actually a good thing (and also liberating).

A new set of standards and regulations for AI safety and security was released. Here's why it's actually a good thing for you, and for us too.


I’m guessing the title of this article threw you for a loop because nobody ever feels good about regulation. It is, by definition, restrictive. It’s the legal equivalent of the walls closing in on you.

So when New York City began enforcing a new law back in July that regulated the use of AI in hiring, the news was met with a chorus of groans and murmurs in the HR technology space. Everybody, including us, was confused (and concerned) about the whole thing. What does this mean? Does this law make any sense? Is the AI they're talking about my kind of AI?

For about five minutes, I’ll admit it felt like the jig was up. This was all going to get really messy.

Just go back to paper. 

But then NYC issued final rules that clarified things. The latest version of the rules made a clear delineation between:

  • Using AI generally to automate certain tasks
  • Using AI to make decisions (and this distinction is critical)

They specifically cited what they called an “AEDT” (automated employment decision tool) and defined what it means to make or assist in a hiring decision. And employers using that sort of tool are subject to the law, which then requires full transparency to candidates and annual audits to prove their system isn’t discriminatory.

And then, to further tighten the regulatory squeeze, an Executive Order was issued on October 30 that “establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.”

Now, if you’re only loosely familiar with Paradox and our product, you might think all of this is bad news for us. But here’s the thing: Most of it doesn’t apply, because we actually don’t do decisions. 

And we don’t want to. At least not right now.

Our philosophy on this has remained consistent since our inception: We believe that humans should make the important judgment calls in the hiring process. AI is a means to an end; it automates all the hiring tedium at scale and enables your teams to invest more time and energy back into those critical decisions only they can make.

So, in a way, I find the Executive Order and the NYC restrictions to be oddly liberating. Because we get to continue to do what we’re good at without the burden of being grouped in with a completely separate subset of AI that is trying to achieve a totally different outcome than we are. 

And while there’s been plenty of criticism of the law, and fears that it opens the floodgates for more regulation in the future, in the here and now I think it’s actually a net positive for all parties involved.

Why this is better for tech companies and employers.

Because you (we) can still do a lot of cool, creative things with AI — without the worry.

I’ll be honest: as the chief product officer of an HR software company that uses conversational AI to power said product, I’ve been preoccupied with the discourse around AI lately. This year has brought plenty of positive developments (ChatGPT brought generative AI to the mainstream, which was cool), but like clockwork, the doomsayers have started to, well, say doom. There’s been a tempest of negativity swirling, and occasionally Paradox gets swept up in it. It’s largely been due to a fundamental misunderstanding of what we do, which stems from a lack of clarity around AI in general.

I don’t really blame anyone for getting this wrong — all of this is fairly fresh and simply hadn’t been defined yet. But now it has, by law. It’s black and white.

Here’s what this means: Whether you’re a vendor or a practitioner, if your AI is not being used to make critical judgment calls in the hiring process, and is instead used solely to automate simple, fact-based decisions (like whether a candidate has a certain amount of work experience or a required license), then you can cleanly remove yourself from any and all negative sentiment about bias or discrimination. You can point to the law and declare: that’s not us.

This isn’t about shirking responsibility. It’s about clarity and peace of mind. It allows you to utilize AI to solve certain problems within the hiring process without looking over your shoulder.
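
To make that distinction concrete, here’s a minimal, hypothetical sketch in Python. This is not Paradox’s actual product code, and the candidate fields and thresholds are illustrative assumptions. It shows what purely fact-based screening automation looks like: explicit, auditable rules, with no scoring, ranking, or judgment calls.

    # Hypothetical sketch only; not actual product code.
    # Deterministic, fact-based knockout checks: no scoring, ranking, or judgment.
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        years_experience: float
        has_required_license: bool

    def meets_basic_requirements(candidate: Candidate) -> bool:
        # Every rule is an explicit, auditable fact check; a human still
        # makes the actual hiring decision downstream.
        return candidate.years_experience >= 1.0 and candidate.has_required_license

    # A candidate with 2 years of experience and the required license passes the screen.
    print(meets_basic_requirements(Candidate(2.0, True)))  # True

Logic like this is fully transparent: you can read every rule and explain every outcome, and nothing about it rates or ranks a person.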

At Paradox, we’re just going to keep doing what we do best until we collectively figure out how to solve for the biases and shortcomings that currently exist.

Why this is better for candidates. 

It’s never fun to be turned down for a job. It’s even less fun when you don’t know why. 

And it’s least fun when the thing that turned you down for a job for seemingly no reason wasn’t even a human.

This law fixes that. 

If AI is being used to make a judgment call — for instance, to analyze a candidate’s resume and determine if they’re a match for a certain role — then the employer legally has to disclose that information to the candidate. This means more overall transparency for candidates and less ambiguity over why certain decisions are being made.

While we’re not subject to this either, we have been transparent on this front and will continue to be. We’re not in the game of tricking people.

My philosophy is: We want you to know that you’re talking to an AI, but we want the AI to be so conversational and intuitive that you forget that you are. I’m proud to say that dozens of clients have told us their candidates think their AI assistant is a real person, and are bummed out when they realize the assistant won’t actually be attending the interview in the flesh. But that’s an organic outcome of a good product, not deception.

And in the cases where it’s more obvious that a candidate is talking to an AI, that’s fine, too! Depending on the context, it can be by design. I’ve actually seen detractors cite the direct, simplified language our AI uses during some scheduling flows as a negative, but let me ask you: When you’re trying to get scheduled for an interview for an hourly role, do you really care about having some elevated, engaging conversation, or do you just want it to be fast and accurate?

There’s a time and place to humanize AI in the candidate experience. I’d be lying if I said we got it right 100% of the time, but we’re getting closer. And this law inches us even further along.

AI is at its best when it’s simple, focused, and transparent. 

AI doesn’t have to be complicated. It really doesn’t. 

And I think all the tech companies and software vendors that are selling you some complex solution with vague algorithms and dubious outcomes are the ones you should be most worried about. Those are the ones this law is designed for. 

If anything, I view this new set of regulations as something that will separate the vendors who truly believe in solving your hiring problems from the ones trying to sell you a bag of magic beans. Those magic beans might help you grow a giant, magical beanstalk, but everyone knows how that story ends.

Our story ends where it begins: Trying to help people spend time with people, not software. This law doesn’t rewrite that — it merely adds a helpful footnote to provide more clarity. 

Written by Adam Godson, Chief Executive Officer
