At the annual Society for Industrial and Organizational Psychology (SIOP) conference, where IO psychologists, talent professionals, and data scientists gather to share and learn about the latest research in the field, I had the pleasure of moderating a panel discussion with some of our clients and HireVue’s Chief Data Scientist, Lindsey Zuloaga.
The session focused on what organizations are looking for in an AI assessment solution provider at a time when the regulatory landscape promises to drastically reshape the field, and when a provider’s quality of technology, experience, and validity evidence is now the standard price of admission.
One of the most salient themes of the session was the ethical use of artificial intelligence and algorithmic transparency, a theme that resounded not only in our session but across many standing-room-only SIOP sessions this year.
This intense interest is driven by a patchwork of new regulations requiring that bias audits be conducted on any automated employment decision tools (AEDTs) used to screen candidates in a selection process. In New York City, for example, the “AI Law makes it an unlawful employment practice for employers to use automated employment decision tools (AEDTs) to screen a candidate or employee within New York City unless certain bias audit and notice requirements are met.” Similar regulations are being written in other states, and the White House, DOJ, and EEOC have all issued new guidance on the use of this technology. And this is just the beginning.
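To make the audit requirement concrete, the central calculation in a NYC-style bias audit is the impact ratio: each demographic group’s selection rate divided by the selection rate of the most-selected group. The short Python sketch below illustrates that arithmetic only; the group names and counts are hypothetical, and this is not a description of HireVue’s or any vendor’s audit methodology.

```python
# Illustrative sketch of the impact-ratio arithmetic behind NYC-style
# AEDT bias audits. All group names and counts below are hypothetical.

def impact_ratios(selected, total):
    """selected/total: dicts mapping demographic group -> candidate counts."""
    rates = {group: selected[group] / total[group] for group in total}
    best_rate = max(rates.values())
    # Each group's selection rate relative to the highest-rate group;
    # ratios well below 1.0 flag potential adverse impact to investigate.
    return {group: rate / best_rate for group, rate in rates.items()}

if __name__ == "__main__":
    total = {"group_a": 400, "group_b": 350, "group_c": 250}   # hypothetical counts
    selected = {"group_a": 120, "group_b": 84, "group_c": 55}  # hypothetical counts
    for group, ratio in impact_ratios(selected, total).items():
        print(f"{group}: impact ratio = {ratio:.2f}")
```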
Here are the top takeaways on what organizations are looking for in an AI assessment provider:
Organizations want a partner that can stay ahead of the rapidly changing regulations around AI and act as a trusted advisor, with the expertise and confidence that its algorithms are designed and used in the most transparent and ethical manner.
With these new bias audit requirements, organizations may need to ask candidates additional demographic questions, so stricter levels of data privacy and security are warranted.
Organizations want to ensure that their use of AI algorithms is fair to candidates, and they want a partner that understands the impact of its technology and assessment solutions on protected groups.
From a more practical perspective, organizations want a partner that listens to their specific needs and offers solutions tailored to their individual challenges and use cases, rather than recommending a one-size-fits-all approach.
For additional criteria on what to look for in a vendor, see the AI Assessment Vendor Vetting Checklist.