Switching to a more efficient genAI model boosts Checkr’s background check process

With more than 100,000 businesses relying on Checkr for monthly personnel background checks, generative AI (genAI) and machine learning tools are essential for working through vast amounts of unstructured data.

Through an automated process, each job candidate undergoes a thorough background check that analyzes information from multiple sources to flag criminal records or other relevant issues.

About 2% of Checkr’s data is considered “messy,” which led the company to adopt genAI tools such as OpenAI’s GPT-4 large language model (LLM) to handle those records efficiently.
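The article does not describe Checkr’s actual implementation, but as a rough illustration, prompting a general-purpose LLM to categorize an unstructured record might look like the sketch below. The categories, prompt, and sample record are hypothetical; the standard OpenAI Python client is assumed.

    # Hypothetical sketch: classifying a "messy" record with a general-purpose LLM.
    # The categories, prompt, and sample record are illustrative, not Checkr's.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    CATEGORIES = ["criminal", "traffic", "civil", "not_relevant"]

    def classify_record(record_text: str) -> str:
        """Ask the model to map an unstructured record to exactly one known category."""
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "Classify the background-check record into exactly one of: "
                            + ", ".join(CATEGORIES) + ". Reply with the category only."},
                {"role": "user", "content": record_text},
            ],
            temperature=0,  # deterministic output helps when accuracy is being measured
        )
        return response.choices[0].message.content.strip()

    print(classify_record("dfndnt chrgd w/ petty theft 2019, dismissed per ct order"))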

While GPT-4 achieved an 88% accuracy rate overall, that figure dropped to 82% on messy data, falling short of customer standards.

To address this challenge, Checkr integrated retrieval-augmented generation (RAG) into its LLM system to improve accuracy. While this raised accuracy on most records to 96%, accuracy on the more complex data sets fell to just 79%.
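Checkr has not published the details of its RAG setup, but the general pattern is to retrieve a few similar, already-labeled records and include them in the prompt as context before classifying a new one. A minimal sketch of that pattern, assuming OpenAI embeddings, a tiny in-memory example store, and the same hypothetical classification task as above:

    # Minimal RAG sketch: retrieve similar labeled examples, then prompt with them as context.
    # The embedding model, example store, and prompt are illustrative assumptions.
    import numpy as np
    from openai import OpenAI

    client = OpenAI()

    def embed(text: str) -> np.ndarray:
        out = client.embeddings.create(model="text-embedding-3-small", input=text)
        return np.array(out.data[0].embedding)

    # Tiny in-memory "index" of previously labeled records (hypothetical data).
    labeled_examples = [
        {"text": "petty theft charge 2018, dismissed", "label": "criminal"},
        {"text": "speeding citation, paid fine", "label": "traffic"},
    ]
    example_vectors = [embed(ex["text"]) for ex in labeled_examples]

    def retrieve(query: str, k: int = 2) -> list[dict]:
        """Return the k labeled examples most similar to the query by cosine similarity."""
        q = embed(query)
        scores = [float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
                  for v in example_vectors]
        ranked = sorted(zip(scores, labeled_examples), key=lambda s: s[0], reverse=True)
        return [ex for _, ex in ranked[:k]]

    def classify_with_rag(record_text: str) -> str:
        """Augment the prompt with retrieved examples, then ask the model to classify."""
        context = "\n".join(f"- {ex['text']} -> {ex['label']}"
                            for ex in retrieve(record_text))
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "Using these labeled examples as guidance, classify the new "
                            "record into one category and reply with the category only.\n"
                            + context},
                {"role": "user", "content": record_text},
            ],
            temperature=0,
        )
        return response.choices[0].message.content.strip()

The retrieved examples steer the model on routine records, which is consistent with the accuracy gains Checkr reported for most data; the unusual, complex records have few close neighbors to retrieve, which is one plausible reason the technique can underperform on exactly those cases.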

In addition to accuracy concerns, both the standard GPT-4 model and the RAG-enhanced version suffered from slow response times during background checks, taking up to 15 and seven seconds per query, respectively.
