Article · 6-minute read
By Richard Williams – 21st August 2024
Recruiters are currently being overwhelmed by a ‘barrage’ of poor-quality AI-generated applications for roles, making it more difficult to spot real talent, according to a recent piece in the Financial Times.
Applicants are increasingly using AI tools such as ChatGPT to create CVs and complete online application forms, according to studies, leading to a high volume of poor-quality or unsuitable applications for hirers to sift through.
A survey by HR start-up Beamery found that 46% of job hunters are currently using generative AI tools to search and apply for roles, while a separate study of 5,000 global job seekers tells a similar tale, with 45% saying that they have used AI tools to create or improve their CVs.
With figures like these, it’s not hard to see the challenge that these rogue applications present.
In this article, we look at some of the steps that hiring managers can take to ensure they are shortlisting the right candidates for the right roles.
While generative AI tools such as ChatGPT have many valid uses and valuable applications, when it comes to job seekers they have been accused of ‘lowering the bar’ to applying by reducing the time, effort and thought that goes into an application.
Candidates can take advantage of AI by copying and pasting the hiring questions into ChatGPT and then copying the auto-generated answers into the application form.
But when it comes to job applications, there’s no substitute for the human touch, say critics.
“CVs need to show the candidate’s personality, their passions, their story, and that is something AI simply can’t do,” says Victoria McLean, chief executive of career consultancy CityCV.
Poor grammar and spelling, as well as clunky, generic language, can be among the tell-tale signs of an AI-generated cover letter.
Speaking on LinkedIn, Laura Allen, a Career and Life Coach, suggests that AI does have legitimate uses when it comes to job applications; it’s just about candidates knowing where the line is. “AI is a wonderful thing especially for people who struggle with dyslexia or for whom English is not their first language.” Tools such as OpenAI’s ChatGPT and Google’s Gemini can be used to review a candidate’s CV and make sure it is legible for the reader, or even to reword sentences if candidates are struggling. Most recruiters would probably agree that this is acceptable, and a marked difference from using AI to fire off hundreds of unsuitable applications.
According to a recent study by ResumeGenius, an AI-generated CV is currently one of the biggest red flags for recruiters, with 53% of those surveyed naming it as the biggest warning sign that an applicant isn’t suitable (just ahead of frequent job-hopping at 50%). Other red flags named by those surveyed included poor formatting, typos and irrelevant information.
“If you use AI to write a resume for you in minutes, it tells me you didn’t put a lot of time and thought into applying to my job,” says Michelle Reisdorf of recruitment firm Robert Half, commenting on the research for CNBC.
Another concern in the industry is around candidates employing AI tools to help them complete, and pass, assessments such as aptitude tests, or to answer required personality questionnaires. Individuals may also try to use ChatGPT to prepare their answers for video interviews, if given the questions in advance, for example.
To guard against this, our assessments have a number of built-in measures that flag where generative AI may have been used, and we’ve run a number of trials to see how robust our assessments are against ChatGPT:
The majority of the aptitude tests in our portfolio mix verbal and non-verbal information such as diagrams, symbols and graphs, making it difficult for candidates to rely on ChatGPT for completion. With the verbal questions, we found that ChatGPT would often make logical errors and was not able to fully comprehend a question, or the arguments in a passage of text, highlighting its limitations.
Our aptitude tests are also strictly timed. This makes it difficult for a candidate to input questions into generative AI, receive credible answers, and submit them, all within the time limit.
Should organizations require further peace of mind, we are also able to offer remote supervision of tests, allowing hirers to check for cheating.
While our Wave personality questionnaires are not time limited, the response format and smart scoring mechanism are highly effective in detecting erratic or inconsistent responses.
The unique ‘rate and rank’ format of Wave also makes it difficult for ChatGPT to provide a precise rating of an item, and then sensibly rank two or more items, while remaining consistent across the different areas measured.
ChatGPT also does not have the ability to create a personality profile that is appropriate for a particular job, or that reflects the candidate’s personality. A candidate would therefore struggle to replicate the persona at interview, or in a feedback session, which should raise flags for hiring managers.
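To illustrate the general idea behind consistency checking in a ‘rate and rank’ format (this is a hypothetical sketch, not Saville’s actual scoring mechanism; the item names and threshold are illustrative assumptions): if a candidate rates one item higher than another but then ranks them the other way round, that pair of responses is contradictory, and a high proportion of such contradictions can be flagged for human review.

```python
# Hypothetical sketch of a rating-vs-ranking consistency check.
# NOT Saville's actual algorithm; items and threshold are illustrative.

def count_inconsistent_pairs(ratings, ranking):
    """Count item pairs where the rating order contradicts the ranking.

    ratings: dict mapping item -> numeric rating (higher = more like me)
    ranking: list of items ordered from most to least like me
    """
    inconsistent = 0
    for i, a in enumerate(ranking):
        for b in ranking[i + 1:]:
            # 'a' was ranked above 'b', so its rating should not be lower
            if ratings[a] < ratings[b]:
                inconsistent += 1
    return inconsistent

def flag_response(ratings, ranking, threshold=0.25):
    """Flag the response if too many item pairs are inconsistent."""
    n = len(ranking)
    total_pairs = n * (n - 1) // 2
    return count_inconsistent_pairs(ratings, ranking) / total_pairs > threshold

# Example: the top-ranked item carries the lowest rating, so two of the
# three item pairs contradict each other and the response is flagged.
ratings = {"leads teams": 8, "analyses data": 5, "persuades others": 7}
ranking = ["analyses data", "leads teams", "persuades others"]
print(flag_response(ratings, ranking))  # True
```

A candidate pasting items into a chatbot and copying its answers back would struggle to keep ratings and rankings aligned across many scales at once, which is the kind of pattern a check like this surfaces.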
You can read more about this in our article ‘Reducing the Risk of Bad Hire: Shining a Spotlight on Candidate Faking’.
Situational Judgment Tests such as our Situations tool present candidates with workplace scenarios that they are likely to experience in the job to which they have applied, assessing their decision-making and suitability for the role.
While some competitor SJTs ask the candidate to compare and rank multiple items at the same time, our SJT format, where items are presented one at a time, requires a more nuanced response. This format would also be very demanding for an individual to input into ChatGPT, and much more challenging for ChatGPT to then produce an appropriate and timely response to.
Due to the sophisticated format and scoring mechanism used, it is unlikely that candidates would be able to gain any advantage from using AI to complete our SJTs.
You can dive deeper into how our assessments help combat the use of generative AI in our article ‘What Does the Emergence of ChatGPT Mean for the World of Assessment?’
As generative AI technology becomes more sophisticated, it may well become harder to spot those using it, but help is at hand: as the technology gets more sophisticated, so will the prevention measures.
Specific features embedded in Saville Assessment Wave, Swift Aptitude, and our scenario-based SJTs, as outlined above, flag suspicious activity for human review and provide reassurance that tools such as ChatGPT do not pose a material threat to the integrity of assessment results, ensuring that you are confidently hiring the very best talent for your organization.
Our experienced team would be happy to discuss this issue in more depth with you and show you a live demonstration of our tools.
Richard is Marketing Manager at Saville Assessment and heads up our product and training marketing activity, as well as helping to organize virtual and face-to-face events.
You can connect with Richard on LinkedIn here.
© 2024 Saville Assessment. All rights reserved.