Interviews have always been a necessary evil of the recruitment process. They are a performance: nobody is their real self, and a candidate can easily pretend to be whatever you want them to be for those 30–45 minutes. With the addition of Zoom interviews, it has become even more difficult to draw meaningful conclusions from the interaction.
You always get a gut feeling within the first 5 minutes, and that matters, but assessing a candidate's level of understanding is both an art and a science. I have interviewed candidates throughout my career for various roles, but those interviews were always face-to-face.
The pandemic made virtual interactions the new standard. In November 2022, the advent of ChatGPT, a publicly available large language model, changed the world. On the flip side, it elevated the phrase “fake it till you make it” to a new, stratospheric level.
In the context of interviews, the issue is not entirely with candidates but with the outdated hiring workflow. For years, your chances of being screened through to an interview have relied on keywords within your resume. So, making sure your resume matches the job title of the role you are applying for helps recruiters who do not have a technical understanding to match you with the job description.
I’ve had to do this too, until I realized how the game works and learned how to deal with recruiters.
Over the past year, every industry has been affected by the use of LLMs, and hiring and interviewing are no exception. Every few months, a new iteration of large language models edges closer to human-level capability. The latest models, OpenAI’s GPT-4o and Anthropic’s Claude 3.5, have the reasoning skills of a teenager, and at some point, AGI may be smarter than 8 billion brains combined.
These are useful and amazing tools that can simplify tasks and amplify our capacity. The one thing they can’t do, for the time being at least, is sound human. The output is mechanical, and if you read it out verbatim, pretending that these are your thoughts, you sound like a smart assistant reading a Wikipedia article. In the past year, companies have been using AI to triage resumes, and candidates have been leveraging AI to pass through that triage.
There is nothing wrong with utilizing LLMs where it makes sense. But if you use an LLM to build a whole other persona, manufacturing a resume from a job description with no basis in reality, that is misrepresentation. Depending on the jurisdiction, in the US, the UK, and other countries, it can even amount to criminal fraud.
Over the past 2 months, I’ve had the experience of interviewing candidates for various levels of cybersecurity roles, all the way from the C-suite to analyst level.
I was shocked to discover that 70% of the candidates were following the same playbook. To be precise, there were 2 playbooks.
The first strategy appeared to be using an LLM to tailor the resume to the job description, sprinkling in words like “senior” and “experienced” over a work history of 3 years in total.
The second strategy was using a really impressive resume with 15 years of work experience and company names like Deloitte, PwC, and Accenture. The approach was to read through the made-up resume, repeat each question back with a pause, and start every response with, “Let me answer that for you.”
In either case, when I looked up the candidate’s LinkedIn profile, the work history matched the resume, but the roles on LinkedIn were in a completely different department than those listed on the resume. For example, someone working as an Assistant Store Manager at a retail company had relabeled that role as Cybersecurity Analyst, even though it had nothing to do with security.
Some candidates had a resume listing senior roles with the Big 4 but no online presence whatsoever.
Having a LinkedIn profile is not a prerequisite, but I find it odd for someone who supposedly spent 15 years with the Big 4 to have no voice online.
During the interview, the candidates would blatantly read from the screen and, in most cases, repeat variations of the same answer regardless of the question.
The candidates with obviously fake resumes showed no understanding of any cybersecurity domain, but even those who did hold real cybersecurity roles had no understanding beyond their day-to-day tasks.
The next striking observation was the weak correlation between certifications and actual knowledge. As in most industries, people are split roughly 50/50 on whether certifications are worth pursuing. I have obtained various certifications over the years, but I don’t do that anymore. I haven’t stopped developing my knowledge and understanding; I just approach it differently.
Some of the candidates I interviewed had a list of certifications as long as my arm but couldn’t explain anything.
One candidate had just passed his CISSP exam. I congratulated him and asked which common domains across frameworks we would look to implement policies, processes, and controls in for a small business.
The answer was, “I know I just passed the exam, but I don’t remember any domains.”
This relates to the concept that “the map is not the territory,” a phrase coined by the Polish-American scientist and philosopher Alfred Korzybski in 1931. Looking at a map of a city is not the same as walking its streets and experiencing it yourself.
The first 3 years of my career in cybersecurity, I was limiting myself to the tasks and activities of my job. As soon as I started getting exposed to other functions and started connecting the dots, I began creating a map in my head after walking through all the domains in cybersecurity. I put in my 10,000 hours and keep going.
Most candidates have a narrow field of vision and lack an understanding of the big picture, and in some cases they have no desire to build one, so they turn to ChatGPT as a shortcut.
The instant gratification promised by internet gurus gives people the false hope of going from 5 figures to 6 figures in 30 days, just because they went through a bootcamp or completed a certification.
Now the advice is to not even do that. TikTok influencers are showing people that, using LLMs, you can pass technical interviews without prior knowledge, even in industries like aerospace. I watched someone interview for an engineering role at Boeing, answer questions on material strength, and pass the interview.
This is criminally dangerous. Imagine getting a job at Boeing and being the reason planes start falling out of the sky.
If you haven’t heard it already, LLMs lie.
Specifically, they hallucinate. This comes down to how they are built: the goal in most cases is to provide an answer. Not a truthful or factual answer, but a complete one, even if the LLM has to manufacture details that are untrue.
As these models evolve and become more capable, we will always look for people to build relationships based on honesty, integrity, and character. You cannot build these things by repeating words spit out from code. Have more honest conversations. It’s okay to say, "I don’t know this." Be honest. Be human. People appreciate that.
Put in the hours. Ask ChatGPT questions but then go and actually work on a hands-on project. Write a policy document. Use VMs to create a lab. Build a domain, configure group policies, and harden the OS following CIS guides.
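To make the hands-on project idea concrete, here is a minimal sketch of the kind of small artifact you could build and then genuinely talk about in an interview: a script that audits an sshd_config-style file against a few hardening rules. The rules below are illustrative only, not taken from the actual CIS Benchmarks, and all names here are hypothetical.

```python
# Hypothetical hardening-audit sketch. The rules are illustrative only;
# consult the real CIS Benchmarks for authoritative baselines.

HARDENING_RULES = {
    "PermitRootLogin": "no",         # disallow direct root SSH login
    "PasswordAuthentication": "no",  # prefer key-based authentication
    "X11Forwarding": "no",           # reduce attack surface
}

def parse_config(text: str) -> dict:
    """Parse 'Key Value' lines, ignoring comments and blank lines."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(" ")
        settings[key] = value.strip()
    return settings

def audit(text: str) -> list[str]:
    """Return findings for rules that are missing or non-compliant."""
    settings = parse_config(text)
    findings = []
    for key, expected in HARDENING_RULES.items():
        actual = settings.get(key)
        if actual is None:
            findings.append(f"{key}: not set (expected {expected})")
        elif actual.lower() != expected:
            findings.append(f"{key}: {actual} (expected {expected})")
    return findings

if __name__ == "__main__":
    sample = """
    # sample sshd_config fragment
    PermitRootLogin yes
    PasswordAuthentication no
    """
    for finding in audit(sample):
        print(finding)
```

A toy like this takes an afternoon, but it forces you to read the actual hardening guidance, make decisions, and hit problems, which is exactly the experience that shows in an interview.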
ChatGPT can help with brainstorming project ideas, but you have to actually work on the project. Create something you can talk about and write about. Figure out what interests you and then apply for jobs in that field.
Don’t waste other people’s time. Do something you are proud to talk about to others.
This seems like a problem that will only get worse. How do you combat fake interviewees?
The engineers, for example, must get caught out when they actually do some work and it's all wrong, right? I hope so, as I plan on many flights in my future 😅
I agree. It's really challenging and scary.
Most times, you can tell when someone is faking it because they start repeating themselves. They haven't experienced the pain and agony of, say, setting up Microsoft cloud products, so they can't talk about their battle scars.
When they rely on LLMs, the output tends to be high-level. If you have experience, that output is annoying; if you don't know anything, it looks good enough.
A good example of a hiring process comes from 37signals. They get candidates to work on a deliverable and see how it turns out. If a candidate isn't a good match or the quality of their work is poor, they pay them for their time and move on.
Some companies are now hiring people who have an online presence and a following, with a build-in-public type of resume. A resume stuffed with keywords and expressions like "spearheaded" doesn't work.