Peter Skaronis

June 30, 2024

Cybersecurity Talent Landscape and LLMs

Interviews have always been a necessary evil of the recruitment process. An interview is a performance. Nobody is their real self, and a candidate can easily pretend to be whatever you want them to be for those 30-45 minutes. Add the twist of Zoom interviews, and it becomes even more difficult to draw meaningful conclusions from the interaction.

You always get a gut feeling within the first 5 minutes, and that is important, but assessing the level of understanding a candidate possesses is an art and a science. I have interviewed candidates throughout my career for various roles, but those were always face-to-face interviews.

The pandemic made virtual interactions the new standard. Then, in November 2022, the advent of ChatGPT, a publicly available Large Language Model, changed the world. On the flip side, it elevated the phrase "fake it till you make it" to a new stratospheric level.

In the context of interviews, the issue is not entirely the candidates but the outdated hiring workflow. For years, your chances of being screened through to an interview have relied on keywords within your resume. Making sure your resume matches the job title of the role you are applying for helps recruiters who lack a technical understanding match you with the job description.
I've had to do this too, until I realized how this game works and learned to deal with recruiters.
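To illustrate the mechanics, here is a minimal sketch of keyword-based screening in Python. The keyword list, threshold, and scoring are hypothetical, and real applicant tracking systems are more elaborate, but the underlying principle is the same: no keyword overlap, no interview.

```python
# ats_screen.py - toy sketch of keyword-based resume screening.
# The keywords and the 60% threshold below are made up for
# illustration; they are not from any real ATS product.
import re

def keyword_score(resume_text: str, keywords: list[str]) -> float:
    """Return the fraction of required keywords found in the resume."""
    words = set(re.findall(r"[a-z0-9]+", resume_text.lower()))
    hits = [kw for kw in keywords if kw.lower() in words]
    return len(hits) / len(keywords)

# Hypothetical keywords extracted from a job description.
KEYWORDS = ["cybersecurity", "analyst", "siem", "incident", "grc"]

resume = "Senior Cybersecurity Analyst. SIEM tuning, incident response."
score = keyword_score(resume, KEYWORDS)
print(f"keyword match: {score:.0%}")
print("screened in" if score >= 0.6 else "screened out")
```

This is exactly why tailoring a resume to the job description works, and why an LLM can automate that tailoring so effectively.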

Over the past year, every industry has been affected by the use of LLMs, and hiring and interviewing are no exception. Every few months, a new iteration of Large Language Models edges closer to the capabilities of human intellect. The latest models, OpenAI's GPT-4o and Anthropic's Claude 3.5 Sonnet, have the reasoning skills of a teenager, and at some point AGI will be smarter than 8 billion brains combined.

These are useful and amazing tools that can simplify tasks and amplify our capacity. The one thing they can't do, for the time being at least, is sound human. The output is mechanical, and if you read it out verbatim, pretending these are your thoughts, you sound like a smart assistant reading a Wikipedia article. Over the past year, companies have been using AI to triage resumes, and candidates have been leveraging AI to pass through that triage.

There is nothing wrong with utilizing LLMs where it makes sense, but if you are trying to build a whole other persona through LLMs, having them manufacture a resume for you based on a job description with no basis in reality, then this is misrepresentation. It is actually a felony in 11 US states, as well as in the UK and other countries.

Over the past 2 months, I've had the experience of interviewing candidates for cybersecurity roles at various levels, all the way from the C-suite down to analyst.

I was shocked to discover that 70% of the candidates were following the same approach. Specifically, there were two playbooks.

The first playbook appeared to be using an LLM to tailor the resume to the job description, sprinkling in words like "senior" and "experienced" for a work history of three years in total.

The second playbook relied on a really impressive resume: 15 years of work experience and company names like Deloitte, PwC, and Accenture. The strategy was to read through the made-up resume, repeat each question back with a pause, and then start every response with, "Let me answer that for you."

In either case, when I looked up the candidate's LinkedIn profile, the employers and dates matched the resume, but the roles on LinkedIn were in a completely different department than those listed on the resume. For example, someone working as an Assistant Store Manager at a retail company had replaced that title with Cybersecurity Analyst, despite having nothing to do with the field.

Some candidates had a resume listing senior roles with the Big 4 but no online presence whatsoever.
Having a LinkedIn profile is not a prerequisite, but I find it odd for someone to have spent 15 years with the Big 4 and not have any voice online.

During the interviews, the candidates would blatantly read from the screen and, in most cases, repeat variations of the same answers regardless of the question.
The obviously fake resumes came with no understanding of any of the domains in cybersecurity, but even the candidates who did work in existing cybersecurity roles did not have any understanding beyond their day-to-day tasks.

The next striking observation was the weak correlation between certifications and actual knowledge. As in most industries, people are split 50/50 on whether it makes sense to pursue certifications. I have obtained various certifications over the years, but I don't do that anymore. I haven't stopped developing my knowledge and understanding; I just approach it differently.
Some of the candidates I interviewed had a list of certifications as long as my arm but couldn't explain anything.

One candidate had just passed his CISSP exam. I congratulated him and proceeded to ask what some of the common domains are, across frameworks, where we would look to implement policies, processes, and controls in a small business.

The answer was, "I know I just passed the exam, but I don't remember any domains."

This relates to a concept called "the map is not the territory". The phrase was coined by the Polish-American scientist and philosopher Alfred Korzybski in 1931. It means that looking at a map of a city is not the same as walking through it and experiencing it yourself.

For the first 3 years of my career in cybersecurity, I limited myself to the tasks and activities of my job. As soon as I started getting exposed to other functions and connecting the dots, I began building a map in my head by walking through all the domains in cybersecurity. I put in my 10,000 hours and keep going.
Most candidates have a narrow field of vision and little understanding of the big picture, and in some cases no desire to gain one, so they turn to ChatGPT as a shortcut.

The instant gratification promised by internet gurus gives people the false hope of going from 5 figures to 6 figures in 30 days, just because they went through a bootcamp or completed a certification.

Now, the current advice is to not even do that. TikTok influencers are showing people that by using LLMs you can pass technical interviews, even in industries like aerospace, without prior knowledge. I watched someone interviewing for Boeing as an engineer, answering questions on material strength and passing the interview.
This is criminally dangerous. Imagine getting a job at Boeing and being the reason planes start falling out of the sky.

If you haven't heard it already, LLMs lie.

Specifically, they hallucinate. This has everything to do with how they are built. The goal in most cases is to provide an answer. It is not to provide a truthful or factual answer but a complete one, even if the LLM has to manufacture details that are untrue.

As these models evolve and become more capable, we will always look for people to build relationships with based on honesty, integrity, and character. You cannot build these things by repeating words spat out by code. Have more honest conversations. It's OK to say, "I don't know." Be honest. Be human. People appreciate that.

Put in the hours. Ask ChatGPT questions, but then go and actually work on a hands-on project. Write a policy document. Use VMs to create a lab. Build a domain, configure group policies, and harden the OS following CIS guides.
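If you want a concrete starting point for the hardening part, here is a minimal sketch of a script you could write in your lab, in Python. The three registry checks are illustrative examples of the kind of settings CIS benchmarks cover; they are not the benchmark itself, and the expected values are assumptions for a locked-down lab VM.

```python
# audit_hardening.py - toy hardening audit for a Windows lab VM.
# Illustrative only: three registry checks in the spirit of CIS
# guidance, not a real benchmark scanner. Requires Windows, since
# winreg is the Windows-only registry module in the standard library.
import winreg

# (description, hive, key path, value name, expected data)
CHECKS = [
    ("UAC enabled (EnableLUA)",
     winreg.HKEY_LOCAL_MACHINE,
     r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System",
     "EnableLUA", 1),
    ("SMBv1 server explicitly disabled",
     winreg.HKEY_LOCAL_MACHINE,
     r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters",
     "SMB1", 0),
    ("Remote Desktop connections denied",
     winreg.HKEY_LOCAL_MACHINE,
     r"SYSTEM\CurrentControlSet\Control\Terminal Server",
     "fDenyTSConnections", 1),
]

def read_value(hive, path, name):
    """Return the registry value, or None if the key/value is absent."""
    try:
        with winreg.OpenKey(hive, path) as key:
            data, _ = winreg.QueryValueEx(key, name)
            return data
    except FileNotFoundError:
        return None

for description, hive, path, name, expected in CHECKS:
    actual = read_value(hive, path, name)
    status = "PASS" if actual == expected else "FAIL"
    print(f"[{status}] {description}: expected {expected}, got {actual}")
```

Writing and debugging something like this against your own lab teaches you more than any summarized answer, and it gives you something real to talk about in an interview.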

ChatGPT can help with brainstorming project ideas, but you have to actually work on the project. Create something you can talk about and write about. Figure out what interests you and then apply for jobs in that field.

Don't waste other people's time. Do something you are proud of talking about to others.

About Peter Skaronis

Hey! I'm Peter, a Cybersecurity Consultant, Polymath, and the CEO at Techimpossible.
I'm currently working on developing the Cybersecurity Notes and Cybersecurity Templates.

Subscribe below to follow my thinking on cybersecurity, learning, and technology. Thanks for visiting, thanks for reading.
