
How to AI-proof the cybersecurity workforce

Generative AI can enhance digital security, but it can’t — and shouldn’t — replace the humans who are essential to fighting malicious hackers.
Illustration shows the ChatGPT logo on a smartphone in Washington, DC, on March 15. (Photo by Olivier Douliery/AFP via Getty Images)

Automation is hardly new. Ever since the Industrial Revolution, jobs have been transformed, created and eliminated because of it. Now, automation in the form of artificial intelligence is coming for the tech sector — and specifically cybersecurity.

The excitement over AI in cybersecurity was on full display at the RSA Conference, the annual gathering of infosec professionals in San Francisco. At this year’s event, multiple keynotes focused on the potential for AI to efficiently hunt for digital risks and automate threat response protocols. AI also promises to alleviate the stresses associated with many cybersecurity jobs, such as those of first responders. But just as there’s potential, there are downsides. As AI tools inevitably scale and tackle more complex cybersecurity problems, the impact on the workforce could be troublesome — and dangerous.

We cannot let the potential of AI overshadow the value of human cybersecurity professionals. While AI excels at pattern recognition tasks such as detecting malware attacks, machines cannot take into account the context for why an attack may be happening. AI can be amazing at automating some aspects of reasoning, but algorithms cannot replace people when it comes to the creativity required to find unique solutions. Chatbots can’t replicate all the human competencies that are crucial within cybersecurity. So, without a measured — and cautious — approach to AI, our sector risks moving toward insecurity.

It’s reassuring to see a growing conversation about the potential dangers of AI and efforts to put in place common-sense guardrails to regulate its deployment, such as President Biden’s meeting this week with Big Tech critics in San Francisco. But there’s still not enough focus on the potentially devastating impact that AI tools could have on the American workforce.


Goldman Sachs estimates that in the U.S. and Europe, about one-fourth of current work could be substituted by generative AI. It’s unlikely that entire job functions will be eliminated, but fewer people will be needed to maintain a baseline level of work. Moreover, research posits that high-skilled jobs may be affected more, because AI’s predictive capabilities mimic the analytical and optimization skills at the core of many of those jobs. Within cybersecurity, that includes people across a number of functions, from SOC analysts who aggregate suspicious-activity data to red teamers who code and test for vulnerabilities.

What needs more attention beyond the job numbers are the economic impacts on the cybersecurity workforce. Empirical evidence examining wage changes and automation between 1980 and 2016 suggests that about 50% of wage changes are attributable to task displacement, which exacerbates wage inequality. The study is not sector-specific, but if leading cybersecurity firms are touting AI’s potential to efficiently carry out tasks such as automated threat detection, then cybersecurity will not be insulated from these changes.

We also need to consider the impacts on diversity. There have been commendable efforts over the past several years to lower barriers to entry into cybersecurity, including scholarship programs that cut the cost of entering the field and professional associations such as Black Girls Hack or Women in Cybersecurity that foster a sense of belonging and retention in the sector. The National Cybersecurity Strategy further underscores how central workforce diversity is to long-term cybersecurity. But we are at a critical crossroads, as layoffs across sectors, especially in tech, are cutting diversity, equity and inclusion efforts. If history suggests that job displacement by automation is on the horizon, AI could further slow our hard-earned progress.

It’s imperative that investors in and advocates of the cyber workforce consider the potential ramifications of AI, including on its least represented members. Luckily, the U.S. has a growing ecosystem of cyber workforce development programs designed to usher individuals into the cybersecurity sector; we can use that ecosystem to reframe workforce priorities rather than reinventing the wheel.

But more needs to be done to make cybersecurity workers AI-proof. For starters, many of the new cyber educational efforts can focus on soft skills that cannot be automated. Generative AI can automate many tasks, but skills such as creativity, emotional intelligence and intuition are hard to replace. Whether in designing training curricula or in hiring practices, emphasizing these skills will ensure that cybersecurity staff can solve tough problems and complement both the challenges and the potential of AI.


Several large tech companies have professional development tracks that upskill their staff, and other associations provide additional training and certifications at a premium, but there are opportunities for other nonprofits to expand their programming to include AI. Nonprofit organizations with a stellar track record in technical training have an opportunity to step in and build equitable pathways for cybersecurity workers to continue their technical careers, and there is space for philanthropies and corporations to invest in developing these programs.

We also need to rethink what it means to have a “cybersecurity career.” Cybersecurity extends beyond patching vulnerabilities and detecting threats. Policy analysts now contextualize strings of cyberattacks within wider geopolitical conflicts. Developers contribute their lived experiences to designing tech solutions to society’s pressing challenges. As we extend our definition of a cybersecurity expert, we need to ensure these professionals are communicating with one another. Programs such as the #ShareTheMicInCyber Fellowship or TechCongress focus on bridging the gap between technical experts in cybersecurity and technology to inform better policymaking.

There is no doubt that generative AI will have a transformative impact. We have the opportunity to prepare the cyber workforce for a future just as promising, and we need to start now.

Bridget Chan is the program manager at New America for the #ShareTheMicInCyber Fellowship, a program advancing DEI in cybersecurity.
