In the cybersecurity war between organizations and adversaries, we too often convince ourselves that we are pitting “technology versus technology.” But, actually, the battle is about people versus people.
Technology alone cannot hunt, identify, assess and defeat attacks, because it lacks human-supplied authentic intelligence. With this intelligence, people and machines work together to outsmart and outmaneuver the adversary, seeking to hold the high ground – proactively monitoring network logs and host data, patrolling external and internal networks, and defending endpoint devices from becoming paths for exploitation.
Is anyone fully prepared to deal with a cyberattack?
Unfortunately, the adversary recognizes the indispensable people factor and takes advantage, realizing that IT departments are overwhelmed, information management systems are inadequate and solely technology-based solutions, while perhaps attractive on paper, are insufficient.
Nearly three in five information security professionals admit they worry about whether their organization is prepared to deal with a global cyberattack, according to research from Bitdefender. About half say these concerns keep them awake at night, and 53 percent are considering leaving their jobs because of understaffing, inadequate budgets and the accountability directed at them despite limited support.
The upshot: The key component in this equation is and always will be people.
“Cybersecurity is a battle of knowledge, capabilities and resources. If the adversary has more of these than the IT team, then the adversary wins. It’s that simple.”
Some only need to be right once
Further, the adversary has the advantage of only having to be right one time to successfully exploit an environment.
Conversely, defenders have to be right all the time in order to stop or quickly mitigate all exploit attempts.
Technology, therefore, is the great enabler toward that effort, though on its own without human intervention, it is simply not enough.
AI is transforming “everything” – true or false?
In an era of unprecedented data analytics and computing horsepower, it is tempting to assume that scaling up ever more advanced software can compensate for the slower-growing ranks of security professionals.
This is reflected in many business leaders’ fixation on somehow unleashing artificial intelligence (AI) and other automation processes on cyber risks, understandably expecting AI to turn the tide against data breaches just as it has against supply chain inefficiencies or excess inventory.
Two-thirds of senior IT executives think that AI will help identify critical threats, and more than one-half say they have invested in a “high utilization” of AI for this purpose, according to findings from the Capgemini Research Institute.
Consequently, there is a lot of buzz now about AI “transforming” everything: “AI will solve the security talent shortage,” or “Automation will make most attacks obsolete.”
There is persistent talk that software is on a trajectory to begin replacing humans entirely – but AI will not do this in the foreseeable future, simply because it cannot.
“Boiled down, AI amounts to rules-driven, carefully tuned analysis of the variables that can be most easily scored. That is, it relies on human-derived data categories, coded actions and thresholds.”
AI can compute equations faster than a person, but software struggles to spot and act on variables and activity outside its carefully curated vantage point.
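As a sketch of that point, the toy scorer below (hypothetical feature names, thresholds and weights – not any vendor’s product) weighs only the variables a human curated in advance. Malicious activity expressed through any feature outside that curated set contributes nothing to its score:

```python
# Minimal sketch of a rules-driven detector: every rule is a
# human-chosen feature with a human-tuned threshold and weight.
SCORING_RULES = {
    "failed_logins": (5, 2.0),            # (threshold, weight)
    "bytes_exfiltrated": (10_000_000, 3.0),
    "off_hours_access": (1, 1.5),
}
ALERT_THRESHOLD = 3.0                     # also human-chosen

def score_event(event: dict) -> float:
    """Sum the weights of every curated feature that meets its threshold.

    Activity expressed through features NOT listed in SCORING_RULES
    is invisible to this scorer -- its blind spot is built in.
    """
    return sum(
        weight
        for feature, (threshold, weight) in SCORING_RULES.items()
        if event.get(feature, 0) >= threshold
    )

def is_alert(event: dict) -> bool:
    return score_event(event) >= ALERT_THRESHOLD

# A noisy brute-force attempt trips two rules and raises an alert...
noisy = {"failed_logins": 40, "off_hours_access": 1}
# ...while a novel attack using only un-modeled features scores zero.
stealthy = {"dns_tunnel_queries": 9000}
```

The point of the sketch is the last two lines: the system is only as watchful as the feature list its human designers gave it.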
Software vendors make claims, for example, that AI will stop “99 percent” of threats. This may be true if we assume the developers are being honest about a very forgiving data set – but an interdiction rate that high also usually means their “solution” will “stop” pretty much everything – including legitimate, business-supporting traffic.
If we use automation simply to move the burden of false positives and productivity-sapping disruptions from security operations centers (SOCs) to computer programs, it will not alleviate long-standing challenges for security pros: enabling the business, taming numbing alert fatigue and proving cybersecurity ROI.
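A back-of-the-envelope calculation, using assumed numbers for event volume and base rate, shows why even a genuine “99 percent” detection rate can bury a SOC in false positives when malicious events are rare:

```python
# Hypothetical figures chosen only to illustrate the base-rate problem.
events_per_day = 10_000_000      # assumed daily event volume
malicious_fraction = 0.0001      # assume 1 in 10,000 events is malicious
detection_rate = 0.99            # the vendor's claimed "99 percent"
false_positive_rate = 0.01       # 1% of benign traffic flagged anyway

malicious = events_per_day * malicious_fraction       # 1,000 bad events
benign = events_per_day - malicious

true_alerts = malicious * detection_rate              # 990 caught
false_alerts = benign * false_positive_rate           # 99,990 noise

# Precision: of all alerts raised, how many are real threats?
precision = true_alerts / (true_alerts + false_alerts)
print(f"{false_alerts:,.0f} false alerts/day; precision = {precision:.1%}")
```

Under these assumptions, roughly 100,000 false alerts arrive for every 990 real ones, so only about 1 percent of alerts are genuine – the burden has not disappeared, it has merely moved.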
Tools are not everything – people are
Rather than focusing strictly on tools, tech decision-makers must hire and internally recruit managers who can take stock of current security priorities and help apply inevitable automation and AI advances for the greatest effect.
This is an all-hands-on-deck effort, with security teams I know continually looking to deputize business-knowledgeable risk management talent from within and revisit what they need from existing partners and service providers to align data-driven security efforts.
This type of strategic, human-led leadership of rapidly evolving security technology and data is what I call a “human security multiplier” approach, because it treats iterative AI and automation gains as a means to an end, not the destination itself.
The human element serves an essential purpose because cybersecurity is no different than any other business function – it highly depends upon the expertise of the people behind it.
“CISOs need the most knowledgeable experts who live and breathe cybersecurity day in and day out to maximize the efficiency and effectiveness of ongoing advancements in tech innovation.”
This is where the aforementioned, vital authentic intelligence enters the discussion – human experts who use technology as a tool to protect data, defend systems and manage risk.
As a result, they establish a robust posture for optimal threat hunting and response, digital forensics, application testing, penetration testing, business continuity, disaster recovery and even integrated compliance solutions.
The human security multiplier approach
Parallels to the “multiplier” approach can be seen in action in many places, including the U.S. military: Advanced fighter aircraft like the F-35, for example, are designed to perform multi-role, networked missions, meaning fewer planes are theoretically required to accomplish the same objectives.
At the tactical level, researchers are exploring how soldiers could use exoskeletons to extend their range and endurance, potentially giving an individual the capabilities of many in the field. In these cases, a force multiplier frees skilled personnel to take on new skills and imperatives.
Human minds that might otherwise be flying and scouting can now tackle the Pentagon’s other modern challenges, like inventing new ways to integrate and act on data from networked vehicle and personnel platforms, or adapting to emerging national security priorities in a changing world.
“Within our organizations, the human security multiplier approach will best protect us because, on the adversarial side, we can be certain that ‘people’ remain the force recycling, inventing and launching threats.”
What’s more, these people can circumvent any technology we deploy to stop them. If we set up a fence, they will build a ladder to climb over it. If we top the fence with razor wire, they will come back with a taller ladder – and thicker gloves, too.
If we place cameras behind the razor wire, they will find a path to tunnel under the fence, and so on. For all their computational power and scale, machines on their own cannot yet “outthink” persistent attackers.
Yet if we treat automation and AI’s potential as a force multiplier – a way to take better stock of our urgent cybersecurity challenges and our progress toward objectives – we can still upgrade our defenses. Doing so frees and empowers businesses’ invaluable security pros, business unit leaders, legal officers and other cyber risk stakeholders to tackle the intangible questions that demand human imagination: risk tolerance, business transformation and what both require of more adaptive, flexible and automated data defenses.