Cybersecurity firm CrowdStrike and Meta have released CyberSOCEval, an open-source benchmark suite designed to evaluate the performance of AI models in security operations centers (SOCs). The suite aims to help businesses select the right AI-powered cybersecurity solutions by providing a standardized way to test their capabilities in key security tasks.
The challenge of choosing the right AI security tool
As AI becomes integrated into a growing number of cybersecurity products, security professionals face the challenge of choosing from a wide array of options with varying costs and capabilities. CyberSOCEval addresses this by offering a structured method for testing large language models (LLMs) on core SOC functions, including incident response, threat analysis, and malware detection.
Without clear benchmarks, it’s difficult to know which systems, use cases, and performance standards deliver a true AI advantage against real-world attacks.
By standardizing these evaluations, the benchmark allows organizations to objectively measure how different AI models perform in realistic scenarios, helping them identify the tools that best fit their operational needs.
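The core idea of such a standardized evaluation can be illustrated with a minimal sketch: every model answers the same fixed set of SOC tasks and is scored with the same metric, so results are directly comparable. This is only an illustration of the general approach, not CyberSOCEval's actual task format or API; the task data and the `ask_model` stub below are hypothetical.

```python
# Minimal sketch of a standardized SOC benchmark harness (illustrative only;
# the tasks and `ask_model` stub are hypothetical, not CyberSOCEval's format).

TASKS = [
    {"prompt": "Which MITRE ATT&CK tactic does credential dumping fall under?",
     "choices": ["Credential Access", "Exfiltration", "Persistence"],
     "answer": "Credential Access"},
    {"prompt": "An alert shows powershell.exe spawned by winword.exe. Most likely cause?",
     "choices": ["Routine software update", "Macro-based malware", "Driver installation"],
     "answer": "Macro-based malware"},
]

def ask_model(prompt: str, choices: list[str]) -> str:
    """Stand-in for a real LLM call; this dummy always picks the first choice."""
    return choices[0]

def evaluate(tasks) -> float:
    """Score a model as the fraction of tasks it answers correctly."""
    correct = sum(ask_model(t["prompt"], t["choices"]) == t["answer"] for t in tasks)
    return correct / len(tasks)

print(f"accuracy: {evaluate(TASKS):.2f}")
```

Because every model under test sees identical prompts and is graded by the same function, the resulting accuracy figures can be compared directly, which is the property a benchmark like CyberSOCEval standardizes.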
How CyberSOCEval benefits both businesses and developers
For businesses, the benchmark provides clear, comparable data on model performance. For AI developers, it offers valuable insights into how enterprise clients use their models for cybersecurity. This feedback can guide future improvements, helping creators refine their models to better handle specific industry jargon or complex threat intelligence. The framework is designed to be adaptable, allowing for the inclusion of new tests as threats like zero-day exploits emerge.
The release of CyberSOCEval comes amid a digital arms race where both attackers and defenders are leveraging AI. A survey by Mastercard and the Financial Times Longitude found that financial services firms have saved millions of dollars by using AI-powered tools to combat AI-enabled fraud, demonstrating the tangible benefits of effective defensive AI.
An open-source approach to improving security
Meta’s involvement in the project aligns with its history of supporting open-source AI development, such as its Llama models. By making CyberSOCEval an open-source tool, the companies encourage community collaboration to improve and expand the benchmarks over time. This approach aims to accelerate industry-wide progress in defending against advanced, AI-based threats.
With these benchmarks in place, and open for the security and AI community to improve, the companies argue the industry can move more quickly to unlock AI's potential in protecting against advanced attacks, including AI-based threats.
CyberSOCEval is available now on GitHub, where users can download the suite to run evaluations on their preferred LLMs. The repository includes documentation, sample datasets, and instructions for integrating the tests into existing security platforms.