
Open source AI from CrowdStrike and NVIDIA gives businesses an edge against machine-speed attacks.

Every SOC leader knows this feeling: drowning in alerts, blind to the real threat, stuck playing defense in a war fought at AI speed.

CrowdStrike and NVIDIA are now flipping the script. Armed with autonomous agents powered by Charlotte AI and NVIDIA Nemotron models, security teams don’t just react; they can counter attackers before their next move. Welcome to the new cybersecurity arms race, where combining the strengths of open source with agentic AI promises to shift the balance of power against adversarial AI.

The partnership brings together Charlotte AI AgentWorks and NVIDIA’s agentic AI ecosystem: Nemotron open models, NeMo Data Designer for synthetic data generation, the NeMo Agent Toolkit, and NIM microservices.
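To make the stack concrete, here is a minimal sketch of how a triage agent built on these pieces might call a Nemotron model served as a NIM microservice. NIM services expose an OpenAI-compatible API; the endpoint URL, API key handling, and model name below are illustrative assumptions, not confirmed details of the CrowdStrike integration.

```python
# Minimal sketch: send an alert summary to a Nemotron model behind a NIM
# microservice's OpenAI-compatible API. Endpoint and model name are
# hypothetical placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",   # hypothetical local NIM endpoint
    api_key="not-needed-for-local-nim",    # local deployments may not check this
)

alert_summary = (
    "Process powershell.exe spawned by winword.exe, "
    "outbound connection to an unfamiliar domain."
)

response = client.chat.completions.create(
    model="nvidia/llama-3.3-nemotron-super-49b-v1",  # illustrative model name
    messages=[
        {
            "role": "system",
            "content": "You are a SOC triage assistant. Classify the alert "
                       "as benign, suspicious, or malicious, with a reason.",
        },
        {"role": "user", "content": alert_summary},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```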

"This collaboration redefines security operations by enabling analysts to create and deploy specialized AI agents at scale, leveraging trusted enterprise-grade security with Nemotron models," writes Bryan Catanzaro, vice president of applied deep learning research at NVIDIA.

The partnership is designed to enable autonomous agents that learn quickly, reducing risk, threats, and false positives. Achieving this would lift a heavy burden from SOC leaders and their teams, who contend almost daily with alert fatigue driven by inaccurate data.

The announcement at GTC in Washington, DC, marks the arrival of machine-speed defense that can finally compete with machine-speed attacks.

Transforming elite analyst expertise into machine-scale data sets

What differentiates the partnership is how its AI agents are designed to continuously aggregate telemetry, including insights from CrowdStrike’s Falcon Complete managed detection and response (MDR) analysts.

"What we can do is take the intelligence, the data, the experience of our Falcon Complete analysts and turn those experts into data sets. Transform data sets into AI models and then be able to create agents based, really, on all of the makeup and experience that we’ve built within the company so that our customers can still benefit at scale from those agents," said Daniel Bernard, CrowdStrike’s chief commercial officer, during a recent briefing.

By capitalizing on the strengths of NVIDIA Nemotron open models, organizations will be able to keep their autonomous agents learning continuously, training on datasets from Falcon Complete, the world’s largest MDR service, which handles millions of triage decisions each month.
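A minimal sketch of what "turning experts into data sets" could look like in practice: each analyst verdict becomes a supervised training example serialized as JSONL, a common input format for fine-tuning pipelines. The field names and schema here are hypothetical, not CrowdStrike’s actual data model.

```python
import json

# Hypothetical analyst verdict records; fields are illustrative, not the
# actual Falcon Complete schema.
analyst_decisions = [
    {"alert": "Mimikatz-like credential access on host A",
     "verdict": "true_positive"},
    {"alert": "Signed admin tool run by IT staff during patch window",
     "verdict": "false_positive"},
]

# Serialize each decision as an instruction-tuning example in JSONL.
with open("triage_train.jsonl", "w") as f:
    for rec in analyst_decisions:
        example = {
            "messages": [
                {"role": "system", "content": "Triage the security alert."},
                {"role": "user", "content": rec["alert"]},
                {"role": "assistant", "content": rec["verdict"]},
            ]
        }
        f.write(json.dumps(example) + "\n")
```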

CrowdStrike already has experience with AI-driven detection triage, having launched a service that extends the capability to its entire customer base. Charlotte AI Detection Triage, designed to integrate with existing security workflows and continually adapt to evolving threats, automates alert assessment with over 98% accuracy and cuts manual triage by more than 40 hours per week.

Elia Zaitsev, CrowdStrike’s chief technology officer, explained to VentureBeat how Charlotte AI Detection Triage delivers this level of performance: "We couldn’t have done this without the support of our Falcon Complete team. They triage within their workflow, manually processing millions of detections. The high-quality human-annotated dataset they provide allowed us to achieve over 98% accuracy."

The lessons learned from Charlotte AI Detection Triage apply directly to the NVIDIA partnership, further increasing the value it can bring to SOCs that need help coping with the deluge of alerts.
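One plausible way such a system converts accuracy into saved analyst hours is confidence-gated routing: auto-handle verdicts the model is highly confident about and escalate everything else to a human. The threshold, data shapes, and routing labels below are illustrative assumptions, not CrowdStrike’s published design.

```python
from dataclasses import dataclass

@dataclass
class TriageResult:
    alert_id: str
    verdict: str       # "true_positive" or "false_positive"
    confidence: float  # model-reported confidence, 0.0-1.0

AUTO_THRESHOLD = 0.98  # mirrors the reported accuracy bar; illustrative

def route(result: TriageResult) -> str:
    """Decide what happens to an alert based on model confidence."""
    if result.confidence >= AUTO_THRESHOLD:
        if result.verdict == "false_positive":
            return "auto-close"   # suppress noise without analyst time
        return "auto-escalate"    # open an incident immediately
    return "human-review"         # low confidence goes to the analyst queue

print(route(TriageResult("a-123", "false_positive", 0.995)))  # auto-close
```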

Open source is table stakes for this partnership to work

NVIDIA’s Nemotron open models address what many security leaders identify as the most critical barrier to AI adoption in regulated environments: a lack of clarity about how a model works, how its weights were trained, and how secure it is.

Justin Boitano, vice president of enterprise and edge computing at NVIDIA, explained at a recent press briefing: "Open models are the starting point for people trying to develop their own specialized knowledge in a domain. You ultimately want to own the intellectual property. Not everyone wants to export their data, then import or pay for the information they consume. Many sovereign countries and many companies in regulated sectors want to maintain the confidentiality and security of all this data."

John Morello, CTO and co-founder of Gutsy (now Minimus), told VentureBeat that "the open source nature of Google’s BERT language model allows Gutsy to customize and train its model for specific security use cases while maintaining privacy and efficiency." Morello noted that practitioners cite "more transparency and better guarantees of data confidentiality, as well as high availability of expertise and more integration options in their architectures, as the main reasons to opt for open source."

Shifting the balance of power against adversarial AI

DJ Sampath, senior vice president of Cisco’s AI software and platforms group, articulated the industry-wide imperative for open source security models during a recent interview with VentureBeat: "The reality is that attackers also have access to open source models. The goal is to equip as many defenders as possible with robust models to enhance security."

Sampath explained that when Cisco released Foundation-Sec-8B, its open source security model, at RSAC 2025, it was motivated by a sense of responsibility: "Funding for open source projects has stalled and there is a growing need for sustainable funding sources within the community. It is the responsibility of companies to provide these models while allowing communities to engage with AI from a defensive perspective."

The commitment to transparency extends to the most sensitive aspects of AI development. When concerns arose about DeepSeek R1 training data and its potential compromise, NVIDIA responded decisively.

As Boitano explained to VentureBeat, "Government agencies were very concerned. They wanted the reasoning capabilities of DeepSeek, but they were obviously a little concerned about what could be trained in the DeepSeek model, which actually inspired us to completely open-source everything in the Nemotron models, including the reasoning datasets."

For practitioners managing open source security at scale, this transparency is at the heart of their business. Itamar Sher, CEO of Seal Security, told VentureBeat that "open source models provide transparency," though he noted that "managing their cycles and compliance remains an important concern." Sher’s company uses generative AI to automate the remediation of vulnerabilities in open source software, and as a recognized CVE Numbering Authority (CNA), Seal can identify, document, and assign vulnerabilities, improving security across the entire ecosystem.

A key objective of the partnership: bringing intelligence to the edge

"Bringing intelligence closer to where the data is and where decisions are made will be a major step forward for security operations teams across the industry." » underlined Boitano. This edge deployment capability is particularly critical for government agencies with fragmented and often legacy IT environments.

VentureBeat asked Boitano how initial discussions went with the government agencies briefed on the partnership and its design goals before work began. "The feeling among the agencies we spoke with is that they still feel, unfortunately, that they are behind in adopting these technologies," Boitano explained. "The answer was: do whatever you can to help us secure endpoints. It has been a tedious and time-consuming process to bring open models onto these high-side networks."

NVIDIA and CrowdStrike did the groundwork, including STIG hardening, FIPS-validated encryption, and air-gap compatibility, removing the barriers that have delayed open-model adoption on high-side networks. The NVIDIA AI Factory for Government reference design provides comprehensive guidance for deploying AI agents in federal and other high-assurance organizations while meeting the most stringent security requirements.
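As a rough illustration of what such hardening implies operationally, here is a sketch of pre-flight checks a deployment script might run in an air-gapped enclave: confirming the Linux kernel is in FIPS mode and that outbound connectivity is actually blocked. These checks are assumptions for illustration, not part of the AI Factory for Government reference design.

```python
import socket

def fips_mode_enabled() -> bool:
    """On Linux, the kernel exposes FIPS status at this path."""
    try:
        with open("/proc/sys/crypto/fips_enabled") as f:
            return f.read().strip() == "1"
    except FileNotFoundError:
        return False

def outbound_blocked(host: str = "example.com", port: int = 443) -> bool:
    """An air-gapped enclave should fail to reach external hosts."""
    try:
        socket.create_connection((host, port), timeout=2).close()
        return False  # connection succeeded: not actually air-gapped
    except OSError:
        return True

if __name__ == "__main__":
    print(f"FIPS mode:        {fips_mode_enabled()}")
    print(f"Outbound blocked: {outbound_blocked()}")
```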

As Boitano explains, the urgency is existential: "Having an AI defense running in your domain that can look for and detect these anomalies, then alert and respond much more quickly, is just the natural consequence. At this point, it’s the only way to protect against the speed of AI."
