Empowering defenders through our Cybersecurity Grant Program

OpenAI News
Empowering defenders through our Cybersecurity Grant Program

We’re sharing more about the work we have sponsored in the last year under our Cybersecurity Grant Program⁠.

In 2023, we launched the Cybersecurity Grant Program with a bold vision: to equip cyber defenders with the most advanced AI models and to empower groundbreaking research at the nexus of cybersecurity and artificial intelligence. The enthusiastic response from the community has exceeded our expectations—we’ve received over 600 applications—underscoring the critical need for and impact of meaningful discourse and research dialogue between OpenAI and the cybersecurity community.

## Selected projects

Since its inception, the program has supported a diverse array of projects. We are excited to highlight a few of them.

_Wagner Lab from UC Berkeley_

Professor David Wagner’s security research lab at UC Berkeley is pioneering techniques aimed at defending against prompt-injection attacks in large language models (LLMs). The group is working with OpenAI to enhance the trustworthiness of these models and protect them against cybersecurity threats.

Albert Heinle, co-founder and CTO at Coguard⁠(opens in a new window), uses AI to reduce software misconfiguration, a common cause of security incidents. Software configuration is complex, which is compounded when connecting software to networks and clusters. Current software solutions rely on outdated rules-based policies. AI can help automate the detection of misconfigurations and keep them updated.

Mithril has developed a proof-of-concept to fortify inference infrastructure for LLMs, including open-source tools to deploy AI models on GPUs with secure enclaves based on Trusted Platform Modules (TPMs). This project aims to demonstrate that data can be sent to AI providers without any data exposure, even to administrators. Their work is available publicly on GitHub⁠(opens in a new window), and as a whitepaper detailing their architecture⁠(opens in a new window).

_Gabriel Bernadett-Shapiro_

An individual grantee, Gabriel Bernadett-Shapiro, created the AI OSINT workshop and AI Security Starter Kit, offering technical training on the basics of LLMs and free tools for students, journalists, investigators and information-security professionals. In particular, Gabriel has emphasized affiliated training for international atrocity crime investigators and intelligence studies students at Johns Hopkins University to help ensure they have the best tools to leverage AI in both critical and challenging environments.

_Breuer Lab at Dartmouth_

Neural networks are vulnerable to attacks where adversaries reconstruct private training data by interacting with the model. Defending against these attacks typically requires costly tradeoffs in terms of model accuracy and training time. Professor Adam Breuer’s⁠(opens in a new window) Lab at Dartmouth is developing new defense techniques that prevent these attacks without compromising accuracy or efficiency.

_Security Lab Boston University (SeclaBU)_

Identifying and reasoning about code vulnerabilities is an important and active area of research. Ph.D candidate Saad Ullah, Professor Gianluca Stringhini from SeclaBU⁠(opens in a new window) and Professor Ayse Coskun from Peac Lab⁠(opens in a new window) at Boston University are working to improve the ability of LLMs to detect and fix vulnerabilities in code. This research could enable cyber defenders to catch and prevent code exploits before they are used maliciously.

_CY-PHY Security Lab from the University of Santa Cruz (UCSC)_

Professor Alvaro Cardenas⁠(opens in a new window)’ Research Group from UCSC is exploring how we can use foundation models to design agents that respond autonomously to computer network intruders, otherwise known as autonomous cyber defense agents. The project intends to compare the advantages and disadvantages of foundation models with their counterparts trained using reinforcement learning (RL) and, subsequently, how they can work together to improve network security and the triage of threat information.

_MIT Computer Science Artificial Intelligence Laboratory (MIT CSAIL)_

Stephen Moskal, Erik Hemberg and Una-May O’Reilly from MIT Computer Science Artificial Intelligence Laboratory⁠(opens in a new window) are exploring how to automate the decision process and perform actionable responses using prompt engineering approaches in a plan-act-report loop for red-teaming. Additionally, the group is exploring LLM-Agent capabilities in Capture-the-Flag (CTF) challenges - exercises aimed at discovering vulnerabilities in a controlled environment.

## Empowering defenders with ChatGPT

ChatGPT has emerged as one of the most popular and frequently used tools by cybersecurity professionals. Among the most common uses for cyber defenders include translating and rephrasing technical jargon or log events into simpler language, writing code to analyze artifacts during investigations, creating log parsers, and summarizing an incident status within strict time constraints.

To amplify its benefits, we've granted free access to ChatGPT Plus to many in the cybersecurity community, seeing this as a key opportunity to enhance AI adoption in cyber defense.

We will continue offering free ChatGPT Plus accounts and are extending this initiative to provide ChatGPT Team and Enterprise. Our expansion begins with our partners at the Research and Education Network for Uganda (RENU)⁠(opens in a new window).

If you share our vision for a secure and innovative AI-driven future, we invite you to submit your proposals and join us in our aim towards enhancing defensive cybersecurity technologies.

Submit your proposal here⁠

Our Research * Research Index * Research Overview * Research Residency * OpenAI for Science * Economic Research

Latest Advancements * GPT-5.3 Instant * GPT-5.3-Codex * GPT-5 * Codex

Safety * Safety Approach * Security & Privacy * Trust & Transparency

ChatGPT * Explore ChatGPT(opens in a new window) * Business * Enterprise * Education * Pricing(opens in a new window) * Download(opens in a new window)

Sora * Sora Overview * Features * Pricing * Sora log in(opens in a new window)

API Platform * Platform Overview * Pricing * API log in(opens in a new window) * Documentation(opens in a new window) * Developer Forum(opens in a new window)

For Business * Business Overview * Solutions * Contact Sales

Company * About Us * Our Charter * Foundation * Careers * Brand

Support * Help Center(opens in a new window)

More * News * Stories * Livestreams * Podcast * RSS

Terms & Policies * Terms of Use * Privacy Policy * Other Policies

(opens in a new window)(opens in a new window)(opens in a new window)(opens in a new window)(opens in a new window)(opens in a new window)(opens in a new window)

OpenAI © 2015–2026 Manage Cookies

English United States

Originally published on OpenAI News.