This year’s call has three primary areas of interest where frontier AI is central to the research:
• Novel Applications of AI for Privacy, Safety, and Security — Transforming existing protections or envisioning new ones with AI. Topics might include:
o Vulnerability analysis and fuzzing
o Network monitoring
o Preventing scams and fraud
o Detection and response
o System hardening
o Compliance
• Improving AI Tooling and Benchmarks for Privacy, Safety, and Security — Novel research into tooling, agentic capabilities, and benchmarks that demonstrate fundamental advancements in AI capabilities. Topics might include:
o Architecture designs that integrate cybersecurity tools and improve reasoning (e.g., MCP, A2A).
o Privacy, safety, and security benchmarks to measure progress on critical capabilities.
• Mitigating Adversarial Usage of AI for Harming Privacy, Safety, and Security — Ensuring that AI models benefit defenders and not attackers. Topics might include:
o Investigations of AI being used in real-world attacks, such as scams, deepfakes, hacking, and more.
o Investigations of potential harmful applications of AI.
o Investigations of how to improve AI safeguards, such as watermarking, alignment, and more.
Submissions that are relevant to privacy, safety, or security but do not focus on AI should instead be submitted to our Trust, Safety, Security, and Privacy Research open call.
Further details can be found here: https://research.google/programs-and-events/ai-for-privacy-safety-and-security/
Currently accepting applications. Apply Now.