Secure AI Framework (SAIF): A Conceptual Framework for Secure AI Systems

AI is advancing rapidly, and it's important that effective risk management strategies evolve along with it. The Secure AI Framework (SAIF) is a conceptual framework for secure AI systems designed to help achieve this evolution.

As AI capabilities become increasingly integrated into products across the world, adhering to a bold and responsible framework will be even more critical.

SAIF is designed to help mitigate risks specific to AI systems, such as model theft, poisoning of training data, injection of malicious inputs through prompt injection, and extraction of confidential information from training data.

The SAIF Framework

SAIF has six core elements:

1. Expand strong security foundations to the AI ecosystem

This includes leveraging secure-by-default infrastructure protections and expertise built over the last two decades to protect AI systems, applications, and users. At the same time, develop organizational expertise to keep pace with advances in AI and start to scale and adapt infrastructure protections in the context of AI and evolving threat models. For example, injection techniques like SQL injection have existed for some time, and organizations can adapt mitigations, such as input sanitization and limiting, to help better defend against prompt injection-style attacks.
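
To make this concrete, here is a minimal sketch in Python of what adapting those classic input-handling mitigations to prompts might look like. The length cap and the suspicious-phrase patterns are illustrative assumptions, not a vetted defense; real screening would be considerably more sophisticated.

    import re

    # Illustrative, assumed values: tune the limit and patterns to your own threat model.
    MAX_PROMPT_CHARS = 4000
    SUSPICIOUS_PATTERNS = [
        re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
        re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
    ]

    def sanitize_prompt(user_input: str) -> str:
        """Apply length limiting and basic screening before a prompt reaches the model.

        Analogous to input sanitization for SQL injection: constrain and inspect
        untrusted input instead of passing it through verbatim.
        """
        # Limit input size, just as request-size limits bound other injection surfaces.
        text = user_input[:MAX_PROMPT_CHARS]

        # Reject inputs matching known prompt-injection phrasings.
        for pattern in SUSPICIOUS_PATTERNS:
            if pattern.search(text):
                raise ValueError("input rejected by prompt screening")
        return text

Pattern matching alone is easy to evade, which is why SAIF treats it as one adapted mitigation among many rather than a complete defense.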

2. Extend detection and response to bring AI into an organization's threat universe

Timeliness is critical in detecting and responding to AI-related cyber incidents, and extending threat intelligence and other capabilities to an organization improves both detection and response. For organizations, this includes monitoring the inputs and outputs of generative AI systems to detect anomalies and using threat intelligence to anticipate attacks. This effort typically requires collaboration with trust and safety, threat intelligence, and counter-abuse teams.
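
As a simple illustration of input/output monitoring, the sketch below keeps rolling statistics over request and response sizes and flags outliers. The z-score heuristic, window size, and thresholds are assumptions for illustration; production monitoring would combine many more signals.

    from collections import deque
    from statistics import mean, stdev

    class IOAnomalyMonitor:
        """Track simple statistics over generative-model traffic and flag outliers."""

        def __init__(self, window: int = 1000, threshold: float = 4.0):
            self.lengths = deque(maxlen=window)  # rolling window of observed sizes
            self.threshold = threshold           # z-score above which we alert

        def observe(self, prompt: str, response: str) -> bool:
            """Record one request/response pair; return True if it looks anomalous."""
            length = len(prompt) + len(response)
            anomalous = False
            if len(self.lengths) >= 30:  # require a baseline before alerting
                mu, sigma = mean(self.lengths), stdev(self.lengths)
                if sigma > 0 and abs(length - mu) / sigma > self.threshold:
                    anomalous = True  # hand off to threat-intel / counter-abuse teams
            self.lengths.append(length)
            return anomalous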

3. Automate defenses to keep pace with existing and new threats

The latest AI innovations can improve the scale and speed of incident response. Adversaries will likely use AI to scale their impact, so it is important to use AI and its current and emerging capabilities to stay nimble and cost-effective in protecting against them.
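
One hedged sketch of what automated defense can look like in practice: a triage function that lets automation act on high-confidence detections and escalates ambiguous ones to analysts. The score thresholds and the block_source and open_analyst_ticket hooks are hypothetical placeholders, not part of any real system.

    from dataclasses import dataclass

    @dataclass
    class SecurityEvent:
        source: str
        score: float  # e.g. output of an ML classifier, 0.0-1.0

    def block_source(source: str) -> None:  # hypothetical response hook
        print(f"quarantining {source}")

    def open_analyst_ticket(event: SecurityEvent) -> None:  # hypothetical hook
        print(f"ticket opened for {event.source} (score={event.score:.2f})")

    def auto_triage(event: SecurityEvent) -> str:
        """Route events so automation handles clear cases and humans handle the rest.

        Thresholds are illustrative; in practice they would come from the
        measured precision/recall of the scoring model.
        """
        if event.score >= 0.95:
            block_source(event.source)   # high confidence: act immediately
            return "blocked"
        if event.score >= 0.60:
            open_analyst_ticket(event)   # ambiguous: escalate to a human
            return "escalated"
        return "logged"                  # low confidence: record only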

4. Harmonize platform-level controls to ensure consistent security across the organization

Consistency across control frameworks can support AI risk mitigation and scale protections across different platforms and tools, ensuring that the best protections are available to all AI applications in a scalable and cost-efficient manner. At Google, this includes extending secure-by-default protections to AI platforms like Vertex AI and Security AI Workbench, and building controls and protections into the software development lifecycle. Capabilities that address general use cases, like Perspective API, can help the entire organization benefit from state-of-the-art protections.
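
As an example of consuming such a general-purpose capability, the sketch below calls Perspective API to score text for toxicity. The request and response shapes follow the public REST documentation, but treat the details as assumptions to verify against the current API reference, and supply your own API key.

    import requests

    PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

    def toxicity_score(text: str, api_key: str) -> float:
        """Score text with Perspective API's TOXICITY attribute (0.0 to 1.0)."""
        payload = {
            "comment": {"text": text},
            "requestedAttributes": {"TOXICITY": {}},
        }
        resp = requests.post(PERSPECTIVE_URL, params={"key": api_key}, json=payload)
        resp.raise_for_status()
        # Field names per the public API documentation.
        return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

Because the scoring lives behind a single shared service rather than inside each application, every team gets the same protections and improvements at once, which is the point of harmonizing controls at the platform level.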

5. Adapt controls to adjust mitigations and create faster feedback loops for AI deployment

Constant testing of implementations through continuous learning can ensure that detection and protection capabilities address the changing threat environment. This includes techniques like reinforcement learning based on incidents and user feedback, and involves steps such as updating training data sets, fine-tuning models to respond strategically to attacks, and allowing the software used to build models to embed further security in context (e.g., detecting anomalous behavior). Organizations can also conduct regular red team exercises to improve safety assurance for AI-powered products and capabilities.
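
A minimal sketch of one such feedback loop: recording each confirmed incident as a labeled example so it can feed the next training-data update or fine-tune. The file location, schema, and label names here are illustrative assumptions.

    import json
    from pathlib import Path

    ADVERSARIAL_SET = Path("adversarial_examples.jsonl")  # assumed location

    def record_incident(prompt: str, model_response: str, label: str) -> None:
        """Append a confirmed incident to the dataset used for the next fine-tune.

        Each red-team finding or user report becomes a labeled example, closing
        the loop between detection and model updates.
        """
        example = {"prompt": prompt, "response": model_response, "label": label}
        with ADVERSARIAL_SET.open("a", encoding="utf-8") as f:
            f.write(json.dumps(example) + "\n")

    # e.g. after triaging a confirmed prompt-injection attempt:
    record_incident("Ignore previous instructions...", "(blocked)", "prompt_injection")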

6. Contextualize AI system risks in surrounding business processes

Lastly, conducting end-to-end risk assessments of how organizations will deploy AI can help inform decisions. This includes assessing the end-to-end business risk, such as data lineage, validation, and operational behavior monitoring for certain types of applications. In addition, organizations should construct automated checks to validate AI performance.
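
For those automated checks, here is a hedged sketch of a deployment gate: evaluate the model against a held-out golden set and block the release if accuracy falls below a policy threshold. The predict() interface, the golden-set schema, and the 0.97 bar are all assumptions for illustration.

    def validate_release(model, golden_set, min_accuracy: float = 0.97) -> None:
        """Gate deployment on an automated performance check against a golden set.

        `model` is assumed to expose a predict(input) method, and `golden_set`
        is assumed to be a non-empty list of {"input": ..., "expected": ...} dicts.
        """
        correct = sum(
            1 for example in golden_set
            if model.predict(example["input"]) == example["expected"]
        )
        accuracy = correct / len(golden_set)
        if accuracy < min_accuracy:
            raise RuntimeError(
                f"accuracy {accuracy:.3f} below release bar {min_accuracy}; blocking deploy"
            )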

Additional Resources

A practitioner's guide to implementing SAIF. This guide provides high-level practical considerations for how organizations can build the SAIF approach into their existing or new AI adoptions.

Why Red Teams Play a Central Role in Helping Organizations Secure AI Systems is an in-depth report exploring one critical capability deployed in support of SAIF: red teaming. The report covers three important areas:

  1. What red teaming is and why it is important
  2. What types of attacks red teams simulate
  3. Lessons we have learned that we can share with others