The advent of large generative models introduces new challenges to implementing Responsible AI practices because of their potentially open-ended output capabilities and their many potential downstream uses. In addition to the AI Principles, Google has published a Generative AI Prohibited Use Policy and a Generative AI Toolkit for Developers.
Google also offers guidance about generative AI models on topics such as safety, fairness, prompt engineering, and adversarial testing.
Assessing AI technologies for fairness, accountability, safety, and privacy is key to building AI responsibly. These checks should be incorporated into every stage of the product lifecycle to ensure the development of safe, equitable, and reliable products for everyone.