Contrast Security open sources AI policy to protect privacy and security

September 12, 2023

Contrast Security, a provider of application security testing, has open sourced an AI policy designed to help organizations manage privacy and security risks when using Generative AI and Large Language Models (LLMs).

The policy addresses several key concerns, such as avoiding situations where the ownership and intellectual property (IP) rights of software could be disputed later on.

It also guards against the creation or use of AI-generated code that may include harmful elements, and prohibits employees from using public AI systems to learn from the organization’s or third-party proprietary data.

Additionally, it prevents unauthorized individuals, or those without sufficient privileges, from accessing sensitive or confidential data.

The policy is open-source and available for anyone to use or adapt. It is designed as a foundation for CISOs, security experts, compliance teams, and risk professionals who are either new to this field or require a readily available policy framework for their organizations.

The sources for this piece include an article in SD Times.

Jim Love

Jim is an author and podcast host with over 40 years in technology.
