Contrast Security open sources AI policy to protect privacy and security

Contrast Security, a provider of application security testing, has open sourced an AI policy designed to help organizations manage privacy and security risks when using Generative AI and Large Language Models (LLMs).

The policy addresses several key concerns, such as avoiding situations where the ownership and intellectual property (IP) rights of software could be disputed later on.

It also guards against the creation or use of AI-generated code that may include harmful elements, and prohibits employees from allowing public AI systems to learn from the organization's or third parties' proprietary data.

Additionally, it prevents unauthorized or underprivileged individuals from accessing sensitive or confidential data.

The policy is open-source and available for anyone to use or adapt. It is designed as a foundation for CISOs, security experts, compliance teams, and risk professionals who are either new to this field or require a readily available policy framework for their organizations.

The sources for this piece include an article in SD Times.


Jim Love

Jim is an author and podcast host with over 40 years in technology.