Alphabet warns employees on risks associated with chatbots

June 16, 2023

Alphabet, Google’s parent company, has warned employees not to enter confidential information into AI chatbots, citing its long-standing policy on safeguarding information. It has also instructed developers to avoid directly using computer code generated by chatbots.

Alphabet’s warnings are motivated by concerns about the security and reliability of chatbots. Chatbots are trained on vast amounts of data and can reproduce information they absorb, creating a risk that sensitive material could leak. Chatbot-generated code, meanwhile, can include undesired or flawed suggestions, even though it can still help programmers. Google has said it intends to be transparent about the limitations of its technology.

Alphabet is not the first company to warn employees about the risks of chatbots. There have been several reports of chatbots disclosing personal information or being used to produce malicious code. These incidents have raised concerns about the security and reliability of chatbots and prompted businesses to take precautions.

The source for this piece is an article from Reuters.

Jim Love

Jim is an author and podcast host with over 40 years in technology.
