April 27, 2023
Nvidia has launched NeMo Guardrails, open-source software designed to keep large language models (LLMs) safe and accurate. NeMo Guardrails helps prevent LLMs from providing incorrect information, going off topic, or connecting to unsafe external applications.
The software was developed to address concerns about the erratic behavior of AI models and can be used with any LLM, helping enterprise application developers create new rules quickly. It works by intercepting queries before the chatbot can respond with incorrect information and, where necessary, forcing the model to answer with “I don’t know.”
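As a rough sketch of how that interception looks in practice, NeMo Guardrails ships as a Python library that wraps the underlying LLM: the application sends user messages to the rails runtime rather than to the model directly, and the runtime decides whether to pass the query through or deflect it. The config directory, model settings, and example message below are illustrative assumptions, not details from the article.

```python
# Minimal sketch using the nemoguardrails package (pip install nemoguardrails).
# The "./config" directory is a hypothetical example; in a real application it
# would hold the YAML model settings and the Colang rail definitions.
from nemoguardrails import LLMRails, RailsConfig

# Load the guardrails configuration (model choice plus rail definitions).
config = RailsConfig.from_path("./config")

# Wrap the LLM with the rails runtime; user messages now pass through the rails first.
rails = LLMRails(config)

# The runtime applies the configured rails before producing a reply,
# deflecting or refusing when a rail is triggered.
response = rails.generate(messages=[
    {"role": "user", "content": "Tell me something outside your scope."}
])
print(response["content"])
```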
Nvidia says NeMo Guardrails can be used by any software developer, regardless of machine learning or data science experience, and the company plans to keep developing and improving the software.
NeMo Guardrails lets AI developers set up three types of boundaries for AI models: Topical, Safety, and Security Guardrails. Topical guardrails keep an AI application from straying into subjects that are unnecessary or undesirable for its intended use. Safety guardrails ensure the application responds with accurate and appropriate information, for example by filtering out inappropriate language and enforcing citations of credible sources. Security guardrails restrict the application to connecting only with external applications known to be safe. NeMo Guardrails is available on GitHub, and Nvidia will support it through the Nvidia AI Enterprise platform and the Nvidia AI Foundations cloud services.
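For illustration, a topical guardrail is typically expressed in NeMo Guardrails’ Colang configuration language, which pairs example user utterances with the bot behavior that should follow. The sketch below, with a made-up topic and wording, shows roughly what such a rail might look like when loaded through the Python API; the flow names, responses, and model settings are assumptions for the example.

```python
# Illustrative only: a hypothetical topical rail that steers the bot away from
# politics, written in Colang and loaded with RailsConfig.from_content.
from nemoguardrails import LLMRails, RailsConfig

colang_content = """
define user ask about politics
  "What do you think about the president?"
  "Which party should I vote for?"

define bot refuse politics
  "I'm a support assistant, so I'd rather not discuss politics."

define flow politics
  user ask about politics
  bot refuse politics
"""

# Model settings are an assumption for this sketch; any supported engine/model works.
yaml_content = """
models:
  - type: main
    engine: openai
    model: text-davinci-003
"""

config = RailsConfig.from_content(colang_content=colang_content, yaml_content=yaml_content)
rails = LLMRails(config)
```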
The sources for this piece include articles in The Register and ZDNET.
