MLCommons releases a new benchmark for measuring LLM safety – it works by supplying an LLM with over 24,000 prompts covering harmful content, created for safety-evaluation purposes