Toxicity Language

Toxic content includes hate speech, harassment, threats, and other forms of harmful or inappropriate language. The Toxicity Language Validator identifies toxic text generated by LLMs. If the scanner detects toxic text with a score above the specified threshold, it removes the toxic sentences from the LLM output.
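
The remove-above-threshold behavior described above can be sketched in a few lines of Python. This is a minimal illustration, not the validator's actual implementation: `score_toxicity` is a keyword-based stand-in for whatever toxicity model the scanner uses, and the names `remove_toxic_sentences` and `threshold` are assumptions made for the example.

```python
import re


def score_toxicity(sentence: str) -> float:
    """Stand-in toxicity scorer returning a score in [0, 1].

    A real validator would call a trained toxicity classifier or a
    hosted moderation endpoint here; the keyword check below exists
    only to keep the example self-contained.
    """
    flagged = {"hate", "idiot", "stupid"}  # illustrative keywords only
    words = set(re.findall(r"[a-z']+", sentence.lower()))
    return 1.0 if words & flagged else 0.0


def remove_toxic_sentences(llm_output: str, threshold: float = 0.5) -> str:
    """Split the LLM output into sentences and drop any sentence
    whose toxicity score exceeds the threshold."""
    sentences = re.split(r"(?<=[.!?])\s+", llm_output.strip())
    kept = [s for s in sentences if score_toxicity(s) <= threshold]
    return " ".join(kept)


if __name__ == "__main__":
    text = "Here is a helpful answer. You are an idiot. Have a nice day."
    print(remove_toxic_sentences(text, threshold=0.5))
    # -> "Here is a helpful answer. Have a nice day."
```

Filtering at the sentence level, rather than rejecting the whole response, lets the validator preserve the non-toxic remainder of the output.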


