Hush is a text classification API that detects harmful and toxic content in any text. Integrate it into any platform to automatically filter out unwanted content at scale.
Classifications are returned in milliseconds, making Hush suitable for real-time content pipelines.
Goes beyond keyword matching to understand the intent and tone behind any piece of text.
A single REST API call is all it takes. Works with any language or framework that can send an HTTP request.
Type any text into the box and see Hush classify it in real time. Paste a comment, message, or any string to test the API response.
Everything you need to integrate Hush into your platform. Our REST API is simple, predictable, and returns classifications in milliseconds.
Requests are limited to 5 per minute per IP address. Exceeding this limit returns a 429 Too Many Requests response.
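Under the 5-requests-per-minute limit, clients should treat a 429 as a signal to back off and retry rather than as a failure. A minimal sketch of that pattern, with the transport injected as a callable so it can be exercised without a live endpoint (the helper name and retry parameters are illustrative, not part of the Hush API):

```python
import time

def post_with_backoff(call, max_retries=3, base_delay=1.0):
    """Retry a request when it signals a 429 rate limit.

    `call` returns (status_code, body); in real code it would wrap
    requests.post(...) against the predict endpoint.
    """
    for attempt in range(max_retries + 1):
        status, body = call()
        if status != 429:
            return status, body
        # Exponential backoff between retries: 1s, 2s, 4s, ...
        time.sleep(base_delay * (2 ** attempt))
    return status, body

# Demo with a stub that rate-limits the first two calls.
responses = iter([(429, None), (429, None), (200, {"is_toxic": False})])
status, body = post_with_backoff(lambda: next(responses), base_delay=0.0)
print(status, body)  # 200 {'is_toxic': False}
```

In production, `call` would wrap the real HTTP request, and `base_delay` should stay at a second or more so retries fit within the per-minute window.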
/models/hush-preview:03-2026/v1/predict
Analyzes a piece of text for toxicity. Returns a boolean `is_toxic` field indicating whether the text was classified as toxic or safe.
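The request and response bodies are small JSON objects. A representative exchange, inferred from the code samples below (the `is_toxic` field is shown there; any additional response fields are not documented here):

```text
POST /models/hush-preview:03-2026/v1/predict
Content-Type: application/json

{"text": "This community is amazing!"}

HTTP 200 OK
{"is_toxic": false}
```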
/health
Check whether the Hush API is online and running.
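One common use of /health is as a gate: only route predict traffic once the API reports healthy. A minimal sketch, assuming a plain GET that returns 200 when the service is up (the getter is injected so the check can be exercised without a live endpoint; in production pass `requests.get`):

```python
HEALTH_URL = "https://api.openproject.co.zw/health"

def api_is_up(get, url=HEALTH_URL):
    """Return True if the Hush /health endpoint responds with HTTP 200."""
    try:
        return get(url).status_code == 200
    except Exception:
        # Connection errors, timeouts, etc. count as "down".
        return False

# Demo with a stub standing in for requests.get.
class _FakeResponse:
    status_code = 200

up = api_is_up(lambda url: _FakeResponse())
print(up)  # True
```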
import requests

url = "https://api.openproject.co.zw/models/hush-preview:03-2026/v1/predict"
data = {"text": "This community is amazing!"}

response = requests.post(url, json=data)
result = response.json()

if result["is_toxic"]:
    print("Message flagged.")
else:
    print("Message is safe to send!")
const response = await fetch(
  "https://api.openproject.co.zw/models/hush-preview:03-2026/v1/predict",
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: "This community is amazing!" })
  }
);
const result = await response.json();

if (result.is_toxic) {
  console.log("Message flagged.");
} else {
  console.log("Message is safe to send!");
}
curl -X POST https://api.openproject.co.zw/models/hush-preview:03-2026/v1/predict \
  -H "Content-Type: application/json" \
  -d '{"text": "This community is amazing!"}'