Classify and filter toxic content instantly

Hush is a text classification API that detects harmful and toxic content in any text. Integrate it into any platform to automatically filter out unwanted content at scale.

Low Latency

Classifications are returned in milliseconds, making it suitable for real-time content pipelines at any scale.

Context Aware

Goes beyond keyword matching to understand the intent and tone behind any piece of text.

Easy to Integrate

A single REST API call is all it takes. Works with any language or framework that can send an HTTP request.

Give it a try!

Type any text into the box and see Hush classify it in real time. Paste a comment, message, or any string to test the API response.

Real-time classification via the live API
Integrate in minutes with a single API call

API Documentation

Everything you need to integrate Hush into your platform. Our REST API is simple, predictable, and incredibly fast.

Getting Started

  • Base URL
  • Rate Limits
  • Error Codes

Endpoints

  • POST /v1/predict
  • GET /health

Rate Limits

Requests are limited to 5 per minute per IP address. Exceeding this returns a 429 response.
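With only 5 requests per minute allowed, a client that bursts past the limit should back off and retry rather than fail outright. A minimal sketch (the helper names, the exponential-backoff schedule, and the retry count are our own choices, not part of the Hush API):

```python
import time
import requests

API_URL = "https://api.openproject.co.zw/models/hush-preview:03-2026/v1/predict"

def backoff_delay(attempt, base=2.0, cap=60.0):
    """Exponential backoff: 1s, 2s, 4s, ... capped at `cap` seconds."""
    return min(base ** attempt, cap)

def classify_with_retry(text, max_retries=3):
    """POST to /v1/predict, sleeping and retrying whenever a 429 comes back."""
    for attempt in range(max_retries + 1):
        response = requests.post(API_URL, json={"text": text})
        if response.status_code != 429:
            response.raise_for_status()
            return response.json()
        time.sleep(backoff_delay(attempt))
    raise RuntimeError("Rate limit still exceeded after retries")
```

With the defaults above, a rate-limited client waits 1, 2, then 4 seconds before giving up.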

Base URL

https://api.openproject.co.zw

POST /models/hush-preview:03-2026/v1/predict

Analyze a text message for toxicity. Returns a boolean classification indicating whether the message is toxic or safe.

Request Headers
"Content-Type": "application/json"
Request Body
{
  "text": "You are doing a great job!"
}
Response 200 OK
{
  "is_toxic": false,
  "prediction": "non-toxic"
}
Error Responses
400 — Missing or empty "text" field
{ "error": "No text provided" }
429 — Rate limit exceeded (5 requests/min)
{ "error": "Rate limit exceeded" }
GET /health

Check whether the Hush API is online and running.

Response 200 OK
{
  "status": "healthy",
  "service": "hush"
}
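Before routing traffic through Hush, a deployment can probe /health at startup or on a schedule. A small Python sketch (the function names and the 5-second timeout are our own choices):

```python
import requests

HEALTH_URL = "https://api.openproject.co.zw/health"

def is_healthy(payload):
    """True when a /health body reports the hush service as healthy."""
    return payload.get("status") == "healthy" and payload.get("service") == "hush"

def check_api():
    """Fetch /health and report whether Hush is reachable and healthy."""
    try:
        response = requests.get(HEALTH_URL, timeout=5)
        return response.status_code == 200 and is_healthy(response.json())
    except requests.RequestException:
        return False
```

Treating network errors as "unhealthy" (rather than raising) keeps the check safe to call from a liveness probe.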

Integration Examples

python
import requests

url = "https://api.openproject.co.zw/models/hush-preview:03-2026/v1/predict"
data = {"text": "This community is amazing!"}

response = requests.post(url, json=data)
response.raise_for_status()  # surface 400/429 errors instead of parsing them as predictions
result = response.json()

if result["is_toxic"]:
    print("Message flagged.")
else:
    print("Message is safe to send!")
javascript
const response = await fetch(
    "https://api.openproject.co.zw/models/hush-preview:03-2026/v1/predict",
    {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ text: "This community is amazing!" })
    }
);

if (!response.ok) {
    throw new Error(`Hush returned ${response.status}`);
}

const result = await response.json();

if (result.is_toxic) {
    console.log("Message flagged.");
} else {
    console.log("Message is safe to send!");
}
curl
curl -X POST https://api.openproject.co.zw/models/hush-preview:03-2026/v1/predict \
  -H "Content-Type: application/json" \
  -d '{"text": "This community is amazing!"}'