IDENTIF.Ai Text Detection API
Zero‑shot AI text detection via lightweight perplexity analysis. This service exposes HTTP endpoints for single and batch detection, health, and Prometheus metrics.
Base URL: https://api.identif.ai/
- Auth: `Authorization: Bearer <key>` or `X-API-Key: <key>`
- Model: `gpt2` (configurable)
- Content type: `application/json`
Quickstart
cURL
curl -s -X POST "https://<host>/api/v1/detect" \
-H "Authorization: Bearer <API_KEY>" \
-H "Content-Type: application/json" \
-d '{
"text": "This is a sample paragraph...",
"threshold": 0.9
}'
JavaScript (fetch)
const res = await fetch("https://<host>/api/v1/detect", {
method: "POST",
headers: {
"Authorization": `Bearer ${API_KEY}`,
"Content-Type": "application/json"
},
body: JSON.stringify({ text: inputText, threshold: 0.9 })
});
const data = await res.json();
Python
import requests
r = requests.post(
"https://<host>/api/v1/detect",
headers={"Authorization": f"Bearer {API_KEY}"},
json={"text": "...", "threshold": 0.9}
)
print(r.json())
A `threshold` closer to 1.0 is stricter; the default is 0.9.
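The threshold semantics can be sketched as a small function. This is an illustrative reconstruction, not the service's actual code (the name `classify` is ours): it assumes a text is labeled AI-generated when its score falls below the threshold, consistent with the response examples later in this document, where lower scores mean "more likely AI".

```python
def classify(score: float, threshold: float = 0.9) -> str:
    """Map a detection score to a verdict.

    Lower scores indicate AI-generated text, so a score below the
    threshold yields "ai_generated". Moving the threshold toward 1.0
    causes more texts to be flagged as AI.
    """
    return "ai_generated" if score < threshold else "human_written"

print(classify(0.37))  # score well below the default 0.9 threshold
print(classify(0.95))  # score above the threshold
```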
Authentication
Provide a static API key using either header. If both are present, Authorization takes precedence.
| Header | Example | Notes |
|---|---|---|
| Authorization | Bearer dev-key-12345 | Preferred |
| X-API-Key | dev-key-12345 | Alternative |
Default/dev key is dev-key-12345 unless API_KEY env var is set.
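Both header styles can be produced with a small client-side helper; a sketch (the helper name is ours, not part of the API):

```python
def auth_headers(api_key: str, style: str = "bearer") -> dict:
    """Build authentication headers for the API.

    "bearer" uses the preferred Authorization header; any other
    style falls back to the alternative X-API-Key header. If a
    client sends both, the service honors Authorization.
    """
    if style == "bearer":
        return {"Authorization": f"Bearer {api_key}"}
    return {"X-API-Key": api_key}
```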
Rate Limits
Simple in‑memory limiter keyed by client IP.
| Setting | Env | Default |
|---|---|---|
| Max requests | RATE_LIMIT_REQUESTS | 100 |
| Window (seconds) | RATE_LIMIT_WINDOW | 60 |
Exceeding the limit returns 429 Too Many Requests.
Endpoints
/
Root info.
Response
{
"service": "IDENTIF.Ai Text Detection API",
"version": "1.0.0",
"status": "running"
}
/health
Health and readiness.
| Field | Type | Description |
|---|---|---|
| status | string | healthy \| unhealthy |
| model_loaded | boolean | True when model is in cache |
| device | string | cuda \| cpu |
| uptime_seconds | number | Process uptime |
| version | string | API version |
/metrics
Prometheus exposition format.
Content type: text/plain; version=0.0.4
/api/v1/detect
Detect whether a single text is AI‑generated.
Auth
Requires a valid API key and is subject to the rate limit.
Request Body
{
"text": "string (1..50000 chars)",
"threshold": 0.9, // optional, 0.0..1.0 (lower score = AI)
"return_probabilities": false // optional (reserved)
}
Success Response 200
{
"verdict": "ai_generated" | "human_written",
"confidence": 0.42,
"score": 0.37, // lower = more likely AI
"is_ai_generated": true,
"model_used": "IDENTIF.Ai",
"processing_time_ms": 12.8,
"metadata": {
"text_length": 1234,
"word_count": 215,
"threshold": 0.9
}
}
Errors
- 401 Missing/invalid API key
- 429 Rate limit exceeded
- 503 Model not loaded
- 500 Detection failed
/api/v1/detect/batch
Batch detection for multiple texts.
Request Body
{
"texts": ["...", "..."], // 1..100 items, non‑empty strings
"threshold": 0.9,
"return_probabilities": false
}
Success Response 200
{
"results": [
{
"verdict": "ai_generated",
"confidence": 0.31,
"score": 0.25,
"is_ai_generated": true,
"model_used": "IDENTIF.Ai",
"processing_time_ms": 10.1,
"metadata": { "text_length": 141, "word_count": 25, "threshold": 0.9 }
},
{ /* ... */ }
],
"total_items": 2,
"processing_time_ms": 21.5,
"summary": {
"ai_generated_count": 1,
"human_written_count": 1,
"error_count": 0,
"ai_percentage": 50.0
}
}
Errors
- 401 Missing/invalid API key
- 429 Rate limit exceeded
- 503 Model not loaded
- 500 Batch detection failed
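The `summary` fields can be derived from the per-item results; a sketch of the aggregation (our own reconstruction — in particular, computing `ai_percentage` over successfully processed items rather than all items is an assumption):

```python
def summarize(results: list[dict]) -> dict:
    """Aggregate batch results into the summary object shown above.

    Items carrying an "error" key count toward error_count;
    ai_percentage is computed over successfully processed items.
    """
    ai = sum(1 for r in results if r.get("verdict") == "ai_generated")
    human = sum(1 for r in results if r.get("verdict") == "human_written")
    errors = sum(1 for r in results if "error" in r)
    processed = ai + human
    return {
        "ai_generated_count": ai,
        "human_written_count": human,
        "error_count": errors,
        "ai_percentage": round(100.0 * ai / processed, 1) if processed else 0.0,
    }
```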
Schemas
TextDetectionRequest
| text | string (required, 1..50000) |
|---|---|
| threshold | number (0..1, default 0.9) |
| return_probabilities | boolean (default false) |
BatchTextDetectionRequest
| texts | array<string> (1..100, trimmed) |
|---|---|
| threshold | number (0..1, default 0.9) |
| return_probabilities | boolean (default false) |
DetectionResult
| verdict | ai_generated \| human_written |
|---|---|
| confidence | number (0..1) |
| score | number (0..1, lower = AI) |
| is_ai_generated | boolean |
| model_used | string ("IDENTIF.Ai") |
| processing_time_ms | number |
| metadata | object (length, words, threshold) |
BatchDetectionResult
| results | array<DetectionResult \| error> |
|---|---|
| total_items | integer |
| processing_time_ms | number |
| summary | object (counts & percentage) |
HealthResponse
| status | string |
|---|---|
| model_loaded | boolean |
| device | string |
| uptime_seconds | number |
| version | string |
Error Format
Errors use a consistent JSON envelope:
{
"error": "message",
"status_code": 401,
"timestamp": "2025-01-01T12:34:56.789Z"
}
| Status | When |
|---|---|
| 400 | Validation errors (e.g., empty text) |
| 401 | Missing/invalid API key |
| 429 | Rate limit exceeded |
| 500 | Unhandled server error |
| 503 | Model not loaded |
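A client can branch on the envelope's `status_code` to decide what to do next; a minimal sketch (the function name and the action labels are ours):

```python
RETRYABLE = {429, 503}  # rate limited, or model still loading

def handle_error(envelope: dict) -> str:
    """Turn the error envelope into a client action hint."""
    code = envelope.get("status_code")
    if code in RETRYABLE:
        return "retry"        # back off and try again
    if code == 401:
        return "fix-auth"     # check API key / headers
    if code == 400:
        return "fix-request"  # validation problem, e.g. empty text
    return "report"           # 500 or unexpected

print(handle_error({"error": "Rate limit exceeded", "status_code": 429,
                    "timestamp": "2025-01-01T12:34:56.789Z"}))  # retry
```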
Prometheus Metrics
| Metric | Type | Labels | Description |
|---|---|---|---|
| IDENTIF.Ai_requests_total | counter | endpoint, status | Total number of requests per endpoint (success/error) |
| IDENTIF.Ai_request_duration_seconds | histogram | endpoint | Request duration |
| IDENTIF.Ai_detections_total | counter | verdict | Count of detection outcomes |
Scrape /metrics with your Prometheus server.
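The exposition format is line-oriented and easy to inspect without a full Prometheus server; a sketch that tallies a sample scrape (the sample values below are made up for illustration):

```python
def parse_metrics(text: str) -> dict:
    """Parse 'name{labels} value' lines from Prometheus text format,
    skipping comment lines (# HELP / # TYPE)."""
    out = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, value = line.rsplit(" ", 1)
        out[name] = float(value)
    return out

sample = """\
# TYPE IDENTIF.Ai_detections_total counter
IDENTIF.Ai_detections_total{verdict="ai_generated"} 12
IDENTIF.Ai_detections_total{verdict="human_written"} 30
"""
metrics = parse_metrics(sample)
```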
Deploy & Configuration
Environment Variables
| API_KEY | Static auth key (default dev-key-12345) |
|---|---|
| MODEL_NAME | HF model id (default gpt2) |
| DEVICE | cuda \| cpu (auto if unset) |
| CORS_ORIGINS | Comma‑separated list (default *) |
| PORT | Default 5002 |
| HOST | Default 0.0.0.0 |
| RELOAD | true \| false (dev hot reload) |
| WORKERS | Uvicorn workers (default 1) |
| LOG_LEVEL | critical \| error \| warning \| info \| debug \| trace |
| RATE_LIMIT_REQUESTS | Default 100 |
| RATE_LIMIT_WINDOW | Default 60 |
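These variables can be read once at startup with the documented defaults; an illustrative sketch (the `load_config` helper is ours, not part of the service):

```python
import os

def load_config(env=os.environ) -> dict:
    """Read service configuration, falling back to the documented defaults."""
    return {
        "api_key": env.get("API_KEY", "dev-key-12345"),
        "model_name": env.get("MODEL_NAME", "gpt2"),
        "device": env.get("DEVICE"),  # None -> auto-detect cuda/cpu
        "cors_origins": env.get("CORS_ORIGINS", "*").split(","),
        "port": int(env.get("PORT", "5002")),
        "host": env.get("HOST", "0.0.0.0"),
        "workers": int(env.get("WORKERS", "1")),
        "rate_limit_requests": int(env.get("RATE_LIMIT_REQUESTS", "100")),
        "rate_limit_window": int(env.get("RATE_LIMIT_WINDOW", "60")),
    }
```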
Run Locally
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
export API_KEY=dev-key-12345
uvicorn main:app --host 0.0.0.0 --port 5002 --reload
Health & Startup
On startup the app loads the tokenizer/model and caches them globally. /health returns model_loaded=true when ready.
For potentially better detection accuracy, set MODEL_NAME to a larger language model.
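Since model loading takes time, a deploy script can wait for readiness by polling /health. A sketch that takes the fetcher as a parameter (so it can be exercised without a live server); in practice `fetch_health` would be something like `lambda: requests.get(f"{base}/health").json()`:

```python
import time

def wait_until_ready(fetch_health, attempts: int = 30, delay: float = 1.0) -> bool:
    """Poll a health-fetching callable until model_loaded is true.

    fetch_health should return the /health JSON as a dict. Connection
    errors during startup are tolerated and retried.
    """
    for _ in range(attempts):
        try:
            if fetch_health().get("model_loaded"):
                return True
        except Exception:
            pass  # server not accepting connections yet
        time.sleep(delay)
    return False
```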
Changelog
- v1.0.0 – Initial release of detection endpoints, health, and Prometheus metrics.