
Verosek LLM Guard

Enterprise-grade AI safety and security through real-time content filtering APIs. Protect your LLM applications with dual-layer input and output validation.


Input Guard

POST /v1/guard/input

Validates user input before sending to your LLM to prevent prompt injection, jailbreaks, and malicious content.


Output Guard

POST /v1/guard/output

Validates LLM responses before delivery to users to prevent data leakage and harmful content generation.

Integration Patterns


Sequential Processing

Maximum security with step-by-step validation



Parallel Processing

Lower latency with conditional response release

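A minimal sketch of the parallel pattern, assuming it means starting the input guard and the LLM call concurrently and releasing the response only after both the input and output checks allow it; it reuses the same hypothetical helpers (validateInput, validateOutput, callYourLLM) as the Integration Guide below.

javascript
async function processUserRequestParallel(userInput, transactionId) {
  // Start the input guard and the LLM call together to cut latency.
  const [inputResult, llmResponse] = await Promise.all([
    validateInput(userInput, transactionId), // POST /v1/guard/input
    callYourLLM(userInput),                  // your model call
  ]);

  // Conditional release: never return the response if the input was blocked.
  if (inputResult.decision === 'block') {
    return { error: 'Input blocked by safety filters' };
  }

  // Validate the generated response before delivering it to the user.
  const outputResult = await validateOutput(llmResponse, transactionId); // POST /v1/guard/output
  if (outputResult.decision === 'block') {
    return { error: 'Response blocked by safety filters' };
  }

  return { response: llmResponse };
}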

API Reference


Input Guard API

POST /v1/guard/input

Request Parameters

user_input
string (required)

The user input text to validate

transaction_id
string (optional)

Unique identifier for tracking

Example Request

bash
curl -X POST "https://api.verosek.com/v1/guard/input" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-api-key-here" \
  -d '{
    "user_input": "Tell me how to optimize database queries",
    "transaction_id": "txn_123456789"
  }'

Response Format

Allowed Response
json
{
  "decision": "allow",
  "trace": [
    {
      "stage": "semantic_similarity_detector",
      "result": "allow",
      "reason": "No jailbreak patterns detected"
    },
    {
      "stage": "prompt_injection_classifier", 
      "result": "allow",
      "reason": "No injection detected"
    }
    // ... additional validation stages
  ],
  "transaction_id": "txn_123456789",
  "processing_time_ms": 156.2
}
Blocked Response
json
{
  "decision": "block",
  "trace": [
    {
      "stage": "semantic_similarity_detector",
      "result": "allow",
      "reason": "No jailbreak patterns detected"
    },
    {
      "stage": "prompt_injection_classifier",
      "result": "block",
      "reason": "Injection detected (score: 0.95)"
    }
    // ... processing stops at first block
  ],
  "transaction_id": "txn_123456789",
  "processing_time_ms": 234.7
}
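
Whichever guard you call, a client can branch on the decision field and, for blocked requests, surface the blocking stage from the trace array. The helper below is a hypothetical example built only on the fields shown above; it is not part of the API.

javascript
function summarizeGuardResult(result) {
  if (result.decision === 'allow') {
    return { allowed: true, transactionId: result.transaction_id };
  }
  // Processing stops at the first block, so find the blocking entry in the trace.
  const blockingStage = result.trace.find((entry) => entry.result === 'block');
  return {
    allowed: false,
    stage: blockingStage?.stage,
    reason: blockingStage?.reason,
    transactionId: result.transaction_id,
  };
}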

Output Guard API

POST /v1/guard/output

Request Parameters

model_output
string (required)

The LLM response text to validate

transaction_id
string (optional)

Unique identifier for tracking

Example Request

bash
curl -X POST "https://api.verosek.com/v1/guard/output" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-api-key-here" \
  -d '{
    "model_output": "Here are some database optimization techniques...",
    "transaction_id": "txn_123456789"
  }'

Response Format

Allowed Response
json
{
  "decision": "allow",
  "trace": [
    {
      "stage": "toxicity_detector",
      "result": "allow",
      "reason": "No toxicity detected (score: 0.02)"
    },
    {
      "stage": "sensitive_info_detector",
      "result": "allow", 
      "reason": "No PII detected"
    }
    // ... additional validation stages
  ],
  "transaction_id": "txn_123456789"
}
Blocked Response
json
{
  "decision": "block",
  "trace": [
    {
      "stage": "toxicity_detector",
      "result": "allow",
      "reason": "No toxicity detected (score: 0.02)"
    },
    {
      "stage": "sensitive_info_detector",
      "result": "block",
      "reason": "PII detected (risk: 0.9)"
    }
    // ... processing stops at first block
  ],
  "transaction_id": "txn_123456789"
}

Integration Guide

Sequential processing validates user input with the Input Guard before it reaches your LLM, then validates the model's response with the Output Guard before it is returned to the user, ensuring maximum security.

javascript
async function processUserRequest(userInput, transactionId) {
  try {
    // Step 1: Validate input
    const inputResult = await validateInput(userInput, transactionId);
    if (inputResult.decision === 'block') {
      return { error: 'Input blocked by safety filters' };
    }

    // Step 2: Process with LLM (only if input is safe)
    const llmResponse = await callYourLLM(userInput);

    // Step 3: Validate output
    const outputResult = await validateOutput(llmResponse, transactionId);
    if (outputResult.decision === 'block') {
      return { error: 'Response blocked by safety filters' };
    }

    return { response: llmResponse };
  } catch (error) {
    console.error('Processing error:', error);
    return { error: 'Internal server error' };
  }
}
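
The validateInput, validateOutput, and callYourLLM helpers above are left to your application. As a minimal sketch, the two guard wrappers could call the documented endpoints with fetch; the base URL matches the examples above, and the API key value is a placeholder.

javascript
const VEROSEK_BASE_URL = 'https://api.verosek.com';
const VEROSEK_API_KEY = 'your-api-key-here'; // placeholder

async function callGuard(path, body) {
  const response = await fetch(`${VEROSEK_BASE_URL}${path}`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': VEROSEK_API_KEY,
    },
    body: JSON.stringify(body),
  });
  if (!response.ok) {
    throw new Error(`Guard request failed with status ${response.status}`);
  }
  return response.json(); // { decision, trace, transaction_id, ... }
}

function validateInput(userInput, transactionId) {
  return callGuard('/v1/guard/input', {
    user_input: userInput,
    transaction_id: transactionId,
  });
}

function validateOutput(modelOutput, transactionId) {
  return callGuard('/v1/guard/output', {
    model_output: modelOutput,
    transaction_id: transactionId,
  });
}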

Key Features


Real-time Processing

Sub-400ms response times with concurrent pipeline execution


Dual Protection

Input validation before LLM processing and output filtering before user delivery


Enterprise Security

API key authentication with organization-level isolation


Comprehensive Detection

Multi-stage pipeline covering prompt injection, jailbreaks, PII, and toxicity


Complete Audit Trail

Full transaction logging with detailed trace information


Scalable Architecture

Built for high-throughput production environments

DLP Solution

LeakGuard - AI Data Loss Prevention

Enterprise-grade DLP solution for monitoring and preventing sensitive data leaks to AI platforms in real time. Protect your organization from inadvertent data exposure across ChatGPT, Claude, Gemini, and other AI services.


Real-time Detection

Monitors content before submission to AI platforms, detecting API keys, credentials, PII, and confidential data


Smart Warnings

Context-aware alerts with severity levels, justification requirements, and configurable actions


Comprehensive Analytics

Organization-wide insights, employee statistics, risk scoring, and compliance reporting

Supported AI Platforms

ChatGPT, Claude, Gemini, Bard, Perplexity, Copilot, Poe, You.com

Detection Patterns


API Keys

AWS, Azure, OpenAI, and custom API tokens


Credentials

Passwords, SSH keys, authentication tokens


PII

SSN, phone numbers, email addresses, names


Financial

Credit cards, bank accounts, tax IDs

πŸ₯

Healthcare

Medical records, diagnoses, prescriptions


Confidential

Trade secrets, internal documents
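
For a sense of what these categories cover, the sketch below shows the kinds of patterns a detector in this space might match. It is purely illustrative; these are generic examples, not Verosek's actual detection rules.

javascript
// Illustrative only: simple patterns for a few of the categories above.
const EXAMPLE_PATTERNS = {
  awsAccessKeyId: /\bAKIA[0-9A-Z]{16}\b/,     // AWS access key ID format
  genericApiToken: /\bsk-[A-Za-z0-9]{20,}\b/, // "sk-" style API token
  usSsn: /\b\d{3}-\d{2}-\d{4}\b/,             // US Social Security number
  email: /\b[\w.+-]+@[\w-]+\.[\w.-]+\b/,      // email address
  creditCard: /\b(?:\d[ -]?){13,16}\b/,       // loose credit card number match
};

function findSensitiveCategories(text) {
  return Object.entries(EXAMPLE_PATTERNS)
    .filter(([, pattern]) => pattern.test(text))
    .map(([category]) => category);
}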

System Architecture


Enterprise Compliance

GDPR

EU data protection compliance

HIPAA

Healthcare data privacy

SOC 2

Security & availability controls

ISO 27001

Information security management

CCPA

California consumer privacy

SOX

Financial data protection

Ready to Secure Your AI?

Get started with Verosek LLM Guard for your APIs or LeakGuard DLP for your enterprise. Contact our team for access, technical consultation, and custom integration support.