Verosek LLM Guard
Enterprise-grade AI safety and security through real-time content filtering APIs. Protect your LLM applications with dual-layer input and output validation.
Input Guard
POST /v1/guard/input

Validates user input before sending to your LLM to prevent prompt injection, jailbreaks, and malicious content.
Output Guard
POST /v1/guard/output

Validates LLM responses before delivery to users to prevent data leakage and harmful content generation.
Integration Patterns
Sequential Processing
Maximum security with step-by-step validation
Parallel Processing
Lower latency with conditional response release
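As an illustration, the parallel pattern can be sketched as follows. This is a sketch, not an official client: `validateInput`, `validateOutput`, and `callYourLLM` are hypothetical helpers injected by the caller (the guard helpers would wrap the API calls documented below).

```javascript
// Sketch of the parallel pattern: start input validation and the LLM call
// concurrently, then release the response only if both guards allow it.
// The guard/LLM helpers are injected so the flow itself stays testable.
async function processRequestParallel(userInput, transactionId, deps) {
  const { validateInput, validateOutput, callYourLLM } = deps;

  // Run both at once; the LLM call is speculative, and its result is
  // discarded if the input guard blocks.
  const [inputResult, llmResponse] = await Promise.all([
    validateInput(userInput, transactionId),
    callYourLLM(userInput),
  ]);

  if (inputResult.decision === 'block') {
    return { error: 'Input blocked by safety filters' };
  }

  const outputResult = await validateOutput(llmResponse, transactionId);
  if (outputResult.decision === 'block') {
    return { error: 'Response blocked by safety filters' };
  }

  return { response: llmResponse };
}
```

The trade-off versus the sequential pattern: one round-trip of latency is saved because the LLM call does not wait for the input guard, at the cost of occasionally paying for an LLM call whose result is discarded.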
API Reference
Input Guard API
POST /v1/guard/input

Request Parameters
user_input: The user input text to validate
transaction_id: Unique identifier for tracking
Example Request
curl -X POST "https://api.verosek.com/v1/guard/input" \
-H "Content-Type: application/json" \
-H "X-API-Key: your-api-key-here" \
-d '{
"user_input": "Tell me how to optimize database queries",
"transaction_id": "txn_123456789"
}'Response Format
{
  "decision": "allow",
  "trace": [
    {
      "stage": "semantic_similarity_detector",
      "result": "allow",
      "reason": "No jailbreak patterns detected"
    },
    {
      "stage": "prompt_injection_classifier",
      "result": "allow",
      "reason": "No injection detected"
    }
    // ... additional validation stages
  ],
  "transaction_id": "txn_123456789",
  "processing_time_ms": 156.2
}

Block example:

{
  "decision": "block",
  "trace": [
    {
      "stage": "semantic_similarity_detector",
      "result": "allow",
      "reason": "No jailbreak patterns detected"
    },
    {
      "stage": "prompt_injection_classifier",
      "result": "block",
      "reason": "Injection detected (score: 0.95)"
    }
    // ... processing stops at first block
  ],
  "transaction_id": "txn_123456789",
  "processing_time_ms": 234.7
}

Output Guard API
POST /v1/guard/output

Request Parameters
model_output: The LLM response text to validate
transaction_id: Unique identifier for tracking
Example Request
curl -X POST "https://api.verosek.com/v1/guard/output" \
-H "Content-Type: application/json" \
-H "X-API-Key: your-api-key-here" \
-d '{
"model_output": "Here are some database optimization techniques...",
"transaction_id": "txn_123456789"
}'Response Format
{
  "decision": "allow",
  "trace": [
    {
      "stage": "toxicity_detector",
      "result": "allow",
      "reason": "No toxicity detected (score: 0.02)"
    },
    {
      "stage": "sensitive_info_detector",
      "result": "allow",
      "reason": "No PII detected"
    }
    // ... additional validation stages
  ],
  "transaction_id": "txn_123456789"
}

Block example:

{
  "decision": "block",
  "trace": [
    {
      "stage": "toxicity_detector",
      "result": "allow",
      "reason": "No toxicity detected (score: 0.02)"
    },
    {
      "stage": "sensitive_info_detector",
      "result": "block",
      "reason": "PII detected (risk: 0.9)"
    }
    // ... processing stops at first block
  ],
  "transaction_id": "txn_123456789"
}

Integration Guide
The sequential pattern validates at each step for maximum security: the input guard runs before the LLM is called, and the output guard runs before the response is delivered to the user.
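For reference, the `validateInput` and `validateOutput` helpers used in the example below could be implemented roughly like this. This is a sketch, not an official SDK: it assumes Node 18+ (global `fetch`) and an API key stored in a hypothetical `VEROSEK_API_KEY` environment variable; the `fetchImpl` parameter is an illustrative hook for testing.

```javascript
const GUARD_BASE_URL = 'https://api.verosek.com';

// Shared helper: POST a JSON body to a guard endpoint and return the
// parsed decision object ({ decision, trace, transaction_id, ... }).
async function callGuard(endpoint, body, fetchImpl = fetch) {
  const res = await fetchImpl(`${GUARD_BASE_URL}${endpoint}`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': process.env.VEROSEK_API_KEY,
    },
    body: JSON.stringify(body),
  });
  if (!res.ok) {
    throw new Error(`Guard API error: HTTP ${res.status}`);
  }
  return res.json();
}

function validateInput(userInput, transactionId, fetchImpl = fetch) {
  return callGuard(
    '/v1/guard/input',
    { user_input: userInput, transaction_id: transactionId },
    fetchImpl
  );
}

function validateOutput(modelOutput, transactionId, fetchImpl = fetch) {
  return callGuard(
    '/v1/guard/output',
    { model_output: modelOutput, transaction_id: transactionId },
    fetchImpl
  );
}
```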
async function processUserRequest(userInput, transactionId) {
  try {
    // Step 1: Validate input
    const inputResult = await validateInput(userInput, transactionId);
    if (inputResult.decision === 'block') {
      return { error: 'Input blocked by safety filters' };
    }

    // Step 2: Process with LLM (only if input is safe)
    const llmResponse = await callYourLLM(userInput);

    // Step 3: Validate output
    const outputResult = await validateOutput(llmResponse, transactionId);
    if (outputResult.decision === 'block') {
      return { error: 'Response blocked by safety filters' };
    }

    return { response: llmResponse };
  } catch (error) {
    console.error('Processing error:', error);
    return { error: 'Internal server error' };
  }
}

Key Features
Real-time Processing
Sub-400ms response times with concurrent pipeline execution
Dual Protection
Input validation before LLM processing and output filtering before user delivery
Enterprise Security
API key authentication with organization-level isolation
Comprehensive Detection
Multi-stage pipeline covering prompt injection, jailbreaks, PII, toxicity
Complete Audit Trail
Full transaction logging with detailed trace information
Scalable Architecture
Built for high-throughput production environments
LeakGuard - AI Data Loss Prevention
Enterprise-grade DLP solution for monitoring and preventing sensitive data leaks to AI platforms in real-time. Protect your organization from inadvertent data exposure across ChatGPT, Claude, Gemini, and other AI services.
Real-time Detection
Monitors content before submission to AI platforms, detecting API keys, credentials, PII, and confidential data
Smart Warnings
Context-aware alerts with severity levels, justification requirements, and configurable actions
Comprehensive Analytics
Organization-wide insights, employee statistics, risk scoring, and compliance reporting
Supported AI Platforms

ChatGPT, Claude, Gemini, and other AI services
Detection Patterns
API Keys
AWS, Azure, OpenAI, and custom API tokens
Credentials
Passwords, SSH keys, authentication tokens
PII
SSN, phone numbers, email addresses, names
Financial
Credit cards, bank accounts, tax IDs
Healthcare
Medical records, diagnoses, prescriptions
Confidential
Trade secrets, internal documents
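To make the categories above concrete, here is a toy pre-submission scanner. Illustrative only: these regexes are simplified examples of the kinds of patterns a DLP check looks for, not Verosek's actual detection rules.

```javascript
// Illustrative only: a toy scanner showing the style of pattern matching a
// DLP pre-submission check performs. The regexes are simplified examples
// for a few categories, not Verosek's actual detection rules.
const EXAMPLE_PATTERNS = [
  { category: 'API Keys', regex: /\bAKIA[0-9A-Z]{16}\b/ },    // AWS access key ID format
  { category: 'API Keys', regex: /\bsk-[A-Za-z0-9]{20,}\b/ }, // OpenAI-style secret key
  { category: 'PII', regex: /\b\d{3}-\d{2}-\d{4}\b/ },        // US SSN format
  { category: 'Financial', regex: /\b(?:\d[ -]?){13,16}\b/ }, // credit-card-like digit run
];

// Returns the distinct categories that matched, or [] if the text looks clean.
function scanForLeaks(text) {
  const hits = EXAMPLE_PATTERNS
    .filter(({ regex }) => regex.test(text))
    .map(({ category }) => category);
  return [...new Set(hits)];
}
```

A production scanner would run a check like this before content leaves the browser or endpoint agent, then apply the configured action (warn, require justification, or block) based on severity.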
System Architecture
Enterprise Compliance
GDPR: EU data protection compliance
HIPAA: Healthcare data privacy
SOC 2: Security & availability controls
ISO 27001: Information security management
CCPA: California consumer privacy
PCI DSS: Financial data protection
Ready to Secure Your AI?
Get started with Verosek LLM Guard for your APIs or LeakGuard DLP for your enterprise. Contact our team for access, technical consultation, and custom integration support.