Your LLMs, Monitored and Protected

Monitor your LLMs for model drift and protect them from prompt injection attacks.

Trusted by the fastest-growing AI teams looking to mitigate AI risk.

Power AI Automation - Ensure Consistent Results

Use our filters to ensure that your LLM outputs are consistent, giving your customers the best possible experience.

1. Prompt Engineering

Configure your OpenAI model or bring your own model with a golden prompt.

2. Configure Your Filters

Select the filters you want to use to ensure consistent results; a sketch of how these steps fit together follows this list.

3. Get Alerts Quickly

Get alerts when your model produces an unexpected output.
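The sketch below shows how these three steps might fit together in code. The endpoint URL, request fields, and filter names are assumptions made for illustration and may differ from the actual SentinelLM API.

```python
# Minimal sketch of steps 1-3. The endpoint URL, field names, and filter
# identifiers below are assumptions for illustration, not the documented
# SentinelLM API.
import requests

SENTINEL_API = "https://api.sentinellm.example/v1/check"  # hypothetical endpoint
API_KEY = "YOUR_SENTINELLM_API_KEY"

# Step 2: the filters you want applied to every model output.
filters = {
    "word_count": {"min": 50, "max": 300},
    "vulgarity": {"enabled": True},
    "contains_terms": {"terms": ["refund policy", "support@yourcompany.com"]},
}

def check_output(model_output: str) -> dict:
    """Step 3: submit a model output for filtering; alerts fire when a filter fails."""
    response = requests.post(
        SENTINEL_API,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"output": model_output, "filters": filters},
        timeout=10,
    )
    response.raise_for_status()
    # Assumed response shape, e.g. {"passed": False, "failed_filters": ["word_count"]}
    return response.json()
```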

Word and Character Counts

Word and character count filters ensure your customers get results that are neither too long nor too short.

Vulgarity Filtering

Ensure your generated text contains no vulgarity or profanity.

Contains Terms Filtering

Ensure that your model always includes the terms you specify and does not leave out important information.

Exact Match Filtering

Perform AI auditing by making sure your model outputs the exact text you want.

New Filters Coming Soon

We are constantly adding new filters to help you maintain model consistency for your customers.

Semantic Similarity

Given a target text, ensure your model's output is semantically similar enough for your use case.
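As an illustration of the underlying idea (not necessarily how the filter is implemented), the sketch below scores a model output against a target text using sentence embeddings and cosine similarity; the embedding model and threshold are arbitrary examples.

```python
# Illustrative only: one common way to measure semantic similarity is
# cosine similarity between sentence embeddings.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # example embedding model

target = "Your refund will be processed within 5 business days."
output = "We will send your refund back to you in about a week."

embeddings = model.encode([target, output])
similarity = util.cos_sim(embeddings[0], embeddings[1]).item()

THRESHOLD = 0.7  # "similar enough" is use-case specific
print(f"similarity={similarity:.2f}, passes={similarity >= THRESHOLD}")
```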

Bolster Your AI Security - Protect Your Models and Brand

Use prompt injection detection to protect your application from abuse.

1. Set Up Prompt Injection Detection

Create your configuration with the prompt injection filter.

2. Integrate Into Your Application

Call our API to get a legitimacy score before you call your model; a sketch of this flow follows these steps.

3. Stay Protected and Avoid Token Costs

Build your own logic to handle low legitimacy scores and attempted attacks.
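A minimal sketch of steps 2 and 3, assuming a hypothetical scoring endpoint and response shape (the real SentinelLM API may differ), with OpenAI used as the downstream model:

```python
# Score the user prompt before spending tokens on the model. The endpoint,
# field names, and threshold below are assumptions for illustration.
import requests
from openai import OpenAI

SENTINEL_API = "https://api.sentinellm.example/v1/prompt-injection"  # hypothetical endpoint
API_KEY = "YOUR_SENTINELLM_API_KEY"
LEGITIMACY_THRESHOLD = 0.5  # tune for your own risk tolerance

client = OpenAI()

def answer(user_prompt: str) -> str:
    # Step 2: ask for a legitimacy score before calling the model.
    score = requests.post(
        SENTINEL_API,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": user_prompt},
        timeout=10,
    ).json()["score"]  # assumed response shape

    # Step 3: your own handling for suspected injection attempts.
    if score < LEGITIMACY_THRESHOLD:
        return "Sorry, this request can't be processed."

    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": user_prompt}],
    )
    return completion.choices[0].message.content
```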

Prompt Injection Detection

Detect prompt injection attempts before they ever reach your model, protecting both your brand and your token budget.

Simple Pricing

We offer a pay-as-you-go pricing model with no contracts or lock-in.

Essential

Free

Free for up to 2,500 API calls per month.

Pro

$0.001 per API call

$0.001 per API call after 2,500 API calls per month.

Enterprise

Custom

Contact us for custom pricing for enterprise use cases.

Frequently Asked Questions

We are here to help. If you have any questions, please contact us at info@sentinellm.com.

Who is this for?
Does this work with OpenAI or ChatGPT?
We don't use ChatGPT, can we still use SentinelLM?
What do I get with this?
What is prompt injection detection?
What is the benefit of Synthetic Monitoring?
In the context of pricing, what is an API call?

Increase Your AI Security and Ensure Consistent Results