Core Commitment

The Trust Protocol

At Lindr, we believe that observability is built on a foundation of absolute transparency. When you route your production traffic through our gateway, you are trusting us with your most critical data. Here is how we honor that trust.

Zero-Knowledge Proxying

Our gateway is designed to be a "pass-through" entity. We never store the body of your LLM prompts unless you explicitly enable full session logging. By default, our engine extracts linguistic signals in memory and discards the raw text immediately after analysis.
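
To make this concrete, here is a minimal sketch of what an analyze-then-discard flow can look like. The class, function names, and the two example signals are illustrative assumptions, not Lindr's actual engine:

```python
# Illustrative sketch only: names, signals, and structure are assumptions,
# not Lindr's actual implementation.
from dataclasses import dataclass


@dataclass
class Signals:
    """Aggregate linguistic signals retained after analysis; no raw text."""
    token_count: int
    avg_word_length: float


def analyze_and_discard(raw_text: str) -> Signals:
    """Extract signals in memory and return only the aggregates.

    The raw text is never written to disk or logged; once this function
    returns, the only reference to it goes out of scope.
    """
    words = raw_text.split()
    signals = Signals(
        token_count=len(words),
        avg_word_length=sum(len(w) for w in words) / max(len(words), 1),
    )
    return signals  # raw_text is dropped here; only the aggregates persist


if __name__ == "__main__":
    print(analyze_and_discard("The quick brown fox jumps over the lazy dog."))
```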

Deterministic Evaluation

Most evaluation platforms use "LLM-as-a-judge" scoring, which is expensive, non-deterministic, and subject to the judge model's own biases. Lindr uses a multi-signal ensemble of lexical, syntactic, and semantic algorithms. Score the same response twice and you will get exactly the same number. No hidden LLM bias.
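
As a rough sketch of what a deterministic multi-signal score can look like, consider the example below. The specific signal functions and weights are assumptions for illustration, not Lindr's actual scoring algorithms:

```python
# Illustrative sketch: signal functions and weights are assumptions,
# not Lindr's actual scoring algorithms.
def lexical_signal(reference: str, response: str) -> float:
    """Token-overlap ratio between reference and response (0.0 to 1.0)."""
    ref, resp = set(reference.lower().split()), set(response.lower().split())
    return len(ref & resp) / max(len(ref | resp), 1)


def syntactic_signal(reference: str, response: str) -> float:
    """Crude length-ratio proxy for structural similarity."""
    shorter, longer = sorted([len(reference.split()), len(response.split())])
    return shorter / max(longer, 1)


def ensemble_score(reference: str, response: str) -> float:
    """Weighted sum of deterministic signals: same inputs, same score.

    A semantic signal would also need a pinned embedding model to stay
    deterministic; it is omitted here to keep the sketch self-contained.
    """
    weights = {"lexical": 0.6, "syntactic": 0.4}
    return round(
        weights["lexical"] * lexical_signal(reference, response)
        + weights["syntactic"] * syntactic_signal(reference, response),
        4,
    )


if __name__ == "__main__":
    a = ensemble_score("Paris is the capital of France.",
                       "The capital of France is Paris.")
    b = ensemble_score("Paris is the capital of France.",
                       "The capital of France is Paris.")
    assert a == b  # no sampling, no temperature: identical inputs, identical score
    print(a)
```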

Enterprise Sovereignty

We provide clear audit logs for every action taken by our engine. You can see exactly which signal (Lexical, Syntactic, or Semantic) contributed to a drift alert, allowing your engineering team to verify our findings independently.
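
For illustration, an audit entry that exposes per-signal attribution might be shaped like the record below. The field names and threshold logic are hypothetical, not Lindr's actual log schema:

```python
# Hypothetical audit record shape; field names and logic are assumptions,
# not Lindr's actual log schema.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class DriftAuditEntry:
    """One auditable drift alert with per-signal attribution."""
    request_id: str
    triggered_at: datetime
    lexical_score: float
    syntactic_score: float
    semantic_score: float
    threshold: float

    def contributing_signals(self) -> list[str]:
        """Return the signals whose scores crossed the alert threshold."""
        scores = {
            "Lexical": self.lexical_score,
            "Syntactic": self.syntactic_score,
            "Semantic": self.semantic_score,
        }
        return [name for name, score in scores.items() if score >= self.threshold]


entry = DriftAuditEntry(
    request_id="req_123",
    triggered_at=datetime.now(timezone.utc),
    lexical_score=0.82,
    syntactic_score=0.41,
    semantic_score=0.77,
    threshold=0.75,
)
print(entry.contributing_signals())  # ['Lexical', 'Semantic']
```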

Security at Every Layer

  • AES-256 encryption for all API keys
  • TLS 1.3 for all gateway traffic
  • Automatic PII masking available (see the sketch after this list)
  • SOC 2 Type II compliance (in progress)
  • Optional self-hosted analysis workers
  • Isolated database tenants per organization
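
As a simplified illustration of how automatic PII masking can work before a request leaves your network, here is a small sketch. The patterns and placeholders below are assumptions, not Lindr's production masking ruleset:

```python
# Simplified illustration of PII masking; these patterns are assumptions,
# not Lindr's production masking rules.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}


def mask_pii(text: str) -> str:
    """Replace matched PII spans with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text


print(mask_pii("Contact jane.doe@example.com or +1 (555) 123-4567."))
# Contact [EMAIL] or [PHONE].
```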

"Observability without transparency is just another black box. Lindr is here to turn the lights on."