Bring your AI app to production.

The platform to monitor, manage and improve your LLM apps.
Integrate now — it's free
free up to 1k events / day

Log & debug LLM agents

Log all your prompts and results, and see how your agents perform in production.
  • Traces & error stack traces
  • Instant search & filters
  • Live tail
  • Label data for fine-tuning

Stay on top of costs

Monitor requests and costs segmented by user and model.
  • Auto cost & token calculations
  • Optimize costs
  • Costs by user, prompts & agents
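
Per-user cost breakdowns rely on each request being tagged with the user it came from. A minimal sketch, assuming the monitored OpenAI client accepts pass-through user_id and tags keyword arguments (both are assumptions; confirm the exact field names against Lunary's docs):

import lunary
from openai import OpenAI

client = OpenAI()
lunary.monitor(client)

# Assumed pass-through fields: Lunary would strip these before the request
# reaches OpenAI and use them to attribute tokens and cost to a user.
client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
    user_id="customer-42",
    tags=["onboarding"],
)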

Debug complex agents

Replay agent executions with traces to pinpoint what went wrong and where.
  • Trace agent executions
  • Error stack traces
  • Filter & instant search

Run Benchmarks

Experiment with prompts and models to find the best-performing ones.
  • Fully no-code
  • AI-powered assertions & tests
  • Test open-source models

Replay user chats

Record user conversations and identify gaps in your chatbot's knowledge.
  • Capture user feedback
  • React hooks
  • Dev-friendly frontend integration

Iterate on prompts

Create templates and collaborate on prompts with non-technical teammates.
  • Keep your source code clean
  • Versioning
  • A/B testing
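
The point of templates is that prompt text lives in Lunary rather than in your source code, and your app fetches the current version at runtime. A minimal sketch, assuming a render_template helper that returns the model and messages for a named template (treat the helper name and return shape as assumptions; check Lunary's docs):

import lunary
from openai import OpenAI

client = OpenAI()
lunary.monitor(client)

# Assumed helper: fetch the latest deployed version of the "welcome" template
# stored in Lunary and fill in its variables.
template = lunary.render_template("welcome", {"name": "Ada"})

chat_completion = client.chat.completions.create(**template)
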
1500+ AI developers at leading companies build better LLM apps
Vpon, Voiceflow, Islandsbanki, Textyess, Orange, Paragon One, SUPA, Accenture, HCSS, DHL, National University of Singapore

Lunary is easy to integrate

import lunary
from openai import OpenAI

client = OpenAI()
lunary.monitor(client)  # patch the client so every OpenAI call is logged to Lunary

chat_completion = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model works here
    messages=[{"role": "user", "content": "Hello"}],
)
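
For LangChain apps, the same monitoring can be attached as a callback handler. A minimal sketch, assuming the handler is exported as LunaryCallbackHandler and picks up your Lunary key from the environment; verify both against the current Lunary docs:

from lunary import LunaryCallbackHandler  # assumed export name
from langchain_openai import ChatOpenAI

# Attach the Lunary callback so every LangChain run is traced and logged.
handler = LunaryCallbackHandler()  # assumed to read the Lunary public key from the environment
llm = ChatOpenAI(model="gpt-4o-mini", callbacks=[handler])

llm.invoke("Hello")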

Secure hosting for your data

Self-host on your servers
Keep the data within your company for maximum security. Learn more →
Inspect our source code
Our source code is completely open source and available on GitHub. Learn more →
Data hosted in the EU
Our Cloud offering's servers are located in Europe to comply with GDPR.
SOC 2 and ISO 27001
Lunary will obtain SOC 2 Type 2 and ISO 27001 certification by June 2024. Learn more →

Join our Open Source community.

Everything we build is 100% open source. Check out the source code or our ship logs.
lunary on GitHub: 841 stars

What our users say

Developers also love these

  • Live Tail: New logs and runs are shown in real time.
  • Instant Search: Search across all your data in milliseconds.
  • Label data: Tag your data to easily filter and fine-tune.
  • Self Host: Set up on your own servers for maximum data security.
  • Prompts: Store, version, and collaborate on prompts.
  • API: Extend your LLM workflow with our complete API.
  • Alerts: Set up alerts to be notified of outlier results and errors.
  • Frontend SDK: Track chats and feedback directly in your frontend.
  • Playground: Test your prompts on 20+ models.

Get started in minutes.

Run it on your own or get started in minutes with our hosted version.
  • Open Source
  • Self Hostable
  • Evaluations
  • Alerts
  • Public API
  • Exports
  • Prompt Templates
  • Chat Replays
  • Agent Tracing
  • Metrics
  • Feedback Tracking
  • LangChain Support