Langfuse Roadmap
Langfuse is open source and we want to be fully transparent about what we’re working on and what’s next. This roadmap is a living document, and we’ll update it as we make progress.
Your feedback is highly appreciated. Feel like something is missing? Add new ideas on GitHub or vote on existing ones. Both are great ways to contribute to Langfuse and help us understand what is important to you.
🚀 Released
10 most recent changelog items:
- New Sidebar
- Event input and output masking (see the sketch after this list)
- Amazon Bedrock support for LLM Playground and Evaluations
- Langfuse LLM-as-a-judge now supports any (tool-calling) LLM
- Annotation Queues
- Aggregated and Color-coded Latency and Costs on Traces
- Documentation now integrates with GitHub Discussions (Support and Feature Requests)
- Langfuse on AWS Marketplace
- DSPy Integration Example
- Link prompts to Langchain executions
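For the event input and output masking item above, here is a minimal sketch of what client-side masking can look like with the Python SDK. It assumes the client accepts a `mask` callback that is applied to trace and observation inputs/outputs (check the masking docs for your SDK version); the email-redaction rule is just an illustrative example.

```python
# Minimal sketch of event input/output masking, assuming the Python SDK
# accepts a `mask` callback on the Langfuse client (verify against the
# masking docs for your SDK version).
import re

from langfuse import Langfuse


def mask_emails(data, **kwargs):
    # Redact anything that looks like an email address before the event
    # leaves your application.
    if isinstance(data, str):
        return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[REDACTED EMAIL]", data)
    return data


# The mask callback is applied to all trace/observation inputs and outputs.
langfuse = Langfuse(mask=mask_emails)

trace = langfuse.trace(name="support-ticket", input="Customer email: jane@example.com")
trace.update(output="Replied to jane@example.com with a fix.")
```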
Subscribe to our mailing list to get occasional email updates about new features.
🚧 In progress
- Langfuse v3.0: preparing Langfuse for the next level of scale using an OLAP database, a queue, and an additional container. Parts of it are already available in Langfuse Cloud, and once the migration is complete, self-hosting will be upgraded as well. Learn more in this GitHub Discussion.
- Export traces and sessions from Langfuse dashboard (CSV, JSON)
- Improved tables across the Langfuse UI to display all relevant information and be more user-friendly.
- Move to SDK references generated from docstrings to improve the developer experience (Intellisense) and reduce the risk of errors.
- Improve cost tracking of multi-modal LLMs and more complex pricing models (e.g. Anthropic/Google Context Caching, Google Vertex pricing)
- In-UI prompt and model evaluation/benchmarking based on Langfuse-managed custom evaluators.
🔮 Planned
- Webhooks to subscribe to changes within your Langfuse project.
- Datasets: make them usable in CI (e.g. GitHub Actions); see the sketch after this list.
- Comments on prompt versions.
- Improved datasets UI/UX.
- Add non-LLM evaluators to online evaluation within Langfuse UI.
- Revamped context-aware JS integration to remove the need for nesting of tracing calls, similar to the Python decorator.
- Better support for multi-modal traces that use base64 encoded images.
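For the CI item above, a hedged sketch of what running a Langfuse dataset as a CI check (e.g. in a GitHub Actions job) could look like today. It assumes the Python SDK's `get_dataset` and the dataset item `link` helper; `my_llm_app`, the dataset name, and the exact-match scoring are hypothetical placeholders for your own application and evaluation logic.

```python
# Hedged sketch: run a Langfuse dataset as a CI regression check.
# Assumes `get_dataset` and `item.link` from the Python SDK; app logic,
# dataset name, and scoring below are hypothetical placeholders.
from langfuse import Langfuse

langfuse = Langfuse()  # reads LANGFUSE_* env vars, e.g. from CI secrets


def my_llm_app(question: str) -> str:
    # Placeholder for the application under test.
    return "42"


def run_dataset_experiment(run_name: str) -> float:
    dataset = langfuse.get_dataset("qa-regression-set")  # hypothetical dataset name
    correct = 0
    for item in dataset.items:
        trace = langfuse.trace(name="ci-dataset-run", input=item.input)
        output = my_llm_app(item.input)
        trace.update(output=output)
        item.link(trace, run_name)  # attach this trace to the dataset run
        correct += int(output == item.expected_output)
    return correct / len(dataset.items)


if __name__ == "__main__":
    accuracy = run_dataset_experiment("ci-run")
    assert accuracy >= 0.9, f"Dataset accuracy {accuracy:.2%} below threshold"
```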
⚠️ Upcoming breaking changes
- Self-hosting: Langfuse v3.0 will add additional containers and a database to improve scalability. Learn more in this GitHub Discussion.
- OpenAI integration: dropping support of `openai < 1.0.0` to greatly simplify the integration and improve the developer experience for everyone on `openai >= 1` (see the sketch below). No timeline on this yet, as many libraries still depend on the old version.
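For the OpenAI item above, a sketch of the drop-in integration on `openai >= 1`, which is the path the integration is consolidating on. It assumes the `langfuse.openai` drop-in import described in the integration docs; model name and the optional `name` trace attribute are illustrative.

```python
# Sketch of the drop-in OpenAI integration on `openai >= 1`, assuming the
# `langfuse.openai` module documented for the Python SDK.
from langfuse.openai import openai  # instead of `import openai`

completion = openai.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Say hello"}],
    name="hello-world",  # optional Langfuse trace name passed through the wrapper
)
print(completion.choices[0].message.content)
```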
🙏 Feature requests and bug reports
The best way to support Langfuse is to share your feedback, report bugs, and upvote on ideas suggested by others.