From fb33b3a432dc9b699e6a70e4f6107deb77674fee Mon Sep 17 00:00:00 2001
From: Annabell Schaefer
Date: Tue, 10 Mar 2026 16:57:22 +0100
Subject: [PATCH 1/5] add mental model for adoption + make quickstart more visible

---
 pages/docs/index.mdx | 71 ++++++++++++++++++++++++++------------------
 1 file changed, 42 insertions(+), 29 deletions(-)

diff --git a/pages/docs/index.mdx b/pages/docs/index.mdx
index ec313c6b8b..cb3dbf4725 100644
--- a/pages/docs/index.mdx
+++ b/pages/docs/index.mdx
@@ -1,22 +1,56 @@
 ---
 title: Langfuse Documentation
-description: Langfuse is an open source LLM engineering platform. It includes observability, analytics, and experimentation features.
+description: Langfuse is an open source LLM engineering platform for tracing, prompts, evaluations, and analytics.
 ---
 
 # Langfuse Overview
 
-Langfuse is an **open-source LLM engineering platform** ([GitHub](https://github.com/langfuse/langfuse)) that helps teams collaboratively debug, analyze, and iterate on their LLM applications. All platform features are natively integrated to accelerate the development workflow. Langfuse is open, self-hostable, and extensible ([_why langfuse?_](/why)).
+Langfuse is an **open-source LLM engineering platform** ([GitHub](https://github.com/langfuse/langfuse)) that helps teams trace, manage prompts, evaluate quality, and improve their LLM applications. Langfuse is open, self-hostable, and extensible ([_why Langfuse?_](/why)).
+
+Langfuse combines three core workflows:
+
+- **Observability:** Trace your application to understand behavior in production.
+- **Prompt Management:** Version and ship prompts as production assets.
+- **Evaluation:** Measure quality with scores, datasets, and experiments.
+
+If you are new to Langfuse, the goal should be to get to your first trace or first managed prompt quickly. Once that is in place, you can expand into the other workflows over time.
 
 import { FeatureOverview } from "@/components/FeatureOverview";
 import {
   TextQuote,
   GitPullRequestArrow,
   ThumbsUp,
-  FlaskConical,
 } from "lucide-react";
 
+## Quickstarts [#quickstarts]
+
+Choose the path that best fits what you want to do first:
+
+
+  }
+  title="Get First Trace"
+  href="/docs/observability/get-started"
+  arrow
+  />
+  }
+  title="Manage First Prompt"
+  href="/docs/prompt-management/get-started"
+  arrow
+  />
+  }
+  title="Set Up Evals"
+  href="/docs/evaluation/overview"
+  arrow
+  />
+
+
+If you prefer end-to-end examples, go to [Guides](/guides). It collects walkthroughs, tutorials, videos, and cookbook examples for tracing, prompt management, and evaluation.
+
 ## Observability [#observability]
 
 [Observability](/docs/observability/overview) is essential for understanding and debugging LLM applications. Unlike traditional software, LLM applications involve complex, non-deterministic interactions that can be challenging to monitor and debug. Langfuse provides comprehensive tracing capabilities that help you understand exactly what's happening in your application.
@@ -82,7 +116,11 @@ import EvaluationOverview from "@/components-mdx/evaluation-overview-gifs.mdx";
 
 ## Where to start?
 
-Setting up the full process of online tracing, prompt management, production evaluations to identify issues, and offline evaluations on datasets requires some time. This guide is meant to help you figure out what is most important for your use case.
+Start with the workflow that matches your immediate goal, then expand over time:
+
+- Add tracing if you need visibility into application behavior.
+- Add prompt management if prompts need versioning and release controls.
+- Add evaluations when you want to measure quality systematically.
 
 _Simplified lifecycle from PoC to production:_
 
@@ -103,31 +141,6 @@ _Simplified lifecycle from PoC to production:_
 lifecycle](/images/docs/features-dark.png)
 
-## Quickstarts
-
-Get up and running with Langfuse in minutes. Choose the path that best fits your current needs:
-
-
-  }
-  title="Integrate LLM Application/Agent Tracing"
-  href="/docs/observability/get-started"
-  arrow
-  />
-  }
-  title="Integrate Prompt Management"
-  href="/docs/prompt-management/get-started"
-  arrow
-  />
-  }
-  title="Setup Evaluations"
-  href="/docs/evaluation/overview"
-  arrow
-  />
-
 ## Why Langfuse?
 
 - **Open source:** Fully open source with public API for custom integrations

From 11e751d45fd7d757a2cd0daae6dd2587a989e385 Mon Sep 17 00:00:00 2001
From: Annabell Schaefer
Date: Tue, 10 Mar 2026 16:59:53 +0100
Subject: [PATCH 2/5] original version

---
 pages/docs/index.mdx | 71 ++++++++++++++++++--------------------------
 1 file changed, 29 insertions(+), 42 deletions(-)

diff --git a/pages/docs/index.mdx b/pages/docs/index.mdx
index cb3dbf4725..ec313c6b8b 100644
--- a/pages/docs/index.mdx
+++ b/pages/docs/index.mdx
@@ -1,56 +1,22 @@
 ---
 title: Langfuse Documentation
-description: Langfuse is an open source LLM engineering platform for tracing, prompts, evaluations, and analytics.
+description: Langfuse is an open source LLM engineering platform. It includes observability, analytics, and experimentation features.
 ---
 
 # Langfuse Overview
 
-Langfuse is an **open-source LLM engineering platform** ([GitHub](https://github.com/langfuse/langfuse)) that helps teams trace, manage prompts, evaluate quality, and improve their LLM applications. Langfuse is open, self-hostable, and extensible ([_why Langfuse?_](/why)).
-
-Langfuse combines three core workflows:
-
-- **Observability:** Trace your application to understand behavior in production.
-- **Prompt Management:** Version and ship prompts as production assets.
-- **Evaluation:** Measure quality with scores, datasets, and experiments.
-
-If you are new to Langfuse, the goal should be to get to your first trace or first managed prompt quickly. Once that is in place, you can expand into the other workflows over time.
+Langfuse is an **open-source LLM engineering platform** ([GitHub](https://github.com/langfuse/langfuse)) that helps teams collaboratively debug, analyze, and iterate on their LLM applications. All platform features are natively integrated to accelerate the development workflow. Langfuse is open, self-hostable, and extensible ([_why langfuse?_](/why)).
 
 import { FeatureOverview } from "@/components/FeatureOverview";
 import {
   TextQuote,
   GitPullRequestArrow,
   ThumbsUp,
+  FlaskConical,
 } from "lucide-react";
 
-## Quickstarts [#quickstarts]
-
-Choose the path that best fits what you want to do first:
-
-
-  }
-  title="Get First Trace"
-  href="/docs/observability/get-started"
-  arrow
-  />
-  }
-  title="Manage First Prompt"
-  href="/docs/prompt-management/get-started"
-  arrow
-  />
-  }
-  title="Set Up Evals"
-  href="/docs/evaluation/overview"
-  arrow
-  />
-
-
-If you prefer end-to-end examples, go to [Guides](/guides). It collects walkthroughs, tutorials, videos, and cookbook examples for tracing, prompt management, and evaluation.
-
 ## Observability [#observability]
 
 [Observability](/docs/observability/overview) is essential for understanding and debugging LLM applications. Unlike traditional software, LLM applications involve complex, non-deterministic interactions that can be challenging to monitor and debug. Langfuse provides comprehensive tracing capabilities that help you understand exactly what's happening in your application.
@@ -116,11 +82,7 @@ import EvaluationOverview from "@/components-mdx/evaluation-overview-gifs.mdx";
 
 ## Where to start?
 
-Start with the workflow that matches your immediate goal, then expand over time:
-
-- Add tracing if you need visibility into application behavior.
-- Add prompt management if prompts need versioning and release controls.
-- Add evaluations when you want to measure quality systematically.
+Setting up the full process of online tracing, prompt management, production evaluations to identify issues, and offline evaluations on datasets requires some time. This guide is meant to help you figure out what is most important for your use case.
 
 _Simplified lifecycle from PoC to production:_
 
@@ -141,6 +103,31 @@ _Simplified lifecycle from PoC to production:_
 lifecycle](/images/docs/features-dark.png)
 
+## Quickstarts
+
+Get up and running with Langfuse in minutes. Choose the path that best fits your current needs:
+
+
+  }
+  title="Integrate LLM Application/Agent Tracing"
+  href="/docs/observability/get-started"
+  arrow
+  />
+  }
+  title="Integrate Prompt Management"
+  href="/docs/prompt-management/get-started"
+  arrow
+  />
+  }
+  title="Setup Evaluations"
+  href="/docs/evaluation/overview"
+  arrow
+  />
+
 ## Why Langfuse?
 
 - **Open source:** Fully open source with public API for custom integrations

From b19fad761a67eea6b1952b7c325b248c717526d0 Mon Sep 17 00:00:00 2001
From: Annabell Schaefer
Date: Tue, 10 Mar 2026 17:02:56 +0100
Subject: [PATCH 3/5] minimal edit proposal

---
 pages/docs/index.mdx | 71 ++++++++++++++++++++++++++------------------
 1 file changed, 42 insertions(+), 29 deletions(-)

diff --git a/pages/docs/index.mdx b/pages/docs/index.mdx
index ec313c6b8b..570c6f6f8a 100644
--- a/pages/docs/index.mdx
+++ b/pages/docs/index.mdx
@@ -1,22 +1,56 @@
 ---
 title: Langfuse Documentation
-description: Langfuse is an open source LLM engineering platform. It includes observability, analytics, and experimentation features.
+description: Langfuse is an open source LLM engineering platform for tracing, prompts, evaluations, and analytics.
 ---
 
 # Langfuse Overview
 
-Langfuse is an **open-source LLM engineering platform** ([GitHub](https://github.com/langfuse/langfuse)) that helps teams collaboratively debug, analyze, and iterate on their LLM applications. All platform features are natively integrated to accelerate the development workflow. Langfuse is open, self-hostable, and extensible ([_why langfuse?_](/why)).
+Langfuse is an **open-source LLM engineering platform** ([GitHub](https://github.com/langfuse/langfuse)) that helps teams trace, manage prompts, evaluate quality, and improve their LLM applications. Langfuse is open, self-hostable, and extensible ([_why Langfuse?_](/why)).
+
+Langfuse combines three core workflows:
+
+- **Observability:** Trace your application to understand behavior in production.
+- **Prompt Management:** Version and ship prompts as production assets.
+- **Evaluation:** Measure quality with scores, datasets, and experiments.
+
+Most teams start with tracing or prompt management, then add evaluation as they expand their setup.
 
 import { FeatureOverview } from "@/components/FeatureOverview";
 import {
   TextQuote,
   GitPullRequestArrow,
   ThumbsUp,
-  FlaskConical,
 } from "lucide-react";
 
+## Quickstarts [#quickstarts]
+
+Choose the path that best fits what you want to do first:
+
+
+  }
+  title="Get First Trace"
+  href="/docs/observability/get-started"
+  arrow
+  />
+  }
+  title="Manage First Prompt"
+  href="/docs/prompt-management/get-started"
+  arrow
+  />
+  }
+  title="Set Up Evals"
+  href="/docs/evaluation/overview"
+  arrow
+  />
+
+
+If you prefer end-to-end examples, go to [Guides](/guides). It collects walkthroughs, tutorials, videos, and cookbook examples for tracing, prompt management, and evaluation.
+
 ## Observability [#observability]
 
 [Observability](/docs/observability/overview) is essential for understanding and debugging LLM applications. Unlike traditional software, LLM applications involve complex, non-deterministic interactions that can be challenging to monitor and debug. Langfuse provides comprehensive tracing capabilities that help you understand exactly what's happening in your application.
@@ -82,7 +116,11 @@ import EvaluationOverview from "@/components-mdx/evaluation-overview-gifs.mdx";
 
 ## Where to start?
 
-Setting up the full process of online tracing, prompt management, production evaluations to identify issues, and offline evaluations on datasets requires some time. This guide is meant to help you figure out what is most important for your use case.
+Start with the workflow that matches your immediate goal, then expand over time:
+
+- Add tracing if you need visibility into application behavior.
+- Add prompt management if prompts need versioning and release controls.
+- Add evaluations when you want to measure quality systematically.
 
 _Simplified lifecycle from PoC to production:_
 
@@ -103,31 +141,6 @@ _Simplified lifecycle from PoC to production:_
 lifecycle](/images/docs/features-dark.png)
 
-## Quickstarts
-
-Get up and running with Langfuse in minutes. Choose the path that best fits your current needs:
-
-
-  }
-  title="Integrate LLM Application/Agent Tracing"
-  href="/docs/observability/get-started"
-  arrow
-  />
-  }
-  title="Integrate Prompt Management"
-  href="/docs/prompt-management/get-started"
-  arrow
-  />
-  }
-  title="Setup Evaluations"
-  href="/docs/evaluation/overview"
-  arrow
-  />
-
 ## Why Langfuse?
 
 - **Open source:** Fully open source with public API for custom integrations

From 98086453e8a201912edb15b98e834973e9b9462d Mon Sep 17 00:00:00 2001
From: Annabell Schaefer
Date: Tue, 10 Mar 2026 17:27:34 +0100
Subject: [PATCH 4/5] slight reordering

---
 pages/docs/index.mdx | 49 +++++++++++++++++++++++---------------------
 1 file changed, 26 insertions(+), 23 deletions(-)

diff --git a/pages/docs/index.mdx b/pages/docs/index.mdx
index 570c6f6f8a..47728874ab 100644
--- a/pages/docs/index.mdx
+++ b/pages/docs/index.mdx
@@ -3,47 +3,29 @@ title: Langfuse Documentation
 description: Langfuse is an open source LLM engineering platform for tracing, prompts, evaluations, and analytics.
 ---
 
-# Langfuse Overview
+# Get started with Langfuse
 
 Langfuse is an **open-source LLM engineering platform** ([GitHub](https://github.com/langfuse/langfuse)) that helps teams trace, manage prompts, evaluate quality, and improve their LLM applications. Langfuse is open, self-hostable, and extensible ([_why Langfuse?_](/why)).
 
-Langfuse combines three core workflows:
-
-- **Observability:** Trace your application to understand behavior in production.
-- **Prompt Management:** Version and ship prompts as production assets.
-- **Evaluation:** Measure quality with scores, datasets, and experiments.
-
-Most teams start with tracing or prompt management, then add evaluation as they expand their setup.
-
-import { FeatureOverview } from "@/components/FeatureOverview";
-import {
-  TextQuote,
-  GitPullRequestArrow,
-  ThumbsUp,
-} from "lucide-react";
-
-
-
+To get started, [sign up](https://cloud.langfuse.com) for a Langfuse account.
 
 ## Quickstarts [#quickstarts]
 
-Choose the path that best fits what you want to do first:
-
   }
-  title="Get First Trace"
+  title="Log Your First Trace"
   href="/docs/observability/get-started"
   arrow
   />
   }
-  title="Manage First Prompt"
+  title="Manage Your First Prompt"
   href="/docs/prompt-management/get-started"
   arrow
   />
   }
-  title="Set Up Evals"
+  title="Run Your First Eval"
   href="/docs/evaluation/overview"
   arrow
   />
@@ -51,6 +33,27 @@ Choose the path that best fits what you want to do first:
 
 If you prefer end-to-end examples, go to [Guides](/guides). It collects walkthroughs, tutorials, videos, and cookbook examples for tracing, prompt management, and evaluation.
 
+## Langfuse Overview
+
+Langfuse combines three core workflows:
+
+- **Observability:** Trace your application to understand behavior in production.
+- **Prompt Management:** Version and ship prompts as production assets.
+- **Evaluation:** Measure quality with scores, datasets, and experiments.
+
+Most teams start with tracing or prompt management, then add evaluation as they expand their setup.
+
+import { FeatureOverview } from "@/components/FeatureOverview";
+import {
+  TextQuote,
+  GitPullRequestArrow,
+  ThumbsUp,
+} from "lucide-react";
+
+
+
+
 ## Observability [#observability]
 
 [Observability](/docs/observability/overview) is essential for understanding and debugging LLM applications. Unlike traditional software, LLM applications involve complex, non-deterministic interactions that can be challenging to monitor and debug. Langfuse provides comprehensive tracing capabilities that help you understand exactly what's happening in your application.

From 67b3a27cb79c669c26db62f3cb2b7d747638cb0a Mon Sep 17 00:00:00 2001
From: Annabell Schaefer
Date: Tue, 10 Mar 2026 17:31:51 +0100
Subject: [PATCH 5/5] small assertiveness change

---
 pages/docs/index.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pages/docs/index.mdx b/pages/docs/index.mdx
index 47728874ab..6ffaa070d2 100644
--- a/pages/docs/index.mdx
+++ b/pages/docs/index.mdx
@@ -5,7 +5,7 @@ description: Langfuse is an open source LLM engineering platform for tracing, pr
 
 # Get started with Langfuse
 
-Langfuse is an **open-source LLM engineering platform** ([GitHub](https://github.com/langfuse/langfuse)) that helps teams trace, manage prompts, evaluate quality, and improve their LLM applications. Langfuse is open, self-hostable, and extensible ([_why Langfuse?_](/why)).
+Langfuse is the leading **open-source LLM engineering platform** ([GitHub](https://github.com/langfuse/langfuse)) that helps teams trace, manage prompts, evaluate quality, and improve their LLM applications. Langfuse is open, self-hostable, and extensible ([_why Langfuse?_](/why)).
 
 To get started, [sign up](https://cloud.langfuse.com) for a Langfuse account.
 
 ## Quickstarts [#quickstarts]
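The patches above describe the observability workflow only in prose ("Trace your application to understand behavior in production"). As a conceptual sketch of what such tracing records (this is not the Langfuse SDK API; all names and structures here are illustrative assumptions), a trace can be modeled as a tree of timed spans, such as a retrieval step and an LLM call nested under one request:

```python
# Illustrative model of a trace as a tree of timed spans.
# NOT the Langfuse SDK API -- names here are hypothetical.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Span:
    name: str
    start_ms: int
    end_ms: int
    children: List["Span"] = field(default_factory=list)

    @property
    def duration_ms(self) -> int:
        return self.end_ms - self.start_ms


@dataclass
class Trace:
    name: str
    root: Span

    def slowest_leaf(self) -> Span:
        # Walk the tree and return the leaf span with the longest
        # duration -- typically the first thing to inspect when debugging.
        leaves, stack = [], [self.root]
        while stack:
            span = stack.pop()
            if span.children:
                stack.extend(span.children)
            else:
                leaves.append(span)
        return max(leaves, key=lambda s: s.duration_ms)


# One traced request: retrieval and generation nested under the root span.
retrieval = Span("vector-search", start_ms=0, end_ms=120)
generation = Span("llm-call", start_ms=120, end_ms=900)
request = Span("handle-user-question", start_ms=0, end_ms=950,
               children=[retrieval, generation])
trace = Trace("qa-request", root=request)

print(trace.root.duration_ms)     # 950
print(trace.slowest_leaf().name)  # llm-call
```

Nesting is the design point: attributing latency and cost to individual steps inside one request is what distinguishes tracing from flat request logging.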