diff --git a/config.toml b/config.toml index cbad9feedc..6fedc4cef8 100644 --- a/config.toml +++ b/config.toml @@ -78,6 +78,7 @@ ignoreFiles = [] # pre = "" # post = "
Ongoing" # url = "/events/gsoc-2025/" + [[menu.main]] name = "Docs" weight = -102 diff --git a/content/en/_index.html b/content/en/_index.html index 5bfad5c681..45eb275793 100644 --- a/content/en/_index.html +++ b/content/en/_index.html @@ -3,37 +3,37 @@ +++ -{{}} -
-
- - Get Started - - - - Contribute - - - - GenAI - - -
- - - -
+{{% blocks/cover title="Cloud-native scaled. Kubernetes empowered. Automate, orchestrate, and accelerate your ML dataflows with Kubeflow." subtitle="Kubeflow is an open-source ecosystem of tools that teams can deploy ad hoc or integrate as the complete Kubeflow platform. The mission is simple: empower practitioners to focus on models, not manifests, so everyone can stop wrestling with infrastructure and start tackling the ML frontier together." image_anchor="center" color="dark" %}} + -{{}} +{{% /blocks/cover %}} -
+

Open Source. Battle-Tested. Community Built.

-
+
- +
- + -
+ +
-

- What is Kubeflow? +

+ An Ecosystem of Tools AND a Cohesive Platform.

-

- Kubeflow is the foundation of tools for AI Platforms on Kubernetes. -

- AI platform teams can build on top of Kubeflow by using each project independently or deploying the - entire AI reference platform to meet their specific needs. The Kubeflow AI reference platform is - composable, modular, portable, and scalable, backed by an ecosystem of Kubernetes-native -
projects for each stage of - the AI lifecycle. -

- Deploy Kubeflow anywhere you run Kubernetes. -

+
+

+ Born from a mission to bridge two distinct worlds—cloud-native infrastructure ("Kube") and machine learning + workflows ("Flow")—Kubeflow was created to solve the "Platform Problem." As AI demands scale, much of modern ML + effort is spent wrestling with a disconnected portfolio of tools that creates friction instead of value. +

+ +

+ This unifying mission is alive today in Kubeflow's ecosystem of maintainers and expert contributors, working + closely together on a shared vision of interoperability across projects like Pipelines, Notebooks, Trainer, and + more. With Kubeflow, teams can deploy ML workloads on any Kubernetes cluster, tailoring the environment to meet + their exact machine learning needs. By uniting the Kubernetes and ML communities, Kubeflow delivers a clear + result: Data Scientists get the centralized experience they need, and Platform Engineers can confidently rely on + Kubeflow to abstract infrastructure complexity away. + + In the end, the Kubeflow community works tirelessly to reduce operational friction so we can all focus on what + matters: our AI applications. +

+
+

Scroll to learn more, or dive straight into our documentation.

+ Explore the Kubeflow Docs +
+
-
+
+
+
+

+ What Workloads Run On Kubeflow? +

+
+

+ The Kubeflow ecosystem builds specialized components that teams across the community rely on to tackle + everything from traditional predictive models to complex generative AI: +

+
    +
  • Develop (Notebooks): Spin up interactive, collaborative environments to + write code and experiment.
  • +
  • Train (Kubeflow Trainer): Fine-tune LLMs and train distributed foundation + models at scale.
  • +
  • Optimize (Katib): Automate hyperparameter tuning and optimize advanced + architectures like agentic RAG.
  • +
  • Process (Spark Operator): Prepare and process massive datasets natively on + Kubernetes.
  • +
  • Track (Dashboard): Monitor complex experiments and visualize runs through a + centralized UI.
  • +
  • Serve (KServe): Deploy, monitor, and scale generative and predictive models + directly to production.
  • +
  • Manage (Model Registry): Index, version, and manage ML models and artifact + metadata to bridge the gap between experimentation and production.
  • +
  • Orchestrate (Pipelines): Unify the entire ML lifecycle, enabling teams to + build, share, and iterate on reusable components.
  • +
+ + +
+

+ Kubeflow has plenty of use cases! You can learn more about our GenAI use cases and see how our community + uses Kubeflow on the Kubeflow blog! +

+ +

+ Have a great Kubeflow use case? Join our community calls or open a GitHub PR to get your team's story featured on the official website. +

+
+ +

+ Explore the individual project tiles below to learn more about each tool, or keep scrolling to launch + your first deployment. +

+ +
+
+
+
+
+ + Spark Operator Logo + +
+
Kubeflow Spark Operator
+

+ Kubeflow Spark + Operator + aims to make specifying and running Spark applications as easy and idiomatic as running other workloads on + Kubernetes. +

+
+
+
+ + Kubeflow Notebooks Logo + +
+
Kubeflow Notebooks
+

+ Kubeflow Notebooks provide + interactive development environments for AI, ML, and Data workloads on Kubernetes. +

+
+
+
+ + Kubeflow Trainer logo + +
+
Kubeflow Trainer
+

+ Kubeflow Trainer is a + Kubernetes-native project for LLM fine-tuning and enabling scalable, distributed training across a wide + range + of AI frameworks, including PyTorch, HuggingFace, DeepSpeed, MLX, JAX, XGBoost, and others. +

+
+
+
+ + Katib Logo + +
+
Kubeflow Katib
+

+ Kubeflow Katib is a + Kubernetes-native project for automated machine learning (AutoML) with support for hyperparameter tuning, + early stopping, and neural architecture search. +

+
+
+
+
+
+
+ + KServe Logo + +
+
Kubeflow KServe
+

+ KServe is a standardized + distributed generative and predictive AI inference platform for scalable, multi-framework deployment on + Kubernetes. +

+
+
+
+ + Kubeflow Model Registry logo + +
+
Kubeflow Model Registry
+

+ Kubeflow Model + Registry + is a cloud-native component that provides a single pane of glass for ML model developers to index and manage + models, versions, and ML artifact metadata. It fills a gap between model experimentation and production + activities. +

+
+
+
+ + Kubeflow Pipelines Logo + +
+
Kubeflow Pipelines
+

+ Kubeflow Pipelines (KFP) + is + a platform for building and deploying portable and scalable machine learning workflows using Kubernetes. +

+
+
+
+ + Dashboard Icon + +
+
Kubeflow Dashboard
+

+ Kubeflow Central + Dashboard is the hub that connects the authenticated web interfaces of Kubeflow and other ecosystem + components. +

+
+
+
+
+
+ +
+
+ +
+
+

+ All unified by the Kubeflow + SDK—a single set of Pythonic APIs designed to accelerate + time-to-value + without requiring deep Kubernetes expertise. +

+
+
+
Experience the ease of the Kubeflow SDK
+ + pip install -U kubeflow + +
+
+
+
+
+ +
+
+

+ Getting Started with Kubeflow +

+
+

+ Building real AI products requires tools that adapt to unique workflows. Ready to get more models into + production? Our Getting Started guide outlines deployment paths tailored to different architectural + needs. Adopters can build a highly customized, production-ready MLOps foundation using robust community + manifests, or leverage a packaged, conformant distribution for a more opinionated deployment. Kubeflow is + committed to giving Platform Engineers the infrastructure control they require while ensuring they meet the needs + of their data science and machine learning counterparts. +

+
+ +
+
+

+ "We believe Kubeflow is the gold standard for AI/ML on Kubernetes. We are committed to ensuring it empowers + the next decade of AI and ML workloads." +

+
+ + Read the Getting Started Guide + + +
+
+
+

Trusted by @@ -98,224 +356,60 @@

- -
- +
- -
- -
-
-
-
-

- Kubeflow Projects -

-
-
- - Spark Operator Logo - -
-
Kubeflow Spark Operator
-

- Kubeflow Spark Operator aims to make specifying and running Spark applications as easy and idiomatic as running other workloads on Kubernetes. -

-
-
-
- - Kubeflow Notebooks Logo - -
-
Kubeflow Notebooks
-

- Kubeflow Notebooks runs interactive development environments for AI, ML, and Data workloads on Kubernetes. -

-
-
-
- - Kubeflow Trainer logo - -
-
Kubeflow Trainer
-

- Kubeflow Trainer is a Kubernetes-native project for LLMs fine-tuning and enabling scalable, distributed training across - a wide range of AI frameworks, including PyTorch, HuggingFace, DeepSpeed, MLX, JAX, XGBoost, and others. -

-
-
-
- - Katib Logo - -
-
Kubeflow Katib
-

- Kubeflow Katib is a Kubernetes-native project for automated machine learning (AutoML) with support for hyperparameter tuning, early stopping and neural architecture search. -

-
-
+
+

+ Want to see your logo here? Open a PR on the adopters repo, + or create an issue on the website GitHub. +

-
-
-
- - KServe Logo - -
-
Kubeflow KServe
-

- KServe is a standardized distributed generative and predictive AI inference platform for scalable, multi-framework deployment on Kubernetes. -

-
-
-
- - Kubeflow Model Registry logo - -
-
Kubeflow Model Registry
-

- Kubeflow Model Registry is a cloud-native component that provides a single pane of glass for ML model developers to index and manage models, versions, and ML artifacts metadata. - It fills a gap between model experimentation and production activities. -

-
-
-
- - Kubeflow Pipelines Logo - -
-
Kubeflow Pipelines
-

- Kubeflow Pipelines (KFP) is a platform for building then deploying portable and scalable machine learning workflows using Kubernetes. -

-
-
-
- - Dashboard Icon - -
-
Kubeflow Dashboard
-

- Kubeflow Central Dashboard is our hub which connects the authenticated web interfaces of Kubeflow and other ecosystem components. -

-
+ + +
+

+ Join our Community +

+
+

+ We are an open and welcoming community of software developers, data scientists, and organizations! +

+

+ Check out the weekly + community calls, get involved in discussions on the mailing list, or + chat with others on the Slack Workspace! +

-
-
- - -
-
- -
-
- -
-
-

- Join our Community -

-

- We are an open and welcoming community of software developers, data scientists, and organizations! - Check out the weekly community calls, get involved in discussions on the mailing list or chat with others on the Slack Workspace! -

- - +  Cloud Native Computing Foundation Logo +  Cloud Native Computing Foundation Logo
We are a Cloud Native Computing Foundation project.
-
+
\ No newline at end of file