diff --git a/website/docs/fundamentals/workloads/horizontal-pod-autoscaler/test-hpa.md b/website/docs/fundamentals/workloads/horizontal-pod-autoscaler/test-hpa.md
index d87d87f7af..cc60ea03dd 100644
--- a/website/docs/fundamentals/workloads/horizontal-pod-autoscaler/test-hpa.md
+++ b/website/docs/fundamentals/workloads/horizontal-pod-autoscaler/test-hpa.md
@@ -3,7 +3,7 @@ title: "Generate load"
 sidebar_position: 20
 ---
 
-To observe HPA scale out in response to the policy we have configured we need to generate some load on our application. We'll do that by calling the home page of the workload with [hey](https://github.com/rakyll/hey).
+To observe HPA scale out in response to the policy we have configured, we need to generate some load on our application. We'll do that by calling the home page of the workload with [oha](https://github.com/hatoo/oha).
 
 The command below will run the load generator with:
 
@@ -13,8 +13,8 @@ The command below will run the load generator with:
 ```bash hook=hpa-pod-scaleout hookTimeout=330
 $ kubectl run load-generator \
-  --image=williamyeh/hey:latest \
-  --restart=Never -- -c 10 -q 5 -z 60m http://ui.ui.svc/home
+  --image=ghcr.io/hatoo/oha:latest \
+  --restart=Never -- -c 10 -q 5 -z 60m --no-tui http://ui.ui.svc/home
 ```
 
 Now that we have requests hitting our application we can watch the HPA resource to follow its progress:
diff --git a/website/docs/fundamentals/workloads/keda/test-keda.md b/website/docs/fundamentals/workloads/keda/test-keda.md
index a98d7002d2..d6556a8a8b 100644
--- a/website/docs/fundamentals/workloads/keda/test-keda.md
+++ b/website/docs/fundamentals/workloads/keda/test-keda.md
@@ -3,7 +3,7 @@ title: "Generate load"
 sidebar_position: 20
 ---
 
-To observe KEDA scale the deployment in response to the KEDA `ScaledObject` we have configured, we need to generate some load on our application. We'll do that by calling the home page of the workload with [hey](https://github.com/rakyll/hey).
+To observe KEDA scale the deployment in response to the KEDA `ScaledObject` we have configured, we need to generate some load on our application. We'll do that by calling the home page of the workload with [oha](https://github.com/hatoo/oha).
 
 The command below will run the load generator with:
 
@@ -14,8 +14,8 @@ The command below will run the load generator with:
 ```bash hook=keda-pod-scaleout hookTimeout=330
 $ export ALB_HOSTNAME=$(kubectl get ingress ui -n ui -o yaml | yq .status.loadBalancer.ingress[0].hostname)
 $ kubectl run load-generator \
-  --image=williamyeh/hey:latest \
-  --restart=Never -- -c 3 -q 5 -z 10m http://$ALB_HOSTNAME/home
+  --image=ghcr.io/hatoo/oha:latest \
+  --restart=Never -- -c 3 -q 5 -z 10m --no-tui http://$ALB_HOSTNAME/home
 ```
 
 Based on the `ScaledObject`, KEDA creates an HPA resource and provides the required metrics to allow the HPA to scale the workload. Now that we have requests hitting our application we can watch the HPA resource to follow its progress:
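One caveat worth checking when reviewing this swap: hey documents `-q` as a rate limit *per worker*, while oha documents `-q` as an overall QPS limit, so keeping `-c 10 -q 5` identical may not produce identical load. The sketch below is a back-of-envelope bound on the HPA test's request volume under each interpretation; the flag semantics are assumptions to verify against the two tools' docs, not something asserted by this diff.

```shell
# Request budget for the HPA load test (-c 10 -q 5 -z 60m) under two
# assumed -q semantics:
#   hey-style: -q is QPS per worker  -> workers * qps * seconds
#   oha-style: -q is aggregate QPS   -> qps * seconds
workers=10
qps=5
duration_secs=$((60 * 60)) # -z 60m

echo "per-worker -q bound: $((workers * qps * duration_secs)) requests"
echo "aggregate -q bound: $((qps * duration_secs)) requests"
```

If oha's aggregate interpretation holds, the migrated test would generate roughly a tenth of the previous load, which could change how far the HPA scales out during the 330-second hook timeout.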