
Commit d15bf00 (0 parents)

Author: Yasuyuki Takeo
Commit message: Locust 2.4.1 compatible

31 files changed: +1462, -0 lines

.gitignore

+16
@@ -0,0 +1,16 @@
.env
*.out
Makefile
.terraform
.terraform*
*.tfstate*
.idea
.vscode
error.json
log.txt
kubeconfig
gcloud_conf.sh
locust_connect.sh
terraform.png
locust/venv
__pycache__

README.md

+138
@@ -0,0 +1,138 @@
# Locust Load Testing
This repository sets up a load testing environment on GKE with Terraform.

![diagram](docs/diagram.png?raw=true "Diagram")
# Prerequisites
- gcloud >= Google Cloud SDK 349.0.0
- kubernetes-cli >= 1.22.1
- terraform >= 1.0.5
- python >= 3.9 (to generate the diagram)
# How to Set Up on GKE
## Configure Makefile
Copy `Makefile.example` and fill out the attributes below:
| Value | Description |
|:-- |:--|
| PROJECT_ID | GCP project ID |
| CLUSTER_NAME | Cluster base name. Because cluster deletion takes time, this tool appends a random suffix to the base cluster name |
| REGION | GCP region name |
| ZONE | GCP zone name |
| MACHINE_TYPE | Machine type of the load-generating machines. See [machine types](https://cloud.google.com/compute/docs/general-purpose-machines) for details |
| CREDENTIALS | Full path to the service account JSON file |
| SERVICE_ACCOUNT_EMAIL | Service account email, e.g. `[User name]@[Project name].iam.gserviceaccount.com` |
| TARGET_HOST | Target host URL |

## Set Up Google Kubernetes Cluster (GKE)
1. Navigate to the `deploy` folder and run
   ```
   make init_all
   ```
   to set up `terraform`.
1. Run
   ```
   make build
   ```
   to set up a GKE cluster and configure the `gcloud` command to point to the created GKE cluster.
1. Run
   ```
   make a_locust
   ```
   to set up `locust` and the required config maps (which store the load test scripts) for performance testing.
1. Run
   ```
   make locust
   ```
   This forwards the master port to your local machine; you can then access the Locust master at `localhost:8089`.
1. Stop `make locust` and run
   ```
   make refresh
   ```
   This refreshes the Locust cluster with the updated `main.py` script and `values.yaml` content. Once the Locust cluster is up and running, connect to the master with `make locust`.
## Tear Down GKE Cluster
Run
```
make d_all
```

## Update Code for Load Testing
Whenever the load testing scripts are updated, the workers need to be redeployed so that they read the latest config maps where the testing scripts are stored, per the Kubernetes specification. This workflow lets you do that with one command.

1. If you are already connected to the load cluster with `make locust`, press Ctrl+C to stop it.
1. All code is stored under the `locust` directory. `main.py` is the main logic, and libraries live under the `lib` directory (a minimal sketch is shown after this list).
1. Once the code is updated, run
   ```
   make refresh
   ```
   to reload the `ConfigMap` and restart the Locust cluster so it reads the updated config map.
1. Run `make locust` again to connect to the load cluster.
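
The contents of `main.py` are not included in this commit view; purely as an illustration, a minimal Locust 2.x script following this layout might look like the sketch below (the class name and the `/` endpoint are assumptions, not the repository's actual test).

```python
# Hypothetical minimal sketch of locust/main.py -- the real script in this
# repository may differ. It assumes a simple HTTP GET scenario against TARGET_HOST.
from locust import HttpUser, task, between


class WebsiteUser(HttpUser):
    # Wait 1-2 seconds between tasks for each simulated user
    wait_time = between(1, 2)

    @task
    def index(self):
        # Hit the root path of the target host passed via --host / TARGET_HOST
        self.client.get("/")
```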

## How to Adjust the Balance of Workers and Users
To generate the load at a lower cost, you may want to use as few workers as possible. Here is a sample procedure for adjusting the number of users and workers appropriately.

For the case of generating 10000 RPS, these are the steps I tried.

1. Enable HPA, start from 10 workers with 2000 users, and see how much load the Locust cluster can generate. In this case, Locust generated 3000 RPS and saturated there. No CPU errors were observed in Cloud Logging, which implies the CPU was still not pushed to the limit.
1. Assume that roughly 3 times more users would generate 10000 RPS (see the rough estimate after this list). Change the users to 6000 and run `make refresh` to recreate the `ConfigMap` and the Locust pods.
1. The workers automatically scaled to 15 and the load reached higher than 10000 RPS.
1. Adjust the initial worker count to `15` in `values.yaml` and run `make refresh` to update the Locust pods.
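
The user estimate in step 2 is a simple proportional scaling calculation. A minimal sketch of that arithmetic (the helper name is illustrative, not part of this repository):

```python
# Rough linear estimate: assume RPS grows proportionally with the number of users,
# which roughly held in the run above (2000 users -> ~3000 RPS).
def users_for_target_rps(current_users: int, observed_rps: float, target_rps: float) -> int:
    rps_per_user = observed_rps / current_users
    return round(target_rps / rps_per_user)


# 2000 users produced ~3000 RPS, so ~6667 users should reach 10000 RPS;
# the steps above round this to "about 3x", i.e. 6000 users.
print(users_for_target_rps(2000, 3000, 10000))  # -> 6667
```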

## Reference Settings
If you use `spike_load.py` to generate **10000 RPS** with the Locust cluster on GKE, here is a reference configuration.

`spike_load.py` spawns the users all at once and holds the requests until all users are spawned **in each worker** (not across all workers).

| Parameter | Value |
|:-- | :-- |
| Machine type of the Locust workers (`MACHINE_TYPE` in `Makefile`) | e2-standard-2 |
| Replicas for workers (line 66 of `values.yaml`) | 15 |
| User amount (line 15 of `spike_load.py`, `user_amount`) | 10000 |

With these settings:
- The first-second RPS is around 600
- It reaches 10000 RPS in 15 to 20 seconds, and then goes higher. You may want to pace the requests with the `constant_pacing` function if you target exactly 10000 RPS and want to dwell (stay) there for a while.

In `spike_load.py`, the line below configures the dwell time. It means: dwell for 120 seconds with `user_amount` users. Adjust the dwell time accordingly.
```python
# spike_load.py
targets_with_times = Step(user_amount, 120)
```
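
`Step` here is a helper defined in this repository's `spike_load.py`, which is not shown in this commit view. Purely as an assumption about how such a spike-and-dwell profile can be expressed, Locust 2.4.1's built-in `LoadTestShape` API supports an equivalent pattern:

```python
# Hypothetical sketch of a spike-and-dwell shape using Locust's LoadTestShape API;
# the repository's own Step helper may be implemented differently.
from locust import LoadTestShape

USER_AMOUNT = 10000   # peak number of users (assumed to mirror user_amount)
DWELL_SECONDS = 120   # how long to hold the peak load


class SpikeAndDwell(LoadTestShape):
    def tick(self):
        run_time = self.get_run_time()
        if run_time < DWELL_SECONDS:
            # Spawn everything as fast as possible and hold it for the dwell period.
            return (USER_AMOUNT, USER_AMOUNT)
        # Returning None ends the test once the dwell period has elapsed.
        return None
```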

# How to Run Locally
You may want to iterate with quick trial and error while building a testing script, and loading the testing script onto GKE every time is quite troublesome. For the development phase, you can use Docker to run a small cluster locally.

To spin up a small Locust cluster, run
```
docker-compose up --build --scale worker=1
```
and you can access the master at `localhost:8089`.

# Tips

## Test the Script Locally First, Then Move to Production

Locust stops with an exception when the load script contains syntax errors. For a faster turnaround, make sure the script works correctly locally first, then move it to production.
## Help for Commands
Run `make help`.
## How to Access the Locust Master Manually
1. Go to the GCP console > `Services & Ingress`.
1. Open `locust-cluster` and scroll down to `Ports`.
1. Click the `PORT FORWARDING` button in the `master-p3`, port `8089` row.
1. A dialog pops up and displays the port-forwarding command. Copy and paste it into a terminal and run it.
1. You can access the `locust-cluster` master pod at `localhost:8080` from your browser.
## How to Configure gcloud for the GKE Cluster by Default
This is done automatically by `make build`, but it can also be done separately as below:
1. Build the cluster with
   ```
   make build_cluster
   ```
1. Run
   ```
   make gcloud_init
   ```
   This command configures your `gcloud` environment to point to the newly created GKE cluster.
## How to Generate the Diagram
1. Install `Diagrams` following [these steps](https://diagrams.mingrammer.com/docs/getting-started/installation).
1. Go to the `docs` directory and run `python diagram.py` (a hypothetical sketch follows).
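
`docs/diagram.py` itself is not included in this excerpt; the sketch below is only an assumed illustration of what a Diagrams script for this architecture could look like (node types and labels are guesses, not the repository's actual code).

```python
# Hypothetical sketch of docs/diagram.py using the mingrammer Diagrams library;
# the real script in this repository may use different nodes and labels.
from diagrams import Cluster, Diagram
from diagrams.gcp.compute import GKE
from diagrams.onprem.client import Users

with Diagram("Locust Load Testing on GKE", filename="diagram", show=False):
    operator = Users("operator")
    with Cluster("GKE cluster"):
        master = GKE("locust master")
        workers = [GKE("locust worker") for _ in range(3)]
    # The operator drives the master UI; the master coordinates the workers.
    operator >> master >> workers
```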

## How to Enable Autoscaling
Autoscaling depends on Kubernetes's [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#how-does-the-horizontal-pod-autoscaler-work) (HPA). To enable HPA, the Kubernetes manifest needs to include `resources` to specify the pod's resource allocation so that Kubernetes can manage the pods based on CPU usage.

deploy/Makefile.example

+138
@@ -0,0 +1,138 @@
# EXEC_DIR should be passed in the parameter
# ex: make EXEC_DIR=./projects/us_central/setup a
# Environments required to run Terraform
PROJECT_ID=<Project ID is here>
# Cluster name must be shorter than 40 characters
CLUSTER_NAME=<Cluster Name is here>
CREDENTIALS="<Service Account JSON file path (MUST BE absolute path)>"
SERVICE_ACCOUNT_EMAIL="<service account e-mail address is here>"
REGION=<Region is here>
ZONE=${REGION}-b
# https://cloud.google.com/compute/docs/general-purpose-machines
# recommended MACHINE_TYPE is e2-standard-2
MACHINE_TYPE=<Machine type is here>
TARGET_HOST=<Target Host is here>

RED=`tput setaf 1`
ORG_PATH := ${CURDIR}

ENVS = \
	export TF_VAR_PROJECT_ID=$(PROJECT_ID); \
	export TF_VAR_GOOGLE_APPLICATION_CREDENTIALS=$(CREDENTIALS); \
	export TF_VAR_CLUSTER_NAME=$(CLUSTER_NAME); \
	export TF_VAR_REGION=$(REGION); \
	export TF_VAR_ZONE=$(ZONE); \
	export TF_VAR_MACHINE_TYPE=$(MACHINE_TYPE); \
	export TF_VAR_SERVICE_ACCOUNT_EMAIL=$(SERVICE_ACCOUNT_EMAIL); \
	export TF_VAR_TARGET_HOST=$(TARGET_HOST); \

# Clean up all environments at once
.PHONY: clean_all
clean_all: ## Clean up all environments. Remove all states and terraform cache files.
	printf "${RED}Clean up 0-build-cluster\n\n"; \
	cd projects/distributed-load-testing-using-kubernetes-locust/0-build-cluster; rm -fR terraform.* .terraform*; cd ${ORG_PATH}; \
	printf "${RED}Clean up 1-build-monitoring\n\n"; \
	cd projects/distributed-load-testing-using-kubernetes-locust/1-build-monitoring; rm -fR terraform.* .terraform*; cd ${ORG_PATH}; \
	printf "${RED}Clean up 2-deploy-locust\n\n"; \
	cd projects/distributed-load-testing-using-kubernetes-locust/2-deploy-locust; rm -fR terraform.* .terraform*; cd ${ORG_PATH}; \
	printf "${RED}Done\n"; \

# Set up all environments at once
.PHONY: init_all
init_all: ## Initialize all environments
	printf "${RED}Init 0-build-cluster\n\n"; \
	cd projects/distributed-load-testing-using-kubernetes-locust/0-build-cluster; terraform init -upgrade; cd ${ORG_PATH}; \
	printf "${RED}Init 1-build-monitoring\n\n"; \
	cd projects/distributed-load-testing-using-kubernetes-locust/1-build-monitoring; terraform init -upgrade; cd ${ORG_PATH}; \
	printf "${RED}Init 2-deploy-locust\n\n"; \
	cd projects/distributed-load-testing-using-kubernetes-locust/2-deploy-locust; terraform init -upgrade; cd ${ORG_PATH}; \
	printf "${RED}Done\n"; \

.PHONY: gcloud_init
gcloud_init: ## Init the gcloud command
	./gcloud_init.sh "${PROJECT_ID}" "${ZONE}"; \
	projects/distributed-load-testing-using-kubernetes-locust/0-build-cluster/gcloud_conf.sh; \

# Build Cluster
.PHONY: build_cluster
build_cluster: ## Build the performance testing environment on GKE
	printf "${RED}Building 0-build-cluster\n\n"; \
	cd projects/distributed-load-testing-using-kubernetes-locust/0-build-cluster; ${ENVS} terraform apply -refresh=false -auto-approve; cd ${ORG_PATH}; \
	# printf "${RED}Building 1-build-monitoring\n\n"; \
	# cd projects/distributed-load-testing-using-kubernetes-locust/1-build-monitoring; ${ENVS} terraform apply -refresh=false -auto-approve; cd ${ORG_PATH}; \
	printf "${RED}Done\n"; \

.PHONY: build
build: build_cluster gcloud_init ## Build the performance testing environment on GKE

# Deploy locust
.PHONY: a_locust
a_locust: ## Deploy Locust, Grafana and InfluxDB to GKE
	printf "${RED}Building 2-deploy-locust\n\n"; \
	cd projects/distributed-load-testing-using-kubernetes-locust/2-deploy-locust; ${ENVS} terraform apply -refresh=false -auto-approve; cd ${ORG_PATH}; \
	printf "${RED}Done\n"; \

# Delete locust environment at once
.PHONY: d_locust
d_locust: ## Delete Locust, Grafana and InfluxDB from GKE
	printf "${RED}Tearing down 2-deploy-locust\n\n"; \
	cd projects/distributed-load-testing-using-kubernetes-locust/2-deploy-locust; ${ENVS} terraform destroy -auto-approve; cd ${ORG_PATH}; \
	printf "${RED}Done\n"; \

# Plan all environments at once
.PHONY: p_all
p_all: ## Plan all terraform states
	printf "${RED}Planning 0-build-cluster\n\n"; \
	cd projects/distributed-load-testing-using-kubernetes-locust/0-build-cluster; ${ENVS} terraform plan; cd ${ORG_PATH}; \
	printf "${RED}Planning 1-build-monitoring\n\n"; \
	cd projects/distributed-load-testing-using-kubernetes-locust/1-build-monitoring; ${ENVS} terraform plan; cd ${ORG_PATH}; \
	printf "${RED}Planning 2-deploy-locust\n\n"; \
	cd projects/distributed-load-testing-using-kubernetes-locust/2-deploy-locust; ${ENVS} terraform plan; cd ${ORG_PATH}; \
	printf "${RED}Done\n"; \

# Delete all environments at once
.PHONY: d_all
d_all: ## Delete all environments
	printf "${RED}Tearing down 2-deploy-locust\n\n"; \
	cd projects/distributed-load-testing-using-kubernetes-locust/2-deploy-locust; ${ENVS} terraform destroy -auto-approve; cd ${ORG_PATH}; \
	printf "${RED}Tearing down 1-build-monitoring\n\n"; \
	cd projects/distributed-load-testing-using-kubernetes-locust/1-build-monitoring; ${ENVS} terraform destroy -auto-approve; cd ${ORG_PATH}; \
	printf "${RED}Tearing down 0-build-cluster\n\n"; \
	cd projects/distributed-load-testing-using-kubernetes-locust/0-build-cluster; ${ENVS} terraform destroy -auto-approve; cd ${ORG_PATH}; \
	printf "${RED}Done\n"; \

.PHONY: locust
locust: ## Connect to the Locust master
	projects/distributed-load-testing-using-kubernetes-locust/0-build-cluster/locust_connect.sh; \

.PHONY: refresh
refresh: d_locust a_locust ## Refresh the locust config map and apply it to the Locust cluster

# Format
.PHONY: f
f: ## terraform fmt at the directory where tf files exist
	terraform fmt -recursive

# Delete
.PHONY: d
d: ## terraform destroy at the directory where tf files exist. ex: make d CONFIG=<target directory full path>
	cd $(CONFIG); \
	${ENVS} terraform destroy -auto-approve

# Apply
.PHONY: a
a: ## terraform apply at the directory where tf files exist. ex: make a CONFIG=<target directory full path>
	cd $(CONFIG); \
	${ENVS} terraform apply -refresh=false -auto-approve

# Plan
.PHONY: p
p: ## terraform plan at the directory where tf files exist. ex: make p CONFIG=<target directory full path>
	cd $(CONFIG); \
	${ENVS} terraform plan

.PHONY: help
help: ## Display this help screen
	@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}'

deploy/gcloud_init.sh

+5
@@ -0,0 +1,5 @@
#!/bin/bash -x
PROJECT=$1
ZONE=$2
gcloud config set compute/zone ${ZONE}
gcloud config set project ${PROJECT}

@@ -0,0 +1,63 @@
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "3.90.0"
    }
  }
}

provider "google" {
  project     = var.PROJECT_ID
  credentials = file(var.GOOGLE_APPLICATION_CREDENTIALS)

  region = var.REGION
  zone   = var.ZONE
}

provider "google-beta" {
  project     = var.PROJECT_ID
  credentials = file(var.GOOGLE_APPLICATION_CREDENTIALS)

  region = var.REGION
  zone   = var.ZONE
}

module "gke" {
  source                = "../../../usecases/gke_cluster"
  cluster_name          = var.CLUSTER_NAME
  project_id            = var.PROJECT_ID
  region                = var.REGION
  zone                  = var.ZONE
  service_account_email = var.SERVICE_ACCOUNT_EMAIL
  machine_type          = var.MACHINE_TYPE
}

# kubeconfig file for loading secret information into modules from different terraform commands
# How to retrieve kubeconfig:
# https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/blob/master/examples/simple_regional_with_kubeconfig/outputs.tf
resource "local_file" "kubeconfigfile" {
  content  = module.gke.kubeconfig_raw
  filename = "${path.module}/kubeconfig"
}

# Build a script that configures the gcloud command for the newly created GKE cluster.
resource "local_file" "cluster_name" {
  content         = <<EOT
#!/bin/bash -x
gcloud container clusters get-credentials ${module.gke.cluster_name} --zone=${module.gke.cluster_region}
EOT
  filename        = "${path.module}/gcloud_conf.sh"
  file_permission = "0755"
}

# Generate a shortcut script for forwarding the Locust master port to the local machine
resource "local_file" "locust_connect_sh" {
  content         = <<EOT
#!/bin/bash -x
# locust master port forwarding
gcloud container clusters get-credentials ${module.gke.cluster_name} --region ${module.gke.cluster_region} --project ${var.PROJECT_ID} \
&& kubectl port-forward $(kubectl get pod --selector="app.kubernetes.io/instance=locust-cluster,app.kubernetes.io/name=locust,component=master,load_test=locust-cluster" --output jsonpath='{.items[0].metadata.name}') 8089:8089
EOT
  filename        = "${path.module}/locust_connect.sh"
  file_permission = "0755"
}
