Repackaged script

Branch: master
Author: Meliurwen, 4 years ago
Commit: 01ee3ff94a
Signed by: meliurwen
GPG Key ID: 818A8B35E9F1CE10

Files added:

1. LICENSE (21 lines)
2. README.md (50 lines)
3. iperf3-k8s.sh (51 lines)
4. iperf3.yaml (80 lines)
5. network-policy.yaml (15 lines)

LICENSE
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2018 Pharb

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md
@@ -0,0 +1,50 @@
# iperf3-k8s
Simple wrapper around iperf3 to measure network bandwidth from all nodes of a Kubernetes cluster.
## How to use
*Make sure you are using the correct cluster context before running this script: `kubectl config current-context`*
```sh
./iperf3-k8s.sh
```
Any options supported by the iperf3 client can be appended, e.g. to run each test for only 2 seconds:
```sh
./iperf3-k8s.sh -t 2
```
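Other iperf3 client flags can be combined the same way; for example (these are standard iperf3 options, not specific to this script), a 10-second UDP test at a 1 Gbit/s target rate with the direction reversed:
```sh
# -u: UDP, -b 1G: target bitrate, -t 10: duration, -R: server sends to client
./iperf3-k8s.sh -u -b 1G -t 10 -R
```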
### NetworkPolicies
If your cluster enforces NetworkPolicies, apply the provided policy so the clients can reach the server:
```sh
kubectl apply -f network-policy.yaml
```
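To confirm the policy is in place before benchmarking:
```sh
kubectl get networkpolicy iperf3-server-access
```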
And clean up afterwards:
```sh
kubectl delete -f network-policy.yaml
```
## How it works
The script runs an iperf3 client inside a pod on every node of the cluster, including the Kubernetes master.
Each client then sequentially runs the same benchmark against the iperf3 server, which is scheduled on the master.
All required Kubernetes resources are created at the start and removed once the benchmark has finished successfully.
This has been tested with Kubernetes v1.9.6, v1.10.3 and v1.11.6.
The latest version of this Docker image is used to run iperf3:
[https://hub.docker.com/r/networkstatic/iperf3/](https://hub.docker.com/r/networkstatic/iperf3/)
Details on how to use iperf3 can be found here:
[https://github.com/esnet/iperf](https://github.com/esnet/iperf)
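While the resources from `iperf3.yaml` are deployed, a single benchmark can also be started by hand from any client pod (the pod name below is illustrative; list them with `kubectl get pods -l app=iperf3-client`):
```sh
kubectl exec iperf3-clients-jlfxq -- iperf3 -c iperf3-server -t 5
```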
## Thanks
Thanks to [Pharb](https://github.com/Pharb) for the code. This is a repackaged version of his [project](https://github.com/Pharb/kubernetes-iperf3) adapted for my use cases.

iperf3-k8s.sh
@@ -0,0 +1,51 @@
#!/usr/bin/env bash
set -eu
cd "$(dirname "$0")"

## <setup>
kubectl create -f iperf3.yaml

# Wait until the server pod reports its container as ready.
until [ "$(kubectl get pods -l app=iperf3-server -o jsonpath='{.items[0].status.containerStatuses[0].ready}' 2>/dev/null)" = "true" ]; do
  echo "Waiting for iperf3 server to start..."
  sleep 5
done
echo "Server is running"
echo

# Wait for every client pod created by the DaemonSet.
CLIENTS=$(kubectl get pods -l app=iperf3-client -o name | cut -d'/' -f2)
for POD in ${CLIENTS}; do
  until [ "$(kubectl get pod "${POD}" -o jsonpath='{.status.containerStatuses[0].ready}' 2>/dev/null)" = "true" ]; do
    echo "Waiting for ${POD} to start..."
    sleep 5
  done
done
echo "All clients are running"
echo

kubectl get pod -o=custom-columns=NAME:.metadata.name,NODE:.spec.nodeName,IP-NODE:.status.hostIP,IP-POD:.status.podIP
echo
## </setup>

## <run>
# Refresh the client list, then run the benchmark sequentially from each
# client pod; any extra arguments are passed straight to the iperf3 client.
CLIENTS=$(kubectl get pods -l app=iperf3-client -o name | cut -d'/' -f2)
for POD in ${CLIENTS}; do
  HOST=$(kubectl get pod "${POD}" -o jsonpath='{.status.hostIP}')
  kubectl exec -it "${POD}" -- iperf3 -c iperf3-server -T "Client on ${HOST}" "$@"
  echo
done
## </run>

## <clean>
# Cascading (background) deletion is kubectl's default, so no --cascade flag is needed.
kubectl delete -f iperf3.yaml
## </clean>

iperf3.yaml
@@ -0,0 +1,80 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iperf3-server-deployment
  labels:
    app: iperf3-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iperf3-server
  template:
    metadata:
      labels:
        app: iperf3-server
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              preference:
                matchExpressions:
                  - key: kubernetes.io/role
                    operator: In
                    values:
                      - master
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
      containers:
        - name: iperf3-server
          image: networkstatic/iperf3
          args: ['-s']
          ports:
            - containerPort: 5201
              name: server
      terminationGracePeriodSeconds: 0
---
apiVersion: v1
kind: Service
metadata:
  name: iperf3-server
spec:
  selector:
    app: iperf3-server
  ports:
    - protocol: TCP
      port: 5201
      targetPort: server
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: iperf3-clients
  labels:
    app: iperf3-client
spec:
  selector:
    matchLabels:
      app: iperf3-client
  template:
    metadata:
      labels:
        app: iperf3-client
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
      containers:
        - name: iperf3-client
          image: networkstatic/iperf3
          # Keep the pod alive so the benchmark can be started via kubectl exec,
          # e.g.: kubectl exec iperf3-clients-jlfxq -- /bin/sh -c 'iperf3 -c iperf3-server'
          command: ['/bin/sh', '-c', 'sleep infinity']
      terminationGracePeriodSeconds: 0
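A quick manual check (not part of the script) that the DaemonSet above placed one client pod on every node:
```sh
kubectl get pods -l app=iperf3-client -o wide
```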

network-policy.yaml
@@ -0,0 +1,15 @@
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: iperf3-server-access
spec:
  podSelector:
    matchLabels:
      app: iperf3-server
  ingress:
    - ports:
        - port: 5201
      from:
        - podSelector:
            matchLabels:
              app: iperf3-client
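As a sanity check of the policy, one can try connecting from a throwaway pod that lacks the `app: iperf3-client` label; the connection should be refused or time out (the pod name and test duration here are illustrative):
```sh
# The pod gets label run=np-test, which the policy above does not allow in
kubectl run np-test --rm -it --image=networkstatic/iperf3 -- iperf3 -c iperf3-server -t 2
```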