Use the Vagrantfile in this repo to build a Kubernetes cluster of three nodes, where node1 also acts as the master. You don't have to create complicated CA files or configuration yourself.
This project exists because I want to set up etcd, the apiserver, the controller manager, and the scheduler without running them in Docker containers.
We will create a Kubernetes 1.9.1+ cluster with three nodes containing the components below:
| IP | Hostname | Components |
|---|---|---|
| 172.17.8.101 | node1 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, docker, flannel, dashboard |
| 172.17.8.102 | node2 | kubelet, docker, flannel, traefik |
| 172.17.8.103 | node3 | kubelet, docker, flannel |
The default setting creates a private network for the nodes, from 172.17.8.101 to 172.17.8.103, and uses the host's DHCP for the public IP.
The Kubernetes service VIP range is 10.254.0.0/16.
The container network range is 170.33.0.0/16, managed by flanneld with the host-gw backend.
kube-proxy uses IPVS mode.
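To sanity-check these defaults from inside the cluster, you can run a few read-only commands on node1 (an illustrative sketch; it assumes `ipvsadm` is installed on the nodes, which IPVS-mode setups usually provide):

```bash
# vagrant ssh node1; sudo -i; then:
ip route | grep 170.33          # host-gw backend: one static route per node's pod subnet via that node's IP
ipvsadm -Ln | head              # kube-proxy in IPVS mode programs its virtual servers here
kubectl get svc kubernetes      # the first service VIP is allocated from 10.254.0.0/16
```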
- Host machine with at least 8 GB of memory (more is better), a 60 GB disk, and an 8-core CPU
- Vagrant 2.0+
- Virtualbox 5.0+
- A way across the GFW to download the Kubernetes files (for users in China)
- macOS/Linux (Windows is not supported)
Required
- CoreDNS
- Dashboard
- Traefik
Optional
- Heapster + InfluxDB + Grafana
- ElasticSearch + Fluentd + Kibana
- Istio service mesh
- Helm
- Vistio
Download the Kubernetes binary release first and move the tarballs into this git repo.
```bash
git clone https://github.com/rootsongjc/kubernetes-vagrant-centos-cluster.git
cd kubernetes-vagrant-centos-cluster
vagrant up
```
Before you run `vagrant up`, make sure this repo directory includes the following files:
- kubernetes-client-linux-amd64.tar.gz
- kubernetes-server-linux-amd64.tar.gz
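For example, you can fetch them like this (a hedged sketch; the version shown and the upstream release URL are assumptions, adjust to the release you want):

```bash
# Download the client and server tarballs (v1.9.1 shown as an example version)
KUBE_VERSION=v1.9.1
wget https://storage.googleapis.com/kubernetes-release/release/${KUBE_VERSION}/kubernetes-client-linux-amd64.tar.gz
wget https://storage.googleapis.com/kubernetes-release/release/${KUBE_VERSION}/kubernetes-server-linux-amd64.tar.gz
# Leave both tarballs in the root of this repo before running vagrant up
```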
Wait about 10 minutes and the Kubernetes cluster will be set up automatically.
Note
If you have difficulty bringing the cluster up with `vagrant up` because you have no way to download the centos/7 box, you can download the box and add it manually first.
Add centos/7 box manually
```bash
wget -c http://cloud.centos.org/centos/7/vagrant/x86_64/images/CentOS-7-x86_64-Vagrant-1801_02.VirtualBox.box
vagrant box add CentOS-7-x86_64-Vagrant-1801_02.VirtualBox.box --name centos/7
```
The next time you run `vagrant up`, Vagrant will import the local box automatically.
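To confirm the box was registered (a quick check, not specific to this repo):

```bash
vagrant box list   # should now include centos/7
```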
There are three ways to access the Kubernetes cluster.
Local
Copy `conf/admin.kubeconfig` to `~/.kube/config` and use the kubectl CLI to access the cluster.
```bash
mkdir -p ~/.kube
cp conf/admin.kubeconfig ~/.kube/config
```
We recommend this way.
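If you already keep other clusters in `~/.kube/config`, an alternative (a standard kubectl pattern, not specific to this repo) is to point the KUBECONFIG environment variable at the bundled file instead:

```bash
# Use the repo's kubeconfig without overwriting ~/.kube/config
export KUBECONFIG=$PWD/conf/admin.kubeconfig
kubectl cluster-info   # prints the API server endpoint from admin.kubeconfig
kubectl get nodes      # node1, node2 and node3 should eventually report Ready
```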
VM
Log in to a virtual machine to access and debug the cluster.
```bash
vagrant ssh node1
sudo -i
kubectl get nodes
```
Kubernetes dashboard
Kubernetes dashboard URL: https://172.17.8.101:8443
Get the token:
```bash
kubectl -n kube-system describe secret `kubectl -n kube-system get secret|grep admin-token|cut -d " " -f1`|grep "token:"|tr -s " "|cut -d " " -f2
```
Note: You can also find the token in the `vagrant up` logs.
Heapster monitoring
Run this command on your local machine.
```bash
kubectl apply -f addon/heapster/
```
Append the following item to your local /etc/hosts file.
```
172.17.8.102 grafana.jimmysong.io
```
Open the URL in your browser: http://grafana.jimmysong.io
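If you prefer to add that hosts entry from the command line (an illustrative one-liner; requires sudo):

```bash
echo "172.17.8.102 grafana.jimmysong.io" | sudo tee -a /etc/hosts
```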
Traefik ingress
Run this command on your local machine.
```bash
kubectl apply -f addon/traefik-ingress
```
Append the following item to your local /etc/hosts file.
```
172.17.8.102 traefik.jimmysong.io
```
Traefik UI URL: http://traefik.jimmysong.io
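To check that the ingress is answering before editing /etc/hosts (a hedged check, assuming Traefik listens on port 80 on node2, which the hosts entry implies):

```bash
# Send a request with the expected Host header straight to node2
curl -I -H "Host: traefik.jimmysong.io" http://172.17.8.102/
```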
EFK
Run this command on your local machine.
```bash
kubectl apply -f addon/efk/
```
Note: This requires plenty of CPU and memory; allocate at least 4 GB per virtual machine.
Helm
Run this command on your local machine.
```bash
hack/deploy-helm.sh
```
We use Istio as the default service mesh.
Installation
```bash
kubectl apply -f addon/istio/
```
Run sample
```bash
kubectl apply -n default -f <(istioctl kube-inject -f yaml/istio-bookinfo/bookinfo.yaml)
istioctl create -f yaml/istio-bookinfo/bookinfo-gateway.yaml
```
Add the following items to /etc/hosts on your local machine.
```
172.17.8.102 grafana.istio.jimmysong.io
172.17.8.102 servicegraph.istio.jimmysong.io
```
You can access the services at the following URLs.
Note: JAEGER_PORT is the output of `kubectl -n istio-system get svc tracing -o jsonpath='{.spec.ports[0].nodePort}'` and GATEWAY_PORT is the output of `kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.spec.ports[0].nodePort}'`.
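For example, you can capture those ports into shell variables and print the resulting URLs (an illustrative sketch; using node1's IP for the NodePort services and the standard bookinfo /productpage path are assumptions):

```bash
JAEGER_PORT=$(kubectl -n istio-system get svc tracing -o jsonpath='{.spec.ports[0].nodePort}')
GATEWAY_PORT=$(kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.spec.ports[0].nodePort}')

# Any node IP should work for a NodePort service; node1 (172.17.8.101) is used here
echo "Jaeger UI:  http://172.17.8.101:${JAEGER_PORT}"
echo "Bookinfo:   http://172.17.8.101:${GATEWAY_PORT}/productpage"
```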
For more details, see https://istio.io/docs/guides/bookinfo.html
Vizceral is an open source project released by Netflix to monitor network traffic between applications and clusters in near real time. Vistio is an adaptation of Vizceral for Istio and mesh monitoring. It utilizes metrics generated by Istio Mixer which are then fed into Prometheus. Vistio queries Prometheus and stores that data locally to allow for the replaying of traffic.
Run the following commands on your local machine.
```bash
# Deploy vistio via kubectl
kubectl apply -f addon/vistio/

# Expose vistio-api
kubectl -n default port-forward $(kubectl -n default get pod -l app=vistio-api -o jsonpath='{.items[0].metadata.name}') 9091:9091 &

# Expose vistio in another terminal window
kubectl -n default port-forward $(kubectl -n default get pod -l app=vistio-web -o jsonpath='{.items[0].metadata.name}') 8080:8080 &
```
If everything up to now is working, you should be able to load the Vistio UI in your browser at http://localhost:8080.
For more details, see Vistio — Visualize your Istio Mesh Using Netflix’s Vizceral.
Unless otherwise stated, execute the following commands from the root directory of this git repo.
Suspend the current state of the VMs.
```bash
vagrant suspend
```
Resume the last state of the VMs.
```bash
vagrant resume
```
Note: every time you resume the VMs you will find that the machine clock is still at the moment you suspended them, so consider halting the VMs and restarting them instead.
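Alternatively, you can step the guest clocks after a resume instead of restarting (a hedged sketch, assuming chronyd is running in the guests, which is the CentOS 7 default):

```bash
# Force an immediate clock correction on each node
for n in node1 node2 node3; do
  vagrant ssh "$n" -c "sudo chronyc makestep"
done
```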
Halt the VMs and bring them up again.
```bash
vagrant halt
vagrant up

# log in to node1
vagrant ssh node1
# run the provisioning script
/vagrant/hack/k8s-init.sh
exit

# log in to node2
vagrant ssh node2
# run the provisioning script
/vagrant/hack/k8s-init.sh
exit

# log in to node3
vagrant ssh node3
# run the provisioning script
/vagrant/hack/k8s-init.sh
sudo -i
cd /vagrant/hack
./deploy-base-services.sh
exit
```
Now you have provisioned the base Kubernetes environment and can log in to the Kubernetes dashboard; run the following command at the root of this repo to get the admin token.
```bash
hack/get-dashboard-token.sh
```
Follow the hint to log in.
Clean up the VMs.
```bash
vagrant destroy
rm -rf .vagrant
```
This setup is only for development and testing; don't use it in a production environment.
