The Kubespray package is located here: https://github.com/kubernetes-sigs/kubespray
You should clone it. All file paths mentioned below are relative to the directory containing the Kubespray package. I cloned it and renamed the directory to src.
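For reference, a minimal way to get the same layout (the rename to src is just the convention used in this guide):
git clone https://github.com/kubernetes-sigs/kubespray.git src  # clone Kubespray into a directory named src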
So the first thing to do is to change into the Kubespray directory and then follow the instructions below:
cd src
You need to have a few packages installed on the host machine that are available via Python pip. The following command runs in the Kubespray directory:
pip install -r requirements.txt
File location: ansible.cfg
Add/modify the following:
- pipelining
- log_path
- privilege_escalation section
[ssh_connection]
pipelining=False
[defaults]
log_path = ./ansible.log
[privilege_escalation]
become=True
become_method=sudo
become_user=root
become_ask_pass=False
If you work on Windows and use Cygwin as the shell interface then you should modify ansible.cfg by commenting out the current ssh_args and replacing it as below:
#ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null
ssh_args = -C -o ControlMaster=no
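If you want to confirm which of these settings Ansible actually picked up, an optional check (not part of the original procedure) is:
ansible-config dump --only-changed  # shows only the settings that differ from Ansible defaults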
File location: Vagrantfile
In order to generate the cluster in VirtualBox you need to update a few parameters:
- number of nodes (e.g. 3, odd number to comply with the needs of etcd)
- name prefix (e.g. "k8s"; it will create k8s-1, k8s-2, k8s-3, etc.)
- memory (e.g. 4GB)
- number of CPUs (e.g. 2 CPUs)
- subnet (e.g. "192.168.10"; it will create the IPs .101, .102, .103, etc.)
- os (e.g. "ubuntu1604", according to the keys in SUPPORTED_OS)
I chose the following values:
$num_instances = 3
$instance_name_prefix = "k8s"
$vm_memory = 4096
$vm_cpus = 2
$subnet = "192.168.10"
$os = "ubuntu1604"You should make a copy of the sample directory and make your changes in your copy.
cp -r inventory/sample inventory/mycluster
File location: inventory/mycluster/inventory.ini
In order to specify the role of each node you need to modify several sections:
If you chose 3 instances in the Vagrantfile, then for node[1-3] you should specify the IP addresses according to the subnet in the Vagrantfile, like below:
[all]
node1 ansible_host=192.168.10.101 ip=192.168.10.101 etcd_member_name=etcd1
node2 ansible_host=192.168.10.102 ip=192.168.10.102 etcd_member_name=etcd2
node3 ansible_host=192.168.10.103 ip=192.168.10.103 etcd_member_name=etcd3
Choose one master node out of the 3 nodes of the cluster:
[kube-master]
node1
Choose an odd number (2k+1) of nodes where etcd will run:
[etcd]
node1
node2
node3
Choose the worker nodes. They may be separate from the master nodes, or the master nodes may also be workers (but with fewer resources).
[kube-node]
node2
node3
Another cluster architecture may be with 4 nodes (an example inventory for this layout follows the list):
- 4 nodes total (e.g. node[1-4]) out of which:
- 1 master node (e.g. node1)
- 3 etcd nodes (e.g. node[1-3])
- 3 workers (e.g. node[2-4])
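As an illustration, the 4-node layout above would map to an inventory along these lines (the host names and IP addresses are assumptions, following the Vagrant subnet used earlier):
# illustrative 4-node inventory; adjust names and IPs to your environment
[all]
node1 ansible_host=192.168.10.101 ip=192.168.10.101 etcd_member_name=etcd1
node2 ansible_host=192.168.10.102 ip=192.168.10.102 etcd_member_name=etcd2
node3 ansible_host=192.168.10.103 ip=192.168.10.103 etcd_member_name=etcd3
node4 ansible_host=192.168.10.104 ip=192.168.10.104
[kube-master]
node1
[etcd]
node1
node2
node3
[kube-node]
node2
node3
node4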
Another cluster architecture may be with 5 nodes:
- 5 nodes total (e.g. node[1-5]) out of which:
- 2 master nodes (e.g. node[1-2])
- 5 etcd nodes (e.g. node[1-5])
- 3 workers (e.g. node[3-5])
File location: inventory/mycluster/group_vars/all/all.yml
If you run the cluster behind a proxy then you must specify this. You may need to modify the following assuming your proxy is http://192.168.56.200:3128:
http_proxy: "http://192.168.56.200:3128"
https_proxy: "http://192.168.56.200:3128"You may also need to change the no_proxy parameter.
File location: inventory/mycluster/group_vars/k8s-cluster/addons.yml
In order to be able to use the Dashboard you need to change the parameter:
dashboard_enabled: true
In order to make the services deployed in your Kubespray cluster available, you need to deploy the NGINX Ingress controller:
ingress_nginx_enabled: true
ingress_nginx_host_network: true
In order to get metrics from the cluster you need to change the parameter:
metrics_server_enabled: true
File location: inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
You may choose between several network plugins, such as calico, flannel and a few others. The default is calico.
kube_network_plugin: calico
According to the network plugin chosen, you may want to update specific parameters in the corresponding config file. The default works fine.
For calico you need to modify the file inventory/mycluster/group_vars/k8s-cluster/k8s-net-calico.yml
This is the cluster DNS domain, used as the suffix for service names:
cluster_name: cluster.local
These subnets must be unused in your network:
kube_service_addresses: 100.64.0.0/18
kube_pods_subnet: 100.64.64.0/18
There are two options to create the cluster:
The first option (using Vagrant) will:
- create the servers in VirtualBox
- start the installation of cluster components on each node according to the configuration files modified previously
In this case you simply run the following command, which will create the servers in VirtualBox and install the Kubernetes components:
vagrant up
The second option assumes:
- the servers are already created
- the servers have operating system compliant with Kubernetes packages (see SUPPORTED_OS in kubespray/Vagrantfile)
- the inventory file is modified with the IP addresses already allocated to the servers
- Python is already installed on all servers as the installation is done with Ansible
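Before running the playbook, you can optionally verify that Ansible reaches all nodes; this check is not part of the original procedure and the vagrant user is an assumption, so replace it with the user you actually connect with:
ansible -i inventory/mycluster/inventory.ini all -m ping -u vagrant  # the vagrant user is an assumption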
In this case you run the following command that will install the Kubernetes components:
ansible-playbook -i inventory/mycluster/inventory.ini cluster.yml
Log in to each cluster node and execute the following commands:
sudo su -
apt update && apt install chrony -y
systemctl start chrony
systemctl enable chrony
timedatectl
Edit the DaemonSet ingress-nginx-controller and add the following argument to the args element of the container specification:
--report-node-internal-ip-address
This option helps display, for each Ingress, the IP address of the endpoint.
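One way to make that edit is shown below; the ingress-nginx namespace is an assumption based on a default Kubespray deployment, so verify it first with kubectl get daemonsets --all-namespaces:
kubectl -n ingress-nginx edit daemonset ingress-nginx-controller  # namespace is an assumption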
If you specified http_proxy and https_proxy but later want to modify them, you need to:
- log in to each cluster node, either master or worker node
- edit the file /etc/systemd/system/docker.service.d/http-proxy.conf
- restart the docker service
- edit the file /etc/apt/apt.conf
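For reference, the two files typically contain entries like the ones below; the values mirror the proxy example above and are assumptions to adjust for your environment.
/etc/systemd/system/docker.service.d/http-proxy.conf:
[Service]
# proxy values below are assumptions; use your own proxy address
Environment="HTTP_PROXY=http://192.168.56.200:3128"
Environment="HTTPS_PROXY=http://192.168.56.200:3128"
/etc/apt/apt.conf:
Acquire::http::Proxy "http://192.168.56.200:3128";
Acquire::https::Proxy "http://192.168.56.200:3128";
After changing the Docker drop-in, reload systemd and restart the docker service:
systemctl daemon-reload
systemctl restart docker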
If you want to remove the proxy configuration then you should delete the above-mentioned files on each cluster node.
If new nodes are needed in the cluster then you should create them with the same OS as the existing cluster nodes. It is wise to have the same OS on all cluster nodes.
Update the inventory file inventory/mycluster/inventory.ini and include the new node in all relevant groups: all, kube-master, etcd, kube-node.
Ensure that you have connectivity to new cluster nodes for Ansible and run:
ansible-playbook -i inventory/mycluster/inventory.ini scale.yml
In order to remove nodeX and nodeY from the cluster run:
ansible-playbook -i inventory/mycluster/inventory.ini remove-node.yml -e "node=nodeX,nodeY"
If a node is not reachable by ssh, add -e "reset_nodes=no".
You may need to update the inventory file inventory/mycluster/inventory.ini and comment or delete the removed nodes from all relevant groups: all, kube-master, etcd, kube-node.
In order to upgrade the cluster to a specific Kubernetes version run:
ansible-playbook -i inventory/mycluster/inventory.ini upgrade-cluster.yml -e kube_version=v1.6.0
Additional information is available in the Kubespray documentation.
You may need to add the user that connects to the cluster nodes via Ansible to the docker group.
If you connect with vagrant user then apply the following, otherwise change the username:
sudo usermod -aG docker vagrant
Identify the secret kubernetes-dashboard-token and get the token value that will be used in the browser:
kubectl -n kube-system describe secrets $(kubectl -n kube-system get secrets|grep kubernetes-dashboard-token|awk '{print $1}')|grep ^token:|awk '{print $2}'
We assume that you need to isolate the cluster and to allow access only for administration purposes and to the exposed services.
The installation in our case, for Ubuntu 16.04, is:
apt install iptables-persistent
After the installation you must create the iptables config file.
For Ubuntu 16.04 the config file is /etc/iptables/rules.v4:
cat >/etc/iptables/rules.v4 <<EOF
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -s 100.64.0.0/17 -p tcp -j ACCEPT
-A INPUT -s 100.64.0.0/17 -p udp -j ACCEPT
-A INPUT -s 192.168.10.0/24 -p tcp -j ACCEPT
-A INPUT -s 192.168.10.0/24 -p udp -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
EOF
chmod 600 /etc/iptables/rules.v4
ln -s /etc/iptables/rules.v4 /etc/network/iptables.up.rules
The CIDRs allowed in the firewall of each K8s cluster server are:
- 100.64.0.0/17 - the subnet including kube_service_addresses and kube_pods_subnet defined above
- 192.168.10.0/24 - the subnet used in Vagrant to create the cluster nodes
For Ubuntu 16.04 the iptables rules are activated with:
iptables-apply
The above procedure to install and configure iptables must be applied to all K8s cluster nodes.
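To confirm the rules are active on a node, an optional check (not part of the original procedure) is:
iptables -L INPUT -n -v  # list the INPUT chain with packet counters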
After iptables has been configured and started on all K8s nodes, all nodes should be rebooted.
In this way the cluster restarts with the new firewall rules in place alongside the ones applied by Kubernetes.