The following are study notes, collected from the Internet and from my own testing and summarizing.
Kubernetes is an open-source software system that provides complete support for automatically deploying, scaling and managing applications. It was originally designed by Google and donated to the Cloud Native Computing Foundation (now under the Linux Foundation). See: https://zh.wikipedia.org/wiki/Kubernetes. Kubernetes (commonly stylized as K8s) is an open-source container-orchestration system for automating deployment, scaling and management of containerized applications. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. It aims to provide a "platform for automating deployment, scaling, and operations of application containers across clusters of hosts". It works with a range of container tools, including Docker.
As an example, suppose (as in my earlier posts) I want to set up a web server that offers a database query service. The traditional approach is to rent a server from a hosting provider, or set up my own server, and register a domain name. Either I pay the provider to maintain the server, or I maintain it myself. The problem is that sooner or later, for one reason or another, the server will go down: a bug causes a crash, the disk fills up, or the service needs an upgrade (software updates, hardware expansion). Any of these can cause a service interruption or even data loss, which is unacceptable for important services. In addition, when the number of users grows or shrinks and the number of nodes has to change, physically adding and removing nodes is not simple. This is especially true for a provider running many different services, who constantly has to adjust the kinds and quantities of services and keeps installing and removing software on different nodes; that is inefficient and error-prone. This is where Kubernetes becomes a great helper.
A Kubernetes cluster consists of two types of resources: a Master and a group of nodes. The Master is responsible for managing the cluster. The master coordinates all activities in your cluster, such as scheduling applications, maintaining applications' desired state, scaling applications, and rolling out new updates.

A node is a worker machine in Kubernetes, previously known as a minion. A node may be a VM or a physical machine, depending on the cluster. Each node contains the services necessary to run pods and is managed by the master components. The services on a node include the container runtime, kubelet and kube-proxy. Kubelet is an agent for managing the node and communicating with the Kubernetes master. The node should also have tools for handling container operations, such as Docker or rkt.

A Pod (as in a pod of whales or pea pod) is a group of one or more containers (such as Docker containers), with shared storage/network (a unique cluster IP address), and a specification for how to run the containers. A pod's contents are always co-located and co-scheduled, and run in a shared context. A pod models an application-specific "logical host": it contains one or more application containers which are relatively tightly coupled. In a pre-container world, being executed on the same physical or virtual machine would mean being executed on the same logical host. While Kubernetes supports more container runtimes than just Docker, Docker is the most commonly known runtime, and it helps to describe pods in Docker terms. The shared context of a pod is a set of Linux namespaces, cgroups, and potentially other facets of isolation, the same things that isolate a Docker container. Within a pod's context, the individual applications may have further sub-isolations applied. Containers within a pod share an IP address and port space, and can find each other via localhost. They can also communicate with each other using standard inter-process communication mechanisms like SystemV semaphores or POSIX shared memory. Containers in different pods have distinct IP addresses and cannot communicate by IPC without special configuration; they usually communicate with each other via Pod IP addresses.
Every Kubernetes Node runs at least:
Kubelet, a process responsible for communication between the Kubernetes Master and the Node; it manages the Pods and the containers running on a machine.
A container runtime (like Docker, rkt) responsible for pulling the container image from a registry, unpacking the container, and running the app.
In other words, Kubernetes has a Master and a group of nodes. The Master does not directly serve end users; its main job is to manage and coordinate the worker nodes. Administrators can control the Master through management interfaces (a GUI or a command-line console). Each node is a host machine or a virtual machine, and one or more pods can run on it. Each pod is one container or a group of containers, and those containers share storage and network resources. For example, my project needs Node.js as the front-end HTTP web server and MongoDB as the back-end database; Node.js and MongoDB can be two containers running in one pod, and that pod can run on one node.
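As a concrete illustration, here is a sketch I made up (not the actual manifest from my project; the Pod name, image tags and the inline Node.js command are all assumptions) of a Pod that runs a Node.js container and a MongoDB container together:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-db                 # hypothetical name
spec:
  containers:
  - name: web                       # Node.js front end (image and command are assumptions)
    image: node:10
    command: ["node", "-e", "require('http').createServer((req, res) => res.end('ok')).listen(8080)"]
    ports:
    - containerPort: 8080
  - name: db                        # MongoDB back end (image is an assumption)
    image: mongo:4
    ports:
    - containerPort: 27017

Both containers share the Pod's network namespace, so the Node.js process can reach MongoDB at localhost:27017. The file can be applied with kubectl apply -f pod.yaml.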
Pods provide two kinds of shared resources for their constituent containers: networking and storage. For storage, refer to kubernetes storage volumes:
Docker also has a concept of volumes, though it is somewhat looser and less managed. A Kubernetes volume, on the other hand, has an explicit lifetime - the same as the Pod that encloses it. Consequently, a volume outlives any Containers that run within the Pod, and data is preserved across Container restarts. Of course, when a Pod ceases to exist, the volume will cease to exist, too. Perhaps more importantly than this, Kubernetes supports many types of volumes, and a Pod can use any number of them simultaneously. There are many types of volumes, such as local, hostPath, nfs and VMDK.
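As an illustration, below is a minimal Pod sketch (the names are made up) that mounts an emptyDir volume, one of the simplest volume types, into a container; anything written to /data survives container restarts for as long as the Pod exists:

apiVersion: v1
kind: Pod
metadata:
  name: volume-demo                 # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    emptyDir: {}                    # lives as long as the Pod, not the container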
What is a container, and why use containers? Refer to this for the rationale:
The Old Way to deploy applications was to install the applications on a host using the operating-system package manager. This had the disadvantage of entangling the applications’ executables, configuration, libraries, and lifecycles with each other and with the host OS. One could build immutable virtual-machine images in order to achieve predictable rollouts and rollbacks, but VMs are heavyweight and non-portable.
The New Way is to deploy containers based on operating-system-level virtualization rather than hardware virtualization. These containers are isolated from each other and from the host: they have their own filesystems, they can’t see each others’ processes, and their computational resource usage can be bounded. They are easier to build than VMs, and because they are decoupled from the underlying infrastructure and from the host filesystem, they are portable across clouds and OS distributions.
Because containers are small and fast, one application can be packed in each container image. This one-to-one application-to-image relationship unlocks the full benefits of containers. With containers, immutable container images can be created at build/release time rather than deployment time, since each application doesn’t need to be composed with the rest of the application stack, nor married to the production infrastructure environment. Generating container images at build/release time enables a consistent environment to be carried from development into production. Similarly, containers are vastly more transparent than VMs, which facilitates monitoring and management. This is especially true when the containers’ process lifecycles are managed by the infrastructure rather than hidden by a process supervisor inside the container. Finally, with a single application per container, managing the containers becomes tantamount to managing deployment of the application.
Compared with the old approach of setting up a machine or a VM for every service, which costs time, effort and resources, the new container-based approach is faster, better and more resource-efficient.
If you want to learn Kubernetes, the online Kubernetes tutorials are a starting point, but the best way is to set up a Kubernetes cluster yourself. A single mid-range computer is enough, running Windows, Linux or macOS.
There are different solutions for running Kubernetes. For example, OpenShift Online provides free hosted access for Kubernetes applications; Minikube is a way to create a local, single-node Kubernetes cluster for development and testing, with a fully automated setup that does not require a cloud provider account; and microk8s provides a single-command installation of the latest Kubernetes release on a local machine for development and testing. Its setup is quick (~30 sec) and supports many plugins, including Istio, with a single command.
I used Minikube as my starting point. Run minikube start to start it, and run minikube dashboard to open the dashboard in a web browser. Although Minikube supports only one node (in fact the Master and this single node run in the same virtual machine, which makes setup very convenient), you can still run multiple pods on that one node, and it needs very few resources (my computer has a small SSD with lots of software installed; fortunately installing and running Minikube does not take much space).
For the local-machine solution with Minikube on Windows, I downloaded the v1.12.0 windows/amd64 kubectl.exe and the minikube-windows-amd64.exe file, renamed the latter to minikube.exe and added it to the path. Using Minikube on Windows requires virtual machine support. Although the Java affair has made me dislike Oracle, Oracle's VirtualBox is quite good, it is open source, and in many respects it beats VMware. Minikube uses the virtual machine to provide the Master and the nodes, and that VM also provides Docker. The first time you run minikube start, it downloads a VM iso (which shows up in VirtualBox as %AppData%\SPB_Data\.minikube\machines\minikube\boot2docker.iso) and takes quite some time (about 2 minutes) to start:
C:\p>minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
Note that minikube can be started with or without admin rights. After it has started, opening VirtualBox as the same user should show that the minikube VM is Running (remember: as mentioned here, the username/password is docker/tcuser, the same for ssh). The IP addresses are quite confusing. Running ifconfig on the VM gives the following: docker0@172.17.0.1, eth0@10.0.2.15, eth1@192.168.99.100, lo@127.0.0.1, and several veth interfaces. From the host PC I can ping 192.168.99.100, but none of the others. Running kubectl get services shows: kubernetes ClusterIP 10.96.0.1
But I cannot ping 10.96.0.1 from the VM. And if I log into one pod and run ifconfig there, I may see an IP like 172.17.0.5. Running kubectl cluster-info gives: Kubernetes master is running at https://192.168.99.100:8443
CoreDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:d
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
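To look at these addresses yourself, you can ssh into the minikube VM; minikube ip and minikube ssh are standard subcommands, and the rest below is simply how I would poke around:

minikube ip          # prints the VM address, e.g. 192.168.99.100
minikube ssh         # log into the VM (user docker, password tcuser)
ifconfig             # inside the VM: docker0, eth0, eth1, lo, veth...
exit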
If, like me, you try to run Docker on Windows, you may run into the following VT-x problem:
If you get a "VT-x is not available" error, refer to this Super User answer; there could be three reasons: 1) VT-x is not enabled in the BIOS, 2) the CPU doesn't support VT-x, or 3) Hyper-V virtualization is enabled in Windows. If it's case 3, try dism.exe /Online /Disable-Feature:Microsoft-Hyper-V and reboot (in Windows features, Hyper-V Management Tools stays checked, but Hyper-V Platform is unchecked).
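One way to tell which case you are in (these are general Windows checks, not anything minikube-specific; run them in an elevated command prompt):

systeminfo
   (look at the "Hyper-V Requirements" section near the end of the output;
    "Virtualization Enabled In Firmware: Yes" means VT-x is on in the BIOS,
    and "A hypervisor has been detected" means Hyper-V is currently active)
dism.exe /Online /Get-FeatureInfo /FeatureName:Microsoft-Hyper-V
   (shows whether the Hyper-V Windows feature is enabled)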
Using Windows 10 Home? You won't be able to run Docker for Windows. When Docker was implemented on Windows, Hyper-V was chosen as the virtualisation technology. The benefit is crystal clear: excellent performance and a native hypervisor. Unfortunately not all Windows versions ship with Hyper-V; if you have Windows 10 Home or Student edition, you are out of luck. But it's not game over. There are plenty of replacements based on Docker Machine, such as Docker Toolbox or minikube. The way Docker Machine works is simple: there's a virtual machine that runs Linux and Docker, and you connect from your host to the remote Docker daemon in that VM. Minikube is one of the most interesting Docker Machine based VMs, at least if you're into running Kubernetes clusters. In fact, minikube is a VM that runs Docker and Kubernetes. It's usually used to run Kubernetes only, but you can use it to run Docker containers too. You won't reach the same speed as Docker for Windows, but you can build and run containers without Hyper-V.
With the latest Windows 10 Pro, you can install Docker for Windows: excellent performance and an excellent developer experience. But the hypervisor used by Docker for Windows is extremely powerful; indeed it is called a Type-1 hypervisor. It is so powerful that it doesn't play nicely with weaker, Type-2 hypervisors such as the one in VirtualBox. You can't have Type-1 and Type-2 hypervisors running at the same time on your machine. In other words, if you run Docker for Windows, you won't be able to start your virtual machines in VirtualBox. You can enable and disable the Hyper-V hypervisor at will, but it requires a restart of your computer.
If you're frequently switching between containers and VMs, perhaps minikube is a more convenient choice: you don't need to restart your computer when you switch, but you don't benefit from the extra performance or the improved experience.
Lastly, if you're interested in running Windows containers (that is, containers whose base image inherits from Windows), Docker for Windows is the only option. It sounds like minikube can use either VirtualBox or Hyper-V as its VM driver.
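In practice that choice shows up as the --vm-driver flag when starting minikube; virtualbox is the default, and the hyperv driver also needs an external virtual switch (the switch name below is just a placeholder):

minikube start --vm-driver virtualbox
minikube start --vm-driver hyperv --hyperv-virtual-switch "My External Switch"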
Running Minikube on Linux does not necessarily require VirtualBox; you can install the KVM packages instead.
On Ubuntu, I used sudo snap install kubectl --classic to install kubectl 1.12.2, and ran the following to install minikube:
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.30.0/minikube-linux-amd64
chmod +x minikube
sudo cp minikube /usr/local/bin/
rm minikube
For running minikube with KVM on Ubuntu 18.04, the following is needed before starting minikube:
sudo apt install libvirt-clients libvirt-daemon-system qemu-kvm
sudo usermod -a -G libvirt $(whoami)
newgrp libvirt
For Ubuntu 16.04, run this instead:
sudo apt install libvirt-bin qemu-kvm
sudo usermod -a -G libvirtd $(whoami)
newgrp libvirtd
Then install the kvm2 driver:
curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2
sudo install docker-machine-driver-kvm2 /usr/local/bin/
Then run: minikube start --vm-driver kvm2
boot2docker.iso and minikube.rawdisk are saved under ~/.minikube/machines/minikube/.
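Once it is up, a quick sanity check with standard subcommands (nothing KVM-specific):

minikube status      # shows whether the VM and the cluster components are running
kubectl get nodes    # should list a single node named minikube with STATUS Ready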
kubectl run kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --port=8080 => deploy app
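kubectl run on this Kubernetes version creates a Deployment behind the scenes. The same thing can be written declaratively; below is a rough YAML sketch of an equivalent Deployment (the app label key and the file name deployment.yaml are my own choices for illustration):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-bootcamp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-bootcamp      # label key/value chosen for this sketch
  template:
    metadata:
      labels:
        app: kubernetes-bootcamp
    spec:
      containers:
      - name: kubernetes-bootcamp
        image: gcr.io/google-samples/kubernetes-bootcamp:v1
        ports:
        - containerPort: 8080

Apply it with kubectl apply -f deployment.yaml.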
Run minikube dashboard to open the dashboard webpage.
The most common operations can be done with the following kubectl commands:
kubectl get - list resources
kubectl describe - show detailed information about a resource
kubectl logs - print the logs from a container in a pod
kubectl exec - execute a command on a container in a pod
Such as kubectl get node, kubectl get pods. The describe command can be used to get detailed information about most of the Kubernetes primitives: nodes, pods, deployments. To run a command or open a bash shell on a Pod:
kubectl exec $POD_NAME env
kubectl exec -ti $POD_NAME bash
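$POD_NAME above has to hold a real pod name; one way to capture it (this jsonpath expression picks the first pod, which is enough for a single-pod deployment) is:

export POD_NAME=$(kubectl get pods -o jsonpath='{.items[0].metadata.name}')
echo $POD_NAME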
A Service in Kubernetes is an abstraction which defines a logical set of Pods and a policy by which to access them. Services enable a loose coupling between dependent Pods. A Service is defined using YAML (preferred) or JSON. Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a type in the ServiceSpec:
ClusterIP (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
NodePort - Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using <NodeIP>:<NodePort>. Superset of ClusterIP.
LoadBalancer - Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.
ExternalName - Exposes the Service using an arbitrary name (specified by externalName in the spec) by returning a CNAME record with the name. No proxy is used.
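Since a Service is usually written as YAML, here is a minimal NodePort sketch for the bootcamp deployment. It assumes the run=kubernetes-bootcamp label that kubectl run attaches to the pods; check kubectl get pods --show-labels and adjust the selector if your labels differ:

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-bootcamp
spec:
  type: NodePort
  selector:
    run: kubernetes-bootcamp       # assumption: label added by kubectl run
  ports:
  - port: 8080
    targetPort: 8080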
kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080 <= run kubectl get services before and after, and you will see a new service appear.
curl $(minikube ip):$NODE_PORT => access the app; $NODE_PORT is the exposed port number (i.e. the one after the colon) shown in the get services output. If the service is exposed with type "LoadBalancer", its external IP will stay in 'pending' state on Minikube; run minikube service kubernetes-bootcamp (kubectl itself has no service subcommand) to open it.
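To fill in $NODE_PORT without reading it off the get services output by hand, something like this works (jsonpath is just one of several output options):

export NODE_PORT=$(kubectl get service kubernetes-bootcamp -o jsonpath='{.spec.ports[0].nodePort}')
curl $(minikube ip):$NODE_PORT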
Container image: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/