Wednesday, January 30, 2019

Learning the Helm (kubernetes) and the Dock (docker) - Installing and Using Docker

As explained earlier, containers are small and self-contained. If you want to build a Windows container, you need to install Docker in a Windows environment.
When installing Docker on Windows, watch out for the VT-x/Hyper-V issue I described in "Learning the Helm (kubernetes) and the Dock (docker) - Kubernetes". That is, if your Windows 10 is the Home edition, you cannot install docker directly. Even if you have Windows 10 Pro or Windows 10 Enterprise, as explained in that post, you cannot have docker and VirtualBox installed on the same computer at the same time. If you want to practice both Kubernetes and docker on one computer, the best choice is to leave Hyper-V disabled and skip installing docker, because the virtual machine that minikube downloads and installs already has docker in it, and building images with docker on that VM is also very fast. There is one situation where you may have to install the Windows version of docker: when you want to build Windows containers. In that case you may have to toggle Hyper-V back and forth, and unfortunately each toggle requires a reboot.

Installing docker on Linux is easy; on Ubuntu you can install the package with apt or snap:
sudo snap install docker => installs docker 18.06.1-ce.
To run docker without sudo:
sudo groupadd docker
sudo usermod -aG docker your-user
Then log out and log back in.

Technically, Docker relies on Linux OverlayFS support (refer to the overlayfs driver). In the Linux world you sometimes need two file systems layered at once: directories with the same name are accessible from both, and when there is a conflict the 'upper' file system takes precedence. OpenWRT, for example, uses two file systems. One provides the vast majority of the Linux system files, including the kernel, libraries, system tools and driver modules, all packed into a single image that needs very little space; because it is mapped from an image it is read-only, so the user cannot modify it. The other is a flash-mapped file system (the upper layer) that the user can read and write. Both use the same Linux directory layout, so the user can add or delete files as needed, but those changes only affect the flash-mapped file system; to the end user this shared access is transparent. Docker also uses this dual-file-system idea: one file system belongs to the host itself, the other to the target system. For example, my computer runs Ubuntu 14.04 with kernel 3.19, and with docker I can bring in an Ubuntu 16.04 (kernel 4.4 era) file system and use it to run programs built for Ubuntu 16.04, while the container still shares the host's kernel. It is a wonderful design: through this isolation and sharing we can standardize all kinds of container images and then run them on many different platforms, as long as the platform provides docker support, just as a Java program can run on any platform that provides a Java virtual machine.
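The "upper wins" behaviour described above can be reproduced by hand with a small overlay mount (a sketch that needs root; the /tmp/ov paths are made up for illustration):

```shell
# lower = read-only layer, upper = writable layer;
# overlayfs also needs a work dir and a merged mount point
mkdir -p /tmp/ov/lower /tmp/ov/upper /tmp/ov/work /tmp/ov/merged
echo "from lower" > /tmp/ov/lower/a.txt
echo "from upper" > /tmp/ov/upper/a.txt   # same path exists in both layers

sudo mount -t overlay overlay \
    -o lowerdir=/tmp/ov/lower,upperdir=/tmp/ov/upper,workdir=/tmp/ov/work \
    /tmp/ov/merged

cat /tmp/ov/merged/a.txt   # prints "from upper": the upper layer wins
```

Writes into /tmp/ov/merged land only in the upper directory, which is exactly how a container's writable layer leaves the read-only image layers untouched.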
With snap, images are stored at /var/snap/docker/common/var-lib-docker/aufs/diff.

Tuesday, January 29, 2019

Learning the Helm (kubernetes) and the Dock (docker) - Docker



Below are my notes from learning Docker; I will add more annotations when I have time.

As mentioned earlier, using one-stop Kubernetes is inseparable from docker or other tools for creating container images. I explained before why we do not set up one computer per service application: that occupies resources, costs time and effort, is hard to scale and adjust, copy and reuse, and is troublesome and expensive to maintain. So why use containers rather than VMs? The differences between a VM and a container:
  • The VM is a hardware abstraction: it takes physical CPUs and RAM from a host, and divides and shares it across several smaller virtual machines. There is an OS and application running inside the VM, but the virtualization software usually has no real knowledge of that. 
  • A container is an application abstraction: the focus is really on the OS and the application, and not so much the hardware abstraction. Many customers actually use both VMs and containers today in their environments and, in fact, may run containers inside of VMs. 
What is Docker? Another great piece of open-source software. As wikipedia describes it: Docker is used to run software packages called "containers". Containers are isolated from each other and bundle their own application, tools, libraries and configuration files; they can communicate with each other through well-defined channels. All containers are run by a single operating-system kernel and are thus more lightweight than virtual machines. Containers are created from "images" that specify their precise contents. Images are often created by combining and modifying standard images downloaded from public repositories.

The Docker website says: Docker unlocks the potential of every organization with a container platform that brings traditional applications and microservices built on Windows, Linux and mainframe into an automated and secure supply chain, advancing dev to ops collaboration.
As a result, organizations report a 300 percent improvement in time to market, while reducing operational costs by 50 percent. Inspired by open source innovation and a rich ecosystem of technology and go-to-market partners, Docker’s container platform and services are used by millions of developers and more than 650 Global 10K commercial customers including ADP, GE, MetLife, PayPal and Societe Generale.


Microsoft also has an explanation of containers and Docker here:
Containers are an isolated, resource controlled, and portable runtime environment which runs on a host machine or virtual machine. An application or process which runs in a container is packaged with all the required dependencies and configuration files; It’s given the illusion that there are no other processes running outside of its container.
The container’s host provisions a set of resources for the container and the container will use only these resources. As far as the container knows, no other resources exist outside of what it has been given and therefore the container cannot touch resources which may have been provisioned for a neighboring container.
Docker is the vessel by which container images are packaged and delivered. This automated process produces images (effectively templates) which may then be run anywhere—on premises, in the cloud, or on a personal machine—as a container.

To start learning docker, you should first register a free account at https://hub.docker.com/ => similar to GitHub, this hub provides docker image storage and lets you share images, automate workflows, and more with a docker ID. A free account seems to allow only one private repository, but there is no limit on public repositories, i.e. you can create as many public projects as you like. This encourages everyone to share their designs, which is exactly the open-source spirit. A free account can also create an organization, and the hub provides many official images.
To publish a new image of my_project, run: docker login ; docker push my_username/my_project:tagname (use an existing tagname, since push does not create a new tag here; otherwise you get the error: tag does not exist).
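Putting the whole publish round trip together (a sketch; my_username/my_project and the v0.1 tag are placeholders, not a real repository):

```shell
docker login                                         # authenticate with hub.docker.com
docker image build -t my_username/my_project:v0.1 .  # build and tag in one step
docker push my_username/my_project:v0.1              # upload the tagged image
docker pull my_username/my_project:v0.1              # any machine can now pull it back
```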

A good place to start learning docker is https://docs.docker.com/get-started/ => more examples and ideas
Some common commands:
docker container run hello-world => run hello-world
docker image ls => ls image on system
docker run -it ubuntu bash => run an ubuntu container
Alpine Linux is a lightweight Linux distribution so it is quick to pull down and run, making it a popular starting point for many other images: docker container run alpine ls -l => ls then container exit
docker container run -it alpine /bin/sh => running interactive shell
docker container ls -a
docker container start
docker container exec ls
docker container diff => shows change of the container
docker container commit CONTAINER_ID => commit change


docker image tag <IMAGE_ID> ourfiglet => tag the image committed above as ‘ourfiglet’; then run: docker container run ourfiglet figlet hello

Some docker concepts:
  • Images - The file system and configuration of our application which are used to create containers. To find out more about a Docker image, run docker image inspect alpine. In the demo above, you used the docker image pull command to download the alpine image. When you ran the command docker container run hello-world, it also did a docker image pull behind the scenes to download the hello-world image.
  • Containers - Running instances of Docker images — containers run the actual applications. A container includes an application and all of its dependencies. It shares the kernel with other containers, and runs as an isolated process in user space on the host OS. You created a container using docker run which you did using the alpine image that you downloaded. A list of running containers can be seen using the docker container ls command.
  • Docker daemon - The background service running on the host that manages building, running and distributing Docker containers.
  • Docker client - The command line tool that allows the user to interact with the Docker daemon.
  • Docker Store - Store is, among other things, a registry of Docker images. Think of the registry as a directory of all available Docker images.

Building a docker image requires a special file named ‘Dockerfile'. This file is like a GNU Makefile, or a project file for a project. For example, I created the following NodeJS code in index.js:
var os = require("os");
var hostname = os.hostname();
console.log("hello from " + hostname);
I can then create the following Dockerfile:
FROM alpine
RUN apk update && apk add nodejs
COPY . /app
WORKDIR /app
CMD ["node","index.js"]


With the two files above, just 8 short lines of code, docker can build a linux container that runs NodeJS:
Run: docker image build -t hello:v0.1 . 
then run: docker container run hello:v0.1 => got: hello from 92d79b6de29f
The image is actually built in layers.
docker image history => shows several intermediate IMAGE entries with CREATED BY etc. Incremental updates that build a new image can reuse some cached layers.
docker image pull alpine => pull alpine image;
docker image inspect alpine => check detail of alpine image;
docker image inspect --format "{{ json .RootFS.Layers }}" alpine => to get the layers: ["sha256:60ab55d3379d47c1ba6b6225d59d10e1f52096ee9d5c816e42c635ccc57a5a2b"]; as alpine is small, there is only one layer here. Running the same command on a bigger image shows more layers.
Applications that create and store data (databases, for example) can store their data in a special kind of Docker object called a volume, so that data can persist and be shared with other containers.
  • Layers - A Docker image is built up from a series of layers. Each layer represents an instruction in the image’s Dockerfile. Each layer except the last one is read-only.
  • Dockerfile - A text file that contains all the commands, in order, needed to build a given image. The Dockerfile reference page lists the various commands and format details for Dockerfiles.
  • Volumes - A special Docker container layer that allows data to persist and be shared separately from the container itself. Think of volumes as a way to abstract and manage your persistent data separately from the application itself.
Image names must be unique and are specified in the format <repository>/<name>:<tag>. Names such as “ubuntu” and “alpine” have no repository specified, so they are pulled from a default public repository called “library”, which is maintained by Docker. If no tag is specified, the default is to look for a tag named “latest” and use that. Tags generally specify versions (although this is not a requirement).
Docker supplies two tools: Docker Compose and Docker Swarm Mode. The two tools have some similarities but some important differences:
  • Compose is used to control multiple containers on a single system. Much like the Dockerfile we looked at to build an image, there is a text file that describes the application: which images to use, how many instances, the network connections, etc.
  • Swarm Mode tells Docker that you will be running many Docker engines and you want to coordinate operations across all of them. Swarm mode combines the ability to not only define the application architecture, like Compose, but to define and maintain high availability levels, scaling, load balancing, and more. With all this functionality, Swarm mode is used more often in production environments than its more simplistic cousin, Compose.
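On the Compose side, a Compose file describing the NodeJS + database example might look roughly like this (a sketch; the image names, ports and service layout are my own assumptions, not from any real project):

```yaml
# docker-compose.yml - describes a two-service application on one host
version: "3"
services:
  web:
    image: my_username/hello:v0.1   # the NodeJS image built earlier
    ports:
      - "8080:8080"                 # host:container port mapping
    depends_on:
      - db
  db:
    image: mongo:4                  # official MongoDB image from the hub
    volumes:
      - dbdata:/data/db             # persist database files in a named volume
volumes:
  dbdata:
```

With this file in the current directory, docker-compose up -d starts both containers together.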
docker swarm init --advertise-addr $(hostname -i) => create a Swarm manager
To add a worker, run the docker swarm join --token command (printed by init) from another node.
To add a manager to this swarm, run 'docker swarm join-token manager'
Run docker node ls on the manager to check nodes.
A stack is a group of services that are deployed together: multiple containerized components of an application that run in separate instances. Each individual service can actually be made up of one or more containers, called tasks and then all the tasks & services together make up a stack.
Dockerfile syntax (refer to https://docs.docker.com/engine/reference/builder):
  • FROM specifies the base image to use as the starting point for this new image you’re creating.
  • ENV <key> <value> sets the environment variable <key> to the value <value>.
  • RUN will execute any commands in a new layer on top of the current image and commit the results.
  • COPY copies files from the Docker host into the image, at a known location.
  • EXPOSE documents which ports the application uses.
  • VOLUME creates a mount point with the specified name. Refer to storage.
  • CMD specifies what command to run when a container is started from the image.
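A Dockerfile exercising most of these instructions might look like this (a sketch extending the earlier NodeJS example; the ENV variable, port and volume path are made-up illustrations):

```dockerfile
FROM alpine                       # base image to start from
ENV NODE_ENV production           # environment variable baked into the image
RUN apk update && apk add nodejs  # executed in a new layer, result committed
COPY . /app                       # copy files from the docker host into the image
WORKDIR /app
EXPOSE 8080                       # documents the port the application uses
VOLUME /app/data                  # mount point for persistent data
CMD ["node", "index.js"]          # command run when a container starts
```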

Learning the Helm (kubernetes) and the Dock (docker) - Kubernetes

Below are study notes, drawn from the Internet plus my own testing and summarizing.
Kubernetes is an open-source software system that provides complete support for automatically deploying, scaling and managing applications. It was originally designed by Google and donated to the Cloud Native Computing Foundation (now part of the Linux Foundation). See https://zh.wikipedia.org/wiki/Kubernetes: Kubernetes (commonly stylized as K8s) is an open-source container-orchestration system for automating deployment, scaling and management of containerized applications. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. It aims to provide a "platform for automating deployment, scaling, and operations of application containers across clusters of hosts". It works with a range of container tools, including Docker.
For example, as I said earlier, suppose I want to build a webserver that offers a database query service. The traditional approach is to rent a web server from a hosting provider, or set up one myself, and rent a domain name; then I either pay the provider to maintain the server or maintain it myself. The problem is that you never know when the server will go down for some reason: a bug causes a crash, the disk fills up, the service needs an upgrade (software updates, hardware expansion), and so on. All of these can interrupt the service or even lose data, which is unacceptable for important services. Also, when the number of nodes has to change as users come and go, physically adding and removing nodes is not easy. Especially for a provider offering a large number of different services, constantly adjusting the kinds and counts of services and endlessly installing or removing software on different nodes is inefficient and error-prone. This is where Kubernetes becomes your good helper.

Refer to Kubernetes basics:
A Kubernetes cluster consists of two types of resources: a Master and a group of nodes. The Master is responsible for managing the cluster. The master coordinates all activities in your cluster, such as scheduling applications, maintaining applications' desired state, scaling applications, and rolling out new updates. A node is a worker machine in Kubernetes, previously known as a minion. A node may be a VM or physical machine, depending on the cluster. Each node contains the services necessary to run pods and is managed by the master components. The services on a node include the container runtime, kubelet and kube-proxy. Kubelet is an agent for managing the node and communicating with the Kubernetes master. The node should also have tools for handling container operations, such as Docker or rkt. Pod (as in a pod of whales or pea pod) is a group of one or more containers (such as Docker containers), with shared storage/network (a unique cluster IP address), and a specification for how to run the containers. A pod’s contents are always co-located and co-scheduled, and run in a shared context. A pod models an application-specific “logical host” - it contains one or more application containers which are relatively tightly coupled; in a pre-container world, being executed on the same physical or virtual machine would mean being executed on the same logical host. While Kubernetes supports more container runtimes than just Docker, Docker is the most commonly known runtime, and it helps to describe pods in Docker terms. The shared context of a pod is a set of Linux namespaces, cgroups, and potentially other facets of isolation - the same things that isolate a Docker container. Within a pod’s context, the individual applications may have further sub-isolations applied. Containers within a pod share an IP address and port space, and can find each other via localhost.
They can also communicate with each other using standard inter-process communications like SystemV semaphores or POSIX shared memory. Containers in different pods have distinct IP addresses and can not communicate by IPC without special configuration. These containers usually communicate with each other via Pod IP addresses.
Every Kubernetes Node runs at least:
  • Kubelet, a process responsible for communication between the Kubernetes Master and the Node; it manages the Pods and the containers running on a machine.
  • A container runtime (like Docker, rkt) responsible for pulling the container image from a registry, unpacking the container, and running the app
In other words, Kubernetes has a master and a group of nodes. The master does not serve external users directly; it is mainly responsible for managing and coordinating the worker nodes. The system administrator manages and controls the master through management interfaces (a GUI or a command-line console). Each node is a host computer or virtual machine, with one or more pods running on it. Each pod is one container or a group of containers, and these containers share storage and network resources. For example, my project needs NodeJS to provide the front-end HTTP web server and mongoDB to provide back-end database support; NodeJS and MongoDB can be two containers running in one pod, and that one pod can run on one node.
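That NodeJS + MongoDB pod could be written in YAML roughly like this (a sketch; the names and image tags are my own assumptions, and in production the web server and database would usually go in separate pods, but this shows the shared-network idea):

```yaml
# pod.yaml - one pod, two tightly coupled containers
apiVersion: v1
kind: Pod
metadata:
  name: web-with-db
spec:
  containers:
    - name: web
      image: my_username/hello:v0.1   # NodeJS front end
      ports:
        - containerPort: 8080
    - name: db
      image: mongo:4                  # MongoDB back end
  # both containers share the pod's IP and port space, so the web
  # container can reach MongoDB at localhost:27017
```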

Pods provide two kinds of shared resources for their constituent containers: networking and storage. For storage, refer to kubernetes storage volumes:
Docker also has a concept of volumes, though it is somewhat looser and less managed. A Kubernetes volume, on the other hand, has an explicit lifetime - the same as the Pod that encloses it. Consequently, a volume outlives any Containers that run within the Pod, and data is preserved across Container restarts. Of course, when a Pod ceases to exist, the volume will cease to exist, too. Perhaps more importantly, Kubernetes supports many types of volumes, and a Pod can use any number of them simultaneously. There are many volume types, such as: local, hostPath, nfs and VMDK.
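A minimal sketch of a pod-scoped volume shared between two containers (emptyDir is one of the volume types; the container names and commands here are made up for illustration):

```yaml
# shared-volume.yaml - an emptyDir volume lives exactly as long as the pod
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  volumes:
    - name: cache
      emptyDir: {}          # created empty when the pod starts
  containers:
    - name: writer
      image: alpine
      command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
      volumeMounts:
        - name: cache
          mountPath: /data
    - name: reader
      image: alpine
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: cache
          mountPath: /data  # sees the file written by the other container
```

The data survives container restarts within the pod, but disappears when the pod itself is deleted.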

What is a container, and why use containers? Refer to this explanation of why containers:
The Old Way to deploy applications was to install the applications on a host using the operating-system package manager. This had the disadvantage of entangling the applications’ executables, configuration, libraries, and lifecycles with each other and with the host OS. One could build immutable virtual-machine images in order to achieve predictable rollouts and rollbacks, but VMs are heavyweight and non-portable.
The New Way is to deploy containers based on operating-system-level virtualization rather than hardware virtualization. These containers are isolated from each other and from the host: they have their own filesystems, they can’t see each others’ processes, and their computational resource usage can be bounded. They are easier to build than VMs, and because they are decoupled from the underlying infrastructure and from the host filesystem, they are portable across clouds and OS distributions.
Because containers are small and fast, one application can be packed in each container image. This one-to-one application-to-image relationship unlocks the full benefits of containers. With containers, immutable container images can be created at build/release time rather than deployment time, since each application doesn’t need to be composed with the rest of the application stack, nor married to the production infrastructure environment. Generating container images at build/release time enables a consistent environment to be carried from development into production. Similarly, containers are vastly more transparent than VMs, which facilitates monitoring and management. This is especially true when the containers’ process lifecycles are managed by the infrastructure rather than hidden by a process supervisor inside the container. Finally, with a single application per container, managing the containers becomes tantamount to managing deployment of the application.
Compared with the old way, where every new service means setting up another computer or VM at a cost in time, effort and resources, the new container way is faster, better and cheaper.

To learn Kubernetes, the online Kubernetes tutorials are a starting point, but the best way is to set up a Kubernetes cluster of your own. A single mid-range computer is enough, and Windows, Linux or MacOS all work.
There are different solutions for Kubernetes. For example, OpenShift Online provides free hosted access for Kubernetes applications; Minikube is a method for creating a local, single-node Kubernetes cluster for development and testing, where setup is completely automated and doesn’t require a cloud provider account; and microk8s provides a single-command installation of the latest Kubernetes release on a local machine for development and testing. Setup is quick (~30 sec) and supports many plugins, including Istio, with a single command.
I used Minikube as my starting point. Run minikube start to start it, and run minikube dashboard to open the dashboard in a web browser. Although Minikube supports only one node (in fact the Master and this single node run in the same virtual machine, which makes setup very convenient), you can still run multiple pods on that one node, and it needs very few resources (my computer has a small SSD already full of software; fortunately installing and running Minikube does not take much space).
For the local-machine solution with Minikube on Windows, I downloaded the v1.12.0 windows/amd64 kubectl.exe and the minikube-windows-amd64.exe file, renamed the latter to minikube.exe and added it to the path. Using Minikube on Windows requires virtual machine support. Although the Java affair makes me dislike Oracle, Oracle's VirtualBox is still quite good; it is open source, and in many ways it beats VMware. Minikube uses the virtual machine to provide the Master and node, and that VM also ships with docker. The first time you run minikube start, it downloads a VM iso (shown in VirtualBox as %AppData%\SPB_Data\.minikube\machines\minikube\boot2docker.iso) and takes quite some time (~2 min) to start:
C:\p>minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.


Note minikube can be started with or without admin rights. After it is started, VirtualBox started as the same user should show the minikube VM as Running (remember: as mentioned here, the username/password is docker/tcuser, same for ssh). The IP addresses are quite confusing. Running ifconfig on the VM gives the following: docker0@172.17.0.1, eth0@10.0.2.15, eth1@192.168.99.100, lo@127.0.0.1, and several vethxxx. From the host PC I can ping 192.168.99.100, but none of the others. Run kubectl get services: kubernetes   ClusterIP 10.96.0.1
But cannot ping 10.96.0.1 from the VM. And if login to one pod and run ifconfig there, may see IP like: 172.17.0.5 there. Run kubectl cluster-info get: Kubernetes master is running at https://192.168.99.100:8443
CoreDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:d

If, like me, you try to run docker on Windows, you may hit the following VT-x problem:
If get VT-x is not available error, refer to this SU, could be three reasons: 1) VT-x is not enabled in the BIOS 2) The CPU doesn't support VT-x 3) Hyper-V virtualization is enabled in Windows. If it’s case 3, try: dism.exe /Online /Disable-Feature:Microsoft-Hyper-V and reboot (Windows features: Hyper-V Management Tools is checked, but Hyper-V Platform is unchecked).

It is a little confusing about Hyper-V, as many posts about Kubernetes or minikube mention that Hyper-V needs to be enabled, but this is not completely true. Refer to installing-docker-and-kubernetes-on-windows:
Using Windows 10 Home? You won’t be able to run Docker for Windows. When implementing Docker on Windows, they opted for Hyper-V as their virtualisation technology. The benefit is crystal clear: excellent performance and a native hypervisor. Unfortunately not all Windows versions ship with Hyper-V; with Windows 10 Home or Student edition, you are out of luck. But it’s not game over. There are plenty of replacements based on Docker Machine, such as Docker Toolbox or minikube. The way Docker Machine works is simple: there’s a virtual machine that runs Linux and Docker, and you connect from your host to the remote Docker daemon in that VM. Minikube is one of the most interesting VMs based on Docker Machine - that’s if you’re into running Kubernetes clusters. In fact, minikube is a VM that runs Docker and Kubernetes. It’s usually used to run Kubernetes only, but you can use it to run Docker containers too. You won’t reach the same speed as Docker for Windows, but you can build and run containers without Hyper-V.
With the latest Windows 10 Pro, you can install Docker for Windows: excellent performance and excellent developer experience. But the hypervisor used by Docker for Windows is extremely powerful - indeed it’s called a Type-1 hypervisor. It’s so powerful that it doesn’t play nicely with weaker Type-2 hypervisors such as the one in VirtualBox. You can’t have Type-1 and Type-2 hypervisors running at the same time on your machine. Or in other words, if you run Docker for Windows, you won’t be able to start your virtual machines on VirtualBox. You can enable and disable the Hyper-V hypervisor at will, but it requires a restart of your computer.
If you’re frequently switching from containers to VM, perhaps minikube is a more convenient choice. You don’t need to restart your computer when you change from containers to VM. But you don’t benefit from the extra performance or the improved experience.
Lastly, if you’re interested in running Windows containers, i.e. containers with a base image that inherits from Windows, Docker for Windows is the only option. It sounds like minikube can use either VirtualBox or Hyper-V, as shown in a diagram in that post.
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Running Minikube on Linux does not necessarily require VirtualBox; you can install the KVM packages instead.
On Ubuntu, I used sudo snap install kubectl --classic, which installed kubectl 1.12.2, and ran this to install minikube:
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.30.0/minikube-linux-amd64 
chmod +x minikube
sudo cp minikube /usr/local/bin/
rm minikube

For running minikube with KVM on Ubuntu 18.04, the following is needed before starting minikube:
sudo apt install libvirt-clients libvirt-daemon-system qemu-kvm
sudo usermod -a -G libvirt $(whoami)
newgrp libvirt

For Ubuntu 16.04, run this instead:
sudo apt install libvirt-bin qemu-kvm
sudo usermod -a -G libvirtd $(whoami)
newgrp libvirtd

then install the kvm2 driver:
curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2
sudo install docker-machine-driver-kvm2 /usr/local/bin/
then run: minikube start --vm-driver kvm2

boot2docker.iso and minikube.rawdisk are saved to ~/.minikube/machines/minikube.

kubectl run kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --port=8080 => deploy app
kubectl proxy => create connection to Kubernetes cluster via proxy endpoint: http://localhost:8001
Run minikube dashboard to open the dashboard webpage.
The most common operations can be done with the following kubectl commands:
  • kubectl get - list resources
  • kubectl describe - show detailed information about a resource
  • kubectl logs - print the logs from a container in a pod
  • kubectl exec - execute a command on a container in a pod
Such as kubectl get node, kubectl get pods. The describe command can be used to get detailed information about most of the kubernetes primitives: node, pods, deployments. To run a command or open bash on a Pod:
kubectl exec $POD_NAME env
kubectl exec -ti $POD_NAME bash

A Service in Kubernetes is an abstraction which defines a logical set of Pods and a policy by which to access them. Services enable a loose coupling between dependent Pods. A Service is defined using YAML (preferred) or JSON. Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a type in the ServiceSpec:
  • ClusterIP (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
  • NodePort - Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using <NodeIP>:<NodePort>. Superset of ClusterIP.
  • LoadBalancer - Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.
  • ExternalName - Exposes the Service using an arbitrary name (specified by externalName in the spec) by returning a CNAME record with the name. No proxy is used.
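A Service of type NodePort can also be declared in YAML rather than with kubectl expose (a sketch; the selector label is an assumption based on the bootcamp deployment name):

```yaml
# service.yaml - expose the bootcamp pods outside the cluster
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-bootcamp
spec:
  type: NodePort
  selector:
    run: kubernetes-bootcamp   # matches pods created by `kubectl run`
  ports:
    - port: 8080               # the Service's cluster-internal port
      targetPort: 8080         # the container port traffic is forwarded to
      # nodePort is auto-assigned from 30000-32767 unless set explicitly
```

Apply it with kubectl apply -f service.yaml, then check kubectl get services for the assigned node port.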
kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080 => run kubectl get services before and after and you will see a new service.
curl $(minikube ip):$NODE_PORT => access the app; $NODE_PORT is the exposed port number (i.e. the one after the colon) shown in the get services output. If exposed with type “LoadBalancer”, the service will stay in the ‘pending’ state on minikube; run minikube service kubernetes-bootcamp to reach it.
Container image: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/