Multipass
For how to use it, see the earlier article on getting started with Multipass.
Setup and deployment process
Disable swap
$ swapoff -a
Check swap status
$ free -h
Prevent swap from being enabled at boot
$ vi /etc/fstab
Comment out the line beginning with swap.
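Equivalently, a one-liner can do the commenting; a sketch that assumes the swap entry has "swap" as a whitespace-separated field and is not already commented:
$ sed -ri '/\sswap\s/s/^([^#])/#\1/' /etc/fstab   # prefix the swap line with '#'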
Disable the firewall
$ ufw disable
Install Docker
Install it on your own; it is not covered here.
Install the essential Kubernetes tools
- kubeadm
- kubelet
- kubectl
Install system tools
$ apt-get update && apt-get install -y apt-transport-https
Install the GPG key
$ curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
Add the package source
$ cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
Install
$ apt-get update && apt-get install -y kubeadm kubelet kubectl
E: The repository 'https://download.docker.com/linux/ubuntu focal Release' no longer has a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
If you hit the error above, empty /etc/apt/sources.list.d/docker.list,
then reinstall. If you did not hit it, skip straight ahead.
$ apt-get update && apt-get install -y kubeadm kubelet kubectl
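apt pulls the newest packages, which may be ahead of the kubernetesVersion (1.23.0) used in kubeadm.yml later. If you want them to match, you can pin the versions; the exact 1.23.0-00 revision string is an assumption here, so confirm it first with apt-cache madison:
$ apt-cache madison kubeadm
$ apt-get install -y kubeadm=1.23.0-00 kubelet=1.23.0-00 kubectl=1.23.0-00
$ apt-mark hold kubeadm kubelet kubectl   # keep apt upgrades from moving them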
Everything up to this point must be executed on all three machines. This is important.
Set the time zone
$ dpkg-reconfigure tzdata
Install ntpdate
$ apt-get install ntpdate
Synchronize the system time over the network
$ ntpdate edu.ntp.org.cn
Reference: http://www.ntp.org.cn/pool
Write the system time to the hardware clock
$ hwclock --systohc
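You can sanity-check the result (local time, time zone, and RTC) with:
$ timedatectl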
Edit cloud.cfg
$ vi /etc/cloud/cloud.cfg
# change this to true
preserve_hostname: true
Multipass has already taken care of this.
$ hostnamectl set-hostname master
$ cat >> /etc/hosts<<EOF
192.168.141.110 master
EOF
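On the other two VMs, set their hostnames the same way, and make sure all three names resolve on every node. The node1/node2 addresses below are placeholders for this setup; substitute the IPs that multipass list reports:
$ hostnamectl set-hostname node1   # use node2 on the third VM
$ cat >> /etc/hosts <<EOF
192.168.141.110 master
192.168.141.111 node1
192.168.141.112 node2
EOF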
Install Kubernetes
$ cd /usr/local/
$ mkdir kubernetes
$ cd kubernetes
$ mkdir cluster
$ cd cluster
$ kubeadm config print init-defaults > kubeadm.yml
$ vim kubeadm.yml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: 1.23.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Change the host IP: set advertiseAddress to this master's address.
Change the image repository: k8s.gcr.io is Google's registry; replace it with
imageRepository: registry.aliyuncs.com/google_containers
Note:
- make sure kubernetesVersion matches the version you installed
- optionally set podSubnet: "10.224.0.0/16" under networking (commented out here)
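After these edits the changed parts of kubeadm.yml should look roughly like this (the advertiseAddress uses this master's IP, which shows up as 172.19.47.43 in the init log below; podSubnet is left commented out):
localAPIEndpoint:
  advertiseAddress: 172.19.47.43
  bindPort: 6443
...
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: 1.23.0
networking:
  dnsDomain: cluster.local
  # podSubnet: "10.224.0.0/16"
  serviceSubnet: 10.96.0.0/12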
List the images that need to be downloaded
$ kubeadm config images list --config kubeadm.yml
registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.23.0
registry.aliyuncs.com/google_containers/pause:3.6
registry.aliyuncs.com/google_containers/etcd:3.5.1-0
registry.aliyuncs.com/google_containers/coredns:v1.8.6
Pull the images
$ kubeadm config images pull --config kubeadm.yml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6
$ docker images
Initialize the master node
$ kubeadm init --config=kubeadm.yml --upload-certs | tee kubeadm-init.log
Add the following to /etc/docker/daemon.json to set Docker's cgroup driver to systemd:
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
The daemon.json now looks like this:
{
  "registry-mirrors": ["https://c8lfvm3n.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
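For reference, the whole file can be written in one step; note this overwrites any existing daemon.json, so merge by hand if you have other keys:
$ cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "registry-mirrors": ["https://c8lfvm3n.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF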
Reload the daemon configuration and restart Docker and kubelet
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
$ sudo systemctl restart kubelet
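To confirm the change took effect, docker info should report the new driver:
$ docker info | grep -i 'cgroup driver'
 Cgroup Driver: systemd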
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
If you hit port-in-use errors like these, or a join fails, reset and initialize again:
$ kubeadm reset
$ kubeadm init --config=kubeadm.yml --upload-certs | tee kubeadm-init.log
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.19.47.43:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:840bc14e93004e5cbffd30bfc4a7ea7a3be3c78eb02f1a24fb75d482da0d6dc5
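The token in this join command expires after 24 hours (the ttl set in kubeadm.yml). If you need to join a node after that, generate a fresh join command on the master:
$ kubeadm token create --print-join-command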
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node NotReady control-plane,master 9m32s v1.23.3
Run the join command above on each of the remaining worker nodes:
$ kubeadm join 172.19.47.43:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:840bc14e93004e5cbffd30bfc4a7ea7a3be3c78eb02f1a24fb75d482da0d6dc5
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0128 11:22:03.577678 86042 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node NotReady control-plane,master 31m v1.23.3
node1 NotReady <none> 8m28s v1.23.3
node2 NotReady <none> 34s v1.23.3
At this point all nodes are installed; next, the Kubernetes network needs to be set up.
Install the Calico network plugin
What is Calico?
Calico is an open-source networking and network security solution for connecting containers, virtual machines, and hosts. It can be used on PaaS and IaaS platforms such as Kubernetes, OpenShift, Docker EE, and OpenStack.
Calico also provides dynamic enforcement of network security rules. Using Calico's simple policy language, you get fine-grained control over communication between containers, VM workloads, and bare-metal host endpoints.
Official docs: http://docs.projectcalico.org
https://projectcalico.docs.tigera.io/getting-started/kubernetes/quickstart
Older versions of the docs install a single manifest:
$ kubectl apply -f http://docs.projectcalico.org/v3.8/manifests/calico.yaml
The current quickstart installs Calico through the Tigera operator instead (use one approach, not both):
$ kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
$ kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml
$ kubectl get pods --all-namespaces
$ watch kubectl get pods -n calico-system
Running the first container on Kubernetes
$ kubectl get cs
Master node info
$ kubectl cluster-info
Check node status
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node Ready control-plane,master 165m v1.23.3
node1 Ready <none> 142m v1.23.3
node2 Ready <none> 134m v1.23.3
Run the first container instance
# Create a pod
$ kubectl run nginx --image=nginx --port=80
pod/nginx created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 2m13s
# Create an nginx deployment
$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
# Get the pods created by the deployment
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-85b98978db-nsp5b 1/1 Running 0 56s
# Expose the deployment through a load balancer
$ kubectl expose deployment nginx --port=80 --type=LoadBalancer
service/nginx exposed
# Get the services
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h5m
nginx LoadBalancer 10.96.168.113 <pending> 80:30948/TCP 13s
# Get service details
$ kubectl describe service nginx
Name: nginx
Namespace: default
Labels: app=nginx
Annotations: <none>
Selector: app=nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.168.113
IPs: 10.96.168.113
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 30948/TCP
Endpoints: 192.168.104.2:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Visit http://172.19.46.74:30948/ to verify it works.
# Delete the deployment
$ kubectl delete deployment nginx
deployment.apps "nginx" deleted
# Delete the exposed service
$ kubectl delete service nginx
service "nginx" deleted
Run containers from a resource configuration file
$ kubectl create -f <file>
$ kubectl apply -f <file>
$ vim nginx.yml
# API version
apiVersion: apps/v1
# Kind, e.g. Pod/ReplicationController/Deployment/Service/Ingress
kind: Deployment
# Metadata
metadata:
  # Name of this Kind
  name: nginx-app
spec:
  # Number of replicas to deploy
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        # Container label; the Service selector must match this
        app: nginx
    spec:
      # Container list; an array, so multiple containers can be configured
      containers:
        # Container name
        - name: nginx
          # Container image
          image: nginx:1.17
          # Pull the image only if it is not present locally
          imagePullPolicy: IfNotPresent
          # Exposed ports
          ports:
            # Pod port
            - containerPort: 80
$ kubectl create -f nginx.yml
deployment.apps/nginx-app created
The extensions/v1beta1 API is no longer served as of Kubernetes 1.16; use apps/v1 instead.
https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/
The spec.selector.matchLabels value must exactly match spec.template.metadata.labels.
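A deliberately broken sketch to illustrate: if the two label sets drift apart, the API server rejects the Deployment with an error along the lines of `selector` does not match template `labels`:
selector:
  matchLabels:
    app: web        # does not match the template label below -> rejected
template:
  metadata:
    labels:
      app: nginx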
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-app-5757b68bb6-jqrsg 1/1 Running 0 3m30s
nginx-app-5757b68bb6-w8xvj 1/1 Running 0 3m30s
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-app 2/2 2 2 3m45s
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8h
There is no nginx service yet.
imagePullPolicy options
- Always: pull the image on every start, whether or not it is present locally
- Never: never pull, whether or not the image is present
- IfNotPresent: pull only if the image is not present locally
Note
- The default is IfNotPresent, but images tagged :latest default to Always
- When pulling, Docker checks the image digest; if it has not changed, the image is not pulled again
- Avoid the :latest tag in production; in development, :latest is convenient for automatically pulling the newest image
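To see which policy a pod actually got, query its spec; the pod name below is taken from the kubectl get pods output above (generated names will differ in your cluster):
$ kubectl get pod nginx-app-5757b68bb6-jqrsg -o jsonpath='{.spec.containers[0].imagePullPolicy}'
IfNotPresent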
Edit nginx.yml
# API version
apiVersion: apps/v1
# Kind, e.g. Pod/ReplicationController/Deployment/Service/Ingress
kind: Deployment
# Metadata
metadata:
  # Name of this Kind
  name: nginx-app
spec:
  # Number of replicas to deploy
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        # Container label; the Service selector must match this
        app: nginx
    spec:
      # Container list; an array, so multiple containers can be configured
      containers:
        # Container name
        - name: nginx
          # Container image
          image: nginx:1.17
          # Pull the image only if it is not present locally
          imagePullPolicy: IfNotPresent
          # Exposed ports
          ports:
            # Pod port
            - containerPort: 80
---
# API version
apiVersion: v1
# Kind, e.g. Pod/ReplicationController/Deployment/Service/Ingress
kind: Service
# Metadata
metadata:
  # Name of this Kind
  name: nginx-http
spec:
  # Exposed ports
  ports:
    # Port the Service exposes
    - port: 80
      # Pod port; traffic to the Service port is forwarded here
      targetPort: 80
  type: LoadBalancer
  # Label selector
  selector:
    # Must match the pod labels of the Deployment above
    app: nginx
$ kubectl delete -f nginx.yml
deployment.apps "nginx-app" deleted
$ kubectl create -f nginx.yml
deployment.apps/nginx-app created
service/nginx-http created
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9h
nginx-http LoadBalancer 10.97.228.234 <pending> 80:31641/TCP 114s
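EXTERNAL-IP stays <pending> because there is no cloud load balancer in this environment. You can still reach the service through the NodePort (31641 above) on any node's IP, or tunnel to it with port-forward:
$ kubectl port-forward service/nginx-http 8080:80   # blocks; run curl from another terminal
$ curl http://127.0.0.1:8080/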
$ kubectl delete -f nginx.yml
deployment.apps "nginx-app" deleted
service "nginx-http" deleted
Note: everything above was set up on Ubuntu, with the servers having internet access.