Step one is always to set up the environment. For K8S cluster setup, see: https://www.cnblogs.com/diantong/p/12187745.html
【Part 1】Master-Etcd
【Part 2】Master-Controller
Ⅰ. Deployment
Example 1: YAML to create an httpd Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: httpd
  labels:
    app: httpd-demo
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - name: httpd
        image: httpd
        ports:
        - containerPort: 80
Example 2: YAML to create an nginx Deployment (apiVersion was missing in the original and is required; extensions/v1beta1 matches example 1)
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginxdeploy
spec:
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginxpod
        image: docker.io/nginx
        ports:
        - name: http
          containerPort: 80
Ⅱ. ReplicaSet
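A ReplicaSet keeps a specified number of Pod replicas running at all times. A minimal sketch (the names frontend and my-nginx are illustrative, not from the examples above):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend          # illustrative name
spec:
  replicas: 3             # keep 3 Pods running
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx     # must match spec.selector.matchLabels
    spec:
      containers:
      - name: nginx
        image: nginx
```

In practice a Deployment is usually created instead of a bare ReplicaSet; the Deployment manages ReplicaSets and adds rolling updates on top.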
Ⅲ. DaemonSet
A DaemonSet object ensures that every Node in the cluster (or a specified subset of Nodes) runs one copy of the Pod it creates.
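A minimal DaemonSet sketch (a fluentd log collector is the assumed example workload; name and image are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-logging   # illustrative name
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluentd    # one copy runs on every schedulable Node
```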
Ⅳ. Job
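A Job runs Pods to completion instead of keeping them alive. A minimal sketch (the pi computation is the customary illustrative workload, not from this cluster):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi                 # illustrative name
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(100)"]
      restartPolicy: Never # Job Pods must not restart after success
  backoffLimit: 4          # retries before the Job is marked failed
```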
Ⅴ. CronJob
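A CronJob creates a Job on a schedule. A minimal sketch (the schedule and image are illustrative assumptions):

```yaml
apiVersion: batch/v1beta1   # batch/v1 on newer clusters
kind: CronJob
metadata:
  name: hello               # illustrative name
spec:
  schedule: "*/1 * * * *"   # standard cron syntax: every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["date"]
          restartPolicy: OnFailure
```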
【Part 3】Master-Scheduler
【Part 4】Master-Apiserver
【Part 5】Node-KubeProxy
Ⅰ. Service:
Ⅱ. Ingress:
1. Run on the master:
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.20.0/deploy/mandatory.yaml
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml   # exposes the service externally; skip this download if not needed
2. Replace the defaultbackend-amd64 and nginx-ingress-controller image addresses with mirrors; pulling will also be faster:
[root@master ingress-nginx]# sed -i 's#k8s.gcr.io/defaultbackend-amd64#registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/defaultbackend-amd64#g' mandatory.yaml
[root@master ingress-nginx]# sed -i 's#quay.io/kubernetes-ingress-controller/nginx-ingress-controller#registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/nginx-ingress-controller#g' mandatory.yaml
3. Edit service-nodeport.yaml and set fixed NodePort ports (a random port is used by default):
[root@master ingress-nginx]# cat service-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
    nodePort: 30080
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
    nodePort: 30443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
4. Apply the two files mandatory.yaml and service-nodeport.yaml:
[root@k8s-master ingress-nginx]# kubectl apply -f mandatory.yaml
[root@k8s-master ingress-nginx]# kubectl apply -f service-nodeport.yaml
5. Open a browser to verify; everything works.
6. Create the backend service. Here we use nginx as the example, creating an nginx Deployment and its matching Service. Note that metadata.name of the Service must match the serviceName in the Ingress created later — remember this!
[root@master myself]# cat mypod.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-nginx
  namespace: default
spec:
  selector:
    app: mynginx
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydepoy
  namespace: default
spec:
  replicas: 5
  selector:
    matchLabels:
      app: mynginx
  template:
    metadata:
      labels:
        app: mynginx
    spec:
      containers:
      - name: mycontainer
        image: lizhaoqwe/nginx:v1
        imagePullPolicy: IfNotPresent
        ports:
        - name: nginx
          containerPort: 80
With the frontend and the backend both in place, the next step is to create the Ingress rule.
7. Create the Ingress rule:
[root@master myself]# cat ingress-nginx.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-mynginx
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: mynginx.fengzi.com
    http:
      paths:
      - path:
        backend:
          serviceName: service-nginx
          servicePort: 80
8. On your local machine, add a hosts entry in C:\Windows\System32\drivers\etc\hosts (mynginx.fengzi.com 192.168.254.13), then verify in a browser.
9. We can inspect nginx's configuration file to check whether the rule we created was injected into the ingress controller:
# Check the rules inside the ingress-controller
[root@master myself]# kubectl get pods -n ingress-nginx
NAME                                       READY   STATUS    RESTARTS   AGE
default-http-backend-7fccc47f44-sgj6g      1/1     Running   0          140m
nginx-ingress-controller-d786fc9d4-4vb5z   1/1     Running   0          140m
[root@master myself]# kubectl exec -it nginx-ingress-controller-d786fc9d4-4vb5z -n ingress-nginx -- /bin/bash
www-data@nginx-ingress-controller-d786fc9d4-4vb5z:/etc/nginx$ cat nginx.conf
We can see that the reverse-proxy rule we defined is now present in the nginx configuration file.
OK, success!
【Part 6】Node-Kubelet
【Part 7】K8s Storage
【Part 8】K8s Networking
1.1 Master
1) Etcd: stores the state of the entire cluster;
2) Controller: includes Deployment, ReplicaSet, DaemonSet, Job, CronJob, etc.;
3) Scheduler: takes Pods created by controllers and selects a suitable Node for each. The kubelet watches for Pods scheduled by the Scheduler, pulls images, starts containers, etc.;
4) Apiserver: the K8S API entry point, used by kubectl and other clients;
1.2 Node
1) Kube-proxy: provides in-cluster service discovery and load balancing for Services;
2) Kubelet: handles communication between the Kubernetes Master and the Node; manages the Pods and containers running on the machine. // View kubelet logs: journalctl -xefu kubelet
【Part 2】Detailed configuration and usage of each component
2.2 Scheduler:
2.3 Kube-Proxy
1) Service: K8S Pods have a lifecycle, and a Pod's IP changes after a restart, so a fixed way to reach it is needed. The mechanism is the labels defined on the Pod, e.g. app: myapp: when a Service's selector matches the Pod's label values, the Pods behind it can be reached through the Service (for NodePort, via the node's IP). There are three main types: ClusterIP, NodePort, and LoadBalancer.
ClusterIP: provides a fixed address for access from within the cluster; the address is auto-allocated by default, and a fixed IP can be set with the clusterIP field.
NodePort: reachable directly from an external browser via any node's IP and the node port.
Example 1: YAML for an nginx Service
kind: Service
apiVersion: v1
metadata:
  name: service01
spec:
  selector:
    app: myapp        # label defined on the Pods
  ports:
  - protocol: TCP
    port: 8088
    targetPort: 80    # port defined inside nginx
  type: NodePort
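For comparison, the same Service as a ClusterIP type with a fixed in-cluster address (a sketch; the name and the IP shown are assumptions, and the IP must fall inside the cluster's service CIDR):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: service02           # illustrative name
spec:
  type: ClusterIP
  clusterIP: 10.96.100.100  # assumed fixed IP; omit this field to auto-allocate
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 8088
    targetPort: 80
```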
2) Ingress
【Part 3】Common errors summary
1. Fix for a K8S cluster node showing NOT READY
Error message: cni config uninitialized
Symptom: kubectl describe nodes reports:
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Solution:
docker pull quay.io/coreos/flannel:v0.9.1-amd64
mkdir -p /etc/cni/net.d/
cat <<EOF> /etc/cni/net.d/10-flannel.conf
{"name":"cbr0","type":"flannel","delegate": {"isDefaultGateway": true}}
EOF
mkdir -p /usr/share/oci-umount/oci-umount.d
mkdir /run/flannel/
cat <<EOF> /run/flannel/subnet.env
FLANNEL_NETWORK=172.100.0.0/16
FLANNEL_SUBNET=172.100.1.0/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF
Then run:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
2. Joining the K8S cluster when the token has been forgotten
1: List existing tokens with kubeadm token list (if all have expired, create a new one with kubeadm token create)
2: Get the sha256 hash of the CA certificate:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
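The pipeline above simply prints the sha256 hex digest of the CA's DER-encoded public key. A self-contained way to see what it computes, using a throwaway self-signed certificate in place of /etc/kubernetes/pki/ca.crt (the paths and subject are illustrative):

```shell
# Generate a throwaway self-signed cert (stand-in for the cluster CA)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null
# Same pipeline as above: extract pubkey -> DER encode -> sha256 -> strip the "SHA256(stdin)= " prefix
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "$hash"   # 64 hex characters, used as --discovery-token-ca-cert-hash sha256:<hash>
```
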
3: Join the node:
kubeadm join 10.167.11.153:6443 --token o4avtg.65ji6b778nyacw68 --discovery-token-ca-cert-hash sha256:2cc3029123db737f234186636330e87b5510c173c669f513a9c0e0da395515b0