Kubernetes (1): Getting Started with kubectl
Published: 2019-06-26


Following the previous chapter on setting up the lab environment, this chapter introduces some day-to-day k8s commands and how to apply them.

First we will get familiar with the kubectl command-line API. Then we will create a Pod Deployment from the command line and add a Service to the Deployment to provide internal and external access. After that we will walk through scaling up, scaling down, upgrading, and rolling back, which shows how convenient it is to manage container services with k8s.

Note: the iptables section requires some specific networking knowledge; as a developer you only need a rough understanding, so it is not covered in depth here.

1 Kubectl

kubectl is the command-line interface for running k8s commands against the apiserver. See: https://kubernetes.io/docs/reference/kubectl/overview/

This article covers the basic kubectl syntax and illustrates it with simple examples. To learn more about each command and its many options, see the official kubectl reference documentation.

Below is a summary of the common kubectl commands:

Basic Commands (Beginner):
  create         Create a new k8s resource from a file or from stdin
  expose         Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
  run            Run a particular image on the cluster
  set            Set specific features on a k8s object
Basic Commands (Intermediate):
  explain        Show the documentation of a k8s resource
  get            Display one or many resources
  edit           Edit a resource on the server
  delete         Delete resources by filenames, stdin, resources and names, or by resources and label selector
Deploy Commands:
  rollout        Manage the rollout of a k8s resource
  scale          Set a new size for a Deployment, ReplicaSet, Replication Controller, or Job
  autoscale      Auto-scale a Deployment, ReplicaSet, or ReplicationController
Cluster Management Commands:
  certificate    Modify certificate resources
  cluster-info   Display cluster info
  top            Display resource (CPU/Memory/Storage) usage
  cordon         Mark a node as unschedulable
  uncordon       Mark a node as schedulable
  drain          Drain a node in preparation for maintenance
  taint          Update the taints on one or more nodes
Troubleshooting and Debugging Commands:
  describe       Show details of a specific resource or group of resources
  logs           Print the logs for a container in a Pod
  attach         Attach to a running container
  exec           Execute a command in a container
  port-forward   Forward one or more local ports to a Pod
  proxy          Run a proxy to the Kubernetes API server
  cp             Copy files and directories to and from containers
  auth           Inspect authorization
Advanced Commands:
  diff           Diff the live version against the would-be applied version
  apply          Apply a configuration to a resource by filename or stdin
  patch          Update field(s) of a resource by patching its attributes
  replace        Replace a resource by filename or stdin
  wait           Experimental: Wait for a specific condition on one or many resources
  convert        Convert config files between different API versions
Settings Commands:
  label          Update the labels on a resource
  annotate       Update the annotations on a resource
  completion     Output shell completion code for the specified shell (bash or zsh)
Other Commands:
  api-resources  Print the supported API resources on the server
  api-versions   Print the supported API versions on the server, in the form of "group/version"
  config         Modify kubeconfig files
  plugin         Provides utilities for interacting with plugins
  version        Print the client and server version information
Usage:
  kubectl [flags] [options]
Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).

 

Let's get a concrete feel for these commands through some examples.

For example, use kubectl describe node to display resource information for the k8smaster node:

kubectl describe node k8smaster
[root@k8smaster ~]# kubectl describe node k8smaster
Name:               k8smaster
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=k8smaster
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"76:80:68:34:94:6c"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 172.16.0.11
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 02 Jan 2019 13:30:57 +0100
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 03 Jan 2019 12:49:50 +0100   Wed, 02 Jan 2019 13:30:51 +0100   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 03 Jan 2019 12:49:50 +0100   Wed, 02 Jan 2019 13:30:51 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 03 Jan 2019 12:49:50 +0100   Wed, 02 Jan 2019 13:30:51 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 03 Jan 2019 12:49:50 +0100   Wed, 02 Jan 2019 13:30:51 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 03 Jan 2019 12:49:50 +0100   Wed, 02 Jan 2019 13:57:40 +0100   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.16.0.11
  Hostname:    k8smaster
Capacity:
  cpu:                2
  ephemeral-storage:  17394Mi
  hugepages-2Mi:      0
  memory:             3861508Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  16415037823
  hugepages-2Mi:      0
  memory:             3759108Ki
  pods:               110
System Info:
  Machine ID:                 8d2b3fec09894a6eb6e69d45ce7a9996
  System UUID:                34014D56-A1A0-0F33-A35B-56A3947191DF
  Boot ID:                    e0019567-d852-4000-991c-51e2a1061863
  Kernel Version:             3.10.0-957.1.3.el7.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://18.9.0
  Kubelet Version:            v1.11.1
  Kube-Proxy Version:         v1.11.1
PodCIDR:  10.244.0.0/24
Non-terminated Pods:  (8 in total)
  Namespace    Name                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------    ----                               ------------  ----------  ---------------  -------------  ---
  kube-system  coredns-78fcdf6894-5v9g9           100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     23h
  kube-system  coredns-78fcdf6894-lpwfw           100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     23h
  kube-system  etcd-k8smaster                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         22h
  kube-system  kube-apiserver-k8smaster           250m (12%)    0 (0%)      0 (0%)           0 (0%)         22h
  kube-system  kube-controller-manager-k8smaster  200m (10%)    0 (0%)      0 (0%)           0 (0%)         22h
  kube-system  kube-flannel-ds-amd64-n5j7l        100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      22h
  kube-system  kube-proxy-rjssr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23h
  kube-system  kube-scheduler-k8smaster           100m (5%)     0 (0%)      0 (0%)           0 (0%)         22h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                850m (42%)   100m (5%)
  memory             190Mi (5%)   390Mi (10%)
  ephemeral-storage  0 (0%)       0 (0%)
Events:
  Type    Reason                   Age                    From                   Message
  ----    ------                   ----                   ----                   -------
  Normal  Starting                 4m58s                  kubelet, k8smaster     Starting kubelet.
  Normal  NodeAllocatableEnforced  4m58s                  kubelet, k8smaster     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientPID     4m57s (x5 over 4m58s)  kubelet, k8smaster     Node k8smaster status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientDisk    4m55s (x6 over 4m58s)  kubelet, k8smaster     Node k8smaster status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  4m55s (x6 over 4m58s)  kubelet, k8smaster     Node k8smaster status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    4m55s (x6 over 4m58s)  kubelet, k8smaster     Node k8smaster status is now: NodeHasNoDiskPressure
  Normal  Starting                 4m21s                  kube-proxy, k8smaster  Starting kube-proxy.

 

2 Deploying/Creating a New Pod Resource

Deploy a new Pod and make it reachable inside the cluster.

######################################################
# Start a new image and configure highly available   #
# replicas                                           #
# Create a deployment or job to manage the container #
######################################################

You need a controller object such as a Deployment, ReplicationController, or ReplicaSet - these controllers can replicate Pods to provide a highly available service.

The following example creates a Deployment that automatically pulls the nginx:1.14-alpine image from Docker Hub, exposes port 80 to resources inside the cluster, and creates 5 replicas:

kubectl run nginx --image=nginx:1.14-alpine --port 80 --replicas=5
[root@k8smaster ~]# kubectl get deployment
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx   5         5         5            5           2h
[root@k8smaster ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE
nginx-79976cbb47-sg2t9   1/1     Running   1          2h    10.244.1.4   k8snode1
nginx-79976cbb47-tl5r7   1/1     Running   1          2h    10.244.2.7   k8snode2
nginx-79976cbb47-vkzww   1/1     Running   1          2h    10.244.2.6   k8snode2
nginx-79976cbb47-wvvtq   1/1     Running   1          2h    10.244.1.5   k8snode1
nginx-79976cbb47-x4wjt   1/1     Running   1          2h    10.244.2.5   k8snode2
[root@k8smaster ~]# curl 10.244.1.4
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
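The imperative kubectl run above can also be written declaratively. Below is a minimal sketch of an equivalent Deployment manifest; the file name nginx-deploy.yaml is my own choice, and the run=nginx label mirrors the label that kubectl run generates on this k8s version:

```yaml
# nginx-deploy.yaml - declarative equivalent (sketch) of:
#   kubectl run nginx --image=nginx:1.14-alpine --port 80 --replicas=5
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  replicas: 5                  # same as --replicas=5
  selector:
    matchLabels:
      run: nginx               # must match the Pod template labels below
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine   # same as --image=...
        ports:
        - containerPort: 80        # same as --port 80
```

Apply it with kubectl apply -f nginx-deploy.yaml; the result is the same Deployment, but the desired state now lives in a version-controllable file.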

Use the ifconfig command to inspect the Pod IP addresses:

[root@k8smaster ~]# ifconfig
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::3cd7:71ff:fee7:b4d  prefixlen 64  scopeid 0x20<link>
        ether 0a:58:0a:f4:00:01  txqueuelen 1000  (Ethernet)
        RX packets 2778  bytes 180669 (176.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2841  bytes 1052175 (1.0 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:8b:97:c3:4b  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.0.11  netmask 255.255.255.0  broadcast 172.16.0.255
        inet6 fe80::20c:29ff:fe71:91df  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:71:91:df  txqueuelen 1000  (Ethernet)
        RX packets 6403  bytes 688725 (672.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7824  bytes 7876155 (7.5 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::7480:68ff:fe34:946c  prefixlen 64  scopeid 0x20<link>
        ether 76:80:68:34:94:6c  txqueuelen 0  (Ethernet)
        RX packets 5  bytes 1118 (1.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7  bytes 446 (446.0 B)
        TX errors 0  dropped 8  overruns 0  carrier 0  collisions 0
...

From the flannel interface we can see that all Pod IPs are allocated out of the 10.244.0.0/16 range.

 

3 Exposing a New Service for Internal Access: Creating a Service Object

Usually you want a Service object in front of a Deployment: the Pods under a Deployment can terminate for planned or unplanned reasons, and their replacements come up with newly assigned virtual IP addresses. With a Service in place, applications only need to talk to the Service to reach the resources they need, without caring what the new Pod IPs are.

kubectl expose gives the Deployment a stable, exposed IP:

Possible resources include (case insensitive):
  pod (po), service (svc), replicationcontroller (rc), deployment (deploy), replicaset (rs)
Examples:
  # Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000.
  kubectl expose rc nginx --port=80 --target-port=8000

port -> the port the Service itself listens on, used by clients to access it

target-port -> the container port that traffic is forwarded to
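The kubectl expose flags map directly onto Service manifest fields. A minimal sketch (the nginx-service name and run=nginx selector follow this article's example; the comments are mine):

```yaml
# nginx-service.yaml - sketch of the Service that kubectl expose generates
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP          # default type: a stable virtual IP inside the cluster
  selector:
    run: nginx             # routes to all Pods labeled run=nginx
  ports:
  - protocol: TCP
    port: 80               # --port: the Service port clients connect to
    targetPort: 80         # --target-port: the container port traffic is forwarded to
```

port and targetPort may differ; clients always use the Service port, and kube-proxy translates it to the container port on each endpoint.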

 

Next we do a dry run of creating a new Service resource named "nginx-service"; "--dry-run=true" simulates the operation without creating anything:

kubectl expose deployment nginx --name=nginx-service --port=80 --target-port=80 --protocol=TCP --dry-run=true

deployment nginx -> the Deployment we created earlier (with: kubectl run nginx --image=nginx:1.14-alpine --port 80 --replicas=5); its image is already stored in the local docker registry on the nodes

If the referenced Deployment does not exist, you get the error below; here a Deployment named nginx-deploy was never created:

[root@k8smaster ~]# kubectl expose deployment nginx-deploy --name=mynginx --port=80 --target-port=80 --protocol=TCP --dry-run=true
Error from server (NotFound): deployments.extensions "nginx-deploy" not found

Once nginx:1.14-alpine has been pulled, you can see the image on the nodes with docker images:

[root@k8snode1 ~]# docker images
REPOSITORY                    TAG             IMAGE ID        CREATED         SIZE
nginx                         1.14-alpine     c5b6f731fbc0    13 days ago     17.7MB
k8s.gcr.io/kube-proxy-amd64   v1.11.1         d5c25579d0ff    5 months ago    97.8MB
quay.io/coreos/flannel        v0.10.0-amd64   f0fad859c909    11 months ago   44.6MB
k8s.gcr.io/pause              3.1             da86e6ba6ca1    12 months ago   742kB

Now let's create a real Service object for the Deployment (the previous command without --dry-run). Run the commands below to verify everything is healthy, then expose the service:

[root@k8smaster ~]# kubectl get deployment -o wide --show-labels
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES              SELECTOR    LABELS
nginx   5         5         5            5           1m    nginx        nginx:1.14-alpine   run=nginx   run=nginx
[root@k8smaster ~]# kubectl get pods -o wide --show-labels
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE       LABELS
nginx-79976cbb47-2xrhk   1/1     Running   0          1m    10.244.1.7    k8snode1   pod-template-hash=3553276603,run=nginx
nginx-79976cbb47-8dqnk   1/1     Running   0          1m    10.244.2.10   k8snode2   pod-template-hash=3553276603,run=nginx
nginx-79976cbb47-gprlc   1/1     Running   0          1m    10.244.2.9    k8snode2   pod-template-hash=3553276603,run=nginx
nginx-79976cbb47-p247g   1/1     Running   0          1m    10.244.2.8    k8snode2   pod-template-hash=3553276603,run=nginx
nginx-79976cbb47-ppbqv   1/1     Running   0          1m    10.244.1.6    k8snode1   pod-template-hash=3553276603,run=nginx
[root@k8smaster ~]# kubectl expose deployment nginx --name=nginx-service --port=80 --target-port=80 --protocol=TCP
service/nginx-service exposed
[root@k8smaster ~]# kubectl get svc -o wide --show-labels
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE   SELECTOR    LABELS
kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP   1d    <none>      component=apiserver,provider=kubernetes
nginx-service   ClusterIP   10.109.139.168   <none>        80/TCP    40s   run=nginx   run=nginx
[root@k8smaster ~]# kubectl describe svc nginx-service
Name:              nginx-service
Namespace:         default
Labels:            run=nginx
Annotations:       <none>
Selector:          run=nginx
Type:              ClusterIP
IP:                10.109.139.168
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.6:80,10.244.1.7:80,10.244.2.10:80 + 2 more...
Session Affinity:  None
Events:            <none>
[root@k8smaster ~]# ifconfig
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::3cd7:71ff:fee7:b4d  prefixlen 64  scopeid 0x20<link>
        ether 0a:58:0a:f4:00:01  txqueuelen 1000  (Ethernet)
        RX packets 22161  bytes 1423430 (1.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 22547  bytes 8296119 (7.9 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:8b:97:c3:4b  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.0.11  netmask 255.255.255.0  broadcast 172.16.0.255
        inet6 fe80::20c:29ff:fe71:91df  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:71:91:df  txqueuelen 1000  (Ethernet)
        RX packets 41975  bytes 5478270 (5.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 53008  bytes 54186252 (51.6 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::7480:68ff:fe34:946c  prefixlen 64  scopeid 0x20<link>
        ether 76:80:68:34:94:6c  txqueuelen 0  (Ethernet)
        RX packets 10  bytes 2236 (2.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 14  bytes 894 (894.0 B)
        TX errors 0  dropped 8  overruns 0  carrier 0  collisions 0
...

In the previous article we created the cluster with the command:

kubeadm init --ignore-preflight-errors all --kubernetes-version=v1.11.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12

We manually defined the Service IP allocation range with --service-cidr=10.96.0.0/12, and indeed the virtual IP of our nginx-service falls inside it:

[root@k8smaster ~]# kubectl get svc --show-labels
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE   LABELS
kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP   1d    component=apiserver,provider=kubernetes
nginx-service   ClusterIP   10.109.139.168   <none>        80/TCP    7m    run=nginx
 

* We did not define an external-ip.

 

4 Scaling a Deployment Up and Down

The point of using a Deployment is to let the functionality built into k8s manage containers automatically. You can use the kubectl scale command to control the replica count manually:

[root@k8smaster ~]# kubectl get deployment -o wide --show-labels
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES              SELECTOR    LABELS
nginx   5         5         5            5           14m   nginx        nginx:1.14-alpine   run=nginx   run=nginx
[root@k8smaster ~]# kubectl scale --replicas=3 deployment nginx
deployment.extensions/nginx scaled
[root@k8smaster ~]# kubectl get deployment -o wide --show-labels
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES              SELECTOR    LABELS
nginx   3         3         3            3           15m   nginx        nginx:1.14-alpine   run=nginx   run=nginx
[root@k8smaster ~]# kubectl get pods -o wide --show-labels
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE       LABELS
nginx-79976cbb47-8dqnk   1/1     Running   0          18m   10.244.2.10   k8snode2   pod-template-hash=3553276603,run=nginx
nginx-79976cbb47-p247g   1/1     Running   0          18m   10.244.2.8    k8snode2   pod-template-hash=3553276603,run=nginx
nginx-79976cbb47-ppbqv   1/1     Running   0          18m   10.244.1.6    k8snode1   pod-template-hash=3553276603,run=nginx

With that, we have easily reduced the number of replicas from 5 to 3.
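The autoscale command from the kubectl summary can do this automatically instead of manually. A hedged sketch of the equivalent HorizontalPodAutoscaler manifest (the min/max/CPU numbers are illustrative, and this assumes a metrics pipeline such as metrics-server is running, which this lab has not set up):

```yaml
# nginx-hpa.yaml - sketch, roughly equivalent to:
#   kubectl autoscale deployment nginx --min=2 --max=10 --cpu-percent=80
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:              # which object to scale
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 2               # never scale below 2 Pods
  maxReplicas: 10              # never scale above 10 Pods
  targetCPUUtilizationPercentage: 80   # add/remove Pods to hold average CPU near 80%
```

Note that CPU-based autoscaling also requires the Pods to declare CPU resource requests, so the utilization percentage has a baseline to be measured against.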

 

5 Rolling Update

This example manually updates the nginx image from the old version (nginx:1.14-alpine) to a newer one (nginx:1.15-alpine):

[root@k8smaster ~]# kubectl set image deployment nginx nginx=nginx:1.15-alpine
deployment.extensions/nginx image updated
[root@k8smaster ~]# kubectl rollout status deployment nginx
Waiting for deployment "nginx" rollout to finish: 1 out of 3 new replicas have been updated...

Use the describe command to inspect the details:

[root@k8smaster ~]# kubectl describe pod nginx-79976cbb47-8dqnk
Name:               nginx-79976cbb47-8dqnk
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               k8snode2/172.16.0.13
Start Time:         Thu, 03 Jan 2019 14:34:16 +0100
Labels:             pod-template-hash=3553276603
                    run=nginx
Annotations:        <none>
Status:             Running
IP:                 10.244.2.10
Controlled By:      ReplicaSet/nginx-79976cbb47
Containers:
  nginx:
    Container ID:   docker://151150f6350d891c6504f5edb17b03da6b213d6ad207188301ce3eab6ff5264a
    Image:          nginx:1.14-alpine
    Image ID:       docker-pullable://nginx@sha256:e3f77f7f4a6bb5e7820e013fa60b96602b34f5704e796cfd94b561ae73adcf96
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 03 Jan 2019 14:34:17 +0100
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rxs5t (ro)
Conditions:
  Type           Status
  Initialized    True
........

 

If you are not satisfied with the upgrade, you can use rollout undo to roll back to the previous version:

kubectl rollout undo deployment nginx
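How aggressively a rolling update replaces Pods is controlled by the Deployment's update strategy. A sketch of the relevant fragment of a Deployment spec (the values are illustrative; if you omit the block, Kubernetes applies its own defaults):

```yaml
# Fragment of a Deployment spec controlling rolling-update behavior (sketch)
spec:
  strategy:
    type: RollingUpdate          # replace Pods gradually (the default type)
    rollingUpdate:
      maxSurge: 1                # at most 1 Pod above the desired count during the update
      maxUnavailable: 1          # at most 1 Pod may be unavailable during the update
```

With maxSurge and maxUnavailable both set to 1, a 3-replica Deployment is updated one Pod at a time while at least 2 Pods keep serving traffic.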

 

6 Iptables dump

[root@k8smaster ~]# iptables -vnL -t nat
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
   27  1748 KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
  167 10148 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
 6328  383K KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
 1037 62220 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
 6529  395K KUBE-POSTROUTING  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes postrouting rules */
    0     0 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0
 1716  103K RETURN     all  --  *      *       10.244.0.0/16        10.244.0.0/16
    0     0 MASQUERADE  all  --  *      *       10.244.0.0/16       !224.0.0.0/4
    0     0 RETURN     all  --  *      *      !10.244.0.0/16        10.244.0.0/24
    0     0 MASQUERADE  all  --  *      *      !10.244.0.0/16        10.244.0.0/16

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0

Chain KUBE-MARK-DROP (0 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0            MARK or 0x8000

Chain KUBE-MARK-MASQ (12 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0            MARK or 0x4000

Chain KUBE-NODEPORTS (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain KUBE-POSTROUTING (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 MASQUERADE  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000

Chain KUBE-SEP-23Y66C2VAJ3WDEMI (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-MARK-MASQ  all  --  *      *       172.16.0.11          0.0.0.0/0            /* default/kubernetes:https */
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */ tcp to:172.16.0.11:6443

Chain KUBE-SEP-CGXZZGLWTRRVTMXB (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.244.1.6           0.0.0.0/0            /* default/nginx-service: */
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/nginx-service: */ tcp to:10.244.1.6:80

Chain KUBE-SEP-DA57TZEG5V5IUCZP (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.244.2.10          0.0.0.0/0            /* default/nginx-service: */
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/nginx-service: */ tcp to:10.244.2.10:80

Chain KUBE-SEP-L4GNRLZIRHIXQE24 (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.244.2.8           0.0.0.0/0            /* default/nginx-service: */
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/nginx-service: */ tcp to:10.244.2.8:80

Chain KUBE-SEP-LBMQNJ35ID4UIQ2A (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.244.0.9           0.0.0.0/0            /* kube-system/kube-dns:dns */
    0     0 DNAT       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns */ udp to:10.244.0.9:53

Chain KUBE-SEP-S7MPVVC7MGYVFSF3 (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.244.0.9           0.0.0.0/0            /* kube-system/kube-dns:dns-tcp */
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns-tcp */ tcp to:10.244.0.9:53

Chain KUBE-SEP-SISP6ORRA37L3ZYK (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.244.0.8           0.0.0.0/0            /* kube-system/kube-dns:dns */
    0     0 DNAT       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns */ udp to:10.244.0.8:53

Chain KUBE-SEP-XRFUWCXKVCLGWYQC (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.244.0.8           0.0.0.0/0            /* kube-system/kube-dns:dns-tcp */
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns-tcp */ tcp to:10.244.0.8:53

Chain KUBE-SERVICES (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-MARK-MASQ  tcp  --  *      *      !10.244.0.0/16        10.96.0.1            /* default/kubernetes:https cluster IP */ tcp dpt:443
    0     0 KUBE-SVC-NPX46M4PTMTKRN6Y  tcp  --  *      *       0.0.0.0/0            10.96.0.1            /* default/kubernetes:https cluster IP */ tcp dpt:443
    0     0 KUBE-MARK-MASQ  udp  --  *      *      !10.244.0.0/16        10.96.0.10           /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
    0     0 KUBE-SVC-TCOU7JCQXEZGVUNU  udp  --  *      *       0.0.0.0/0            10.96.0.10           /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
    0     0 KUBE-MARK-MASQ  tcp  --  *      *      !10.244.0.0/16        10.96.0.10           /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
    0     0 KUBE-SVC-ERIFXISQEP7F7OF4  tcp  --  *      *       0.0.0.0/0            10.96.0.10           /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
    0     0 KUBE-MARK-MASQ  tcp  --  *      *      !10.244.0.0/16        10.109.139.168       /* default/nginx-service: cluster IP */ tcp dpt:80
    0     0 KUBE-SVC-GKN7Y2BSGW4NJTYL  tcp  --  *      *       0.0.0.0/0            10.109.139.168       /* default/nginx-service: cluster IP */ tcp dpt:80
   15   900 KUBE-NODEPORTS  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL

Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-SEP-XRFUWCXKVCLGWYQC  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns-tcp */ statistic mode random probability 0.50000000000
    0     0 KUBE-SEP-S7MPVVC7MGYVFSF3  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns-tcp */

Chain KUBE-SVC-GKN7Y2BSGW4NJTYL (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-SEP-CGXZZGLWTRRVTMXB  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/nginx-service: */ statistic mode random probability 0.33332999982
    0     0 KUBE-SEP-DA57TZEG5V5IUCZP  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/nginx-service: */ statistic mode random probability 0.50000000000
    0     0 KUBE-SEP-L4GNRLZIRHIXQE24  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/nginx-service: */

Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-SEP-23Y66C2VAJ3WDEMI  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */

Chain KUBE-SVC-TCOU7JCQXEZGVUNU (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-SEP-SISP6ORRA37L3ZYK  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns */ statistic mode random probability 0.50000000000
    0     0 KUBE-SEP-LBMQNJ35ID4UIQ2A  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns */

 

7 Accessing Cluster Resources from Outside

To let external applications reach services inside the cluster, you can use the NodePort type, i.e. change the Service's type in its configuration from ClusterIP to NodePort.

Use the command kubectl edit svc nginx-service:

[root@k8smaster ~]# kubectl edit svc nginx-service   # change type: ClusterIP to NodePort -> save and close the editor
service/nginx-service edited
[root@k8smaster ~]# kubectl get svc nginx-service
NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
nginx-service   NodePort   10.109.139.168   <none>        80:30200/TCP   39m

We can see that PORT(S) has changed to 80:30200/TCP. We can now reach the Pod resources (through the Service nginx-service) at node-IP:30200.
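Declaratively, the edit above corresponds to a Service spec like the following sketch. The nodePort value 30200 mirrors the port this cluster happened to assign; normally you would leave the field out and let Kubernetes pick one:

```yaml
# Fragment of the nginx-service spec after switching to NodePort (sketch)
spec:
  type: NodePort           # open a port on every node in addition to the ClusterIP
  selector:
    run: nginx
  ports:
  - protocol: TCP
    port: 80               # Service port (still reachable in-cluster via the ClusterIP)
    targetPort: 80         # container port
    nodePort: 30200        # must fall in the node port range, 30000-32767 by default
```

Traffic arriving at any node on port 30200 is forwarded by kube-proxy to one of the nginx Pods, regardless of which node that Pod runs on.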

Now, from my host machine, use curl to reach the nginx resource via the master node:

bai@bai ~ $ curl http://172.16.0.11:30200
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.

* The k8s Service automatically provides load balancing across the Pods.

Experiment complete!

 

 


Reprinted from: https://www.cnblogs.com/crazy-chinese/p/10216772.html
