[k8s] Accessing cluster pods through a Service: http://api:8080/api/v1/namespaces/default/services/mynginx/proxy/

The usual routine is: install the k8s cluster --> install DNS --> install an ingress controller so the outside world can reach the cluster (or map services out via NodePort).

But what if you want to skip the ingress/NodePort machinery entirely and reach cluster pods directly through the master? That is exactly what this post covers:

http://api:8080/api/v1/namespaces/default/services/mynginx/proxy/

Accessing the cluster through the svc proxy fails

I want to access my cluster through URLs of this form:

http://192.168.14.11:8080/api/v1/namespaces/default/services/mynginx/proxy/
http://192.168.14.11:8080/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/

This should have just worked, but instead it errored out:

Error: 'dial tcp 10.2.60.3:80: getsockopt: connection refused'
Trying to reach: 'http://10.2.60.3:80/'

Why does the master need to run flannel?

The root cause turned out to be that the master was not running flannel; once flanneld was started on the master, the proxy worked.
When the apiserver proxies such a request, the master itself talks to the pod over the svc/pod network, so it needs routes into that network.
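With the host-gw backend, flanneld simply installs kernel routes toward each host's pod subnet, so once it runs on the master you can verify reachability directly. A quick check (the IPs below are the ones from this post's environment, and a running cluster is assumed):

```shell
# host-gw should have added one route per node into the 10.2.0.0/16 pod range
ip route | grep 10.2.

# the pod IP from the earlier "connection refused" error should now answer
curl -s http://10.2.60.3:80/ | head -n 4
```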

Standing up the cluster environment (v1.9) on the fly

References: https://raw.githubusercontent.com/lannyMa/scripts/master/k8s/
http://www.cnblogs.com/iiiiher/p/8159693.html

mkdir -p /kubernetes/network/config/
cat > /kubernetes/network/config/flannel-config.json << EOF
{
 "Network": "10.2.0.0/16",
 "SubnetLen": 24,
 "Backend": {
   "Type": "host-gw"
  }
}
EOF

##################################
# single-node etcd for flannel's config (insecure, listens on all interfaces)
etcd --advertise-client-urls=http://0.0.0.0:2379 --listen-client-urls=http://0.0.0.0:2379 --debug
cd /kubernetes/network/config
# store the flannel config under the etcd v2 key flanneld will read
etcdctl set /kubernetes/network/config < flannel-config.json
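You can read the key back to confirm flanneld will find it; note that the -etcd-prefix passed to flanneld below (/kubernetes/network) must be the parent of this /config key (requires the etcd started above):

```shell
# verify the config landed where flanneld expects it (<etcd-prefix>/config)
etcdctl get /kubernetes/network/config
```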

# flanneld must run on every node AND on the master (see above)
flanneld -etcd-endpoints=http://192.168.14.11:2379 -iface=eth0 -etcd-prefix=/kubernetes/network


systemctl stop docker
# restart dockerd with the bridge IP/MTU taken from flannel's leased subnet
dockerd --bip=10.2.60.1/24 --mtu=1500
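The --bip=10.2.60.1/24 above is just the subnet flanneld happened to lease on this host. Rather than hard-coding it, the values can be read from the subnet.env file flanneld writes (normally /run/flannel/subnet.env); the sketch below uses a sample copy of that file so the wiring is reproducible outside a cluster:

```shell
# flanneld writes its leased subnet to /run/flannel/subnet.env; a sample
# copy is created here so the parsing step can be shown end to end.
cat > /tmp/subnet.env <<'EOF'
FLANNEL_SUBNET=10.2.60.1/24
FLANNEL_MTU=1500
EOF
. /tmp/subnet.env
# on a real host you would now run: dockerd --bip=$FLANNEL_SUBNET --mtu=$FLANNEL_MTU
echo "dockerd --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"
```

For systemd-managed setups, the mk-docker-opts.sh script bundled with flannel performs essentially this same translation.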

##################################

# apiserver: the insecure port 8080 bound to 0.0.0.0 is what makes
# http://192.168.14.11:8080/.../proxy/ reachable from outside
kube-apiserver --service-cluster-ip-range=10.254.0.0/16 --etcd-servers=http://127.0.0.1:2379 --insecure-bind-address=0.0.0.0 --admission-control=ServiceAccount     --service-account-key-file=/root/ssl/ca.key --client-ca-file=/root/ssl/ca.crt --tls-cert-file=/root/ssl/server.crt --tls-private-key-file=/root/ssl/server.key --allow-privileged=true --storage-backend=etcd2 --v=2 --enable-bootstrap-token-auth --token-auth-file=/root/token.csv


# controller-manager signs kubelet bootstrap certificates with the cluster CA
kube-controller-manager   --master=http://127.0.0.1:8080   --service-account-private-key-file=/root/ssl/ca.key  --cluster-signing-cert-file=/root/ssl/ca.crt --cluster-signing-key-file=/root/ssl/ca.key --root-ca-file=/root/ssl/ca.crt --v=2  --allocate-node-cidrs=true --cluster-cidr=10.244.0.0/16


kube-scheduler --master=http://127.0.0.1:8080 --v=2


# kubelet joins via TLS bootstrap; --fail-swap-on=false tolerates enabled swap
kubelet --allow-privileged=true --cluster-dns=10.254.0.2 --cluster-domain=cluster.local --v=2 --experimental-bootstrap-kubeconfig=/root/bootstrap.kubeconfig --kubeconfig=/root/kubelet.kubeconfig --fail-swap-on=false --network-plugin=cni


kube-proxy  --master=http://192.168.14.11:8080  --v=2
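With all five components up, a quick sanity check (against the running cluster, using the insecure port configured above):

```shell
# control-plane component health and node registration
kubectl -s http://127.0.0.1:8080 get componentstatuses
kubectl -s http://127.0.0.1:8080 get nodes -o wide
```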

Testing

# a 2-replica nginx deployment, exposed as a ClusterIP service named mynginx
kubectl run --image=nginx mynginx --replicas=2
kubectl expose deployment mynginx --port=80

http://192.168.14.11:8080/api/v1/namespaces/default/services/mynginx/proxy/
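The same proxy URL can be exercised from the shell; if the fix worked, the nginx welcome page comes back through the apiserver (master IP as in this post's setup):

```shell
# fetch the service through the apiserver's service proxy
curl -s http://192.168.14.11:8080/api/v1/namespaces/default/services/mynginx/proxy/ | grep -i "<title>"
```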

Final result: the proxy URL now serves the nginx page.

posted @ 2018-01-02 15:34  _毛台