I am lazy, and more to the point, I do not think I can explain this better than the official documentation does, so please refer to the official explanation below.
For some parts of your application (for example, frontends) you may want to expose a Service onto an external IP address, that’s outside of your cluster.
Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP.

Type values and their behaviors are:

- ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
- NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You’ll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
- LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
- ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
My personal interpretation: a Service is a K8S abstraction layer, as well as an object with four types that help expose an application either internally within the cluster or to the outside world.
I had some trouble figuring out how exactly the network part works, and I finally figured it out. This post is to record what I have learned.
OK, I have a cluster with one master and two worker nodes running on AWS.
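As a quick sanity check of the topology (node names and roles will be whatever your provisioner assigned):

```
kubectl get nodes -o wide
```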
Create a deployment with one Pod running an Nginx container.
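A minimal way to do this, assuming we name the deployment mynginx to match the Service described later:

```
# One Pod running the stock nginx image (default replica count is 1)
kubectl create deployment mynginx --image=nginx
```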
Lastly, create a Service of type NodePort to expose the Nginx Pod. Once exposed, as explained above, the Nginx Pod can be reached from within the cluster or from the internet.
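One way to do this, assuming the deployment above is named mynginx:

```
# Expose port 80 of the Pod via a NodePort Service;
# the Service inherits the name mynginx from the deployment
kubectl expose deployment mynginx --type=NodePort --port=80 --target-port=80
```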
OK, here comes the tricky part (to me). Let’s describe the mynginx Service first.
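That is:

```
kubectl describe service mynginx
```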
When the Service is created, you get:
- IP, 100.68.181.31
- Port, 80
- TargetPort, 80
- NodePort, 30379
- Endpoints, 100.96.2.2:80
IP: the virtual cluster IP assigned to Service mynginx. You use this IP to connect to Nginx from anywhere WITHIN the cluster.
Port: the port exposed on that IP. So you can connect to Nginx at 100.68.181.31:80 from anywhere WITHIN the cluster.
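To verify from inside the cluster, one option is a throwaway busybox Pod (a sketch; the Service IP is from my cluster, yours will differ):

```
# wget from a temporary Pod inside the cluster, then clean it up
kubectl run tmp --rm -it --restart=Never --image=busybox -- wget -qO- http://100.68.181.31:80
```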
K8S supports two modes for Pods to find a Service: environment variables and DNS. With these modes, a Service can be reached from Pods, though not from Nodes. I will talk about this next time.
NodePort: the port opened on the nodes. Once the firewall on the master and worker nodes is configured to allow access to the NodePort, 30379 in my case, you can access the Nginx Pod from the internet. Verify that NodePort 30379 is opened on a node, and note that it is served by kube-proxy.
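A quick way to check this on any node (assuming netstat is installed; ss -tlnp works too):

```
# Should show kube-proxy listening on the NodePort
sudo netstat -tlnp | grep 30379
```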
In my case, I allow my home IP to access the master and worker nodes, so from my laptop I use a browser to access Nginx at <Node_pub_IP>:<NodePort>.
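Or equivalently from a terminal, substituting a node’s public IP:

```
curl http://<Node_pub_IP>:30379
```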
Traffic from the internet to the Nginx Pod is proxied by kube-proxy, which manipulates iptables. Check out the iptables rules on any of the nodes to confirm. First dump iptables to a tmp file and filter out the rules related to port 30379, using the commands below; the matching rules are interpreted in the list that follows.
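The dump-and-filter step can be done like this (the tmp file path is arbitrary):

```
# Dump all iptables rules, then keep the NodePort rule
# plus the service/endpoint chains it jumps through
sudo iptables-save > /tmp/iptables.txt
grep -E '30379|KUBE-SVC|KUBE-SEP' /tmp/iptables.txt
```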
- Rule 1 states that all traffic to port 30379 goes to chain KUBE-SVC-XXXX
- KUBE-SVC-XXXX redirects traffic to KUBE-SEP-XXXX
- KUBE-SEP-XXXX redirects traffic to the final destination, the Endpoints, which is the Pod IP with TargetPort
- At the same time, we find that Rule 2 states traffic to the Service virtual IP at port 80 is redirected to KUBE-SVC-XXXX, which eventually redirects to the Pod IP with TargetPort as well (a simplified sketch of these rules follows)
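For illustration only, the matched rules have roughly this shape (heavily simplified; real chain names end in a hash rather than XXXX, and the exact match modules in your iptables-save output will differ):

```
-A KUBE-NODEPORTS -p tcp --dport 30379 -j KUBE-SVC-XXXX                  # Rule 1: NodePort entry
-A KUBE-SERVICES -d 100.68.181.31/32 -p tcp --dport 80 -j KUBE-SVC-XXXX  # Rule 2: Service virtual IP entry
-A KUBE-SVC-XXXX -j KUBE-SEP-XXXX                                        # service chain -> endpoint chain
-A KUBE-SEP-XXXX -p tcp -j DNAT --to-destination 100.96.2.2:80           # DNAT to Pod_IP:TargetPort
```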
In conclusion, the Service virtual IP with Port is used for internal connections, whereas the Node IP with NodePort is used for external connections.
TargetPort, as mentioned above, is the port opened on the Pod IP. So traffic sent either internally to the Service virtual IP, 100.68.181.31:80, or externally to the NodePort via Node_IP:30379, is redirected to Pod_IP:TargetPort (the Endpoints), which is 100.96.2.2:80. This is already shown in the iptables rules above. Let’s verify the Pod information.
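That is:

```
# The IP column should match the Endpoints shown by the Service
kubectl get pods -o wide
```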
Endpoints, in my personal opinion, is just Pod_IP:TargetPort, which is the entry point to the container running inside the Pod.
In addition, if you host your K8S on a cloud such as AWS, like I do, you can expose Nginx to the internet through an AWS ELB using the command below. Wait a few minutes for the ELB health check to take effect, then verify by accessing the ELB URL.
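A sketch of that command, assuming a second Service named mynginx-elb so it does not clash with the NodePort one:

```
# LoadBalancer type provisions an AWS ELB in front of the Pods
kubectl expose deployment mynginx --name=mynginx-elb --type=LoadBalancer --port=80 --target-port=80

# The EXTERNAL-IP / hostname column shows the ELB URL once provisioned
kubectl get service mynginx-elb
```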