NodePort vs LoadBalancer vs Ingress on Google Kubernetes Engine

It can be quite daunting deciding the type of service that manages external traffic for your Workloads containing the pods running in a cluster on Google Kubernetes Engine (GKE). The best way to come to an optimal decision would first involve having a clear understanding of the different types of services and how they operate in comparison to each other.

There are four methods for dealing with external traffic, namely:

  1. Proxy
  2. NodePort
  3. LoadBalancer
  4. Ingress

By default, Google Kubernetes Engine provisions a ClusterIP service inside your cluster to enable pods to communicate with each other internally, without external access.

A sample YAML for ClusterIP can be seen below:

apiVersion: v1
kind: Service
metadata:  
  name: my-cluster-ip
spec:
  selector:    
    app: my-kubectl-app
  type: ClusterIP
  ports:  
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP

More pertinently, most GKE deployments require access from the Internet, and one way to gain such access is through the Kubernetes proxy.

Proxy

Proxy Service

You can provision a Kubernetes Proxy by issuing this command:

$ kubectl proxy --port=8080

You can then inspect a service using this URL format:

http://localhost:8080/api/v1/proxy/namespaces/<namespace>/services/<service-name>:<port-name>/

For example, the ClusterIP service defined above would be reachable at:

http://localhost:8080/api/v1/proxy/namespaces/default/services/my-cluster-ip:http/

Pros and Cons of this service:
  1. It only allows internal traffic, which makes it useful for debugging, monitoring internal processes, etc.
  2. It is useful for testing services from your local machine.

NodePort

The NodePort service routes external traffic to your workload by opening a selected port on every associated node (VM) in the cluster and forwarding all traffic sent to that port to the workload.

GKE Nodeport Service

Here’s a sample yaml file for provisioning a NodePort service:

apiVersion: v1
kind: Service
metadata:  
  name: my-nodeport-service
spec:
  selector:    
    app: my-kubectl-app
  type: NodePort
  ports:  
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30036
    protocol: TCP

NodePort gives us the option to select the port to open on the nodes, i.e. the nodePort field (not the targetPort, which is the container port). If a nodePort is not specified, a random port from the allowed range is assigned.

Pros and Cons of this service:
  1. Only ports in the range 30000–32767 can be used.
  2. Only one service can be provisioned per port.
  3. If a node/VM IP address changes, clients pointing at that address must be updated.

It is not recommended to use a NodePort service in production for an application that demands high availability. For a cheaper solution, such as a demo app or a low-demand utility app, this service works well.
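Once the NodePort service above is running, the workload can be reached through any node's external IP on port 30036, provided a firewall rule allows traffic to that port. A quick sketch (the IP placeholder must be filled in from your own cluster):

$ kubectl get nodes -o wide          # note a node's EXTERNAL-IP
$ curl http://<node-external-ip>:30036/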

LoadBalancer

LoadBalancer is the most common way of exposing services to the internet and it is quite easy to provision one for a service in GKE.
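Provisioning one is as simple as changing the service type; the manifest below mirrors the earlier examples (the service name is illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer-service
spec:
  selector:
    app: my-kubectl-app
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP

On GKE, applying this provisions an external network load balancer and assigns the service an external IP, which you can watch for with kubectl get service my-loadbalancer-service --watch.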

GKE Loadbalancer Service
Pros and Cons of this service:
  1. There is no filtering or routing; traffic is forwarded to the service as-is.
  2. It supports almost any kind of traffic: HTTP, TCP, UDP, WebSockets, gRPC, etc.
  3. It is the standard way of forwarding traffic to ports exposed on the service.
  4. Each exposed service gets its own IP address, which means one load balancer per service; this can get expensive quickly.

Ingress

Ingress is not a type of service; rather, it acts like a “smart router” that sits in front of multiple services in your GKE cluster. The default GKE ingress controller provisions an HTTP(S) Load Balancer, usually called an L7 HTTP Load Balancer, and it supports both path-based and subdomain-based routing to target backend services. For example, you can send everything on foo.yourdomain.com to the foo service, and everything under the yourdomain.com/bar/ path to the bar service.

GKE Ingress Service

The yaml file for an Ingress object looks like this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-kubectl-ingress
spec:
  backend:
    serviceName: other
    servicePort: 8080
  rules:
  - host: foo.mydomain.com
    http:
      paths:
      - backend:
          serviceName: foo
          servicePort: 8080
  - host: mydomain.com
    http:
      paths:
      - path: /bar/*
        backend:
          serviceName: bar
          servicePort: 8080

Pros and Cons of this service:
  1. This is the best way to expose services in a GKE cluster.
  2. There are several Ingress controllers available:
    1. Google Cloud Load Balancer
    2. Nginx
    3. Contour
    4. Istio
  3. It is useful for exposing multiple services under the same IP address.
  4. It supports SSL, authentication, routing, etc.
  5. It's cheaper, as only one load balancer is used.

Author: daltonwhyte

A technocrat who believes in a smart future, that will be proliferated with systems that allow us to focus on the bigger picture.
