How to Use an External Load Balancer in Kubernetes
To take advantage of the load balancer available in your host environment, edit your service configuration file to set the “type” field to “LoadBalancer” and specify a value for the “port” field. This provisions an externally accessible IP address, backed by your cloud provider’s load balancer, that forwards traffic to the correct port on your cluster nodes. Alternatively, you can create the service with the kubectl expose command and its --type=LoadBalancer flag, as described in the Kubernetes documentation. Once you deploy the changes to your config file, use kubectl get services to see the status of the load balancers you’ve provisioned.
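As a sketch, a service manifest of this kind might look like the following (the service name, app label, and ports are placeholders; adjust them to match your application):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service    # placeholder name
spec:
  type: LoadBalancer       # ask the cloud provider for an external load balancer
  selector:
    app: example-app       # matches the pods that should receive traffic
  ports:
    - port: 80             # port exposed by the load balancer
      targetPort: 8080     # port your application container listens on
```

You would then apply it with kubectl apply -f service.yaml and wait for the external IP to be assigned.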
kubectl --kubeconfig=[full path to cluster config file] get services
If you have provisioned multiple load balancers but want the status of a specific one, you can use the kubectl describe services command instead. Both commands expose the name, cluster IP address, external IP address, port, and age of your load balancer(s). In the kubectl describe output, the external IP address appears next to “LoadBalancer Ingress”.
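For example, assuming a service named example-service (a placeholder), the check for a single load balancer might look like this; the IP shown is illustrative:

```shell
# Describe one specific service (the name is a placeholder)
kubectl --kubeconfig=[full path to cluster config file] describe services example-service

# In the output, look for a line of the form:
#   LoadBalancer Ingress:   203.0.113.10
```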
External Load Balancer Alternatives
Sometimes a service is exposed not to efficiently manage the traffic flowing into it, but simply to make it reachable from the internet. If this is your use case, consider setting the “type” field to “NodePort” instead. Kubernetes will then allocate a port (by default from the 30000–32767 range) and open it on every node, so that any traffic sent to that port on any node passes through to your application. While this doesn’t provide any load balancing, it is a cheap way to route external traffic directly to your service if that’s all you require.
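A minimal NodePort sketch (the names and ports are placeholders) might look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-nodeport   # placeholder name
spec:
  type: NodePort
  selector:
    app: example-app       # matches the pods that should receive traffic
  ports:
    - port: 80             # port on the service's cluster IP
      targetPort: 8080     # port your application container listens on
      nodePort: 30080      # optional; omit to let Kubernetes pick from 30000-32767
```

With this in place, traffic sent to port 30080 on any node’s IP address reaches the service.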
On the other hand, if you are trying to optimize traffic to multiple services, you may want a more economical method than the LoadBalancer type suggested above. Your cloud provider will charge you for each service that requires an external load balancer, as well as for each IP address provisioned for your balancers. An alternative strategy is to use Ingress, which allows you to expose multiple services under the same IP address. Ingress runs as a controller in a specialized Kubernetes pod that enforces a set of rules governing how traffic is routed. With Ingress, you only pay for one load balancer. There are many types of Ingress controllers, and your implementation will depend on your environment, but it is safe to say that deploying Ingress requires a more complicated configuration than the process given above. Thus, you must weigh the potential cost savings against the increased complexity.
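As a rough sketch, an Ingress that fans a single IP address out to two services might look like the following. The hostnames and service names are placeholders, and this assumes an Ingress controller (such as ingress-nginx) is already running in the cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress           # placeholder name
spec:
  rules:
    - host: app1.example.com      # placeholder hostname for the first service
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1-service  # placeholder service name
                port:
                  number: 80
    - host: app2.example.com      # placeholder hostname for the second service
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2-service  # placeholder service name
                port:
                  number: 80
```

Both hostnames resolve to the controller’s single load balancer IP, and the controller routes each request to the matching backend service.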