An Ingress controller serves as a dedicated load balancer designed for Kubernetes and similar containerized environments. Kubernetes has become the widely accepted method of managing applications in containers.
However, when organizations migrate their production workloads to Kubernetes, managing application traffic becomes difficult and complex. An Ingress controller addresses this by simplifying the routing of application traffic within Kubernetes and acting as a bridge between Kubernetes services and external ones.
In Kubernetes, an Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster, enabling external access to those services. You can configure an Ingress to give services externally-reachable URLs, load balance traffic, and offer name-based virtual hosting.
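For instance, a minimal Ingress that routes all traffic for a hostname to one backend service might look like the following sketch (the hostname, service name, and port here are placeholders):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80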
However, for an Ingress to work, your cluster needs an Ingress controller running. The controller fulfills the Ingress, managing and routing traffic in and out of the cluster based on the Ingress rules.
Along with log data, Ingress controllers generate a wealth of useful metrics that offer insight into the performance of these services, the clusters they’re deployed in, and dependent environments. These metrics also help you identify and single out potential issues, threats, and failures across your containerized environments.
In this article, we’ll take a look at how you can take the first step towards instrumenting your containerized environments for proactive monitoring by scraping and exposing metrics from your Ingress Controller services.
We’ll use the NGINX Ingress Controller as an example service throughout this article.
Exposing NGINX Ingress Controller metrics via Helm Upgrade
If you’ve installed your NGINX Ingress Controller using a Helm Chart, the easiest way to scrape metrics is via a Helm Upgrade. In this example, we’ll show how to run a Helm upgrade to enable metrics scraping using the NGINX Ingress Controller Helm Chart packaged by Bitnami. You can download this Helm Chart from the Bitnami Application Catalog.
We begin by adding the Bitnami repository and downloading the values.yaml file for the Helm Chart by running the following commands.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm show values bitnami/nginx-ingress-controller > values.nginx.yaml
Next, in the values.nginx.yaml file, under the controller > metrics section, enable metrics and serviceMonitor as shown below.
kubernetes-ingress:
  controller:
    metrics:
      enabled: true
      serviceMonitor:
        enabled: true
Run the Helm upgrade command, as shown below.
helm upgrade --install <your-release-name> bitnami/nginx-ingress-controller -f values.nginx.yaml
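Before moving on, you can optionally confirm that the release picked up the new values. Helm prints the user-supplied values for a release with the following command (using the same release name as above).
helm get values <your-release-name>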
Running the Helm upgrade with values.nginx.yaml enables metrics scraping on the NGINX Ingress Controller. You can access these metrics by hitting the /metrics endpoint of your NGINX Ingress Controller. To verify that you’ve enabled metrics scraping successfully, first retrieve the service’s endpoint by running the following command.
kubectl get svc
kubernetes-ingress NodePort 10.152.183.171 <none> 10254:32569/TCP 23d
Then, hit the /metrics endpoint as shown below.
curl http://10.152.183.171:10254/metrics
You should be able to see your NGINX Ingress Controller metrics being scraped and exposed.
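The exact set of metrics depends on your controller version, but the response follows the standard Prometheus exposition format. An illustrative excerpt (the metric name and values here are examples, not guaranteed output) might look like this:
# HELP nginx_ingress_controller_nginx_process_connections current number of client connections
# TYPE nginx_ingress_controller_nginx_process_connections gauge
nginx_ingress_controller_nginx_process_connections{state="active"} 1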
Exposing NGINX Ingress Controller metrics using Prometheus
If you haven’t used a Helm Chart to install the NGINX Ingress Controller or cannot run a Helm upgrade as detailed above, you can use Prometheus to scrape and expose metrics instead.
The first step is to edit your NGINX Ingress Controller service by adding the following annotations and port configuration to expose metrics.
metadata:
  annotations:
    prometheus.io/port: "10254"
    prometheus.io/scrape: "true"
spec:
  ports:
    - name: prometheus
      port: 10254
      protocol: TCP
      targetPort: prometheus
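If you’d rather not open the Service in an editor, the annotations can also be added from the command line. Here’s a sketch, assuming your Service is named kubernetes-ingress as in the example above; the ports entry still needs to be added under spec.ports by editing the Service.
kubectl annotate service kubernetes-ingress prometheus.io/scrape="true" prometheus.io/port="10254"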
Next, edit the Ingress Controller’s daemonset.yaml configuration file so the controller pods expose the metrics port, as shown below.
ports:
  - containerPort: 10254
    name: prometheus
    protocol: TCP
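Once the DaemonSet rolls out, you can confirm the container port is in place. A quick check, assuming the controller pods carry an app=kubernetes-ingress label (adjust the selector to match your deployment):
kubectl get pods -l app=kubernetes-ingress -o jsonpath='{.items[0].spec.containers[0].ports}'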
The final step is to enable your Prometheus instance to scrape the Ingress Controller endpoint and expose metrics. There are two ways of doing this:
- Configuring an additional serviceMonitor
- Creating AdditionalScrapeConfigs
Configuring an additional serviceMonitor
To configure an additional serviceMonitor, access your servicemonitor.yaml file and edit it as shown below.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app: kubernetes-ingress
  name: <ingress service name>
  namespace: <namespace deployed>
spec:
  endpoints:
    - interval: 30s
      path: /metrics
      port: prometheus
  namespaceSelector:
    matchNames:
      - apica
  selector:
    matchLabels:
      app: kubernetes-ingress
Next, apply the servicemonitor.yaml file by running the following command.
kubectl apply -f servicemonitor.yaml
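To confirm that the ServiceMonitor was created, list the ServiceMonitor resources in the namespace you deployed it to.
kubectl get servicemonitors -n <namespace deployed>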
Creating AdditionalScrapeConfigs
First, create the additional scrape configuration, as shown below. You can name this file prometheus-additional-config.yaml or something similar.
- job_name: "ingress-prometheus"
  static_configs:
    - targets: ["localhost:10254"]
Next, create a secret from this configuration by running the following command.
kubectl create secret generic additional-scrape-configs --from-file=prometheus-additional-config.yaml --dry-run=client -o yaml > additional-scrape-configs.yaml
Apply the generated Kubernetes manifest by running the following command.
kubectl apply -f additional-scrape-configs.yaml
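You can confirm that the secret exists and contains the expected key by describing it.
kubectl describe secret additional-scrape-configs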
Finally, reference this additional configuration in your prometheus.yaml file, as shown below.
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  labels:
    prometheus: prometheus
spec:
  replicas: 2
  serviceAccountName: prometheus
  serviceMonitorSelector:
    matchLabels:
      team: frontend
  additionalScrapeConfigs:
    name: additional-scrape-configs
    key: prometheus-additional-config.yaml
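Keep in mind that a Prometheus custom resource only picks up ServiceMonitors whose labels match its serviceMonitorSelector. If you created the ServiceMonitor from the previous section, the selector would need to match its app: kubernetes-ingress label instead, for example:
serviceMonitorSelector:
  matchLabels:
    app: kubernetes-ingress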
You should now be able to see the metrics show up within your Prometheus instance. To verify, launch Prometheus by running the following command.
kubectl port-forward -n <namespace> svc/prometheus-prometheus 9090:9090
Finally, access http://localhost:9090 on a web browser. Once your browser loads the Prometheus UI, navigate to Status > Targets. You should now see your Ingress Controller metrics being scraped.
All NGINX Ingress Controller metrics share the nginx_ingress metric namespace. If you access the Graph tab on the Prometheus UI and search for nginx_ingress, you should see your metrics showing up in the results.
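As a quick sanity check, you can also run a simple query. For example, the following returns per-second request rates over the last five minutes, assuming your controller version exposes the nginx_ingress_controller_requests counter (adjust the name if your version differs):
rate(nginx_ingress_controller_requests[5m])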
Now that we have our metrics, what’s next?
Now that you have access to your Ingress Controller metrics, you’re one step closer to actively monitoring the service and its dependencies. The obvious next step is to ship these metrics to a monitoring, analytics, and observability platform for visualization, correlation, and performance insights about your environments.
However, what’s often overlooked is that the key to unlocking true observability and more powerful insights lies in controlling your observability data pipelines, scalable data storage, and the ability to route data to where it’s needed most.
In the next post in this series, we’ll look at how you can go beyond instrumenting this service for observability and monitoring.
We’ll take you through shipping your metrics and logs to Apica for observability, monitoring, and analysis while building a robust data pipeline that enhances data value, stores data for as long as you need it, and routes data from any time range to any upstream target system on demand and in real time.