Enabling and exposing the Fabric metrics interface on customer-owned AKS, GKE, and EKS clusters
[ Air-Gapped / Kubernetes ] This document applies to air-gapped Kubernetes deployments on customer-owned AKS, GKE, or EKS clusters where K2view K2cloud Orchestrator and space profiles are not used. For K2cloud SaaS and K2cloud Self-hosted customers, see K2view Kubernetes Monitoring Stack for Fabric. For VM and bare-metal deployments, see K2view VM / Bare-Metal Monitoring Stack for Fabric.
This document explains how to enable and validate the Fabric metrics interface in air-gapped Kubernetes deployments where the K2view K2cloud Orchestrator is not present. It describes what K2view provides, how to activate it, and how your observability infrastructure can connect to it.
In air-gapped deployments, the customer owns and operates the Kubernetes cluster and all supporting infrastructure. K2view provides the Fabric platform and the metrics interface. What you do with that interface — which monitoring stack you connect to it, how you store and visualize the data — is your decision and your responsibility.
In K2cloud SaaS and K2cloud Self-hosted deployments, monitoring enablement is handled automatically through the K2view K2cloud Orchestrator and space profile mechanism. That automation is not available in air-gapped deployments.
The following table summarizes the key differences:
In an air-gapped deployment, K2view provides two monitoring interfaces. These are available once the Fabric pod is running with monitoring enabled:
The Prometheus JMX Exporter is bundled with every Fabric image. When activated, it serves Fabric and JVM metrics in Prometheus exposition format over HTTP:
http://<FABRIC_POD_IP>:7170/metrics # Fabric JVM and application metrics
http://<FABRIC_POD_IP>:7270/metrics # iid_finder metrics (if iid_finder is running)
This endpoint serves standard Prometheus text format. Any monitoring platform or collector that can scrape a Prometheus-format HTTP endpoint can consume it — no changes to the Fabric image or configuration are required on your side.
Note: The exporter binds to the pod's network interface. To scrape it from outside the pod, the port must be exposed in the container spec and reachable from your collector. See Section 5.
Fabric writes application logs to the filesystem inside the pod at:
$K2_HOME/logs/k2fabric.log
How you collect these logs is your decision. Common approaches include shipping via a sidecar log agent, using your cloud provider's native log collection (such as Azure Monitor, AWS CloudWatch, or GCP Cloud Logging), or deploying Promtail to forward logs to Loki. K2view does not prescribe the log collection path for air-gapped deployments.
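As one illustration of the Promtail option, a minimal scrape configuration might look like the sketch below. The Loki URL, job label, and listen port are assumptions for your environment, not values K2view ships:

```yaml
# Minimal Promtail sketch for shipping k2fabric.log to Loki.
# The Loki endpoint, job label, and ports are assumptions -- adjust for your cluster.
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki.monitoring.svc:3100/loki/api/v1/push   # assumed Loki endpoint

scrape_configs:
  - job_name: fabric-logs
    static_configs:
      - targets: [localhost]
        labels:
          job: fabric
          # Default Fabric log path from the K2view reference configurations
          __path__: /opt/apps/fabric/workspace/logs/k2fabric.log
```

Whether Promtail runs as a sidecar in the Fabric pod or as a node-level DaemonSet is equally your choice; the `__path__` must simply resolve to wherever the log file is visible to the agent.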
Without K2cloud Orchestrator, you must activate the JMX Exporter by setting the MONITORING environment variable in the Fabric pod specification. This is the equivalent of what K2cloud Orchestrator does automatically in K2cloud deployments.
In your Fabric pod or deployment spec, add the following environment variable:
env:
- name: MONITORING
value: "default"
When the Fabric container starts with MONITORING=default, the container startup script runs monitor_setup.sh, which calls fabric_7_monitor.sh. This script appends the javaagent line to jvm.options and enables JMX remote management. The Fabric JVM then starts with the exporter active.
The environment variable can be delivered as a plain environment variable in the pod spec, or as a Kubernetes Secret. If using a Secret, the Secret should contain:
data:
MONITORING: ZGVmYXVsdA== # base64 of 'default'
Note: Setting MONITORING=NONE suppresses monitoring entirely. If MONITORING is absent from the pod environment, the monitor setup scripts do not run and the exporter is not activated.
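For completeness, a sketch of the Secret-based delivery end to end. The Secret name `fabric-monitoring` is a hypothetical choice, not a required name:

```yaml
# Hypothetical Secret delivering MONITORING=default to the Fabric pod.
apiVersion: v1
kind: Secret
metadata:
  name: fabric-monitoring        # assumed name -- any name works
type: Opaque
data:
  MONITORING: ZGVmYXVsdA==       # base64 of 'default'
---
# In the Fabric container spec, reference the Secret key:
# env:
#   - name: MONITORING
#     valueFrom:
#       secretKeyRef:
#         name: fabric-monitoring
#         key: MONITORING
```

Either delivery path results in the same `MONITORING=default` variable in the container environment, so the startup behavior described above is identical.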
For your collector to scrape the endpoint, port 7170 must be exposed in the Fabric container spec:
ports:
- name: jmx-metrics
containerPort: 7170
protocol: TCP
If iid_finder is running in your deployment, also expose port 7270:
- name: iid-metrics
containerPort: 7270
protocol: TCP
To make the ports discoverable by a Kubernetes-native collector, create a Service that exposes these ports, or rely on pod annotation-based discovery if your collector supports it.
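A sketch of such a Service follows. The Service name, the `app: fabric` selector label, and the `prometheus.io/*` annotation keys are assumptions; annotation-based discovery only works if your collector follows that convention:

```yaml
# Sketch of a Service exposing the Fabric metrics ports for collector discovery.
apiVersion: v1
kind: Service
metadata:
  name: fabric-metrics
  annotations:
    prometheus.io/scrape: "true"   # honored only by annotation-aware collectors
    prometheus.io/port: "7170"
spec:
  selector:
    app: fabric                    # assumed pod label -- match your Fabric deployment
  ports:
    - name: jmx-metrics
      port: 7170
      targetPort: 7170
      protocol: TCP
    - name: iid-metrics            # include only if iid_finder is running
      port: 7270
      targetPort: 7270
      protocol: TCP
```

With the Service in place, a collector can scrape either the Service endpoints or the pod IPs directly; both resolve to the same exporter.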
For reference, the full chain triggered by MONITORING=default at container startup is:
docker-entrypoint.sh
→ init_monitoring() in cloud_common.sh
→ monitor_setup.sh
→ setup_monitor() — copies monitor/ dir to $FABRIC_HOME if needed
→ init_monitor() — calls fabric_7_monitor.sh
→ start_monitor() — starts node_exporter and promtail as background processes
fabric_7_monitor.sh:
→ checks if javaagent line already in jvm.options (idempotent)
→ appends: -javaagent:.../jmx_prometheus_javaagent-1.5.0.jar=7170:.../fabric_config.yaml
→ enables JMX remote management settings in jvm.options
Note on start_monitor(): The start_monitor() function also attempts to start node_exporter and promtail as background processes inside the container. In Kubernetes, node-level metrics are more appropriately collected by a DaemonSet node-exporter on the worker node rather than from inside the Fabric container. Whether you use the in-container node_exporter or a DaemonSet-based one is your decision.
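If you opt for the DaemonSet approach, a minimal sketch is shown below. The namespace, image tag, and labels are assumptions; port 9100 is the node_exporter default:

```yaml
# Sketch of the DaemonSet alternative: run node_exporter on each worker node
# instead of relying on the in-container copy started by start_monitor().
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring            # assumed namespace
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      hostNetwork: true            # expose node metrics on the node's own IP
      containers:
        - name: node-exporter
          image: prom/node-exporter:v1.8.1   # assumed tag
          ports:
            - containerPort: 9100            # node_exporter default port
```

If you go this route, the in-container node_exporter started by start_monitor() becomes redundant and its metrics can simply be ignored by your collector.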
After the Fabric pod starts with MONITORING=default, validate that the endpoint is active before connecting your collector.
Confirm MONITORING is set correctly in the running pod:
kubectl exec -it <fabric-pod> -n <namespace> -- env | grep MONITORING
Expected output:
MONITORING=default
Confirm that fabric_7_monitor.sh ran successfully and appended the javaagent line to jvm.options:
kubectl exec -it <fabric-pod> -n <namespace> -- grep jmx_prometheus $K2_HOME/config/jvm.options
Expected output (line may wrap):
-javaagent:$K2_HOME/monitor/jmx_exporter/jmx_prometheus_javaagent-1.5.0.jar=7170:$K2_HOME/monitor/jmx_exporter/fabric_config.yaml
Query the endpoint from inside the pod:
kubectl exec -it <fabric-pod> -n <namespace> -- curl http://localhost:7170/metrics
A successful response returns Prometheus text format output including jvm_*, fabric_*, and tomcat_* metric families. If the endpoint does not respond, see the troubleshooting section below.
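If you capture the response to a file, a sanity check for the expected metric families can be scripted. The sample response below is illustrative text written for this example, not real Fabric output:

```shell
# Write an illustrative sample of Prometheus exposition format.
# Metric names here are examples only, not a guarantee of what Fabric emits.
cat > /tmp/metrics_sample.txt <<'EOF'
# HELP jvm_memory_bytes_used Used bytes of a given JVM memory area.
# TYPE jvm_memory_bytes_used gauge
jvm_memory_bytes_used{area="heap"} 1.234e+08
# HELP tomcat_sessions_active_current Active sessions (illustrative)
tomcat_sessions_active_current 3
EOF

# Check that each expected metric-family prefix appears at line start.
for family in jvm_ tomcat_; do
  if grep -q "^${family}" /tmp/metrics_sample.txt; then
    echo "found ${family} metrics"
  else
    echo "missing ${family} metrics" >&2
  fi
done
```

In practice you would replace the heredoc with `curl -s http://localhost:7170/metrics > /tmp/metrics_sample.txt` run inside the pod.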
To validate from outside the pod (e.g., from a collector pod in the same cluster), use the pod IP or Service endpoint:
curl http://<POD_IP>:7170/metrics
For iid_finder metrics:
kubectl exec -it <fabric-pod> -n <namespace> -- curl http://localhost:7270/metrics
Once the metrics endpoint is validated, connecting it to your observability infrastructure is straightforward. The endpoint is a standard Prometheus-format HTTP endpoint — any collector, agent, or platform that can scrape this format can consume it.
K2view does not prescribe which observability stack you use. The following describes the interface you are connecting to, not a required implementation.
The K2view Terraform blueprints, available at:
https://github.com/k2view/blueprints
include a Grafana Agent k8s-monitoring Helm chart deployment that represents one way to implement the collection layer. The blueprints deploy:
You can use these blueprints as-is, adapt them, or replace them entirely with your own observability tooling. The Fabric metrics endpoint at port 7170 is the stable interface regardless of which collection layer you choose.
Note: The Grafana Agent in the K2view blueprints does not automatically scrape Fabric pods. You must add Fabric-specific scrape configuration after deployment. See How to Configure the Collection Layer to Scrape Fabric Metrics for the annotation-based and River pipeline approaches.
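If your collector is vanilla Prometheus rather than the Grafana Agent, the annotation-based approach translates to a standard `kubernetes_sd_configs` scrape job. This is a hedged sketch, not the configuration shipped in the blueprints; the `prometheus.io/*` annotation convention is an assumption:

```yaml
# Sketch: a vanilla Prometheus scrape job for Fabric pods using
# annotation-based discovery. Relabel labels follow standard Prometheus
# kubernetes_sd_configs conventions.
scrape_configs:
  - job_name: fabric
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods annotated prometheus.io/scrape=true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # Rewrite the scrape address to the port named in prometheus.io/port
      # (7170 for Fabric, 7270 for iid_finder)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
```

The Grafana Agent's River pipeline expresses the same discovery and relabeling logic in its own syntax; the referenced scrape-configuration document covers that form.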
At minimum, your collector needs to:
The core useful metric families to retain:
Fabric logs are written to $K2_HOME/logs/k2fabric.log inside the container. Your options for collection include:
The Fabric log path assumed by the K2view reference configurations is:
/opt/apps/fabric/workspace/logs/k2fabric.log
If your deployment uses a different path, update any log collection configuration accordingly.
If you are running Fabric across multiple Kubernetes clusters, you will need a strategy for aggregating metrics across them. Each cluster should have its own local collection layer scraping its Fabric pods.
K2view uses Thanos federation in its own multi-cluster deployments — one Prometheus instance per cluster with a Thanos sidecar, and a central Thanos Query layer federating across all clusters. This is one well-understood approach. Whether it is the right approach for your environment depends on your existing observability infrastructure and operational preferences.
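The per-cluster piece of that model can be sketched as a Thanos sidecar running alongside Prometheus. Image tags and ports below are assumptions; only the flag names are standard Thanos/Prometheus options:

```yaml
# Sketch: per-cluster Prometheus with a Thanos sidecar, which a central
# Thanos Query layer federates over gRPC. Not a complete manifest.
containers:
  - name: prometheus
    image: prom/prometheus:v2.53.0             # assumed tag
    args:
      - --config.file=/etc/prometheus/prometheus.yml
      - --storage.tsdb.path=/prometheus
      - --storage.tsdb.min-block-duration=2h   # fixed 2h blocks for sidecar upload
      - --storage.tsdb.max-block-duration=2h
  - name: thanos-sidecar
    image: quay.io/thanos/thanos:v0.35.1       # assumed tag
    args:
      - sidecar
      - --prometheus.url=http://localhost:9090
      - --tsdb.path=/prometheus
      - --grpc-address=0.0.0.0:10901           # central Thanos Query connects here
```

The central Thanos Query layer then lists each cluster's sidecar gRPC endpoint as a store target, giving a single query surface across clusters.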
For an overview of how the Thanos federation model works in the context of K2view deployments, see K2view Observability Architecture for Fabric.
Note: The Terraform blueprints at github.com/k2view/blueprints include the Grafana Agent and supporting Helm charts but do not include Thanos configuration. Thanos is managed separately as central observability infrastructure in K2view's own deployments.
If the endpoint does not respond, check whether the monitor setup scripts ran at startup:
kubectl logs <fabric-pod> -n <namespace> | grep -i monitor
To test reachability from another pod in the cluster:
kubectl run test --image=curlimages/curl --restart=Never --rm -it -- curl http://<FABRIC_POD_IP>:7170/metrics
To check whether a specific metric is being exported:
kubectl exec -it <fabric-pod> -- curl -s http://localhost:7170/metrics | grep <metric_name>
Enabling:
Validating:
Connecting: