K2view Fabric exposes runtime and application telemetry through JMX MBeans and a bundled Prometheus JMX Exporter. This data can be collected, stored, and visualized using the K2view monitoring stack — or consumed by any compatible third-party platform.
Monitoring enables early detection of issues, informs resource allocation decisions, and provides operational visibility into Fabric health, JVM condition, and infrastructure state.
K2view supports external monitoring through three output types:
The monitoring architecture differs significantly between deployment models: the components involved, how the JMX Exporter is enabled, and how metrics are collected all vary. Read the section that applies to your environment.
Monitoring is enabled through the space profile, which is managed by K2view. Recent space profiles have monitoring enabled by default — confirm with K2view that your space profile includes this setting. K2cloud Orchestrator injects the MONITORING=default environment variable, which triggers the monitor setup chain at container startup. Grafana Agent acts as the local metrics collector, scraping Fabric pods and forwarding metrics to Prometheus. Thanos provides cross-cluster visibility across cloud environments.
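In this model the scrape pipeline is managed by K2view, so no configuration is required on your side. Purely for illustration, the Grafana Agent's role corresponds to a scrape-and-forward configuration along these lines (every name, port, and URL below is a placeholder, not an actual K2cloud setting):

```yaml
# Illustrative sketch of what Grafana Agent does in this model.
# K2view manages the real configuration; all values are placeholders.
metrics:
  configs:
    - name: fabric
      scrape_configs:
        - job_name: fabric-pods           # scrape the JMX Exporter on each Fabric pod
          kubernetes_sd_configs:
            - role: pod
      remote_write:
        - url: http://prometheus.example:9090/api/v1/write   # forward to Prometheus
```

Thanos then sits on top of the Prometheus layer to provide the cross-cluster view described above.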
Start here:
For customer-owned AKS, GKE, or EKS clusters without K2cloud Orchestrator, monitoring is enabled manually by setting the MONITORING environment variable in the Fabric pod spec. The K2view Terraform blueprints are used to deploy the observability stack into the cluster.
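A minimal sketch of the manual step, assuming a plain Kubernetes Deployment and that the same MONITORING=default value used by K2cloud applies here (the image name is a placeholder; confirm the expected variable value for your environment with K2view):

```yaml
# Hypothetical Fabric Deployment fragment: image name and value are assumptions
spec:
  template:
    spec:
      containers:
        - name: fabric
          image: <your-fabric-image>
          env:
            - name: MONITORING
              value: "default"   # triggers the monitor setup chain at container startup
```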
Start here:
On VMs and bare-metal hosts, Fabric runs as a native process. Monitoring is enabled manually by running the monitor setup script or editing jvm.options directly. Prometheus scrapes Fabric hosts using static scrape targets. Promtail ships logs to Loki.
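Because this model relies on static scrape targets, the Prometheus side reduces to a static_configs entry. A minimal sketch, assuming the JMX Exporter listens on port 9405 (the hostnames and port are assumptions; use whatever your jvm.options actually configures):

```yaml
# Hypothetical prometheus.yml fragment: hostnames and port are placeholders
scrape_configs:
  - job_name: fabric
    static_configs:
      - targets:
          - fabric-node-1.example:9405
          - fabric-node-2.example:9405
```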
Start here:
The standard K2view monitoring stack combines the following components, depending on deployment model:
If you are new to Fabric monitoring, the K2view Fabric Observability — Guide to the Documentation maps all available documents to reading paths based on your deployment model and goal. It is the recommended starting point before reading any individual article.
A Monitoring Dashboard Example is also available as a reference Grafana dashboard that illustrates how Fabric observability data can be visualized in practice.