Fabric Monitoring

K2view Fabric exposes runtime and application telemetry through JMX MBeans and a bundled Prometheus JMX Exporter. This data can be collected, stored, and visualized using the K2view monitoring stack — or consumed by any compatible third-party platform.

Monitoring enables early detection of issues, informs resource allocation decisions, and provides operational visibility into Fabric health, JVM condition, and infrastructure state.

How Fabric Exposes Monitoring Data

K2view supports external monitoring through three output types:

  • Metrics — Fabric and JVM telemetry exposed via JMX MBeans and served in Prometheus format by the bundled JMX Exporter. This is the primary path for production monitoring.
  • Log files — Application logs available for collection and analysis. See Fabric Troubleshooting Log Files.
  • Tracing files — Request and flow traces. See Fabric Tracing.
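The metrics output above uses the standard Prometheus text exposition format. As an illustration, here is a minimal sketch of parsing that format; the metric names and label values in the sample are hypothetical and are not actual Fabric metric names:

```python
# Minimal sketch: parsing Prometheus text-format metrics such as those
# served by a JMX Exporter endpoint. Sample metric names are illustrative
# placeholders, not actual Fabric metrics. Label values containing commas
# are not handled; this is a sketch, not a full exposition-format parser.
import re

SAMPLE = """\
# HELP jvm_memory_bytes_used Used bytes of a given JVM memory area.
# TYPE jvm_memory_bytes_used gauge
jvm_memory_bytes_used{area="heap"} 1.23456789e+08
jvm_memory_bytes_used{area="nonheap"} 4.5e+07
"""

LINE_RE = re.compile(
    r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)'   # metric name
    r'(?:\{(?P<labels>[^}]*)\})?'             # optional {label="value",...}
    r'\s+(?P<value>\S+)$'                     # sample value
)

def parse(text):
    """Return a list of (name, labels_dict, float_value) tuples."""
    samples = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip blanks and HELP/TYPE comment lines
        m = LINE_RE.match(line)
        if not m:
            continue
        labels = {}
        if m.group('labels'):
            for pair in m.group('labels').split(','):
                key, value = pair.split('=', 1)
                labels[key] = value.strip('"')
        samples.append((m.group('name'), labels, float(m.group('value'))))
    return samples
```

Any Prometheus-compatible collector performs this step internally; the sketch only shows what the scraped payload looks like on the wire.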

Deployment Models

The monitoring architecture differs significantly between deployment models: the components involved, how the JMX Exporter is enabled, and how metrics are collected all vary. Read the section that applies to your environment.

Kubernetes — K2cloud SaaS / Self-Hosted (AKS, GKE, EKS)

Monitoring is enabled through the space profile, which is managed by K2view. Recent space profiles have monitoring enabled by default — confirm with K2view that your space profile includes this setting. K2cloud Orchestrator injects the MONITORING=default environment variable, which triggers the monitor setup chain at container startup. Grafana Agent acts as the local metrics collector, scraping Fabric pods and forwarding metrics to Prometheus. Thanos provides cross-cluster visibility across cloud environments.

Start here:

Kubernetes — Air-Gapped (Customer-Owned Cluster)

For customer-owned AKS, GKE, or EKS clusters without K2cloud Orchestrator, monitoring is enabled manually by setting the MONITORING environment variable in the Fabric pod spec. The K2view Terraform blueprints are used to deploy the observability stack into the cluster.
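For orientation, a sketch of what setting this variable in a pod spec might look like. The container name and image are placeholders, and the value "default" follows the K2cloud-managed behavior described above; confirm the expected value with K2view for your environment:

```yaml
# Sketch only: enabling monitoring in a customer-owned cluster by setting
# the MONITORING environment variable in the Fabric pod spec.
# Container name and image are placeholders.
spec:
  containers:
    - name: fabric
      image: <fabric-image>
      env:
        - name: MONITORING
          value: "default"   # value assumed from the K2cloud-managed setup
```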

Start here:

VM / Bare-Metal

Fabric runs as a native process. Monitoring is enabled manually by running the monitor setup script or editing jvm.options directly. Prometheus scrapes Fabric hosts using static scrape targets. Promtail ships logs to Loki.
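The static scrape targets mentioned above might look like the following Prometheus configuration sketch. The hostnames and the JMX Exporter port (9000 here) are placeholders; use the port configured by your monitor setup script:

```yaml
# Sketch of a static Prometheus scrape job for Fabric hosts on
# VM / bare-metal. Hostnames and port are placeholders.
scrape_configs:
  - job_name: fabric
    static_configs:
      - targets:
          - fabric-node-1:9000
          - fabric-node-2:9000
```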

Start here:

The Monitoring Stack

The standard K2view monitoring stack combines the following components, depending on deployment model:

| Component                 | Kubernetes                           | VM / Bare-Metal                          |
|---------------------------|--------------------------------------|------------------------------------------|
| JMX Exporter              | Inside Fabric container              | On each Fabric host                      |
| Grafana Agent             | Cluster-local collector              |                                          |
| Prometheus                | Per-cluster (receives remote-write)  | On monitoring machine (scrapes directly) |
| Thanos                    | Cross-cluster federation             |                                          |
| Node Exporter             | DaemonSet on worker nodes            | On each host                             |
| kube-state-metrics        | Cluster singleton                    |                                          |
| Promtail / log collection | Via Grafana Agent                    | Promtail on each host                    |
| Loki                      | Central log store                    | On monitoring machine                    |
| Grafana                   | Visualization                        | Visualization                            |

An empty cell means the component is not part of that deployment model.

Where to Start

If you are new to Fabric monitoring, the K2view Fabric Observability — Guide to the Documentation maps all available documents to reading paths based on your deployment model and goal. It is the recommended starting point before reading any individual article.

A Monitoring Dashboard Example is also available as a reference Grafana dashboard that illustrates how Fabric observability data can be visualized in practice.
