K2view Fabric Observability — Guide to the Documentation
A roadmap for understanding and implementing Fabric monitoring — from first principles to production deployment
Table of Contents
- Introduction
- How to Use This Guide
- The Content Journey
- What This Documentation Does Not Cover
- Key Concepts Quick Reference
- Version Note
Introduction
K2view Fabric produces rich observability data — application metrics, JVM telemetry, infrastructure signals, and logs. This guide explains what each document in the set covers, where it fits in the overall sequence, and which ones apply to your deployment model.
The documentation covers three deployment contexts:
- VM / Bare-Metal — Fabric runs as a native process. Monitoring is enabled manually. Prometheus scrapes static targets.
- Kubernetes (K2cloud SaaS / Self-hosted) — Fabric runs as a pod. Monitoring is enabled through the space profile, managed by K2view. Grafana Agent is the local collector.
- Kubernetes (Air-Gapped) — Fabric runs on a customer-owned cluster without K2cloud Orchestrator. Monitoring is enabled manually via the MONITORING environment variable in the pod spec.
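For the air-gapped model, the variable is set directly on the Fabric container. The fragment below is a minimal sketch only — the container name, image, and pod name are placeholders, and `default` is the value the documentation describes K2cloud Orchestrator injecting in managed deployments:

```yaml
# Illustrative pod-spec fragment -- names and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: fabric
spec:
  containers:
    - name: fabric
      image: <your-fabric-image>    # placeholder
      env:
        - name: MONITORING
          value: "default"          # triggers monitor_setup.sh at container startup
```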
Every document is labeled [ K8s ], [ VM / Bare-Metal ], or [ K8s + VM ] so you can skip sections that don't apply to your environment.
How to Use This Guide
Follow the Content Journey in order the first time through. The sections are sequenced so that each builds on the previous one. Return to individual documents later as a reference when you need specific details.
If you already know your deployment model, you can skip the architecture documents that don't apply and go straight to the enablement and deployment sections for your environment.
The Content Journey
Start Here
Concepts and Architecture
- Fabric Monitoring
Why monitoring matters in Fabric deployments, what K2view provides to support it, and how to navigate to the right starting point for your environment.
- JMX Overview
What JMX is, what MBeans are, why Fabric uses JMX to expose telemetry, and the two ways to access JMX data — the Admin panel and the Prometheus endpoint.
- K2view Observability Architecture for Fabric
The complete layered architecture for both deployment models — how Fabric exposes metrics, how the collection layer works, how logs are collected in parallel, and how Thanos federates across clusters.
- K2view VM / Bare-Metal Monitoring Stack for Fabric
The VM-specific stack: JMX Exporter and Node Exporter on each Fabric host, Prometheus with static scrape targets on a dedicated monitoring machine, Promtail shipping logs to Loki, and Grafana unifying the view. Includes the reference architecture diagram.
- K2view Kubernetes Monitoring Stack for Fabric
The Kubernetes-specific stack: what runs inside the Fabric pod vs. outside it, Grafana Agent as the cluster-local collector, the full enablement chain from space profile to active exporter, and Thanos federation across clusters. Includes the reference architecture diagram.
- Fabric Monitoring in Air-Gapped Kubernetes Deployments
Monitoring for customer-owned AKS, GKE, or EKS clusters without K2cloud Orchestrator — manual enablement, blueprint deployment, and validation in an air-gapped environment.
Metric Format and Custom Statistics
- JMX Format
The Prometheus-format output that Fabric exposes: metric naming, label structure, metric types (counters, gauges, histograms), and what the raw /metrics endpoint looks like. Reference this when writing filtering rules.
- JMX Custom Statistics
How to add project-specific counters and duration measurements to the Fabric JMX surface using the statsCount and statsDuration APIs, and how they appear in the /metrics output.
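As a rough illustration of how such custom statistics surface in the Prometheus text format, a counter and a duration might appear on the /metrics endpoint along these lines. The metric and label names below are invented for the example — the actual names depend on your statsCount / statsDuration definitions and the exporter's naming rules:

```
# Hypothetical output for illustration only.
# TYPE k2_custom_orders_processed_total counter
k2_custom_orders_processed_total{space="demo"} 1432
# TYPE k2_custom_sync_duration_seconds gauge
k2_custom_sync_duration_seconds{space="demo"} 0.087
```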
Enable, Verify, and Deploy
- How to Enable the JMX Exporter for Fabric
The complete enablement procedure for both deployment models — the Kubernetes automation chain from space profile through fabric_7_monitor.sh, and the VM path via script or manual jvm.options editing.
- How to Verify That Fabric Is Exposing Metrics
How to confirm the JMX Exporter is active — where to run the curl command for each deployment model, what a successful response looks like, and how to diagnose common failure modes.
- Monitoring Dashboard Example Setup
Step-by-step VM stack setup: Prometheus with scrape jobs for Fabric and Node Exporter, Loki, Promtail, Grafana data sources, and dashboard import.
- Monitoring Dashboard Example
The reference Grafana dashboard for Fabric, Cassandra, and Kafka — panel descriptions, queries, and downloadable dashboard JSON.
- Deploying the K2view Monitoring Stack on Kubernetes
Terraform/Helm deployment of the observability stack for AKS, GKE, and EKS using the K2view blueprints — what the blueprints deploy, required inputs, and the per-cloud procedure.
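The verification step above boils down to one question: does the endpoint return Prometheus exposition text? A minimal sketch of that check in Python follows. The URL uses the port 7170 /metrics convention described in this guide, but treat it as an assumption and adjust for your deployment; the heuristic only confirms the response looks like exposition format, it does not validate individual metrics.

```python
import re
import urllib.request


def looks_like_prometheus_metrics(body: str) -> bool:
    """Heuristic check that a response body is Prometheus exposition text:
    at least one non-comment line of the form `metric_name{labels} value`."""
    metric_line = re.compile(
        r"^[a-zA-Z_:][a-zA-Z0-9_:]*(\{[^}]*\})?\s+[-+0-9.eEnaNif]+"
    )
    return any(
        metric_line.match(line)
        for line in body.splitlines()
        if line and not line.startswith("#")
    )


def verify_endpoint(url: str = "http://localhost:7170/metrics") -> bool:
    # Network call -- run this against your own Fabric host or pod.
    with urllib.request.urlopen(url, timeout=5) as resp:
        return looks_like_prometheus_metrics(resp.read().decode("utf-8"))
```

A `curl http://<host>:7170/metrics` from the right vantage point (the monitoring machine for VMs, inside the cluster for Kubernetes) accomplishes the same thing; the function is just a scriptable version of eyeballing the output.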
Configure and Tune
Third-Party Integration
What This Documentation Does Not Cover
- Dashboard creation from scratch — the reference dashboard JSON is a starting point. Extend it with your own panels and queries.
- Alert rule design — alerting thresholds are environment-specific. The documents describe available metrics but do not prescribe alert rules.
- JMX Custom Statistics beyond the API — the broader topic of Fabric application development is outside the scope.
- Grafana Cloud account setup — the Terraform blueprints reference Grafana Cloud endpoints. Account setup and API token management are outside the scope.
- Central Thanos Query layer setup — the documents describe how each cluster participates in Thanos federation, but the central Thanos infrastructure is managed separately.
- Cassandra and Kafka monitoring setup — the reference dashboard includes panels for both, but their monitoring configuration is not covered here.
Key Concepts Quick Reference
| Term | What it means in this context |
| --- | --- |
| JMX Exporter | The open-source Prometheus JMX Exporter JAR bundled with Fabric. Reads JMX MBeans from the Fabric JVM and serves them as Prometheus-format HTTP metrics. |
| MBeans | Managed Beans — the JMX objects through which Fabric exposes its runtime telemetry. The JMX Exporter reads MBeans and converts them to Prometheus format. |
| /metrics endpoint | The HTTP endpoint served by the JMX Exporter. Port 7170 for Fabric, port 7270 for iid_finder. Scraped by Prometheus or Grafana Agent. |
| Grafana Agent | The local metrics collector used in Kubernetes deployments. Scrapes Fabric pods and other sources, then remote-writes to Prometheus. Configured via River pipelines. |
| River pipeline | The configuration language used by Grafana Agent to define discovery, scraping, filtering, and forwarding of metrics and logs. |
| Thanos | The federation layer above per-cluster Prometheus instances. Provides cross-cluster visibility across AKS, GKE, and EKS deployments. |
| Active series | The number of distinct time series Prometheus is currently storing and updating. The primary measure of Prometheus storage pressure — driven by label cardinality, not just metric name count. |
| Space profile monitoring setting | The K2view-managed configuration that controls monitoring enablement for each Fabric space. When enabled, K2cloud Orchestrator injects MONITORING=default into the Fabric pod. Managed exclusively by K2view. |
| MONITORING=default | The environment variable injected by K2cloud Orchestrator that triggers monitor_setup.sh at container startup, appending the javaagent line and starting monitoring processes. |
| k8s-monitoring chart | The Grafana Helm chart deployed by the Terraform blueprints. Installs Grafana Agent, node-exporter, kube-state-metrics, and supporting components into the grafana-agent namespace. |
| Promtail | The log shipping agent. Runs as a background process on each VM host. In Kubernetes, log collection is handled by Grafana Agent instead. |
| Relabeling | Prometheus logic that alters, drops, or normalizes labels during or after scraping. Used to reduce label cardinality and control active series count. |
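To make the relabeling and active-series entries concrete, here is a hedged sketch of a Grafana Agent (flow mode) River pipeline with a drop rule. The component names, target address, remote-write URL, and the dropped metric pattern are all illustrative assumptions, not taken from the K2view blueprints:

```river
// Illustrative River fragment -- names, addresses, and regexes are examples only.
prometheus.scrape "fabric" {
  targets    = [{ "__address__" = "fabric-host:7170" }]
  forward_to = [prometheus.relabel.drop_noise.receiver]
}

prometheus.relabel "drop_noise" {
  // Drop a hypothetical high-cardinality metric family to reduce active series.
  rule {
    source_labels = ["__name__"]
    regex         = "jvm_buffer_pool_.*"
    action        = "drop"
  }
  forward_to = [prometheus.remote_write.default.receiver]
}

prometheus.remote_write "default" {
  endpoint {
    url = "http://prometheus:9090/api/v1/write"
  }
}
```

Dropping by `__name__` before remote write is the usual lever for controlling active series at the source, since every distinct label combination on a kept metric becomes its own stored series.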
Version Note
All documents in this set reflect the following component versions:
- JMX Exporter: jmx_prometheus_javaagent-1.5.0.jar
- Grafana k8s-monitoring Helm chart: as referenced in the Terraform blueprints (February 2025)
- Fabric: 8.4