A K2cloud Self-hosted Kubernetes cluster for Fabric and TDM refers to a customer-managed Kubernetes environment (either on-premises or in their cloud account) that runs the full K2view Fabric and optional TDM services independently, while still integrating securely with the K2cloud Orchestrator for deployment management and control.
Self-Managed Infrastructure: The Kubernetes cluster is provisioned and maintained by the customer in their preferred environment—on bare-metal, AWS (EKS), GCP (GKE), Azure (AKS), or any other compliant platform.
Cloud-Orchestrated, Locally Executed: Although the environment is fully hosted and operated by the customer, it is connected to K2view’s centralized K2cloud Orchestrator via the secure K2-Agent. This enables remote deployment instructions, configuration management, and monitoring without exposing internal services to the public internet.
Fabric and TDM Services: The deployment includes Fabric Server, Fabric Studio, and optionally TDM—each running as containers managed by Kubernetes. These services interact with local or managed databases, cloud-native storage, and customer-specific data sources.
Deployment Automation: Terraform and Helm blueprints are provided to standardize and simplify the provisioning and installation process, enabling Infrastructure-as-Code and DevOps workflows.
Customer-Owned Data Plane: All customer data, services, and integrations remain within the self-hosted environment. K2view does not access customer data; the K2-Agent only communicates control messages.
Compliance and Flexibility: This deployment model is ideal for organizations with strict data residency, compliance, or connectivity requirements. It provides the flexibility to integrate with private networks, existing IAM systems, and custom CI/CD pipelines.
This model delivers the advantages of centralized orchestration and product updates from K2view while allowing enterprises complete control over the runtime environment and data boundaries.
The K2view Fabric deployment is designed for modularity, scalability, and security, leveraging modern cloud-native architecture principles. At its core, the solution operates within a Kubernetes cluster (K8s), where all services are deployed as containers orchestrated through the Kubernetes control plane.
Key aspects of the high-level deployment include:
Cloud-Agnostic K8s Orchestration: Whether running on AWS (EKS), GCP (GKE), Azure (AKS), or on-premises, the architecture maintains a consistent deployment model using Helm charts and Terraform configurations.
Separation of Control and Runtime: Fabric, Studio, and control services are logically separated from runtime workloads, enabling fine-grained access control, easier scaling, and secure CI/CD pipelines.
Ingress and Load Balancing: An NGINX Ingress Controller is typically deployed within the cluster, acting as the central entry point for all external requests. This allows mapping of incoming HTTPS traffic to internal services such as Fabric, TDM, and Studio.
Secure Communications and Identity: All services are secured using TLS certificates and integrated with role-based access controls. The K2-Agent connects securely to the K2cloud Orchestrator using outbound HTTPS.
Infrastructure as Code (IaC): Environments are provisioned using Terraform for reproducibility, traceability, and ease of configuration. Helm charts are used to deploy application components.
Registry and Artifact Management: Docker images are pulled from K2view’s Nexus repository and pushed into a customer-specific OCI-compliant registry (e.g., Azure Container Registry or another provider-native registry). These images are referenced during Helm-based deployment.
DNS and URL Management: Depending on the version, Fabric Spaces can be accessed via subdomain-based URLs or context-based URLs under a single domain. This provides flexibility in how environments are exposed and simplifies certificate management.
Scalability and HA: The deployment supports multi-AZ clusters, auto-scaling node pools, and shared persistent storage via Cloud provider-native solutions.
This architecture ensures that K2view Fabric deployments are secure, highly available, and adaptable to various enterprise deployment models.
K2cloud Fabric deployments on customer self-hosted Kubernetes clusters rely on several core components:
For platform-specific sizing guidance, the Requirements and Prerequisites for Cloud Self-hosted Kubernetes Installation topic outlines detailed hardware specifications across AWS, GCP, and Azure environments, ensuring compatibility with Fabric and TDM workloads. It covers the following additional topics:
To install a K2cloud Self-hosted Kubernetes cluster for Fabric and TDM, you will need to prepare and perform the necessary steps in coordination with your K2view representative. The process begins with gathering key configuration details, including TLS certificate files, and ensuring outbound internet access to specific K2view endpoints. These are essential for secure communications, image retrieval, and configuration via the K2cloud Orchestrator. K2view will also need to perform provisioning actions that require information you provide.
Your K2view representative will provide you with access credentials and provisioning information. This includes a Cloud Mailbox ID, a K2view Nexus Repository account for pulling required Docker images, and a list of container images to populate your private registry.
K2view will share a planning guide to help you with this provisioning and coordinate activities.
Steps include:
K2view provisions:
Customer provides:
Internet access to:
Required tools:
Please refer to: (K8s Requirements)
The K2-Agent is a lightweight Kubernetes service that securely connects your on-premises or cloud-based K2view Fabric deployment to the K2cloud Orchestrator. It plays a crucial role in enabling centralized management, monitoring, and deployment orchestration.
This design makes the K2-Agent a secure, robust component for enterprise-grade hybrid deployments.
Please refer to: (K8s Requirements)
K2view Docker images for Fabric, Studio, and supporting services must be pulled from K2view’s Nexus repository and pushed into a customer-managed, OCI-compliant container registry. This enables secure, high-performance retrieval of images during Helm-based Kubernetes deployment.
Customers can use any of the following cloud-native registries:
You may also use a private registry hosted on your infrastructure as long as it supports OCI-compliant image handling and secure access.
After pushing the images, update your values.yaml files with the correct image repository path and tag. Once the registry is populated, you must share the full image paths (e.g., gcr.io/project-id/k2view/fabric:8.2.1_40) with your K2view representative to complete environment setup.
The required images include fabric, fabric-studio, and k2-cloud-deployer.
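The pull/tag/push cycle for these images can be sketched in shell. Both registry paths and the tag below are placeholders — substitute the actual Nexus path, private registry path, and image tag supplied by your K2view representative. The loop prints the commands rather than running them, so you can review before executing:

```shell
# Placeholders only -- replace with the values from your K2view representative.
SRC_REGISTRY="nexus.example.com/k2view"   # placeholder for K2view's Nexus repository
DST_REGISTRY="gcr.io/project-id/k2view"   # placeholder for your private registry
TAG="8.2.1_40"

# Print the pull/tag/push commands for each required image.
for IMAGE in fabric fabric-studio k2-cloud-deployer; do
  echo "docker pull ${SRC_REGISTRY}/${IMAGE}:${TAG}"
  echo "docker tag ${SRC_REGISTRY}/${IMAGE}:${TAG} ${DST_REGISTRY}/${IMAGE}:${TAG}"
  echo "docker push ${DST_REGISTRY}/${IMAGE}:${TAG}"
done
```

Remove the echo statements (or pipe the output to a shell) once you have verified the paths.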
K2view uses NGINX as the default Ingress Controller for routing external traffic to services inside the Kubernetes cluster. This controller provides a flexible, production-grade entry point capable of handling TLS termination, path-based routing, rate limiting, and custom annotations for fine-grained access control.
The controller is typically deployed into its own namespace (ingress-nginx). While NGINX is the default, some environments may choose native cloud ingress solutions for deeper integration:
These alternatives may simplify integration with native services such as identity management or centralized certificate storage, but may offer less flexibility than NGINX.
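The annotation-driven controls mentioned above (path-based routing, rate limiting) can be expressed on a standard Kubernetes Ingress resource. The host, backend service name, and port below are illustrative placeholders, not values mandated by K2view:

```yaml
# Illustrative Ingress with a common ingress-nginx annotation.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fabric
  namespace: k2view
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "20"   # rate limiting (requests/sec)
spec:
  ingressClassName: nginx
  rules:
    - host: fabric.example.com        # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: fabric          # placeholder service name
                port:
                  number: 8080        # placeholder port
```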
K2view is available to guide you through the installation and help you resolve any issues that arise. Good planning goes a long way toward ensuring a smooth, quick installation, and having your TLS certificate ready in advance is a key part of that.
Here's a summary of the certificate requirements:
Please refer to: (K2view Terraform Blueprints)
K2view provides Terraform blueprints to automate the provisioning of cloud infrastructure required for hosting Fabric and TDM services. This infrastructure-as-code approach ensures a consistent, repeatable, and secure setup of services across environments.
1. Clone the K2view Blueprints repository:
git clone https://github.com/k2view/blueprints.git
2. Navigate to the Terraform directory corresponding to your cloud provider (e.g., azure-template, aws-template, or gcp-template).
3. Modify terraform.tfvars: edit the terraform.tfvars file to define key parameters such as:
4. Initialize and apply the Terraform plan:
terraform init
terraform apply
5. Review the Terraform output.
6. Next steps (e.g., connect to the cluster with kubectl and test the DNS resolution).
Using Terraform, organizations gain improved visibility, compliance, and manageability of their infrastructure lifecycle.
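A terraform.tfvars file of this shape is typical for such templates. Every variable name and value below is an illustrative placeholder — consult the variables file in your chosen template directory for the actual schema:

```hcl
# Illustrative terraform.tfvars -- names are assumptions, not the template's schema.
project_id   = "my-project"        # cloud project / subscription / account
region       = "us-east1"          # deployment region
cluster_name = "k2view-fabric"     # Kubernetes cluster name
node_count   = 3                   # initial node pool size
domain       = "fabric.example.com"
```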
Please refer to: (K2view Helm Blueprints)
This guide outlines the step-by-step process to deploy K2view components using Helm charts. It assumes that you have a running Kubernetes cluster and have cloned the K2view Helm blueprints repository.
For detailed configurations and advanced deployment scenarios, refer to the individual README files in the K2view Helm blueprints repository.
K2view provides a customized Helm chart for deploying the NGINX Ingress Controller.
helm install ingress-nginx ./ingress-nginx-k2v \
--namespace ingress-nginx \
--create-namespace \
-f ingress-nginx-k2v/values.yaml
Note: Customize values.yaml to suit your cloud provider's load balancer settings and TLS configurations.
The TLS certificate used to secure external HTTPS access to K2view Fabric and Studio is installed on the Ingress Controller, typically the NGINX Ingress Controller.
- Installed on: ingress-nginx (deployed via the ingress-nginx-k2v Helm chart)
- Stored as: a Kubernetes Secret of type kubernetes.io/tls (typically in the ingress-nginx namespace)
- Referenced by: the Ingress resource, or in the Helm values.yaml under the controller.extraArgs.default-ssl-certificate setting
Create the TLS secret. Replace the certificate and key paths with your own, and adjust the namespace if different:
kubectl create secret tls fabric-tls-cert \
--cert=/path/to/fullchain.pem \
--key=/path/to/privkey.pem \
-n ingress-nginx
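Before creating the secret, you can sanity-check that the certificate and private key actually form a matching pair. The snippet below illustrates the check with a throwaway self-signed pair; run the same two modulus commands against your real fullchain.pem and privkey.pem:

```shell
# Illustration only: generate a throwaway self-signed pair to demonstrate the check.
# Against a real certificate, skip this step and point -in at your own files.
openssl req -x509 -newkey rsa:2048 -nodes -keyout privkey.pem -out fullchain.pem \
  -days 1 -subj "/CN=fabric.example.com" 2>/dev/null

# The certificate and key match if their RSA moduli hash to the same value.
CERT_MOD=$(openssl x509 -noout -modulus -in fullchain.pem | openssl md5)
KEY_MOD=$(openssl rsa -noout -modulus -in privkey.pem | openssl md5)
[ "$CERT_MOD" = "$KEY_MOD" ] && echo "certificate and key match"
```

A mismatched pair is a common cause of ingress TLS failures, so this one-minute check is worth doing before the kubectl create secret step.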
Set the default certificate (values.yaml): update the values.yaml file used with the ingress-nginx-k2v Helm chart to reference the TLS secret:
controller:
  ingressClass: nginx
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb" # if applicable
  extraArgs:
    default-ssl-certificate: ingress-nginx/fabric-tls-cert
In your Fabric or Studio Ingress resources (usually templated through Helm), verify that TLS is enabled and the host matches the certificate:
ingress:
  enabled: true
  hosts:
    - host: fabric.example.com
      paths: [/]
  tls:
    - secretName: fabric-tls-cert
      hosts:
        - fabric.example.com
With this setup, the ingress controller will terminate TLS traffic using the provided certificate and securely route requests to K2view services within the cluster.
For development or testing environments, you can deploy a PostgreSQL database using the provided generic-db chart.
helm install generic-db ./generic-db \
--namespace k2view \
--create-namespace \
-f generic-db/values.yaml
Important: For production environments, it's recommended to use a managed PostgreSQL service provided by your cloud provider.
The K2view Agent facilitates communication between your Kubernetes cluster and the K2cloud Orchestrator.
helm install k2view-agent ./k2view-agent \
--namespace k2view-agent \
--create-namespace \
-f k2view-agent/values.yaml
Ensure that your values.yaml includes the correct MAILBOX_ID and Docker registry credentials if pulling images from a private registry.
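A minimal sketch of the relevant values.yaml entries, assuming a typical chart layout — the key names below are assumptions, so consult the k2view-agent chart's README for the actual schema:

```yaml
# Illustrative fragment -- key names are assumptions, not the chart's real schema.
config:
  MAILBOX_ID: "<cloud-mailbox-id>"                    # provided by your K2view representative
image:
  repository: gcr.io/project-id/k2view/k2view-agent   # your private registry path
imagePullSecrets:
  - name: regcred                                     # docker-registry secret, if required
```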
Deploy the core Fabric services using the Fabric Helm chart.
helm install fabric ./fabric \
--namespace k2view \
--create-namespace \
-f fabric/values.yaml
Customize the fabric/values.yaml file to configure:
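Such customizations commonly cover the image path, ingress host, and database connection. The fragment below is a hedged illustration only — the keys are assumptions, so refer to the Fabric chart's README for the actual schema:

```yaml
# Illustrative fragment -- key names are assumptions, not the chart's real schema.
image:
  repository: gcr.io/project-id/k2view/fabric   # your private registry path
  tag: "8.2.1_40"
ingress:
  enabled: true
  hosts:
    - host: fabric.example.com                  # placeholder hostname
      paths: [/]
database:
  host: generic-db.k2view.svc.cluster.local     # or your managed PostgreSQL endpoint
  port: 5432
```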
After deploying all components, verify that all pods are running as expected:
kubectl get pods -n ingress-nginx
kubectl get pods -n k2view
kubectl get pods -n k2view-agent
Check the services and ingress resources to ensure they're correctly configured:
kubectl get svc -n ingress-nginx
kubectl get svc -n k2view
kubectl get ingress -n k2view
Once all services are up and running, access the Fabric Web Studio using the configured ingress hostname. Ensure that your DNS records point to the ingress controller's external IP and that TLS certificates are correctly set up.
If issues arise, review the values.yaml files and Kubernetes manifests.