This article describes the guidelines and instructions for creating a K2cloud site - a K8s (Kubernetes) cluster - on premises.
While a K2cloud K8s cluster deployment on the cloud (a Self-hosted Kubernetes Cluster Installation) is done using Terraform and Helm charts, based on the cloud provider's K8s infrastructure, the on-premises K8s cluster deployment is done by running a script that prepares all required infrastructure components.
There are two variants of on-premises Kubernetes installations for K2view Fabric and TDM, each tailored to different deployment needs and environments. These variants are based on two setup scripts available in the K2view Blueprints repository:
k8s-setup.sh
This variant is designed for production-like environments and installs a multi-node Kubernetes cluster on bare metal servers.
Purpose: Prepares both control plane and worker nodes using kubeadm.
Components Installed: kubeadm, kubectl, kubelet
Features: Control plane initialization (kubeadm init)
Use Case: When you need a realistic, multi-node Kubernetes environment for hosting Fabric and TDM services across multiple machines.
single_node.sh
This variant targets development, testing, or proof-of-concept scenarios and installs a self-contained Kubernetes cluster on a single machine. It is also known as the Kubernetes-in-a-box installation.
Purpose: Sets up a combined control plane and worker node on one host.
Components Installed: MicroK8s
Features: Combined control plane and worker roles on a single host
Use Case: When you need a quick, local setup of Fabric and TDM on a single bare-metal host for testing or evaluation.
These options allow K2view customers to tailor their on-prem Kubernetes deployments of Fabric and TDM based on the scale and purpose of their environment.
Customers looking to install Fabric Web Studio should consider installing it on Docker Compose or Podman. This provides a simpler installation experience than the use of the Kubernetes-in-a-box option.
Please consult the requirements and prerequisites section for the K2cloud on-premises K8s cluster installation.
These recommendations apply to both the multi-node and single-node cluster installations.
To install a K2cloud site on-premises, you must complete the necessary preparation steps in coordination with your K2view representative. The process begins with gathering key configuration details, including TLS certificate files, and ensuring outbound internet access to specific K2view endpoints. These are essential for secure communications, image retrieval, and configuration via the K2cloud Orchestrator.
You must also contact your K2view representative to request access credentials and provisioning information. This includes a Cloud Mailbox ID, a K2view Nexus Repository account for pulling required Docker images, and a list of container images to populate your private registry.
The Baremetal Blueprint article published to K2view's blueprint GitHub repository provides comprehensive guidance for deploying a K2view Fabric Kubernetes cluster on bare-metal (on-premises) infrastructure. It outlines the two supported installation options — single-node and multi-node clusters — using purpose-built setup scripts, and details the prerequisites, tools, and execution steps required for each.
You'll need to clone this repository as described later in this article.
The k8s-setup.sh script automates the deployment of a multi-node Kubernetes cluster on bare-metal infrastructure. It prepares the operating system, installs required Kubernetes components, configures networking, and provides the necessary tooling for control plane and worker node setup. The script also includes options for enabling various optional components and configurations, making it suitable for both production-grade and testing environments.
A time synchronization service (ntpd or chronyd) is required. If you're running the script behind a firewall or proxy, please make sure that HTTPS access to these sources is allowed and that DNS resolution works for the corresponding domains. Optionally, you can mirror or host the required packages and manifests internally for air-gapped environments.
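Before running the script, it may be worth probing the required endpoints from the target host. Below is a minimal sketch; the URL list is illustrative (taken from the package sources listed later in this article) and should be extended with the endpoints your K2view representative provides:

```shell
#!/usr/bin/env bash
# Print OK/FAIL for each endpoint; -f makes curl fail on HTTP errors.
check_url() {
  if curl -fsS --max-time 5 -o /dev/null "$1"; then
    echo "OK   $1"
  else
    echo "FAIL $1"
  fi
}

for u in \
  "https://packages.cloud.google.com/apt" \
  "https://download.docker.com/linux/ubuntu" \
  "https://raw.githubusercontent.com"
do
  check_url "$u"
done
```

Any FAIL line indicates an endpoint that must be opened on the firewall or proxy, or mirrored internally.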
The k8s-setup.sh script from the K2view Blueprints repository installs various components required to bootstrap a Kubernetes cluster on bare metal. Here's a list of internet packages (downloaded via apt, curl, or similar tools) that the script installs or downloads from public sources.
The following are installed via apt-get install:
apt-transport-https
ca-certificates
curl
gnupg
lsb-release
software-properties-common
jq
conntrack
containerd or docker.io (depending on the configuration)
kubelet
kubeadm
kubectl
These packages come from:
the Kubernetes apt repository (https://packages.cloud.google.com/apt)
the Docker apt repository (https://download.docker.com/linux/ubuntu)
The repository GPG keys are fetched with curl/wget:
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | gpg --dearmor -o /usr/share/keyrings/kubernetes-archive-keyring.gpg
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
CNI Plugin (e.g., Calico or Flannel)
The script supports networking setup via:
Calico manifests (https://raw.githubusercontent.com/projectcalico/calico/...)
Flannel manifests (https://raw.githubusercontent.com/coreos/flannel/...)
These are applied with:
kubectl apply -f <CNI_URL>
Helm is installed via the official installer script:
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
Before installing Kubernetes, swap and the internal firewall must be disabled. (Important: these steps will NOT be performed automatically by the installation script!)
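For reference, on an Ubuntu host these two steps can be performed as follows. This is a sketch that assumes the ufw firewall; RHEL-family hosts typically use firewalld instead:

```shell
# Disable swap immediately...
sudo swapoff -a
# ...and keep it disabled across reboots by commenting out swap entries
# in /etc/fstab (lines that are already comments are left untouched).
sudo sed -i '/^[^#].*\sswap\s/ s/^/#/' /etc/fstab

# Stop and disable the host firewall (ufw on Ubuntu).
sudo systemctl disable --now ufw
```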
The official documentation for kubeadm installation can be found at this link: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
How the k8s-setup.sh Script Works
The script orchestrates the following steps:
k8s-setup.sh will change the following settings:
Kubernetes uses a container runtime to run pods (a list of all supported runtimes can be found in the link above). If none is installed, k8s-setup.sh will install and configure containerd as a runtime. If containerd is installed and its service is running, no modifications to its settings will be made. If that's the case, please ensure all parameters required by kubeadm are set.
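If containerd was already present, the parameter kubeadm most commonly relies on is the systemd cgroup driver. A minimal sketch of setting it manually, assuming the default containerd config path (note that the first command replaces any existing config):

```shell
# Regenerate the default containerd config...
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null
# ...switch the runtime to the systemd cgroup driver, which kubeadm
# expects on systemd-based hosts...
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# ...and restart containerd to pick up the change.
sudo systemctl restart containerd
```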
systemd cgroup driver
Kubernetes Installation (kubeadm, kubectl, kubelet)
Control Plane Initialization (if selected)
Runs kubeadm init with the specified network CIDR and hostname
Sets up kubectl for cluster administration
Worker Node Setup (if selected)
Runs kubeadm join using a token to connect to the control plane
Cluster Verification
Verifies node and pod status with kubectl
The following addons will be automatically installed in your Kubernetes cluster:
Note:
A private container registry can be configured if containerd is used as the container runtime.
To push images to this private container registry, pull or load (import) the desired image. (Docker is not installed by default, but you can use ctr as long as you can run sudo.)
sudo ctr image pull -u <USERNAME> docker.share.cloud.k2view.com/k2view/fabric-studio:8.0.0_123
sudo ctr image import /path/to/fabric-studio-8.0.0_123.tar
sudo ctr image tag docker.share.cloud.k2view.com/k2view/fabric-studio:8.0.0_123 registry.localhost/k2view/fabric-studio:8.0.0_123
sudo ctr image push --plain-http registry.localhost/k2view/fabric-studio:8.0.0_123
Now you can instruct the Cloud Orchestrator to use the image from your private container registry.
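To confirm that a pushed image is actually available, you can query the registry through the standard Docker Registry HTTP API v2. This sketch assumes the registry answers plain HTTP at registry.localhost, as in the push command above:

```shell
# List all repositories known to the registry...
curl -s http://registry.localhost/v2/_catalog
# ...and the tags available for the image pushed above.
curl -s http://registry.localhost/v2/k2view/fabric-studio/tags/list
```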
The k8s-setup.sh script offers the ability to optionally enable or configure the following components during installation:
While these components are not currently toggled via script flags, the design allows future extensibility or manual modification to enable or disable them.
git clone https://github.com/k2view/blueprints.git
cd blueprints/baremetal
chmod +x k8s-setup.sh
sudo ./k8s-setup.sh
You will be prompted to select the node type (control plane or worker), set the hostname, and provide any required settings, such as the advertise IP address or the join command.
kubectl get nodes
kubectl get pods -A
Run the kubeadm join command on additional worker nodes to add them to the cluster.
The installation script will automatically configure and install everything required to have K8s running and ready.
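Note that the join token printed by kubeadm init expires (after 24 hours by default). If you add workers later, you can generate a fresh join command on the control-plane node and then wait for the new nodes to become Ready. A sketch:

```shell
# On the control-plane node: print a fresh "kubeadm join ..." command.
sudo kubeadm token create --print-join-command

# After running the printed command on each worker node, wait until all
# nodes report Ready.
kubectl wait --for=condition=Ready node --all --timeout=300s
```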
You should perform the following commands:
Clone the Git repository 'k2view/blueprints' from GitHub:
git clone https://github.com/k2view/blueprints.git
Navigate to the directory 'blueprints/baremetal'
cd blueprints/baremetal
Run the script 'single_node.sh' and follow all the in-screen instructions.
./single_node.sh
This script installs the following:
During the installation, the installer script will prompt you to provide the values prepared in the Prerequisites phase:
Once the setup process is complete (it may take a few minutes) - and before you can create a new space - a few steps need to be taken:
docker load -i /path/to/file.tar.gz
docker tag <IMAGE_HASH> localhost:32000/image-name:tag
docker push localhost:32000/image-name:tag
deploy_certificate.sh /path/to/fullchain.cer /path/to/private.key
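Before creating a space, you can verify that the single-node cluster is healthy. A sketch using MicroK8s' built-in tooling:

```shell
# Wait until MicroK8s reports all services as running.
microk8s status --wait-ready

# Confirm the node is Ready and the system pods are up.
microk8s kubectl get nodes
microk8s kubectl get pods -A
```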
Use the following commands to stop and restart the cluster:
Stopping the Cluster
microk8s stop
Starting the Cluster
microk8s start
Restarting the Cluster
microk8s restart
Uninstalling the Cluster
Delete the spaces and other resources from the Cloud Manager, and then use the following commands to remove the cluster from your machine.
microk8s uninstall