This document outlines the upgrade and rollback procedures for upgrading the Fabric application, which runs on a Kubernetes cluster.
Note: This procedure applies only to minor upgrades on versions above 8.0. Major upgrades may require additional steps, such as schema migrations, configuration changes, or planned downtime. Always consult the release-specific upgrade documentation for major version changes.
Prerequisites:
A deployment is used for Fabric Web Studio, and a statefulset is used for Fabric.
List the fabric-deployment, including container versions and their state, in the space-k2view namespace:
kubectl get deployment fabric-deployment -n space-k2view -o=custom-columns='NAME:.metadata.name,READY:.status.readyReplicas,AVAILABLE:.status.availableReplicas,IMAGE:.spec.template.spec.containers[0].image'
or
kubectl get statefulset fabric-statefulset -n space-k2view -o=custom-columns='NAME:.metadata.name,READY:.status.readyReplicas,AVAILABLE:.status.availableReplicas,IMAGE:.spec.template.spec.containers[0].image'
This command outputs the deployment's name, the number of ready and available replicas, and the current image version of the container.
To check which version of the Fabric deployment is currently running, use the following command with a filter to display the image version:
kubectl get deployment fabric-deployment -n space-k2view -o=jsonpath='{.spec.template.spec.containers[0].image}'
or
kubectl get statefulset fabric-statefulset -n space-k2view -o=jsonpath='{.spec.template.spec.containers[0].image}'
This command will output the current image version used by the deployment.
Use the kubectl set image command to update the deployment with the new image version:
kubectl set image deployment/fabric-deployment fabric-container=<new-image>:<tag> -n space-k2view
or
kubectl set image statefulset/fabric-statefulset fabric-container=<new-image>:<tag> -n space-k2view
Replace <new-image>:<tag> with the specific image name and tag for the new version.
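As a quick sanity check before running kubectl set image, the image reference can be split into its name and tag with plain shell parameter expansion. The registry path below is a made-up example, not the actual Fabric image:

```shell
# Hypothetical image reference; substitute your actual registry path and tag.
image="registry.example.com/k2view/fabric:8.1.2"
name="${image%:*}"    # everything before the last colon
tag="${image##*:}"    # everything after the last colon
echo "name=$name tag=$tag"
```

This catches a missing or malformed tag before it reaches the cluster.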
Monitor the rollout until it completes:
kubectl rollout status deployment/fabric-deployment -n space-k2view
or
kubectl rollout status statefulset/fabric-statefulset -n space-k2view
Check the status of the new pods to ensure they are running correctly:
kubectl get pods -n space-k2view
Verify that Fabric is functioning as expected: access the Web Studio and generate a test code snippet to confirm it operates correctly.
Check that the generated code is committed and pushed to the Git repository.
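A minimal sketch of that Git check, demonstrated against a throwaway repository created on the fly; in practice, point repo at the Fabric project checkout (the path and commit details here are illustrative):

```shell
# Demonstration repo; replace with the path to the real project checkout.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.email=ci@example.com -c user.name=ci \
  commit -q --allow-empty -m "initial commit"

# An empty `git status --porcelain` output means no uncommitted changes.
if [ -z "$(git -C "$repo" status --porcelain)" ]; then
  status="clean"
else
  status="dirty"
fi
echo "working tree: $status"
```

Against a repository with a configured remote, `git rev-list --count @{upstream}..HEAD` additionally reports how many commits have not yet been pushed.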
If the upgrade fails or any issues arise, please follow the steps below to roll back to the previous version.
Roll back to the previous revision:
kubectl rollout undo deployment/fabric-deployment -n space-k2view
or
kubectl rollout undo statefulset/fabric-statefulset -n space-k2view
To roll back to a specific revision, list revisions with kubectl rollout history and pass --to-revision=<revision> to the undo command.
Monitor the rollback until it completes:
kubectl rollout status deployment/fabric-deployment -n space-k2view
or
kubectl rollout status statefulset/fabric-statefulset -n space-k2view
Check that the pods are running the previous version:
kubectl get pods -n space-k2view
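The rollout status checks above can be wrapped in a small retry helper so a transiently failing check does not abort the procedure. This is a generic pure-shell sketch (the retry function is not part of kubectl), demonstrated with a trivial stand-in command:

```shell
# retry <attempts> <command...>: rerun the command until it succeeds,
# sleeping one second between attempts; give up after <attempts> tries.
retry() {
  attempts=$1; shift
  i=1
  while ! "$@"; do
    if [ "$i" -ge "$attempts" ]; then
      echo "giving up after $i attempts" >&2
      return 1
    fi
    i=$((i + 1))
    sleep 1
  done
}

# Stand-in for, e.g.:
#   retry 30 kubectl rollout status deployment/fabric-deployment -n space-k2view
retry 3 true && result="ok"
echo "$result"
```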
If you encounter issues during the upgrade or the rollback process, consider the following general troubleshooting steps:
Inspect the application logs:
kubectl logs deployment/fabric-deployment -n space-k2view
or
kubectl logs statefulset/fabric-statefulset -n space-k2view
Review recent cluster events, sorted by creation time:
kubectl get events -n space-k2view --sort-by=.metadata.creationTimestamp
Verify the related services, config maps, and persistent volume claims:
kubectl get svc,configmap,pvc -n space-k2view
Use the kubectl describe command to get detailed information about pod failures or restarts:
kubectl describe pod <pod-name> -n space-k2view
To ensure that the user running the kubectl commands has the necessary permissions, apply the following ClusterRole and ClusterRoleBinding configurations.
Create or update a ClusterRole named fabric--manager with the required permissions by saving the following configuration in a file named fabric--clusterrole.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fabric--manager
rules:
  - apiGroups: ["", "apps"]
    resources:
      - pods
      - pods/exec
      - deployments
      - statefulsets
      - replicasets
      - services
      - configmaps
      - persistentvolumeclaims
    verbs:
      - get
      - list
      - watch
      - create
      - delete
      - deletecollection
      - patch
      - update
  - apiGroups: ["apps"]
    resources:
      - deployments/rollback
      - deployments/scale
      - statefulsets/scale
    verbs:
      - update
  - apiGroups: ["extensions"]
    resources:
      - replicasets
    verbs:
      - delete
Apply the ClusterRole configuration:
kubectl apply -f fabric--clusterrole.yaml
Bind the ClusterRole to the user executing the kubectl commands. Create or update the ClusterRoleBinding by saving the following configuration in a file named fabric--clusterrolebinding.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fabric--manager-binding
subjects:
  - kind: User
    name: <username> # Replace with the actual username
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: fabric--manager
  apiGroup: rbac.authorization.k8s.io
Apply the ClusterRoleBinding configuration:
kubectl apply -f fabric--clusterrolebinding.yaml
To confirm the binding took effect, impersonate the user with kubectl auth can-i, for example:
kubectl auth can-i update deployments -n space-k2view --as <username>