This document details the upgrade and rollback procedure for the Fabric application running on a Kubernetes cluster. The procedure begins by listing the fabric-deployment, its container versions, and their state.
Note: This procedure is valid only for minor upgrades of versions 8.0 and above. Major upgrades may require additional steps such as schema migrations, configuration changes, or downtime planning. Always consult the specific upgrade documentation for major version changes.
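As a quick guard before starting, the major version of the current and target images can be compared; a minimal shell sketch, assuming semantic-version tags (the version values below are hypothetical — in practice the current version comes from the image tag returned by the kubectl queries below):

```shell
#!/bin/sh
# Hypothetical current and target versions
current="8.0.3"
target="8.1.2"

# Extract the major version component (everything before the first dot)
current_major="${current%%.*}"
target_major="${target%%.*}"

if [ "$current_major" = "$target_major" ]; then
  echo "minor upgrade: this procedure applies"
else
  echo "major upgrade: consult the version-specific upgrade documentation"
fi
```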
Prerequisites:
List the fabric-deployment, including container versions and their state, in the space-k2view namespace:
kubectl get deployment fabric-deployment -n space-k2view -o=custom-columns='NAME:.metadata.name,READY:.status.readyReplicas,AVAILABLE:.status.availableReplicas,IMAGE:.spec.template.spec.containers[0].image'
or
kubectl get statefulset fabric-statefulset -n space-k2view -o=custom-columns='NAME:.metadata.name,READY:.status.readyReplicas,AVAILABLE:.status.availableReplicas,IMAGE:.spec.template.spec.containers[0].image'
This command outputs the deployment's name, the number of ready and available replicas, and the current image version of the container.
Check the current image version used by the deployment:
kubectl get deployment fabric-deployment -n space-k2view -o=jsonpath='{.spec.template.spec.containers[0].image}'
or
kubectl get statefulset fabric-statefulset -n space-k2view -o=jsonpath='{.spec.template.spec.containers[0].image}'
This command outputs the image version currently in use.
Use the kubectl set image command to update the deployment with the new image version:
kubectl set image deployment/fabric-deployment fabric-container=<new-image>:<tag> -n space-k2view
or
kubectl set image statefulset/fabric-statefulset fabric-container=<new-image>:<tag> -n space-k2view
Replace <new-image>:<tag> with the specific image name and tag of the new version.
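For reference, the repository and tag can be separated from an image string with plain shell parameter expansion; a sketch using a hypothetical image reference:

```shell
#!/bin/sh
# Hypothetical image reference in the <new-image>:<tag> form used above
image="registry.example.com/fabric:8.0.4"

# Strip the shortest match of ':*' from the end to get the repository,
# and the longest match of '*:' from the front to get the tag.
repository="${image%:*}"
tag="${image##*:}"

echo "repository=$repository"
echo "tag=$tag"
```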
Monitor the rollout status until the new version is fully rolled out:
kubectl rollout status deployment/fabric-deployment -n space-k2view
or
kubectl rollout status statefulset/fabric-statefulset -n space-k2view
Check that all pods are running with the new version:
kubectl get pods -n space-k2view
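A simple way to confirm the pods have settled is to count entries whose STATUS column is not Running; a sketch over sample kubectl get pods output (the pod names and statuses below are illustrative):

```shell
#!/bin/sh
# Illustrative output in the NAME/READY/STATUS/RESTARTS/AGE layout
# printed by `kubectl get pods -n space-k2view`
pods='NAME                             READY   STATUS             RESTARTS   AGE
fabric-deployment-7d4b9c-abcde   1/1     Running            0          2m
fabric-deployment-7d4b9c-fghij   0/1     CrashLoopBackOff   3          2m'

# Skip the header row, then count pods whose STATUS column is not Running
not_running=$(printf '%s\n' "$pods" | awk 'NR > 1 && $3 != "Running" {n++} END {print n + 0}')
echo "pods not running: $not_running"
```

A result of 0 indicates every pod reached the Running state.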
Verify that Fabric is functioning as expected, and check that the generated code is committed and pushed to the Git repository.
If the upgrade fails or any issues are encountered, follow the steps below to roll back to the previous version.
Roll back to the previous version:
kubectl rollout undo deployment/fabric-deployment -n space-k2view
or
kubectl rollout undo statefulset/fabric-statefulset -n space-k2view
Monitor the rollback status:
kubectl rollout status deployment/fabric-deployment -n space-k2view
or
kubectl rollout status statefulset/fabric-statefulset -n space-k2view
Check that the pods are running with the previous version:
kubectl get pods -n space-k2view
Confirm that the application is functioning as expected following the rollback.
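One way to double-check the rollback is to compare the image recorded before the upgrade with the image the workload reports after the undo completes; a minimal sketch with hypothetical values (in practice both would come from the jsonpath query shown earlier):

```shell
#!/bin/sh
# Hypothetical images: the one recorded before the upgrade and the one
# reported by the deployment after `kubectl rollout undo` completes
pre_upgrade_image="registry.example.com/fabric:8.0.3"
post_rollback_image="registry.example.com/fabric:8.0.3"

if [ "$pre_upgrade_image" = "$post_rollback_image" ]; then
  echo "rollback restored the previous image"
else
  echo "image mismatch: rollback did not restore $pre_upgrade_image"
fi
```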
Note: After performing either an upgrade or a rollback, document the actions taken and validate the application's functionality thoroughly before proceeding with further steps.
If you encounter issues during the upgrade or the rollback process, consider the following general troubleshooting steps:
Check the application logs:
kubectl logs deployment/fabric-deployment -n space-k2view
or
kubectl logs statefulset/fabric-statefulset -n space-k2view
Review recent events in the namespace, sorted by creation time:
kubectl get events -n space-k2view --sort-by=.metadata.creationTimestamp
Inspect the related services, config maps, and persistent volume claims:
kubectl get svc,configmap,pvc -n space-k2view
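When the event list is long, filtering to Warning-type events narrows things down quickly; a sketch over sample kubectl get events output (the event rows below are illustrative):

```shell
#!/bin/sh
# Illustrative output in the LAST SEEN/TYPE/REASON/OBJECT/MESSAGE layout
# printed by `kubectl get events`; only the TYPE column matters here
events='LAST SEEN   TYPE      REASON      OBJECT                        MESSAGE
2m          Normal    Scheduled   pod/fabric-deployment-abcde   Successfully assigned
1m          Warning   BackOff     pod/fabric-deployment-fghij   Back-off restarting container'

# Skip the header row, then count events whose TYPE column is Warning
warnings=$(printf '%s\n' "$events" | awk 'NR > 1 && $2 == "Warning" {n++} END {print n + 0}')
echo "warning events: $warnings"
```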
Network Connectivity: Ensure that no network issues or blocked connections are affecting Fabric's ability to operate. This can include checking network policies, service endpoints, and DNS resolution within the cluster.
Check Pod Health: Use the describe command to get detailed information about pod failures or restarts:
kubectl describe pod <pod-name> -n space-k2view
Resource Limits and Requests: Review the resource limits and requests for Fabric pods to ensure they are not being throttled or evicted due to insufficient resources.
Cluster Health: Ensure the overall health of the Kubernetes cluster is good, and there are no node issues, resource shortages, or other factors that could affect the deployment.
Contact Support: If issues persist, consider reaching out to support with the relevant logs, steps taken, and any specific errors encountered.
To ensure that the user running the kubectl commands has the necessary permissions, apply the following ClusterRole and ClusterRoleBinding configurations.
Create or update a ClusterRole named fabric--manager with the required permissions by saving the following configuration in a file named fabric--clusterrole.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fabric--manager
rules:
  - apiGroups: ["", "apps"]
    resources:
      - pods
      - pods/exec
      - deployments
      - statefulsets
      - replicasets
      - services
      - configmaps
      - persistentvolumeclaims
    verbs:
      - get
      - list
      - watch
      - create
      - delete
      - deletecollection
      - patch
      - update
  - apiGroups: ["apps"]
    resources:
      - deployments/rollback
      - deployments/scale
    verbs:
      - update
  - apiGroups: ["extensions"]
    resources:
      - replicasets
    verbs:
      - delete
Apply the ClusterRole configuration:
kubectl apply -f fabric--clusterrole.yaml
Bind the ClusterRole to the user executing the kubectl commands. Create or update the ClusterRoleBinding by saving the following configuration in a file named fabric--clusterrolebinding.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fabric--manager-binding
subjects:
  - kind: User
    name: <username> # Replace with the actual username
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: fabric--manager
  apiGroup: rbac.authorization.k8s.io
Apply the ClusterRoleBinding configuration:
kubectl apply -f fabric--clusterrolebinding.yaml