How to Restart Kubernetes Pods With Kubectl

Kubernetes Pods should operate without intervention, but sometimes you might hit a problem where a container's not working the way it should. Restarting the Pod can help restore operations to normal.

Kubectl doesn't have a direct way of restarting individual Pods. Pods are meant to stay running until they're replaced as part of your deployment routine, normally when you release a new version of your container image.

Here are a few techniques you can use when you want to restart Pods without building a new image or running your CI pipeline. They can help when you think a fresh set of containers will get your workload running again.

Scaling the Replica Count

Although there's no kubectl restart, you can achieve something similar by scaling the number of container replicas you're running. This works when your Pod is part of a Deployment, StatefulSet, ReplicaSet, or ReplicationController.

kubectl scale deployment my-deployment --replicas=0
kubectl scale deployment my-deployment --replicas=3

Scaling your Deployment down to 0 will remove all your existing Pods. Wait until the Pods have been terminated, using kubectl get pods to check their status, then rescale the Deployment back to your intended replica count. Kubernetes will create new Pods with fresh container instances.
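
If you'd rather script the wait instead of polling by hand, here's a minimal sketch of the whole cycle, assuming your Pods carry a hypothetical app=my-app label:

kubectl scale deployment my-deployment --replicas=0
kubectl wait --for=delete pod --selector=app=my-app --timeout=120s
kubectl scale deployment my-deployment --replicas=3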

Downtimeless Restarts With Rollouts

Manual replica count adjustment comes with a limitation: scaling down to 0 creates a period of downtime where there are no Pods available to serve your users. An alternative option is to initiate a rolling restart, which lets you replace a set of Pods without downtime. It's available with Kubernetes v1.15 and later.

kubectl rollout restart deployment my-deployment

When you run this command, Kubernetes will gradually terminate and replace your Pods while ensuring some containers stay operational throughout. The rollout's phased nature lets you keep serving customers while effectively "restarting" your Pods behind the scenes.

After the rollout completes, you'll have the same number of replicas as before, but each container will be a fresh instance. You can check the status of the rollout by using kubectl get pods to list Pods and watch as they get replaced. There's also kubectl rollout status deployment/my-deployment, which shows the current progress.
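
To follow along live, you can run both monitoring commands side by side; --watch streams Pod changes to your terminal as they happen:

kubectl get pods --watch
kubectl rollout status deployment/my-deployment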

kubectl rollout works with Deployments, DaemonSets, and StatefulSets. Most of the time, this should be your go-to option when you want to terminate your containers and immediately start new ones.
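
The restart subcommand reads the same for the other controller types; substitute your own resource names for the hypothetical ones shown here:

kubectl rollout restart daemonset my-daemonset
kubectl rollout restart statefulset my-statefulset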

(Ab)using ReplicaSet Monitoring

When your Pod's part of a ReplicaSet or Deployment, you can initiate a replacement by simply deleting it. The ReplicaSet will notice the Pod has vanished, as the number of container instances will drop below the target replica count.

kubectl delete pod my-pod

The ReplicaSet will intervene to restore the minimum availability level. It'll automatically create a new Pod, starting a fresh container to replace the old one.

This is technically a side-effect: it's better to use the scale or rollout commands, which are more explicit and designed for this use case. Nonetheless, manual deletions can be a useful technique if you know the identity of a single misbehaving Pod within a ReplicaSet or Deployment. A rollout would replace all the managed Pods, not just the one presenting a fault.
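
As a sketch of that workflow, assuming a hypothetical app=my-app label and an illustrative generated Pod name, you'd list the managed Pods, spot the faulty one, and delete only it:

kubectl get pods --selector=app=my-app
kubectl delete pod my-app-5d4f8c7b6-xk2lp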

You can expand upon the technique to replace all failed Pods using a single command:

kubectl delete pods --field-selector=status.phase=Failed

Any Pods in the Failed state will be terminated and removed. The replication controller will notice the discrepancy and add new Pods to move the state back to the configured replica count. If you're confident the old Pods failed due to a transient fault, the new ones should stay running in a healthy state.
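
The command above only covers the current namespace. If you're confident the cleanup should run cluster-wide, and your credentials permit it, kubectl's --all-namespaces flag extends the reach:

kubectl delete pods --field-selector=status.phase=Failed --all-namespaces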

Changing Pod Annotations

Another way of forcing Pods to be replaced is to add or modify an annotation, although it needs to change in the right place. Annotating a running Pod directly only edits that Pod in place: the ReplicaSet matches its Pods by label, so it sees no reason to intervene and nothing restarts. To trigger a replacement, the annotation has to go onto the Pod template inside your Deployment, which Kubernetes treats as a new revision to roll out.

You can use the kubectl patch command to update a template annotation:

kubectl patch deployment my-deployment --patch '{"spec": {"template": {"metadata": {"annotations": {"app-version": "2"}}}}}'

This command updates the app-version annotation on my-deployment's Pod template, prompting a rolling replacement of its Pods. This is how kubectl rollout restart works internally, too: it stamps a kubectl.kubernetes.io/restartedAt annotation onto the template. The kubectl annotate command, with its --overwrite flag for changing existing annotations, remains useful for managing metadata on individual objects, but a Pod-level annotation won't cause a restart.
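
To confirm the annotation landed on the template rather than on a live Pod, a quick JSONPath query against the Deployment will show it:

kubectl get deployment my-deployment -o jsonpath='{.spec.template.metadata.annotations}'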

Updating a Deployment's environment variables has a similar effect to changing template annotations. This is ideal when you're already exposing an app version number, build ID, or deploy date in your environment.

kubectl set env deployment my-deployment APP_VERSION="2"
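
Once the resulting rollout settles, you can double-check the value using kubectl set env's --list mode, which prints the Deployment's current variables:

kubectl set env deployment my-deployment --list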

Conclusion

Kubernetes Pods should usually run until they're replaced by a new deployment. As a result, there's no direct way to "restart" a single Pod. If one of your containers experiences an issue, aim to replace it instead of restarting it. The subtle change in terminology better matches the stateless operating model of Kubernetes Pods.

Scale your replica count, initiate a rollout, or manually delete Pods from a ReplicaSet to terminate old containers and start fresh new instances. Rollouts are the preferred solution for modern Kubernetes releases, but the other approaches work too and can be better suited to specific scenarios.

Foremost in your mind should be these two questions: do you want all the Pods in your Deployment or ReplicaSet to be replaced, and is any downtime acceptable? Manual Pod deletions can be ideal if you want to "restart" an individual Pod without downtime, provided you're running more than one replica, whereas scaling is an option when the rollout command can't be used and you're not concerned about a brief period of unavailability.


Source: https://www.cloudsavvyit.com/14587/how-to-restart-kubernetes-pods-with-kubectl/
