Kubernetes (K8s) Features
This blog post provides an introduction to Kubernetes features and looks at the specific capabilities and benefits of the tool. Kubernetes has many features that help orchestrate containers across multiple hosts, automate the management of K8s clusters, and maximize resource usage through better utilization of infrastructure.
The main aim of Kubernetes, like that of other orchestration systems, is to simplify the work of technical teams by automating many of the application and service deployment processes that were previously carried out manually. In this post we'll look at the Kubernetes features that make day-to-day IT work easier and the benefits for companies that decide to adopt it.
Numerous features of Kubernetes make it possible to manage K8s clusters automatically, orchestrate containers across different hosts, and optimize resource usage by making better use of the underlying infrastructure. Some of the main features are:
Automated rollouts and rollbacks
Kubernetes rolls out changes to your application or its configuration while monitoring application health, ensuring it doesn't kill all your instances at the same time. If something goes wrong, Kubernetes rolls back the change for you. You can also take advantage of a growing ecosystem of deployment solutions. Deployments and updates are automated, with the ability to roll back to previous versions and to pause and resume a rollout.
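As a rough sketch, the Deployment below (the name, image, and replica count are placeholders) asks Kubernetes to roll out changes gradually, never taking down more than one Pod at a time:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                    # hypothetical Deployment name
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1        # keep all but one Pod serving during an update
          maxSurge: 1              # allow one extra Pod while new ones start
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.25      # changing this tag triggers a rolling update

Commands such as kubectl rollout status, kubectl rollout pause, kubectl rollout resume, and kubectl rollout undo cover the monitoring, pause, continue, and rollback behaviour described above.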
Storage orchestration
Automatically mount the storage system of your choice, whether it is local storage, a public cloud provider such as AWS or GCP, or a network storage system such as NFS or iSCSI. Kubernetes can also mount and add storage dynamically.
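For illustration, the sketch below (the claim name, size, and mount path are assumptions, and the cluster's default storage class is used) requests a volume through a PersistentVolumeClaim and mounts it into a Pod:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data-claim             # hypothetical claim name
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi             # size is an arbitrary example
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: app-with-storage
    spec:
      containers:
      - name: app
        image: nginx:1.25
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html   # where the volume appears inside the container
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: data-claim    # binds the Pod to the claim above

Whether the volume actually comes from a cloud disk, NFS, iSCSI, or local storage is decided by the storage backend behind the claim, not by the Pod.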
Self-healing
Kubernetes' ability to self-heal is one of its most appealing features. If a containerized application or an application component goes down, Kubernetes automatically restarts it. Kubernetes constantly checks the health of nodes and containers. Automatic placement, restart, replication, and scaling together provide application self-healing: Kubernetes restarts containers that fail, replaces and reschedules containers when nodes die, kills containers that don't respond to your user-defined health checks, and doesn't advertise them to clients until they are ready to serve.
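A minimal sketch of those health checks, assuming the application exposes /healthz and /ready endpoints on port 80 (both assumptions): the liveness probe tells Kubernetes when to restart the container, and the readiness probe keeps it out of load balancing until it can serve traffic.

    apiVersion: v1
    kind: Pod
    metadata:
      name: self-healing-demo
    spec:
      restartPolicy: Always        # restart containers that exit or crash
      containers:
      - name: app
        image: nginx:1.25
        livenessProbe:             # failing this probe makes Kubernetes kill and restart the container
          httpGet:
            path: /healthz         # hypothetical health endpoint
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
        readinessProbe:            # failing this probe removes the Pod from Service endpoints
          httpGet:
            path: /ready           # hypothetical readiness endpoint
            port: 80
          periodSeconds: 5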
Automatic bin packing
This is one of the most significant features of Kubernetes. Kubernetes automatically places containers based on their resource requirements, limits, and other constraints, without compromising availability. You can mix critical and best-effort workloads to drive up utilization and save even more resources. Kubernetes manages resources by letting you specify how much CPU and memory (RAM) each container in a Pod requests and is allowed to consume, and the scheduler uses those figures to pack containers onto nodes.
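A sketch of those requests and limits, with arbitrary numbers and a placeholder image:

    apiVersion: v1
    kind: Pod
    metadata:
      name: resource-demo
    spec:
      containers:
      - name: app
        image: nginx:1.25
        resources:
          requests:                # what the scheduler uses to pick a node with enough free capacity
            cpu: 250m              # a quarter of a CPU core
            memory: 128Mi
          limits:                  # hard ceiling enforced at runtime
            cpu: 500m
            memory: 256Mi

Best-effort workloads simply omit requests and limits, which lets them soak up whatever capacity the critical workloads leave unused.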
Secret and configuration management
Deploy and update secrets and application configuration without rebuilding your image and without exposing secrets in your stack configuration. Kubernetes has a built-in mechanism, the Secret object, for storing configuration values that you would prefer to keep private. Sensitive information such as usernames, passwords, tokens, and other credentials can be kept out of container images and Pod definitions, and encrypted at rest if the cluster is configured for it. Non-sensitive application configuration is handled the same way through ConfigMaps, which keep a product's configuration consistent and separate from the application code.
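A sketch of both objects, with placeholder names and values; the Pod consumes them as environment variables instead of baking them into the image:

    apiVersion: v1
    kind: Secret
    metadata:
      name: db-credentials         # hypothetical Secret name
    type: Opaque
    stringData:                    # stored base64-encoded by the API server
      username: admin              # placeholder values only
      password: s3cr3t
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: app-config             # non-sensitive configuration lives here
    data:
      LOG_LEVEL: info
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: app
    spec:
      containers:
      - name: app
        image: nginx:1.25
        env:
        - name: DB_USER
          valueFrom:
            secretKeyRef:          # pulled from the Secret at runtime
              name: db-credentials
              key: username
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:       # pulled from the ConfigMap at runtime
              name: app-config
              key: LOG_LEVEL

Updating the Secret or ConfigMap and restarting the Pod picks up new values without rebuilding the image.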
Batch execution
Kubernetes can manage your batch and CI workloads, replacing containers that fail if desired. Using a Job, you can specify the maximum number of Pods that should run in parallel as well as the number of Pods that must complete successfully before the Job is finished, so a single Job can run many Pods at the same time.
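A sketch of such a Job (the container command is a placeholder): completions sets how many Pods must finish successfully, parallelism caps how many run at once, and failed Pods are retried up to backoffLimit times.

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: batch-demo
    spec:
      completions: 5               # the Job is done when 5 Pods have succeeded
      parallelism: 2               # at most 2 Pods run at the same time
      backoffLimit: 3              # retry failed Pods up to 3 times
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: worker
            image: busybox:1.36
            command: ["sh", "-c", "echo processing item && sleep 5"]   # placeholder workload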
Horizontal scaling
Scale your application up and down
with a simple command, with a UI, or automatically based on CPU usage. In
Kubernetes, a HorizontalPodAutoscaler automatically updates a workload resource
(such as a Deployment or StatefulSet), with the aim of automatically scaling
the workload to match demand.
Horizontal scaling means that the
response to increased load is to deploy more Pods. This is different from
vertical scaling, which for Kubernetes would mean assigning more resources (for
example: memory or CPU) to the Pods that are already running for the workload.
If the load decreases, and the
number of Pods is above the configured minimum, the HorizontalPodAutoscaler
instructs the workload resource (the Deployment, StatefulSet, or other similar
resource) to scale back down.
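Scaling can be triggered manually, for example with kubectl scale deployment web --replicas=5, or automatically by a HorizontalPodAutoscaler such as the sketch below (the target Deployment name and the 70% CPU threshold are assumptions):

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web-hpa
    spec:
      scaleTargetRef:              # the workload resource to scale
        apiVersion: apps/v1
        kind: Deployment
        name: web                  # hypothetical Deployment name
      minReplicas: 2
      maxReplicas: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70   # add Pods when average CPU use exceeds 70%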
Service discovery and load balancing
No need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them. In simple terms, service discovery is the process of figuring out how to connect to a service. Kubernetes supports service discovery through two approaches:
- Environment variables injected into Pods
- DNS-based service discovery, which resolves Service names to the Service's IP address
Load balancing then identifies the healthy Pods behind a Service, by DNS name or IP address, and distributes incoming traffic across them so that no single Pod is overwhelmed.
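As a sketch (names and ports are placeholders), a Service gives a set of Pods a stable virtual IP and DNS name, here web.default.svc.cluster.local for a Service named web in the default namespace, and spreads traffic across the Pods that match its selector:

    apiVersion: v1
    kind: Service
    metadata:
      name: web                    # in-cluster DNS name: web.default.svc.cluster.local
    spec:
      type: ClusterIP
      selector:
        app: web                   # traffic is load-balanced across Pods with this label
      ports:
      - port: 80                   # port clients connect to on the Service
        targetPort: 8080           # port the Pods listen on (assumed)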
In this article, we have walked through the key features of Kubernetes at a conceptual level.