Wednesday, May 3, 2023

Kubernetes (K8s) main Features Overview #DevOps

Kubernetes (K8s) Features:-

This blog post will provide an introduction to Kubernetes features, covering what the tool offers and the benefits it brings. Kubernetes has many features that help orchestrate containers across multiple hosts, automate the management of K8s clusters, and maximize resource usage through better utilization of infrastructure.

The main aim of Kubernetes, like other orchestration systems, is to simplify the work of technical teams by automating many application and service deployment processes that were previously carried out manually. Below, we'll walk through the Kubernetes features that improve day-to-day IT work and the benefits for companies that decide to use it.



Some main features are:



Automated rollouts and rollbacks

Kubernetes rolls out changes to your application or its configuration while monitoring application health to ensure it doesn't kill all your instances at the same time. If something goes wrong, Kubernetes will roll back the change for you. You can take advantage of a growing ecosystem of deployment solutions, automating deployments and updates with the ability to roll back to previous versions and to pause and resume a deployment.
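As an illustration, a rolling update can be configured directly on a Deployment; the sketch below uses hypothetical names and images:

```yaml
# Hypothetical Deployment illustrating a rolling-update strategy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app            # example name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1    # at most one Pod down during a rollout
      maxSurge: 1          # at most one extra Pod above the replica count
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: nginx:1.25  # example image
```

With such a Deployment in place, `kubectl rollout undo deployment/web-app` returns to the previous revision, and `kubectl rollout pause` / `kubectl rollout resume` pause and continue a rollout.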


Storage orchestration

Automatically mount the storage system of your choice, whether from local storage, a public cloud provider such as AWS or GCP, or a network storage system such as NFS or iSCSI. Storage can be mounted and added dynamically.
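For example, storage is typically requested through a PersistentVolumeClaim and then mounted into a Pod; the names and sizes below are illustrative:

```yaml
# Hypothetical claim for 1Gi of storage; Kubernetes binds it to a matching volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# Pod that mounts the claim at /data.
apiVersion: v1
kind: Pod
metadata:
  name: data-pod
spec:
  containers:
  - name: app
    image: nginx:1.25
    volumeMounts:
    - mountPath: /data
      name: data-volume
  volumes:
  - name: data-volume
    persistentVolumeClaim:
      claimName: data-claim
```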


Self-healing

Kubernetes' ability to self-heal is one of its most appealing features. If a containerized app or an application component goes down, Kubernetes will automatically restart it. Kubernetes constantly checks the health of nodes and containers. Auto placement, auto restart, auto replication, and auto scaling provide application self-healing: Kubernetes restarts containers that fail, replaces and reschedules containers when nodes die, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.
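The user-defined health checks behind this behavior are expressed as probes on the container; a minimal sketch, where the paths and ports are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
  - name: app
    image: nginx:1.25
    livenessProbe:           # restart the container if this check fails
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:          # withhold traffic until this check passes
      httpGet:
        path: /
        port: 80
      periodSeconds: 5
```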

 

Automatic bin packing

This is one of the most significant features of Kubernetes: it automatically places containers based on their resource requirements, limits, and other constraints, without compromising availability. You can mix critical and best-effort workloads to drive up utilization and save even more resources. Kubernetes manages resources for you, and can control how much CPU and memory each container in a Pod uses.
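Bin packing is driven by the resource requests and limits declared on each container; a hypothetical example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sized-app
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:          # the scheduler packs Pods onto nodes using these
        cpu: 250m
        memory: 128Mi
      limits:            # hard ceilings enforced at runtime
        cpu: 500m
        memory: 256Mi
```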


Secret and configuration management

Deploy and update secrets and application configuration without rebuilding your image and without exposing secrets in your stack configuration.

Kubernetes has a built-in mechanism for storing configuration values that you would prefer to keep private. Sensitive information such as usernames, passwords, and other credentials can be kept confidential. Kubernetes can also manage app configurations, establishing and maintaining consistency of a product's functional, authentication, performance, and physical attributes.
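As a sketch of the configuration side, a ConfigMap can be injected into a Pod as environment variables without rebuilding the image; all names here are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: info
  FEATURE_FLAG: "true"
---
apiVersion: v1
kind: Pod
metadata:
  name: configured-app
spec:
  containers:
  - name: app
    image: nginx:1.25
    envFrom:
    - configMapRef:
        name: app-config   # every key becomes an environment variable
```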

 

Batch execution

Kubernetes can manage your batch and CI workloads, replacing containers that fail if desired. You can specify the maximum number of Pods that should run in parallel, as well as the number of Pods that must complete their tasks before the Job is finished. A Job can also be used to run multiple Pods at the same time.
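A Job manifest makes those parallelism and completion settings concrete; a hypothetical sketch:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-demo
spec:
  completions: 5       # total Pods that must finish successfully
  parallelism: 2       # Pods allowed to run at the same time
  backoffLimit: 4      # retries before the Job is marked failed
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.36    # example image and command
        command: ["sh", "-c", "echo processing item && sleep 2"]
```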


Horizontal scaling

Scale your application up and down with a simple command, with a UI, or automatically based on CPU usage. In Kubernetes, a HorizontalPodAutoscaler automatically updates a workload resource (such as a Deployment or StatefulSet), with the aim of automatically scaling the workload to match demand.

Horizontal scaling means that the response to increased load is to deploy more Pods. This is different from vertical scaling, which for Kubernetes would mean assigning more resources (for example: memory or CPU) to the Pods that are already running for the workload.

If the load decreases, and the number of Pods is above the configured minimum, the HorizontalPodAutoscaler instructs the workload resource (the Deployment, StatefulSet, or other similar resource) to scale back down.
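A HorizontalPodAutoscaler targeting a hypothetical Deployment might look like this:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app              # assumed Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80 # add Pods when average CPU passes 80%
```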


Service discovery and load balancing

No need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them. In simple words, service discovery is the process of figuring out how to connect to a service. Kubernetes service discovery finds services through two approaches:

  • Using the environment variables
  • Using DNS based service discovery to resolve the service names to the service’s IP address

Load balancing identifies containers by DNS name or IP address and redistributes traffic from heavily loaded to lightly loaded instances depending on traffic congestion.
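A Service ties these two ideas together: it gives a set of Pods a stable DNS name and load-balances across them. A minimal sketch, with assumed names and ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app        # resolvable in-cluster as web-app.<namespace>.svc
spec:
  selector:
    app: web-app       # traffic is load-balanced across matching Pods
  ports:
  - port: 80           # the Service's stable port
    targetPort: 8080   # the container port behind it
```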

Conclusion

In this article, we have gone through a conceptual understanding of the key features of Kubernetes.

Monday, May 1, 2023

Kubernetes (K8s) Overview # The history of K8s (DevOps)

This blog post will provide an introduction to Kubernetes so that you can understand the motivation behind the tool, what it is, and how you can use it.



PAST: -

Kubernetes has its roots in Google’s internal Borg System, introduced between 2003 and 2004. Later, in 2013, Google released another project known as Omega, a flexible, scalable scheduler for large compute clusters.

Google introduced the Borg system around 2003-2004. It started off as a small-scale project, with about 3-4 people initially, in collaboration with a new version of Google's search engine. Borg was a large-scale internal cluster management system that ran hundreds of thousands of jobs from many thousands of different applications across many clusters, each with up to tens of thousands of machines.

Following Borg, Google introduced the Omega cluster management system, a flexible, scalable scheduler for large compute clusters. In mid-2014, Google introduced Kubernetes as an open-source version of Borg; the first GitHub commit for Kubernetes was made on June 7, 2014. The following month, on July 10, 2014, Microsoft, Red Hat, IBM, and Docker joined the Kubernetes community.

On July 21, 2015, Kubernetes v1.0 was released, and Google partnered with the Linux Foundation to form the Cloud Native Computing Foundation (CNCF). The CNCF aims to build sustainable ecosystems and to foster a community around a constellation of high-quality projects that orchestrate containers as part of a microservices architecture.

On July 11, 2016, Minikube, a tool that makes it easy to run Kubernetes locally, was released. In the same year, on September 26, Kubernetes 1.4 introduced a new tool, kubeadm, that helps improve Kubernetes' installability. This release provided easier setup, stateful application support with integrated Helm, and new cross-cluster federation features. On September 29, 2016, the Pokémon GO Kubernetes case study was released. Pokémon GO was the largest Kubernetes deployment on Google Container Engine ever, and luckily its creators published a case study about how they did it.

On March 2, 2018, the first beta version of Kubernetes 1.10 was announced. Users could test the production-ready versions of Kubelet TLS bootstrapping, API aggregation, and more detailed storage metrics. On May 1, Google also launched the Kubernetes Podcast, hosted by Craig Box.


PRESENT: -

Today, Kubernetes is the container orchestration solution to use. Today, Kubernetes has 1800+ contributors, 500+ meetups worldwide, and 42,000+ users. 83% of enterprises surveyed by the Cloud Native Computing Foundation (CNCF) in 2020 are using Kubernetes.

Kubernetes is now a powerful container management tool that automates the deployment and management of containers. Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

Part of the reason why Kubernetes has become so popular is that it was built on top of Docker. Containers have a long history in Linux and BSD variants; however, Docker made containers extremely popular by focusing on the user experience, making building and running containers very easy. Kubernetes built on the popularity of containers and made running (i.e., orchestrating) containers on a cluster of compute nodes easy.

Another reason for Kubernetes' popularity and extensive adoption is that it didn't change the model for running software too much.

Another big aspect of Kubernetes popularity is its strong community. For starters, Kubernetes was donated to a vendor-neutral home in 2015 as it hit version 1.0: the Cloud Native Computing Foundation. There is also a wide range of community SIGs (special interest groups) that target different areas in Kubernetes as the project moves forwards. They continuously add new features and make it even more user friendly.

The Cloud Native Computing Foundation also organizes CloudNativeCon/KubeCon, which, as of this writing, is the largest open-source event in the world. The event, normally held up to three times a year, gathers thousands of technologists and professionals who want to improve Kubernetes and its ecosystem, as well as make use of some of the new features released every three months.


FUTURE: -

Kubernetes has seen a dramatic increase in visibility and adoption throughout 2017, with all major cloud providers now offering their own native Kubernetes service, and several container orchestration platforms rebuilding with Kubernetes as an underpinning.  With Azure, Google Cloud, and AWS all now offering or having announced a managed Kubernetes service, creating a Cluster in the near future should be trivial and have a much lower barrier to entry than it has historically. A Kubernetes Cluster will soon be as trivial to create as any other managed cloud service.

In the year or so we’ve been running our Cluster we’ve seen it go from strength to strength, with new versions being released frequently containing significant improvements and exciting new features, and meanwhile more and more vendors are adopting it. We think in the next year, Kubernetes is going to be everywhere, and we’ll start seeing even more exciting technology being built on top of it thanks to everyone having extra capacity from not trying to re-invent the basics.

One of the main challenges developers will face in the future is how to focus more on the details of the code than on the infrastructure that code runs on. Serverless is emerging as one of the leading architectural paradigms to address that challenge. There are already very advanced frameworks, such as Knative and OpenFaaS, that use Kubernetes to abstract the infrastructure from the developer.

Kubernetes’ popularity is on the rise, with use cases in mission-critical sectors such as finance, edtech, and traditional enterprise IT. However, Kubernetes faces a few challenges, chief among them the extremely complex nature of developing and running distributed frameworks at scale. Despite this, experts believe Kubernetes will become a ‘universal control plane’ to manage containers, virtual machines, and other modern applications.

We’ve shown a brief peek at Kubernetes in this article, but this is just the tip of the iceberg. There are many more resources, features, and configurations users can leverage.


Kubernetes (K8s) Service Account, Token, Secrets, Authentication and Authorization in RBAC Overview, #DevOps

In this blog, we will be covering Service Accounts, tokens, Secrets, and RBAC (role-based access control), which is the way to outline which users can do what within a Kubernetes cluster. A Kubernetes cluster is a set of node machines for running containerized applications.

This blog introduces the ServiceAccount object in Kubernetes, providing information about how service accounts work, use cases, limitations, alternatives, and links to resources for additional guidance.

Service Account

A service account provides an identity for processes that run in a Pod and maps to a ServiceAccount object. When you authenticate to the API server, you identify yourself as a particular user.

A service account is a type of non-human account that, in Kubernetes, provides a distinct identity in a Kubernetes cluster. Application Pods, system components, and entities inside and outside the cluster can use a specific ServiceAccount's credentials to identify as that ServiceAccount. This identity is useful in various situations, including authenticating to the API server or implementing identity-based security policies.



Default service accounts

When you create a cluster, Kubernetes automatically creates a ServiceAccount object named default for every namespace in your cluster. The default service accounts in each namespace get no permissions by default other than the default API discovery permissions that Kubernetes grants to all authenticated principals if role-based access control (RBAC) is enabled. 

If you delete the default ServiceAccount object in a namespace, the control plane replaces it with a new one.  If you deploy a Pod in a namespace, and you don't manually assign a ServiceAccount to the Pod, Kubernetes assigns the default ServiceAccount for that namespace to the Pod.

Properties of Service Account: -

Namespaced: Each service account is bound to a Kubernetes namespace. Every namespace gets a default ServiceAccount upon creation.

Lightweight: Service accounts exist in the cluster and are defined in the Kubernetes API. You can quickly create service accounts to enable specific tasks.

Portable: A configuration bundle for a complex containerized workload might include service account definitions for the system's components. The lightweight nature of service accounts and the namespaced identities make the configurations portable.

Service accounts are different from user accounts, which are authenticated human users in the cluster. By default, user accounts don't exist in the Kubernetes API server; instead, the API server treats user identities as opaque data. 

You can authenticate as a user account using multiple methods. Some Kubernetes distributions might add custom extension APIs to represent user accounts in the API server.

How to use service accounts?

To use a Kubernetes service account, we need to do the following:

1. Create a ServiceAccount object using a Kubernetes client like kubectl or a manifest.

2. Grant permissions to the ServiceAccount object using an authorization mechanism such as RBAC

3. Assign the ServiceAccount object to Pods during Pod creation.
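Steps 1 and 3 can be sketched in a single manifest; the account and Pod names are hypothetical:

```yaml
# Step 1: create the ServiceAccount.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
---
# Step 3: assign it to a Pod at creation time.
apiVersion: v1
kind: Pod
metadata:
  name: build-pod
spec:
  serviceAccountName: build-robot
  containers:
  - name: app
    image: nginx:1.25
```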

Grant permissions to a ServiceAccount

You can use the built-in Kubernetes RBAC mechanism to grant the minimum permissions required by each service account. You create a role, which grants access, and then bind the role to your ServiceAccount. 

RBAC lets you define a minimum set of permissions so that the service account permissions follow the principle of least privilege.
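As a sketch, a Role granting read-only access to Pods, bound to a hypothetical service account:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]            # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: ServiceAccount
  name: build-robot          # hypothetical service account
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```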

Secrets

A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or in a container image. Using a Secret means that you don't need to include confidential data in your application code.

Because Secrets can be created independently of the Pods that use them, there is less risk of the Secret (and its data) being exposed during the workflow of creating, viewing, and editing Pods.

Kubernetes Secrets are, by default, stored unencrypted in the API server's underlying data store (etcd). Anyone with API access can retrieve or modify a Secret, and so can anyone with access to etcd. 

Additionally, anyone who is authorized to create a Pod in a namespace can use that access to read any Secret in that namespace; this includes indirect access such as the ability to create a Deployment.
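A Secret itself is a small manifest. This sketch uses `stringData` so the values can be written in plain text; note that Kubernetes stores them base64-encoded, which is encoding, not encryption. All names are illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials       # hypothetical name
type: Opaque
stringData:                  # plain-text convenience; stored as base64 in etcd
  username: app-user
  password: example-only
---
# Pod consuming one key of the Secret as an environment variable.
apiVersion: v1
kind: Pod
metadata:
  name: db-client
spec:
  containers:
  - name: app
    image: nginx:1.25
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
```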

RBAC - Role-Based Access Control

In this blog, we will be covering RBAC or Role-Based access control.

It’s the way to outline which users can do what within a Kubernetes cluster. It’s an approach used for restricting access to users and applications on the system or network. RBAC is a security design that restricts access to valuable resources based on the role the user holds, hence the name role-based.

Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within an enterprise. In this context, access is the ability of an individual user to perform a specific task, such as reading, creating, or modifying a file.

Authentication and Authorization in RBAC



In Kubernetes, you must be authenticated (logged in) before your request can be authorized (granted permission to access).

Authentication

Once TLS is established, the HTTP request moves to the authentication step, shown as step 1 in the diagram. The cluster creation script or cluster admin configures the API server to run one or more authenticator modules. Authentication modules include client certificates, passwords, plain tokens, bootstrap tokens, and JWT tokens (used for service accounts).

Users in Kubernetes

All Kubernetes clusters have two categories of users: service accounts managed by Kubernetes, and normal users. It is assumed that a cluster-independent service manages normal users in the following ways:

  • an administrator distributing private keys
  • a user store like Keystone or Google Accounts
  • a file with a list of usernames and passwords

Authorization

After the request is authenticated as coming from a specific user, the request must be authorized. This is shown as step 2 in the above diagram. A request must include the username of the requester, the requested action, and the object affected by the action. The request is authorized if an existing policy declares that the user has permission to complete the requested action.

Install minikube on Windows, single node Kubernetes cluster, Local K8s

In this blog, we will install Kubernetes with Minikube on Windows. Minikube is a free tool that helps set up single-node Kubernetes clusters on various platforms.

Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM. It is one of the best ways to try out Kubernetes locally.


Prerequisites for your Minikube:

https://minikube.sigs.k8s.io/docs/start/

1. Installation: To install the latest minikube stable release on x86-64 Windows using the .exe download, click the buttons that describe your target platform: *Operating system  *Architecture  *Release type  *Installer type

Download and run the installer for the latest release.


Click on Next

Click on I Agree

Click on Install

2. Start your cluster

From a terminal with administrator access (but not logged in as root), run the following command. It may take some time depending on your internet speed.

minikube start

3. Interact with your cluster

If you already have kubectl installed, you can now use it to access your shiny new cluster:

kubectl get po -A



Alternatively, minikube can download the appropriate version of kubectl and you should be able to use it like this:

minikube kubectl -- get po -A


You can also make your life easier by adding the following to your shell config:

alias kubectl="minikube kubectl --"

Initially, some services such as the storage-provisioner may not yet be in a Running state. This is a normal condition during cluster bring-up, and will resolve itself momentarily. For additional insight into your cluster state, minikube bundles the Kubernetes Dashboard, allowing you to get easily acclimated to your new environment:

minikube dashboard

4. Deploy applications

Create a sample deployment and expose it on port 8080:

kubectl create deployment hello-minikube --image=kicbase/echo-server:1.0
kubectl expose deployment hello-minikube --type=NodePort --port=8080


It may take a moment, but your deployment will soon show up when you run:
kubectl get services hello-minikube


The easiest way to access this service is to let minikube launch a web browser for you:

minikube service hello-minikube

Alternatively, use kubectl to forward the port:

kubectl port-forward service/hello-minikube 7080:8080

Tada! Your application is now available at http://localhost:7080/

You should be able to see the request metadata in the application output. Try changing the path of the request and observe the changes. Similarly, you can do a POST request and observe the body show up in the output.

5. Manage your cluster

Pause Kubernetes without impacting deployed applications: 

minikube pause

Unpause a paused instance: 

minikube unpause

Halt the cluster: 

minikube stop

Change the default memory limit (requires a restart):

minikube config set memory 9001

Browse the catalog of easily installed Kubernetes services:

minikube addons list

Create a second cluster running an older Kubernetes release:

minikube start -p aged --kubernetes-version=v1.16.1

Delete all of the minikube clusters:

minikube delete --all
