DevOps Assembly Lines: End To End Automation

Aman Jhagrolia
8 min read · Apr 14, 2021
Jenkins Automation

The word “DevOps” is a mashup of “development” and “operations”, but it represents a set of ideas and practices much larger than those two terms alone, or together. DevOps includes security, collaborative ways of working, data analytics, and many other things. In short, DevOps is a set of practices that combines software development (Dev) and IT operations (Ops). It aims to shorten the systems development life cycle and provide continuous delivery with high software quality.

Automation is the ultimate need for DevOps practice, and ‘Automate everything’ is a key principle of DevOps. In DevOps, automation starts from code generation on the developer’s machine, continues until the code is pushed to production, and extends even after that to monitoring the application and system in production. So we’ve created a setup for deploying an application to the production environment using different tools and techniques of the DevOps culture, providing end-to-end automation. Here we’ve created a Jenkins Pipeline to build, test, deploy, monitor, analyze and auto-scale the web infrastructure.

Tools and Technologies Used -

Jenkins - It is a free and open source automation server. It helps automate the parts of software development related to building, testing, and deploying, facilitating continuous integration and continuous delivery.

Kubernetes - It is an open-source container-orchestration system for automating application deployment, scaling, and management. Here our complete production, monitoring and analysis environments run on top of a Kubernetes cluster.

Ansible - It is an open-source software provisioning, configuration management, and application-deployment tool enabling infrastructure as code. Ansible playbooks are used to deploy the production, monitoring and analysis environments.

AWS EC2 and AWS EKS - The Jenkins server runs on an Amazon EC2 instance, and Amazon EKS provides the managed Kubernetes cluster.

Git/GitHub - Git is software for tracking changes in any set of files, usually used for coordinating work among programmers collaboratively developing source code during software development, and GitHub is a provider of Internet hosting for software development and version control using Git. Here our complete source code is hosted on GitHub.

Prometheus - It is a free software application used for event monitoring and alerting. It records real-time metrics in a time series database.

Grafana - It is a multi-platform open source analytics and interactive visualization web application. It provides charts, graphs, and alerts for the web when connected to supported data sources. Here we’ve created a dashboard to visualize and monitor apache webserver metrics.

Elasticsearch - It is a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.

Filebeat and Metricbeat - Filebeat is a lightweight shipper for forwarding and centralizing log data; here Filebeat collects and ships the Apache web server logs to Elasticsearch. Metricbeat is a lightweight shipper that you can install on your servers to periodically collect metrics from the operating system and from services running on the server; here Metricbeat runs on each node of our Kubernetes cluster and ships each node’s metrics to Elasticsearch and Kibana.

Kibana - It is a data visualization dashboard for Elasticsearch. It provides visualization capabilities on top of the content indexed on an Elasticsearch cluster. Here we’ve created a dashboard to monitor our Apache logs and node metrics shipped by Filebeat and Metricbeat.

Slack - It is a channel-based messaging platform. With Slack, people can work together more effectively, connect all their software tools and services, and find the information they need to do their best work, all within a secure, enterprise-grade environment. Here Slack is used as a collaboration tool: notifications are sent to Slack at every stage of the pipeline.

What is Jenkins Pipeline?

Jenkins Pipeline (or simply “Pipeline”) is a suite of plugins that supports implementing and integrating continuous delivery pipelines into Jenkins. A continuous delivery pipeline is an automated expression of your process for getting software from version control right through to your users and customers. Jenkins Pipeline provides an extensible set of tools for modelling simple-to-complex delivery pipelines “as code”.

Github Repository for code -

Here is the workspace, which contains the Jenkinsfile, the HTML website, the Dockerfile and the Ansible playbooks.

Workspace

Jenkins Pipeline — Here is the Jenkins Pipeline, which takes the Jenkinsfile from the specified GitHub repository.

The Pipeline is Parameterised -

  • PATCH_PRODUCTION — The production environment will only be patched/updated when this parameter is set to true.
  • PATCH_MONITOR — The monitoring environment will only be patched/updated when this parameter is set to true.
  • PATCH_ANALYSIS — The analysis environment will only be patched/updated when this parameter is set to true.
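In a declarative Jenkinsfile, flags like these can be declared as boolean parameters and used to gate stages. The sketch below is illustrative, not the repository’s actual Jenkinsfile; the default values and stage body are assumptions:

```groovy
pipeline {
    agent any

    parameters {
        // Each flag gates the corresponding "patch" behaviour of the pipeline
        booleanParam(name: 'PATCH_PRODUCTION', defaultValue: false,
                     description: 'Patch/update the production environment')
        booleanParam(name: 'PATCH_MONITOR', defaultValue: false,
                     description: 'Patch/update the monitoring environment')
        booleanParam(name: 'PATCH_ANALYSIS', defaultValue: false,
                     description: 'Patch/update the analysis environment')
    }

    stages {
        stage('Monitor') {
            // Run only when the operator opts in via the parameter
            when { expression { params.PATCH_MONITOR } }
            steps {
                echo 'Patching the monitoring environment...'
            }
        }
    }
}
```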
Jenkins Pipeline Job

Here’s the Kubernetes status before running the Jenkins Pipeline, i.e. only the default resources exist.

K8s Status Before

Now, on running the pipeline, all the stages start doing their work in sequence. Here all the stages of the pipeline are successful, and our application is built, tested and deployed to the production environment along with the monitoring and analysis environments.

7 Stages of our Pipeline —

  1. Build: Downloads the code from GitHub, builds a containerized application and publishes it to a centralised container registry
  2. Deploy: Test: Deploys the containerized application to the testing environment
  3. Approve and Notify: Tests and approves the application and notifies the developers
  4. Deploy: Production: Deploys the application to the production environment if approved
  5. Monitor: Deploys the monitoring environment, i.e. Prometheus, Grafana, Metricbeat etc.
  6. Analysis: Deploys the log analysis environment, i.e. the EFK and EMK stacks
  7. Final Test and Notify: Runs a final test on the application in the production environment and notifies
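The seven stages above map naturally onto the `stages` section of a declarative Jenkinsfile. The following is only a sketch of that shape; the stage bodies, playbook names, registry, Slack channel and URL are assumptions, not the project’s actual code:

```groovy
pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                // Build the container image and push it to a central registry
                sh 'docker build -t myregistry/webapp:${BUILD_NUMBER} .'
                sh 'docker push myregistry/webapp:${BUILD_NUMBER}'
            }
        }
        stage('Deploy: Test') {
            steps { sh 'ansible-playbook deploy-test.yml' }
        }
        stage('Approve and Notify') {
            steps {
                slackSend channel: '#devops', message: "Build ${env.BUILD_NUMBER} is ready for review"
                // Pause the pipeline until a human approves
                input message: 'Approve deployment to production?'
            }
        }
        stage('Deploy: Production') {
            steps { sh 'ansible-playbook deploy-prod.yml' }
        }
        stage('Monitor') {
            steps { sh 'ansible-playbook deploy-monitor.yml' }
        }
        stage('Analysis') {
            steps { sh 'ansible-playbook deploy-analysis.yml' }
        }
        stage('Final Test and Notify') {
            steps {
                // Smoke-test the production URL and report the result
                sh 'curl --fail --silent http://webapp.example.com/'
                slackSend channel: '#devops', message: 'Production deployment verified'
            }
        }
    }
}
```

The `input` step is what makes stage 3 a manual approval gate: the pipeline blocks there until someone clicks Proceed or Abort in the Jenkins UI.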
Pipeline Runs

Here, Ansible is used to deploy the production, monitoring and analysis environments. Different stages invoke different Ansible playbooks to deploy the corresponding environment.

Ansible Playbook invoked by Pipeline
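A deployment playbook of this kind typically applies Kubernetes manifests against the cluster. A minimal sketch, in which the playbook name, manifest path and Deployment name are all assumptions:

```yaml
# deploy-prod.yml - sketch of a playbook that rolls out the web app to Kubernetes
- name: Deploy production environment
  hosts: localhost
  tasks:
    - name: Apply the web server Deployment and Service
      kubernetes.core.k8s:
        state: present
        src: k8s/webapp-deployment.yml

    - name: Wait for the rollout to finish
      command: kubectl rollout status deployment/webapp --timeout=120s
      changed_when: false
```

The `kubernetes.core.k8s` module is idempotent, so re-running the playbook to “patch” an environment only changes resources that differ from the manifests.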

Slack is used as a collaboration and notification tool. A notification is sent to the developers at every stage of the pipeline, so if any issue occurs in the deployment cycle of the application, the developer is notified immediately.

On successful deployment, the URLs of the application are sent to Slack.
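With the Slack Notification plugin, such messages can be sent from the pipeline’s `post` section. A sketch, with placeholder channel name, message text and URL:

```groovy
pipeline {
    agent any

    stages {
        stage('Deploy') {
            steps { echo 'deploying...' }
        }
    }

    post {
        // Runs after the stages, depending on the build result
        success {
            slackSend channel: '#deployments', color: 'good',
                      message: "Build ${env.BUILD_NUMBER} deployed: http://webapp.example.com"
        }
        failure {
            slackSend channel: '#deployments', color: 'danger',
                      message: "Build ${env.BUILD_NUMBER} failed: ${env.BUILD_URL}"
        }
    }
}
```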

Slack Notification

Here we've successfully connected to our website -

Website Deployed!

The monitoring and analysis environments, i.e. Prometheus, Grafana and Kibana, are also deployed successfully.

Elasticsearch, Filebeat, Metricbeat, metrics-server and the Prometheus adapter are also deployed in the backend.

Prometheus Grafana Kibana

Here is our Kubernetes status after running the pipeline successfully.

  • We can see that multiple resources, i.e. Deployments, Pods, Services and DaemonSets, are created.
  • PVCs and ConfigMaps are also created, to store data persistently and to map the configuration files respectively.
  • An HPA (Horizontal Pod Autoscaler) is also deployed to automatically scale the application pods when needed.
K8s Status After

GitHub webhooks are configured to automatically trigger the pipeline when the developer pushes new or updated code to the GitHub repository.

Github Webhook
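On the Jenkins side, webhook triggering can be enabled directly in the Jenkinsfile via the GitHub plugin’s `githubPush()` trigger (a sketch; the repository may instead enable the equivalent job option in the UI):

```groovy
pipeline {
    agent any

    triggers {
        // Requires the GitHub plugin; the webhook on the repository must
        // point at <jenkins-url>/github-webhook/
        githubPush()
    }

    stages {
        stage('Build') {
            steps { echo 'Triggered by a push to the repository' }
        }
    }
}
```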

If there is some issue with the application, then after testing it is not approved for the production environment and the subsequent stages of the pipeline fail. The developer is notified of this through the Slack channel, resolves the issue and pushes the updated code. On the push, the Jenkins pipeline is triggered automatically by the GitHub webhook, and the updated code is tested again and deployed if successful.

Test Failed

Horizontal Pod Auto Scaling -

As we've deployed a monitoring environment, it continuously monitors the application and its metrics. If a high load comes, the application pods are automatically scaled up, and they are scaled back down when the load reduces. This autoscaling is done by the HPA (Horizontal Pod Autoscaler) based on custom metrics exposed with the help of metrics-server and the Prometheus adapter.
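An HPA driven by a custom metric might look like the manifest below. This is a sketch: the Deployment name, metric name and thresholds are assumptions, since metrics exposed through the Prometheus adapter are named per installation (clusters of this article’s era would also likely use the `autoscaling/v2beta2` API version):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp
  minReplicas: 1
  maxReplicas: 5
  metrics:
    # Custom per-pod metric served by the Prometheus adapter,
    # e.g. HTTP requests per second
    - type: Pods
      pods:
        metric:
          name: http_requests_per_second
        target:
          type: AverageValue
          averageValue: "100"
```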

Pod Auto Scaling

After the application successfully passes the first testing stage and is deployed to the production environment, if some issue occurs there, the final test fails and the production deployment is automatically rolled back to the last version.
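One way to implement such an automatic rollback is to wrap the final smoke test in a `try`/`catch` and undo the Kubernetes rollout on failure. A sketch, with placeholder URL, channel and Deployment name:

```groovy
pipeline {
    agent any

    stages {
        stage('Final Test and Notify') {
            steps {
                script {
                    try {
                        // Smoke-test the production URL (placeholder address)
                        sh 'curl --fail --silent http://webapp.example.com/'
                    } catch (err) {
                        // Roll the Deployment back to its previous revision
                        sh 'kubectl rollout undo deployment/webapp'
                        slackSend channel: '#deployments', color: 'danger',
                                  message: 'Final test failed; production rolled back'
                        // Mark the build as failed after the rollback
                        error 'Production verification failed'
                    }
                }
            }
        }
    }
}
```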

Rolling Back

Then, on the test-failed notification, the developer resolves the issue and again pushes the updated code. The pipeline is triggered by the GitHub webhook, and the updates are rolled out to the production environment again.

Pipeline Successful

Here we’ve successfully connected to our updated website -

Updated Website!

Here is the video demonstrating the automated workflow -

Thanks for reaching here, Hope it’s helpful🙂
