DevOps for Full Stack Developers

Learn DevOps for Full Stack Developers with this tutorial


DevOps is a buzzword, and if you work in IT you have probably heard it plenty of times. In fact, it has been one of the major trends of recent years, and it is revolutionizing how we write and ship code. Hence, everyone should know how to put DevOps into practice. In this tutorial, we explain DevOps to Full Stack Developers. If you are a Full Stack Developer, here you will find everything you need to know about DevOps.

What is DevOps?

DevOps is about how you write and deploy code, rather than about what code you write. To understand it, it is worth discussing how things used to work before DevOps, and what DevOps changes.

Before DevOps

The traditional approach to code has been around for a few decades now, and it is extremely simple. Developers write code. That’s it. Or, as the former CEO of Microsoft said:

Developers, developers, developers DEVELOP!

Steve Ballmer

So, to create your web app, you just have to write the code for it. This is wonderful, but an app sitting idle on your computer is worth nothing. At some point you have to ship it: put it on a server that users can access.

The concept is the same for any piece of software. Even for software to be installed on users’ PCs, to ship it you have to create the executable installer so that they can actually install it. Even with a Python library, you have to package it and push it to the PyPI repository.

Traditionally, this was no task for developers. It was a task for Operations. As a developer, you handed your code to a System Engineer, who was then in charge of setting up the server, load balancers, MySQL database, and everything else. You relied on that person to prepare the environment that your app needed to run.

This is wonderful, but it has two evident limitations. The first is that the process is slow, as it is manual. But that is not the only thing to worry about: there is something more important going on here. There is no formal relationship between the app and its environment. If the new version of the app needs a different environment, your only option as a developer is to talk to the System Engineer. Hence, to move between versions and roll back, you need a System Engineer who is extremely well versed in your app.

DevOps Changes Everything

If the original process is so slow and error-prone, why did everyone adopt it in the first place? Well, believe it or not, it made sense. Setting up an environment used to be expensive, so it was cheaper to adapt your app to the existing environment rather than the other way around. Already running Oracle DB? Better run all your apps on that type of database. Microsoft servers? Same thing. Furthermore, speed was not a major concern, as new software versions were released once a year or so. Most importantly, many of the DevOps technologies were simply not available. Now, they are.

According to DevOps, the deployment of the app and the setup of its environment must be automatic, and both must be defined with code. Think about writing a step-by-step manual that instructs your System Engineer on how to set up the server. Now imagine that a computer could read that manual and actually execute it. Bam! You now have DevOps.

The word “DevOps” itself comes from “Development + Operations”, because these two functions now merge into one. The same person who writes the code of the app also writes the code that defines how to set up its environment.

Does that mean that we are firing all the System Engineers? Not quite: more and more of them will gravitate toward developers – and vice versa.

What may not be evident is that, since you specify exactly how your environment must be set up, you need to know about that stuff: you need at least the basics of Operations. Conversely, if you come from Operations, you need to learn how to develop, so that you can write the code that prepares the environment you want. This is why the two functions merge.

How DevOps Works

DevOps is not a technology, but more like an approach, a way to see things. Nonetheless, many technologies help to make the DevOps vision real.

At the core of DevOps, we write files that describe how to configure the environment. Using special code, we define in those files that we want to create a database, turn on a virtual machine, or deploy a Docker container. We call those files pipelines.

Having a pipeline is not enough: we need the ability to run it. We can do that with a DevOps server, and there are many alternatives out there. The most popular is probably Jenkins, but there are also more convenient options. In fact, if you do not want to set up your own Jenkins server, you can rely on cloud services like Azure DevOps or GitLab, as they both have good free tiers and require no setup.

Depending on which server you use, you will write your pipelines in a different language. In fact, at this point, there is no standardized way to do DevOps. Most DevOps servers use YAML or JSON, but those are merely ways of formatting data: they tell you nothing about what the data actually means.
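For illustration, here is a minimal sketch of a two-stage pipeline in GitLab CI syntax (the job names, image, and deploy script are assumptions for a hypothetical Node.js project, not part of any real repository):

```yaml
# .gitlab-ci.yml — a hypothetical two-stage pipeline
stages:
  - build
  - deploy

build-app:
  stage: build
  image: node:18        # assumed base image
  script:
    - npm ci
    - npm test          # fail the pipeline if the tests fail
    - npm run build

deploy-app:
  stage: deploy
  image: alpine:3.18
  script:
    - ./deploy.sh       # hypothetical deployment script
  only:
    - main              # deploy only from the main branch
```

Other servers (Jenkins, Azure DevOps) use different keywords, but the shape is the same: named jobs, grouped in stages, each running a list of commands.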

Fortunately, most DevOps servers come with a nice web interface, so that you can construct your pipelines from the GUI and see what you are doing.

Each project should have at least two pipelines: build and deploy. Build, or integration, creates something that is ready for deployment (a container, a compiled version of the code, etc.), while deployment is all about actually shipping that artifact to the servers.

Since build is also known as integration, this approach is the famous CI/CD: Continuous Integration/Continuous Deployment.

DevOps Technologies

We already told you the basic concept of DevOps. That is it, plain and simple. Yet, I can feel you are still hungry for more. If you are like me, you want something actionable – something you can actually do. So, here we present the key technologies you can use to put in place a valid DevOps pipeline that actually does something.

Docker Containers

Docker containers are crucial for DevOps. Technically, your DevOps server could connect to other servers and change their configuration, but that is complex and unreliable – because the server is then left on its own, and its configuration may drift over time.

Instead, you want your backend application to come pre-shipped with all the environment it needs. For example, if it is a web app, you want to couple it with the webserver that runs it, such as Apache or Nginx.

Docker offers a way to accomplish that with containers. A container is like a lightweight virtual machine, created according to the configuration inside a Dockerfile. This file provides the instructions to build the container, such as the base operating system to use, the packages to install, the configuration files to change, and so on.

Keep in mind that containers are ephemeral: when a container is destroyed or recreated, it loses all the data written inside it. Hence, running your database in a container is not a good idea. There are ways around that, such as volumes, but they are beyond this guide. The recommendation is to use an external database instead.

So, the first thing you must do is containerize your backend app. To do that, create a Dockerfile that prepares the container the way you need it, including your source or built code.
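As a sketch, a Dockerfile for a hypothetical Node.js backend might look like this (the file names and the port are assumptions):

```dockerfile
# Start from an official base image: the OS + runtime layer
FROM node:18-alpine

# Copy the app into the image and install its dependencies
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

# Document the port the app listens on and define how to start it
EXPOSE 3000
CMD ["node", "server.js"]
```

Running `docker build -t my-backend .` in the project folder then produces the image.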

Our application now becomes the container itself. That is our artifact, ready to be deployed. Hence, containerizing your app should be part of your build process (the CI part of CI/CD).

Depending on the role of your container, you may want to publish it to a Docker registry so that it can be downloaded (either publicly or privately). This is generally a release process, as you do not want to publish every single version that you build.

Kubernetes

Kubernetes takes your Docker containers to the next level. It is a container orchestrator and comes in handy when deploying your app.

In fact, your app will never be a single container. Never. Let that sink in: never, even if you think so. You will need things like secrets (to store certificates), services to expose your containers externally, load balancers to distribute requests, PVCs (persistent volume claims) for databases that need persistent disks, and more. Most notably, you will need a definition of environment variables, which in Kubernetes is a ConfigMap. All of this is to say that spinning up your newly created container is not enough.

Kubernetes comes to the rescue. First, you need a Kubernetes cluster, which is a set of servers designed to host containers. It is a cluster because you should have at least three nodes running for redundancy, but for tests you can get away with just one. Then, you provide some configuration to those servers to deploy your app.

Specifically, you can tell them how many containers to deploy, if they need something special, which ones to expose externally, and so on.
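For example, a Deployment resource telling the cluster to run three replicas of a hypothetical container image might be sketched like this (all names and the registry URL are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-backend              # hypothetical name
spec:
  replicas: 3                   # how many copies of the container to run
  selector:
    matchLabels:
      app: my-backend
  template:
    metadata:
      labels:
        app: my-backend
    spec:
      containers:
        - name: my-backend
          image: registry.example.com/my-backend:1.0.0  # assumed image
          ports:
            - containerPort: 3000
          envFrom:
            - configMapRef:
                name: my-backend-config  # environment variables from a ConfigMap
```

You would then apply this file with `kubectl apply -f deployment.yaml`, as described below.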

Instead of setting up your own Kubernetes cluster, you should consider a cloud solution, such as AWS Elastic Kubernetes Service (EKS) or Azure Kubernetes Service (AKS). In both, you just say how big your servers should be, and how many you want.

The Kubernetes cluster accepts YAML configuration files, and you can give them to it with the kubectl apply -f command. To run that, you need to install the kubectl utility on your PC and connect it to the cluster first.

Helm

Okay, you learned about Docker. It was not enough, so you learned about Kubernetes. I am sorry to break it to you, but it is still not enough. You need just one more technology to make it: Helm.

Helm is for Kubernetes what Docker is for your app. Like you package all your app source code and environment into a docker container, you package all your Kubernetes configuration into a Helm Chart.

Can’t I just use my Kubernetes configuration? Nope. That is because Kubernetes has no sense of versioning, nor of what is related to what else. Helm does.

With Helm, you create YAML files that are basically Kubernetes configuration files, and then you package them together and version the whole package. Whenever you update parts of it, Helm is smart enough to redeploy only the parts that actually need redeployment.
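To sketch the idea (the chart structure and value names here are hypothetical), a chart replaces fixed values in your Kubernetes YAML with template placeholders, and a values.yaml file fills them in:

```yaml
# templates/deployment.yaml — fragment of a hypothetical chart template
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
---
# values.yaml — the defaults, overridable at install time
replicaCount: 3
image:
  repository: registry.example.com/my-backend
  tag: "1.0.0"
```

You then install or upgrade the whole versioned package with a single command such as `helm upgrade --install my-backend ./my-chart`.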

Another wonder of Helm is that you can use it to deploy applications made by other people, just like public Docker images. For example, you can find a chart for a MySQL database that automatically takes care of deploying three containers and a persistent volume claim (storage that survives container restarts) for you.

To run Helm, you need the Helm console client on your PC, much like kubectl. Older versions of Helm also required a server-side component (Tiller) running as a container inside the cluster, but since Helm 3 the client talks directly to the Kubernetes API, so no backend container is needed.

Nginx

If you are a Full Stack Developer, your app is a web app, so you need to know about Nginx. Nginx is a popular and lightweight web server, much like Apache. But it is way more than that, particularly in Kubernetes.

In fact, Nginx can act as a load balancer for your web application inside your Kubernetes cluster. You will have an Nginx container (known as the Nginx Ingress Controller) that receives external requests and then routes them to your services, with some manipulation if needed.

It can alter the path of the request, clear headers, use SSL certificates, route different paths to different applications, and more.

Then, inside the container of your app, you will have another web server, this time running your own app. This will receive requests from the Nginx Controller.

The good news is that you can have just one Nginx controller per cluster and have all your applications use it, even across different domains and functions. That is because the controller has no routing configuration of its own. Instead, each application declares its routes in an Ingress resource, which tells the controller how its traffic should be handled.
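For instance, a hypothetical Ingress resource asking the shared Nginx controller to route a domain to your app’s service might look like this (the names, domain, and port are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-backend              # hypothetical name
spec:
  ingressClassName: nginx       # ask the Nginx controller to handle this
  rules:
    - host: api.example.com     # assumed domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-backend-service  # the Service exposing your app
                port:
                  number: 3000
```

Each app in the cluster ships its own Ingress like this one, and the single controller picks them all up.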

This gives you scalability and flexibility. Even better, if you are starting low it allows you to save some money by exposing a single public IP (and thus paying less) with your cloud provider.

Testing when Building

If you have a good piece of code, you have tests. And, if you are running JavaScript on the backend, you are probably using Jest to run those tests.

In your build pipeline, you should always run the tests, and make the pipeline fail if they do not pass. If you are in for the real game, you should also measure test coverage and fail the build if you do not reach a good threshold (how about 100%?).
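As a sketch of that setup, a Jest configuration can enforce the coverage threshold itself, so the test command (and thus the pipeline step that runs it) fails when coverage drops. The threshold values here are assumptions; tune them to taste:

```javascript
// jest.config.js — a minimal sketch: make `npm test` fail
// when coverage drops below the chosen thresholds.
const config = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      branches: 100,
      functions: 100,
      lines: 100,
      statements: 100,
    },
  },
};

module.exports = config;
```

With this in place, the pipeline needs no extra logic: a red test run or low coverage makes the build job exit non-zero, which fails the pipeline.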

This is important because it ensures only good code can be built, and thus saves you time in checking everything works fine at a later stage.

DevOps for the Frontend

DevOps is definitely simpler for the frontend code, but it is still important and you should use it. Even for the frontend, you should take care of building and deploying.

Building for the frontend means creating the packaged version of your app that can run in users’ browsers. This means minifying JavaScript and CSS for better performance, bundling everything together, and copying assets.
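In practice, this build step usually boils down to a script in package.json that your pipeline invokes. For a hypothetical Vue.js project, the relevant fragment might look like this:

```json
{
  "scripts": {
    "serve": "vue-cli-service serve",
    "build": "vue-cli-service build",
    "test": "jest"
  }
}
```

The build pipeline then just runs `npm run build` and archives the output folder as the artifact to deploy.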

Instead, deploying for the frontend means publishing your packaged frontend files on your server. The modern approach is to serve a static website that the user can browse. When the user goes there, the browser downloads your frontend app, which then makes calls to your API (hosted on another domain).

To learn more about how to prepare a static website, you can read our guide on publishing a Vue.js app.

Note that, since your frontend will be calling your API – which is on a different URL – you will need to implement CORS. CORS stands for Cross-Origin Resource Sharing, and it is a setting you have to enable on your backend.
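As a minimal sketch of what that setting amounts to, here is the header logic a Node backend would apply on each request. The allowed origin is a hypothetical URL, and real apps usually rely on a library (such as the cors middleware for Express) instead of hand-rolling this:

```javascript
// A sketch of enabling CORS on a Node backend, framework-free.
const ALLOWED_ORIGIN = "https://app.example.com"; // hypothetical frontend URL

function applyCors(req, res) {
  // Only echo the origin back if it is the one we trust
  if (req.headers.origin === ALLOWED_ORIGIN) {
    res.setHeader("Access-Control-Allow-Origin", ALLOWED_ORIGIN);
    res.setHeader("Access-Control-Allow-Methods", "GET,POST,PUT,DELETE");
    res.setHeader("Access-Control-Allow-Headers", "Content-Type,Authorization");
  }
  // Preflight (OPTIONS) requests can be answered immediately
  return req.method === "OPTIONS";
}

// Tiny demo with mock request/response objects
const headers = {};
const mockRes = { setHeader: (k, v) => { headers[k] = v; } };
const mockReq = { method: "OPTIONS", headers: { origin: ALLOWED_ORIGIN } };
const isPreflight = applyCors(mockReq, mockRes);
```

The browser sends the preflight OPTIONS request automatically; your backend only has to answer it with these headers before the real call goes through.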

Dev/Test, QA, Prod

Let’s talk about environments. For DevOps, environments are basically clones of your app: different, independent instances. You should have at least two, possibly three, and in some rare cases four.

Each environment is a completely separate instance of your app. It has its own Kubernetes cluster, its own static website for the frontend, its own database, and so on. Why have such replicas? Because they enable you to produce better-quality software.

  • Dev (for development) and Test are generally a single environment. You deploy your app here first and test it with sample data to see that everything works fine. Here, you want to deploy early and often.
  • Prod (for production) is where your “real” users actually are. You want to deploy here only stable versions, only when you are sure they work.
  • QA (for Quality Assurance) is an intermediate step that you may want to adopt. Whatever you deploy to production should be deployed to QA first. Then, you have some users verify that everything is fine, and only if so do you deploy the exact same thing to production.

For everything to work smoothly, those environments must be exact replicas. However, you can use less powerful servers for Dev and QA to save some money.

If you are on an even tighter budget, instead of running separate clusters you may run a single cluster and use namespaces to separate the environments. Of course, if your app is complex and uses multiple Kubernetes namespaces of its own, this will not be possible.

In Conclusion

DevOps is a powerful way of thinking and working. It enables you to reduce your build and deploy time from days or hours to minutes or even seconds. It makes you faster and nimbler.

This, in turn, enables you to write better code and ship an app of way better quality overall. Hence, there is no reason not to adopt DevOps right now, whatever your app may be. The learning curve may be steep at the beginning, but it is well worth the effort. Try it now, thank me later.

Alessandro Maggio

Project manager, critical-thinker, passionate about networking & coding. I believe that time is the most precious resource we have, and that technology can help us not to waste it. I founded ICTShore.com with the same principle: I share what I learn so that you get value from it faster than I did.


Alessandro Maggio

2021-06-03T16:30:00+00:00

Unspecified

Full Stack Development Course

Unspecified

Want Visibility from Tech Professionals?

If you feel like sharing your knowledge, we are open to guest posting - and it's free. Find out more now.