
DevOps tools that give you superpowers

date: 26 October 2023

Have you ever tried to squash a mysterious bug in your client’s software system but never found anyone who knew how the system was set up and why?

We’ve all been there, and it was never easy. Huge manuals never helped, either. Fortunately, at Future Processing we’ve built some pretty amazing DevOps tools which help people solve such problems in a much better way. Let’s look at how they affect the way we work and which of them have already made our lives so much easier.


DevOps tools make people more engaged

Without DevOps tools, an account manager who had to solve a software issue would need to rely on other people to assemble the various components so that they matched the client’s specific setup. It would take time, and success would depend on the memory and availability of those people.

DevOps tools let you deal with such matters independently of others. All versions of the software are stored in an artifacts repository, and thanks to reproducible code builds on the CI server you can recreate every component. Assembling the needed setup means simply checking out a branch containing the configuration code and running the release pipeline on a testing environment. This, in turn, lets the scripts deploy the correct component versions and automatically apply customer-specific configurations.
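In practice, that flow boils down to a couple of commands. A minimal sketch, assuming a hypothetical release.sh pipeline script, a customer-a configuration branch and a test environment (all names are illustrative):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Check out the branch holding the customer-specific configuration code.
git fetch origin
git checkout customer-a-config

# Run the release pipeline against a testing environment; the scripts
# resolve the component versions from the artifacts repository and
# apply the customer-specific configuration automatically.
./release.sh --environment test
```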

What’s more, the configuration branch can be easily transformed into a pull request, allowing others to review it – it’s a valuable learning and collaborative experience.


DevOps tools give people confidence

Our work very often concentrates on minor features, bug fixes and ongoing improvements. At this smaller scale, each release becomes much more manageable. The frequency of these daily releases has a huge impact on our confidence. The standard workflow involves creating a pull request with the proposed changes, followed by automatic validation by the CI system, peer review, approval, merging and deployment. The review process serves a dual purpose: it provides an extra layer of verification, and it fosters learning and collaboration.
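Sketched as shell commands, that daily flow looks roughly like this (the branch name, the ticket key and the GitHub CLI are illustrative; any Git hosting with pull requests works the same way):

```bash
# A short-lived branch for a small, focused change.
git checkout -b fix/login-timeout
git commit -am "PROJ-123: increase the login timeout"
git push -u origin fix/login-timeout

# Open a pull request: the CI system validates it automatically,
# peers review it, and after approval it is merged and deployed.
gh pr create --fill
```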


DevOps tools give you quick feedback and reduce lead time

One of the principles of the DevOps approach is a rapid feedback loop: if there is a mistake in the configuration, the automated validation process catches it immediately. Identifying an error in the application of specific settings takes a few hours at most, and the automated deployment routine provides feedback on the success of any change on the very same day it was introduced.
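The “immediately” part comes from cheap static checks that run on every pull request. A sketch of what such a validation stage can contain, assuming the Terraform, Ansible and Docker toolchain described later in this post (the exact checks vary by project):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Fast static checks run by CI on every pull request, so a mistake
# in the configuration is caught minutes after it is pushed.
terraform fmt -check -recursive      # formatting drift
terraform validate                   # syntactically valid infrastructure code
ansible-lint playbooks/              # common Ansible mistakes
docker compose -f stack.yml config > /dev/null   # well-formed stack file
```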

Also, code review comments are an invaluable source of knowledge, as they help you gain a deeper understanding of the system and improve your skills.

In DevOps, the person responsible for deploying a change is its initiator – this is how you learn whether the system really works, or how it breaks. Over time, monitoring the deployment progress and observing the system’s behaviour immediately afterwards, such as changes in metrics and logs, becomes second nature.
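With Docker Swarm, the orchestrator we describe below, that observation loop is a couple of standard commands (the service name is illustrative):

```bash
# Watch the rolling update converge across the swarm nodes.
docker service ps --no-trunc myapp_web

# Tail the service logs right after the deployment to spot regressions.
docker service logs --since 10m --follow myapp_web
```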

Another benefit is that the DevOps approach dramatically reduces delivery times, so a client gets the value sooner, or can provide feedback if something else was expected.


And now the promised overview of the actual DevOps tools and how we used them


Infrastructure stack

Our principle was to automate the entire infrastructure. We started with the provisioning of virtual machines and networks in the AWS cloud using Terraform, a cloud-agnostic infrastructure-as-code tool. We chose Terraform so that our customers’ infrastructures could be hosted with various providers.
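The day-to-day Terraform loop looked roughly like this (a sketch; the workspace and variable-file names are illustrative):

```bash
# One state workspace per environment keeps dev/test/prod isolated.
terraform init
terraform workspace select test || terraform workspace new test

# Review the planned changes before touching any real infrastructure.
terraform plan -var-file=environments/test.tfvars -out=tfplan
terraform apply tfplan
```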

With that done, we used Ansible to apply the operating system configurations and install the necessary tools. We deliberately kept only a few small roles in Ansible, as it enhanced our ability to manage the system while also bolstering security by reducing the potential attack surface.
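Applying the operating system layer was then a single playbook run (the inventory path, playbook and group names are illustrative):

```bash
# Apply OS configuration and install the tooling on all swarm nodes;
# the handful of small, focused roles keeps the attack surface minimal.
ansible-playbook -i inventories/test site.yml --limit swarm_nodes --diff
```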

All the heavy lifting was done by Docker Swarm. Each application had its dedicated Docker image, complete with all the runtime dependencies, and Swarm efficiently managed the workload across a fleet of virtual machine nodes.

We chose Docker Swarm because at the time it was actively developed by Docker Inc. and was much more straightforward compared to Kubernetes, which back then was not as feature-rich as it is today.
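Deploying an application onto the swarm is then a single command (the stack file and stack name are illustrative):

```bash
# Initialise the swarm once on a manager node...
docker swarm init

# ...then deploy (or update) the whole application stack from one
# Compose-format file; Swarm spreads the services across the nodes.
docker stack deploy --compose-file stack.yml myapp
```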


Infrastructure as Code

The fact that all the infrastructure is stored as code within a single comprehensive repository had a big impact on collaboration and promoted shared responsibility. What mattered most was the transparency, which effectively combated potential knowledge silos.

Another important aspect was the ability to comprehensively view and simultaneously compare all development and testing environments. It greatly helped us to understand which features were being deployed and tested at each stage. And although the infrastructure we worked on was extensive and intricate, we had just one highly skilled specialist involved.

In DevOps there is no strict division between development and operations, which means everyone was encouraged to make configuration changes, deploy code, and contribute to the setup of the infrastructure.


Executable documentation

Our system was installed, configured and operated by people with different skills, which is why we preferred tools that were straightforward and easy to understand. The majority of the configuration relied on YAML files for Docker Swarm, which share their format with Docker Compose – this made it especially convenient to set up on any machine that runs Docker.
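Because the stack files use the Compose format, the very same YAML that Swarm deploys to the cluster can be brought up locally for exploration (the file name is illustrative):

```bash
# On any machine that runs Docker: start the services locally...
docker compose -f stack.yml up --detach

# ...and tear them down just as easily when done.
docker compose -f stack.yml down
```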

A comparable approach was used with our deploy.sh Bash script, which codified the steps an operator would otherwise enter manually on the machine. The script was enriched with comments, effectively transforming it into executable documentation that was run on a daily basis. This approach eliminated the need to repeat commands from an operational manual.
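A condensed, hypothetical sketch of what such an executable-documentation script can look like; the steps, paths and names are illustrative, not our actual deploy.sh:

```bash
#!/usr/bin/env bash
# deploy.sh -- the same steps an operator would otherwise type by hand,
# written down once, commented, and run on a daily basis.
set -euo pipefail

ENVIRONMENT="${1:?usage: deploy.sh <environment>}"

# Step 1: make sure we run the current configuration code.
git pull --ff-only

# Step 2: deploy the stack; Docker Swarm performs a rolling update,
# pulling the pinned image versions from the artifacts repository.
docker stack deploy --compose-file "environments/${ENVIRONMENT}/stack.yml" myapp

# Step 3: show the resulting service state so the operator can verify it.
docker service ps myapp_web
```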

The distinct separation of various layers within our infrastructure (virtual machines through Terraform, operating systems via Ansible and applications via Docker Swarm) allowed customers the flexibility to choose which components to use. This modularity proved to be of utmost importance, particularly in a situation where a public cloud environment was not a viable option.

And there was a bonus: a script designed to generate release notes based on code and source control metadata. It was a natural outcome of our commitment to associate each commit message with a corresponding Jira ticket, which allowed us to generate the list of changes in our release notes automatically. What’s more, our installation instructions were essentially copy-pastes of the scripts we had prepared, resulting in comprehensive documentation with minimal effort.
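A minimal sketch of that idea, assuming commit messages start with a Jira ticket key (the PROJ prefix, the Jira URL and the tag names are illustrative):

```bash
#!/usr/bin/env bash
set -euo pipefail

FROM_TAG="${1:?previous release tag}"
TO_TAG="${2:?new release tag}"

# Collect the Jira ticket keys mentioned in commits between two releases
# and turn them into a deduplicated changelog section.
echo "Changes in ${TO_TAG}:"
git log --pretty=format:'%s' "${FROM_TAG}..${TO_TAG}" \
  | grep -oE 'PROJ-[0-9]+' \
  | sort -u \
  | sed 's|^|- https://jira.example.com/browse/|'
```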


So, is DevOps really that good?

Are you still wondering whether the DevOps approach we adopted was that good and worth it? Would we recommend it to others? And to whom exactly?

The extent of automation involved makes DevOps undeniably beneficial for medium-sized projects involving dozens of people. When it comes to larger projects, I would say it’s a proper must-have – without it, a lot of precious time is wasted on repetitive, everyday tasks. But we’ve used the same approach, albeit with less extensive tooling, for smaller teams of 5 to 7 developers. And we still got all the benefits!

You may think DevOps is just about technology. I would rather say it’s about people. Implementing DevOps fosters a more engaged team that feels empowered and accountable for the product. This, in turn, leads to numerous opportunities for learning, fuelled by candid feedback and a boost in confidence. I’m writing more about it on my personal blog: check it out. Working in such an environment is truly a pleasure, and I keep feeling the benefits of it!
