To do DevOps, you need automation. And to automate your DevOps process, you need good DevOps automation tools. But how do you select good tools? And when should you buy one instead of building your own?
In this article, I'll discuss what DevOps automation tools are, how to pick them, and some of the more popular tools in the market today.
Why automate DevOps?
DevOps aims to collapse software development and system operations/monitoring into a single unified workflow. Gone are the days when developers would write code and throw it over the wall to sysadmins to make work...somehow. In a DevOps model, the application and the environment in which it runs are treated as a single deliverable.
Fortunately, the cloud makes it easy to implement a DevOps model. Cloud computing systems such as Amazon Web Services (AWS) enable development teams to rent computing power much like we use electricity. Teams can even request this capacity programmatically. This makes it possible to stand up everything your application needs to run - servers, databases, networks, file storage, data processing, AI models, etc. - every time you ship a new version of your application.
But online software applications are complex beasts. It takes a lot of thought and care to release a new version of a Web app that's currently running and serving real customers. Ideally, every release:
- Builds and tests code as it's checked in by developers (i.e., Continuous Integration)
- Undergoes static and dynamic analysis to enforce good coding practices and detect security flaws
- Is deployed to one or more pre-production environments and vetted
- Is released gradually to production
- Is closely monitored for any issues
Now, you could deploy changes through such a complex system by hand. But with all of the files and configuration settings involved in even a small app, that's a recipe for disaster. Small mistakes, like a single misnamed environment variable, could render your entire application nonfunctional. Furthermore, it's almost impossible to test manually for everything that could go wrong.
That's why, as I argued in my previous article on good practices for DevOps automation, it's well-nigh impossible to have a good DevOps process that isn't automated. Good DevOps automation ensures the reliability and stability of each release.
The challenges of automating DevOps
A fully automated DevOps release pipeline is every team's nirvana. However, there are a lot of challenges that prevent many from getting there.
Complexity
The largest obstacle to DevOps automation is simple: it's hard! Deploying the code and infrastructure for an application is, in itself, a large task. Making that deployment resilient enough to run across multiple environments while also implementing testing, security scanning, and monitoring can be a heavy lift for understaffed teams.
Expertise
The lift is even more substantial if no one on the team has prior DevOps experience. Seasoned DevOps experts generally have a small army of tools and scripts they've written and gathered over several years of work. If your team doesn't have a DevOps Engineer on staff already, someone on the team has to assume this learning curve.
Time
DevOps automation frameworks take time to build, test, and deploy. That's time that one or more members of your team aren't spending on feature work. For many teams, finding the time to improve their DevOps posture is an ongoing struggle.
What DevOps tools can do
DevOps tools run the gamut in terms of functionality. Name an area of DevOps - continuous integration, continuous deployment, artifact storage, security, monitoring, logging - and you'll find a tool built for the purpose. In fact, it's possible to construct your entire DevOps pipeline from off-the-shelf tools and services.
The benefits of using DevOps tools
Given the complexity as well as the expertise and time required, it's little wonder many teams look to DevOps automation tools to help lighten the load.
Most DevOps work streams aren't specific to an application. All of the major components I outlined above - CI/CD, testing, staged releases, monitoring, security - are common to almost all DevOps initiatives. Good DevOps automation tools extract this commonality and make it generally available.
By using a pre-built DevOps tool, your team can offload the undifferentiated heavy lifting of DevOps. You can also remove the need to hire a DevOps expert to build everything from the ground up. This means your team can start on its DevOps journey earlier.
Using a DevOps tool also brings a higher degree of reliability to your DevOps processes. If you build your own DevOps pipeline, you'll spend days or weeks debugging and re-tooling it. By contrast, an established DevOps tool has already been used, vetted, and refined through months or years of real-world customer feedback. By adopting a well-tested tool, you benefit from the experience of hundreds or even thousands of other development teams.
Popular DevOps automation tools
So where do you get started? After all, the market is awash in DevOps tools.
First, it's unlikely you'll use a single tool from a single vendor. Yes, a few tools attempt to provide all-in-one turnkey DevOps experiences. But in reality, few achieve this lofty aim. Therefore, you're likely to end up using multiple tools that each do a specific job very well.
It's hard to give a definitive list of current tools. While there are definitely market leaders, the DevOps tooling space is evolving rapidly year over year. So, instead of suggesting a single tool, I'll talk about each category of DevOps tools and what they can do. Then I'll give a few examples of some of the most popular options as of this writing. Make sure to do your own research before committing to a specific tool!
Application packaging
One of the most fundamental questions in modern Web application deployment is how you package your application. Year after year, more teams drift away from managing their own server farms and toward a serverless architecture. In a serverless model, you package your code in a predefined format and hand it off to a cloud provider, such as AWS. The cloud provider handles provisioning the required computing capacity to run your app.
These days, there are two key ways to package your application. The first is using Docker. A Docker container is a lightweight, isolated environment that bundles your application with everything it needs to run - operating system libraries, dependencies, configuration, and so on.
Migrating to Docker by itself gives teams a leg up in the DevOps game. A Docker container that runs successfully can run on any Docker-capable cloud service. You can also run it at any stage of the software development process.
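As a minimal sketch of this portability (the image name, port, and variables below are placeholders for your own app), a docker-compose file runs the same container image everywhere, with only its configuration changing between stages:

```yaml
# docker-compose.yml - a sketch of running a prebuilt application image.
# The image name, port, and variables are placeholders for your own app.
services:
  web:
    image: myapp:1.4.2               # the same image runs in every stage
    ports:
      - "8080:8080"                  # map the app's HTTP port to the host
    environment:
      STAGE: development             # only configuration changes per stage
      DATABASE_URL: ${DATABASE_URL}  # injected from the host environment
```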
However, Docker isn't the only way to package and deploy your code. Serverless functions - services like AWS Lambda and Azure Functions - let developers ship small units of code that run in response to events, such as REST API calls. And tools such as the Serverless Framework or AWS's Serverless Application Model (SAM) provide higher-level abstractions that simplify deploying and orchestrating multiple microservices.
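For illustration, here's a minimal sketch of a SAM template that packages a single function behind an HTTP endpoint (the handler path, runtime, and resource names are assumptions):

```yaml
# template.yaml - a minimal AWS SAM sketch; handler, runtime, and
# paths are placeholders for your own function.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler      # hypothetical module.function entry point
      Runtime: python3.12
      CodeUri: src/             # directory containing the function code
      Events:
        HelloGet:
          Type: Api             # wires up an API Gateway route
          Properties:
            Path: /hello
            Method: get
```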
Infrastructure as Code
Packaging your code, however, is only part of the story. You need to run the code somewhere. And most modern applications require much more than code and a couple of servers. You also need to stand up virtual networks, databases and storage, load balancers, data processing services, queues...the list is almost endless.
Fortunately, every cloud provider supports some form of infrastructure as code (IaC). With IaC, you can programmatically stand up and tear down networks, databases, and applications on demand.
While you can use your favorite programming language for this task, cloud providers also offer their own domain-specific languages for IaC. AWS CloudFormation and Azure Resource Manager (ARM) templates are two of the most prominent examples.
For example, with CloudFormation, you can use JSON or YAML to create a file that instantiates any number of AWS resources. (The snippet below constructs a Virtual Private Cloud network with three public subnets.) You can then run the YAML file on AWS through the AWS Management Console, the AWS Command Line Interface, or an API call in your programming language of choice.
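Here's an illustrative version of that template (the CIDR ranges and availability zones are placeholder values, and the route table wiring is omitted for brevity):

```yaml
# Illustrative CloudFormation template: a VPC with three public subnets.
# CIDR ranges and availability zones are placeholders; route tables and
# subnet associations are omitted for brevity.
AWSTemplateFormatVersion: '2010-09-09'

Resources:
  AppVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16

  InternetGateway:
    Type: AWS::EC2::InternetGateway

  GatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref AppVpc
      InternetGatewayId: !Ref InternetGateway

  PublicSubnetA:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref AppVpc
      CidrBlock: 10.0.0.0/24
      AvailabilityZone: us-east-1a
      MapPublicIpOnLaunch: true

  PublicSubnetB:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref AppVpc
      CidrBlock: 10.0.1.0/24
      AvailabilityZone: us-east-1b
      MapPublicIpOnLaunch: true

  PublicSubnetC:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref AppVpc
      CidrBlock: 10.0.2.0/24
      AvailabilityZone: us-east-1c
      MapPublicIpOnLaunch: true
```

Deploying it from the command line is a single call, e.g. `aws cloudformation deploy --template-file network.yaml --stack-name my-network`.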
While these tools suffice for many teams, others need higher-level features. IaC platforms such as Terraform add multi-cloud deployments, code reuse, and state management. A system like Terraform can also make it easier to offer self-service infrastructure at larger organizations.
CI/CD Software
So you have your application packaging and your IaC solutions. However, you still need something to orchestrate the entire build/package/deploy process.
It's more or less a given that teams these days use Git for source code version control. Continuous Integration (CI) builds off of Git by using source code check-ins to drive the building, testing, and packaging of your application. The CI system can then feed the final packaged application to a Continuous Deployment (CD) process that pushes the new version to production.
The CI/CD system marketplace is full of competition. However, Jenkins remains the CI/CD system of choice for many teams. Sure, it has something of a learning curve. But its flexibility remains unparalleled. On the downside, you'll either need to wrestle with hosting Jenkins yourself or pay someone (like CloudBees) to host it for you.
If you use GitHub for your Git needs, it may make sense to use GitHub Actions to drive your CI/CD processes. While a bit clunky from a UI perspective, GitHub Actions' direct integration with GitHub lets you manage your code and builds through a single service.
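As a sketch of the model (assuming a Node.js app with npm scripts; adjust the setup and commands for your stack), a workflow that builds and tests on every push looks like this:

```yaml
# .github/workflows/ci.yml - a minimal CI sketch; assumes a Node.js
# app with npm scripts. Adjust the toolchain for your own stack.
name: CI

on:
  push:
    branches: [main]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4      # fetch the repository
      - uses: actions/setup-node@v4    # install a Node.js toolchain
        with:
          node-version: 20
      - run: npm ci                    # install exact dependency versions
      - run: npm test                  # run the project's test suite
```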
You can also elect to use your cloud provider's native CI/CD systems. Both AWS CodeStar and Azure DevOps offer full CI/CD capabilities along with integrated issue tracking. However, some teams - especially those that manage multi-cloud deployments - may be wary of putting all their eggs into one cloud provider's basket.
Security
In the past, security tended to be an afterthought added to applications after the "real work" was done. But no team can afford that approach in today's distributed online world. Any application can be the target of a cyberattack. That means that security needs to be built into the deployment process from the ground up.
Security is a complex topic. I can hardly do it justice in a few paragraphs. Briefly, here are a few areas to consider:
Software composition analysis (SCA) and supply chain management (SCM). The world runs on open source. Unfortunately, open source software packages can have massive security holes. 2021 saw a dramatic increase in attacks that embedded exploits directly into popular open source projects.
Tools like Artifactory can help with managing build outputs and maintaining the chain of custody in builds. Other tools, such as FlexNet Code Insight, can perform SCA by integrating directly with your existing CI/CD infrastructure.
Static Application Security Testing (SAST). SAST tools scan your code and binaries for known security vulnerabilities. Common examples include scanning for credentials in source code, known binary file vulnerabilities, and common security errors in source code. SonarQube and GitLab are two examples of providers in this space.
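As one concrete example, GitLab's built-in SAST scanning can be switched on by pulling its template into an existing pipeline definition:

```yaml
# .gitlab-ci.yml - enabling GitLab's bundled SAST jobs.
# The template adds scanning jobs alongside your existing stages.
include:
  - template: Security/SAST.gitlab-ci.yml
```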
Dynamic Application Security Testing (DAST). DAST tools attempt to find vulnerabilities in running Web applications. The standout in this area is OWASP Zed Attack Proxy (ZAP).
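If your pipeline already runs on GitHub Actions, one pattern is a scheduled ZAP baseline scan against a staging environment (a sketch assuming the zaproxy/action-baseline action; the target URL is a placeholder):

```yaml
# zap-baseline.yml - a nightly DAST scan sketch; the target URL is a
# placeholder for your staging environment.
name: DAST baseline scan

on:
  schedule:
    - cron: '0 3 * * *'                        # run nightly at 03:00 UTC

jobs:
  zap-baseline:
    runs-on: ubuntu-latest
    steps:
      - uses: zaproxy/action-baseline@master   # pin a release in practice
        with:
          target: https://staging.example.com
```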
And, finally, there's real-time monitoring of your running application. Which brings us to our next topic...
Monitoring and alerting
You're not done just because your release is out the door. As I discussed in a previous article, monitoring is key to DevOps automation. By monitoring key signals and metrics across all of your stacks and stages, you can decide whether to proceed with a deployment or roll it back. And if something goes wrong post-deployment, automation tooling can produce alerts and even generate trouble tickets for your team.
Monitoring tools generally fall into two categories. Logging tools gather logs and trace output from various systems and collate them in a central location. Metrics tools generate near-real-time signals on the running state of your app and infrastructure.
Every cloud provider offers some form of native monitoring and alerting. For example, CloudWatch in AWS enables developers both to centralize log management and to emit metrics that can, in turn, generate alerts.
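For instance, you can declare a CloudWatch alarm in the same CloudFormation templates that define the rest of your stack (a resource-level sketch; the load balancer dimension and SNS topic are placeholders):

```yaml
# A CloudFormation resource sketch: a CloudWatch alarm on load balancer
# 5xx errors. The dimension value and SNS topic ARN are placeholders.
HighServerErrors:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: Too many 5xx responses from the application
    Namespace: AWS/ApplicationELB
    MetricName: HTTPCode_Target_5XX_Count
    Dimensions:
      - Name: LoadBalancer
        Value: app/my-alb/0123456789abcdef          # placeholder ALB
    Statistic: Sum
    Period: 60                  # one-minute windows...
    EvaluationPeriods: 5        # ...for five consecutive minutes
    Threshold: 10
    ComparisonOperator: GreaterThanThreshold
    AlarmActions:
      - arn:aws:sns:us-east-1:123456789012:ops-alerts   # placeholder topic
```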
However, many teams opt to integrate third-party monitoring tools that offer greater capabilities, scalability, and performance. Tools like Splunk specialize in gathering and aggregating logs. Others, such as Prometheus, focus on metrics emission, alerting, and monitoring.
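As a sketch, a Prometheus alerting rule that fires when an application's error rate climbs might look like this (http_requests_total is a conventional metric name; substitute whatever your app actually emits):

```yaml
# prometheus-rules.yml - an alerting rule sketch. The metric name is
# an assumption; use the metrics your application exposes.
groups:
  - name: app-alerts
    rules:
      - alert: HighErrorRate
        expr: |
          sum(rate(http_requests_total{status=~"5.."}[5m]))
            / sum(rate(http_requests_total[5m])) > 0.05
        for: 10m               # must stay elevated for 10 minutes
        labels:
          severity: page
        annotations:
          summary: More than 5% of requests are failing
```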
Of course, how you visualize data is just as important as how you gather it. Many DevOps monitoring tools offer dashboarding and visualization features to provide a single pane of glass into your application's operations. But there are also some great open source options for visualization. One of the most popular currently is Grafana, whose generous free tier includes 50GB of logs and trace data storage, as well as 10,000 Prometheus metric series.
Incident response management
Let's say your monitoring notices an anomaly. What happens next? How do you get the right people together to respond to the issue?
This is where incident response management tools come in. Teams can use tools like PagerDuty and OpsGenie to define on-call rotations. When an incident is detected - e.g., via anomalous metrics or log data - the tool can automatically raise an issue and contact the appropriate on-call team members.
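These tools integrate directly with the monitoring layer. For example, if you alert from Prometheus, an Alertmanager receiver can hand fired alerts to PagerDuty (a sketch; the routing key is a placeholder for your PagerDuty integration key):

```yaml
# alertmanager.yml - routing fired alerts to PagerDuty. The routing
# key is a placeholder for an Events API v2 integration key.
route:
  receiver: pagerduty-oncall

receivers:
  - name: pagerduty-oncall
    pagerduty_configs:
      - routing_key: YOUR_PAGERDUTY_INTEGRATION_KEY
```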
Common criteria for DevOps automation tools
As you can see, there are a number of tools in the DevOps marketplace that serve different purposes. However, there are a set of common criteria you can use in deciding whether a given tool is right for you.
Hosting model/pricing
There are a lot of open source DevOps tools. The problem is that server-based open source DevOps tools often require you to provide your own hosting. This can produce a heavy operational burden, as hosting a system like Jenkins in a distributed and scalable fashion takes time and resources.
By contrast, a Software as a Service (SaaS) offering alleviates the burden of hosting. The trade-off is that you'll pay an additional operating expense month over month for the tool.
Neither model is better than the other. If you have engineering talent with DevOps experience and can afford to dedicate their time to maintaining your DevOps pipeline, self-hosting is a fine decision. Teams that are resource constrained or lacking in-house DevOps chops are often better off going the SaaS route.
Segmentation
As I've discussed before, a robust DevOps pipeline has multiple stages, such as development, testing, pre-prod, canary, and production. Your DevOps automation tools should support maintaining separate, segmented data sets for each stage in your DevOps pipeline. Without such segmentation, you won't be able to distinguish a problem in production from an issue earlier in the release process.
Integration with other DevOps tools
The odds that you'll use a single tool to fulfill all your DevOps automation needs are slim to none. When assessing a new tool for your toolkit, ask yourself how this tool will connect with the others. For example, can your monitoring tools emit custom metrics from your team's development framework? And can your tools all feed data into a common DevOps dashboard?
How TinyStacks automation can help developers
If you're just starting on your DevOps journey, the plethora of DevOps automation tools on the market can feel intimidating. Where do you get started? And how do you start your DevOps journey if you don't already have an in-house dedicated DevOps engineer?
We built TinyStacks to solve just this problem. TinyStacks is a DevOps automation suite that simplifies creating your first DevOps deployment pipeline in the cloud. Just provide your application code and our system will take care of building your infrastructure and deploying your app through your release pipeline. TinyStacks can have you up and running in an hour, even if you have no prior experience with DevOps or the cloud.
Want to know more? Contact us today for a demo!