Why automation is a better and more secure approach than “Golden” Amazon Machine Images (AMIs)
A commonly used (“best”) practice is to use the Amazon Machine Image (AMI) service to capture the complete contents of an application server instance running on Amazon’s AWS Infrastructure-as-a-Service (IaaS) cloud. Typically, a DevOps engineer will stand up and configure an instance. When the instance is functioning as desired, the engineer initiates the creation of what we refer to as a “Golden” AMI: AWS copies the server into a flat file image (the AMI). AWS can then launch this AMI to create a duplicate server instance in mere minutes, with all accounts, services, and data ready to go. Any number of these instances may be launched with the click of a button.
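The capture step described above can be sketched with the AWS CLI. This is a minimal illustration, not a production script: the instance ID and image name are hypothetical placeholders, and a dry-run wrapper (on by default) prints the command instead of calling AWS.

```shell
#!/usr/bin/env bash
# Sketch: capture a configured instance as a "Golden" AMI.
# Instance ID and image name below are hypothetical placeholders.
set -eu

# In dry-run mode (the default), print the command instead of executing it.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name golden-appserver-v1 \
    --no-reboot   # avoid rebooting the source instance during capture
```

Set DRY_RUN=0 in an environment with AWS credentials to actually register the image.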
The problem with this approach is that the new server is only as secure as the last time it was scanned and updated. The image, once created, remains unchanged over time, and the only way to “update” it is to create a new image. Meanwhile, new vulnerabilities are discovered almost daily across the OS programs and tools, as well as the software containers, platforms, and frameworks that make up a modern, fully configured application server.
What was a fully secured application server when the image was created may very well be a target for hackers by week’s end. And while you can run updates to bring the server back into compliance, you’ve lost the real reason for using the image in the first place: your server is not truly ready to go after launch.
A better practice is to treat the construction of instances the same way software developers have built code over the past twenty-plus years: in an iterative manner that includes testing, reporting, and notification; in other words, infrastructure as code. The approach is to automate the creation and configuration of the server and then test it for both functionality and security.
The DevOps and/or IT SecOps team should subscribe to security advisories for the products and tools used by the organization’s application servers so that the automation can be updated as soon as possible. The scanning tools themselves should be updated at least weekly to ensure that any new issues are caught and remediated. Once the automation source is updated, every future build picks up the fix automatically.
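As one illustration of keeping scanners current, the snippet below refreshes an antivirus engine and its signatures. It assumes ClamAV installed via yum; package names vary by distribution, and the dry-run wrapper (on by default) only prints the commands.

```shell
#!/usr/bin/env bash
# Sketch: weekly refresh of scanning tools (assumes ClamAV on a yum-based
# distro; package names are an assumption and may differ on your OS).
set -eu

# In dry-run mode (the default), print the command instead of executing it.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run sudo yum update -y clamav   # refresh the scanner package itself
run sudo freshclam              # refresh ClamAV virus signature definitions
```

Scheduling this via cron (for example, weekly) keeps the scan baseline no more than a few days stale.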
Every part of the server’s creation and configuration should be automated, using an orchestration tool such as Ansible or Puppet, acting on instructions stored in a Software Configuration Management (SCM) tool such as Git, and controlled by a tool such as Jenkins that ensures automated tests for functionality and security are executed and reported on.
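The shape of such a pipeline can be sketched as the script a Jenkins job might run on each commit. The repository URL, playbook, and test-script names here are hypothetical, and the dry-run wrapper (on by default) prints each step rather than executing it.

```shell
#!/usr/bin/env bash
# Sketch of a Jenkins-driven build: pull automation from SCM, apply it,
# then test. Repo URL, playbook, and test scripts are hypothetical.
set -eu

# In dry-run mode (the default), print the command instead of executing it.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run git clone https://git.example.com/infra/appserver.git build  # automation source from Git
run ansible-playbook -i inventory.ini appserver.yml              # create and configure the server
run ./tests/functional.sh                                        # automated functional tests
run ./tests/security-scan.sh                                     # automated security scan
```

A failure in either test stage should fail the Jenkins build, so an insecure server never reaches service.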
The automated creation of a new server begins with the launch of a base OS AMI. Once the new instance has completed its launch process, we start with an update of the OS-level programs (e.g., sudo yum update -y) and install any software required to work with the chosen automation tool (e.g., Ansible or Puppet). Next, apply security hardening scripts to limit the OS-level services provided, and install antivirus and firewall services. The organization’s DevOps automation account should be created and configured with the appropriate public SSH credentials installed, and the default account should be removed. At this point, the server is ready for customization to support whatever tasks it is being provisioned for.
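The base-preparation steps above can be sketched as follows. Amazon Linux 2 and yum are assumed; the service to disable, the account name, and the key path are hypothetical examples, and the dry-run wrapper (on by default) prints the commands instead of running them.

```shell
#!/usr/bin/env bash
# Sketch of base-OS preparation (Amazon Linux 2 / yum assumed; the account
# name, key path, and disabled service are hypothetical examples).
set -eu

# In dry-run mode (the default), print the command instead of executing it.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run sudo yum update -y                             # update OS-level programs
run sudo amazon-linux-extras install -y ansible2   # install the automation tool
run sudo systemctl disable --now postfix           # example: limit OS-level services
run sudo yum install -y clamav firewalld           # antivirus and firewall services
run sudo useradd -m devops-automation              # organization's automation account
run sudo mkdir -p /home/devops-automation/.ssh
run sudo cp /tmp/devops_id.pub /home/devops-automation/.ssh/authorized_keys  # public SSH credential
run sudo userdel -r ec2-user                       # remove the default account
```

In a real build these steps would live in the Ansible or Puppet source rather than a raw script, so they are versioned and auditable in SCM.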
After the application customization has been completed by automation, it is important that automated functional tests and security scans are run and reviewed. Any issues detected should be immediately corrected in the automation source and a new server creation process kicked off; the previous server can then be terminated.
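The verify-and-replace step might look like the sketch below, using OpenSCAP as one example scanner. The test script, SCAP profile, content path, and instance IDs are hypothetical assumptions, and the dry-run wrapper (on by default) only prints the commands.

```shell
#!/usr/bin/env bash
# Sketch: scan the newly built server, then retire the old one.
# Test script, SCAP profile/content path, and instance IDs are hypothetical.
set -eu

# In dry-run mode (the default), print the command instead of executing it.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

NEW_ID="i-0new0000000000000"
OLD_ID="i-0old0000000000000"

run ./tests/functional.sh "$NEW_ID"   # automated functional tests against the new server
run oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_cis \
    /usr/share/xml/scap/ssg/content/ssg-amzn2-ds.xml   # security scan (OpenSCAP example)
run aws ec2 terminate-instances --instance-ids "$OLD_ID"   # retire the previous server
```

Only when both checks pass does the old instance get terminated; a failure instead feeds a fix back into the automation source and triggers a fresh build.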
The benefits of this “Infrastructure as Code” approach are many, including:
- Ensures that servers are not built with known security issues and are verified with up-to-date scans.
- Because the “Infrastructure as Code” approach focuses on testing both the security and the functionality of the server itself, issues are more likely to be detected early, before the server is put into service.
- Many of the same automation instructions verified on one server can be used in the creation of others, ensuring that lessons learned once are applied to all. Because the complete set of instructions for creating the server instances is documented in SCM, it can be reviewed and audited.
- When launching new servers from “golden” AMIs, the specification of the server’s capacity (number of processors and memory) is baked into the image. With the automation approach, the selection is simply a configuration variable, making it easy to modify if a different capacity server is needed.
- Amazon charges for the storage of AMIs created on your account. By using automation, those charges are eliminated, and you gain more control over the entire process.
- The time saved by using automation to create instances is worth the time required to implement the automation. Issues fixed once don’t find their way into other servers, reducing the time required to stand up new infrastructure.
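The capacity-as-configuration point above can be shown concretely: the instance type becomes a variable looked up from a size setting at launch time. The size names, type mappings, and AMI ID below are hypothetical.

```shell
#!/usr/bin/env bash
# Sketch: server capacity as a configuration variable rather than something
# fixed at image-creation time. Size names, type mappings, and the AMI ID
# are hypothetical examples.
set -eu

instance_type_for() {
  case "$1" in
    small)  echo "t3.small"   ;;
    medium) echo "m5.large"   ;;
    large)  echo "m5.2xlarge" ;;
    *)      echo "t3.micro"   ;;   # conservative default for unknown sizes
  esac
}

SERVER_SIZE="${SERVER_SIZE:-small}"
INSTANCE_TYPE="$(instance_type_for "$SERVER_SIZE")"
echo "would launch with --instance-type $INSTANCE_TYPE"
# In live use (hypothetical AMI ID):
# aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type "$INSTANCE_TYPE"
```

Changing capacity is then a one-line configuration change, with no image to rebuild.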
Treat your infrastructure as code so that you can improve your organization’s ability to stay ahead of attackers.
*American Cyber Security Management (AmericanCSM.com) is focused on reducing your risk of data misuse. We do this through our Security, Privacy, and DevOps offerings, delivered by seasoned experts. We can ensure your Agile delivery processes are secure and efficient, while maximizing your investments.