AWS Well-Architected Framework: What It Can Do For You

What Is The AWS Well-Architected Framework

If you’re thinking about building your company’s future in the AWS cloud, then you’ll want to make sure that your cloud resources and infrastructure are architected as safely, efficiently, and cost-effectively as possible.

The good news for cloud architects is that there is no need to reinvent the wheel. Amazon has put together a series of best practices designed to provide objective guidance to those architecting on AWS.

The Well-Architected Framework contains five pillars that guide those building in AWS on how to ensure that their cloud is cost- and performance-optimized. The framework describes a series of best practices that experts should follow when building a cloud environment.

Unlike many other AWS resources, the Framework is not a set of checks that can be run programmatically. Therefore, clients that wish to make sure their workloads are architected in compliance with the Framework (and that may not have the expertise to do this in-house) should engage the services of an external party, like Innovative’s Well-Architected Review service.

 

Well-Architected Framework Pillars

The Well-Architected Framework contains five pillars. An AWS Premier Consulting Partner, like Innovative, follows an AWS Well-Architected Framework checklist to make sure that your cloud is better aligned with best practices.

Operational Excellence

When migrating workloads to the cloud, it’s key to ensure that they run efficiently. This means, for instance, avoiding duplication of resources, or, if servers run at variable capacity, making sure they are configured with elastic resources or take advantage of auto scaling.

Additionally, those architecting in AWS should frequently refine their operations and design for failure, using practices such as regular game days to test workloads, ensuring that there are plans in place to fail over key resources in a DR scenario, and ensuring that their organizational culture supports a strong cloud presence.

Security

AWS cloud environments should be architected to protect data, systems, and assets and to take advantage of cloud technologies to improve security. Best security practices include:

  • Applying security at every layer of the cloud: this can include ensuring that the VPC, subnets, and related resources are configured correctly; ensuring that network access control list (NACL) rules are appropriately set; and granting each user the least privilege needed to do their job (a minimal example policy follows this list).
  • Audit and traceability: making sure that appropriate auditing and logging are configured in the cloud. Users should also centralize identity management and avoid relying on long-term static credentials.
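
As a sketch of what least privilege looks like in practice, the IAM policy below grants only the ability to read objects from a single bucket (the bucket name is hypothetical) and nothing else:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "HypotheticalReadOnlyExample",
          "Effect": "Allow",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::example-reports-bucket/*"
        }
      ]
    }

Anything not explicitly allowed is implicitly denied, which is exactly the behavior least-privilege design relies on.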

Reliability

This pillar calls for AWS cloud architects to ensure that their cloud is performing reliably and consistently. Well-architected workloads should be able to:

  • Automatically recover from failure of components
  • Scale horizontally or vertically as needed

Additionally, infrastructure changes should be handled by automation rather than being deployed manually.

Performance Efficiency

Performance efficiency means using cloud resources in a way that meets system requirements as efficiently as possible.

When an AWS Partner, like Innovative, applies this pillar of the Framework to clients’ cloud environments, they will:

  • Ensure that global resources are used where reasonable. These reduce latency for end users, resulting in faster performance.
  • Ensure that serverless architectures are employed where possible. With serverless architectures, architects do not need to manually provision infrastructure and can instead configure automation that provides capacity as workload requirements evolve.

Cost Optimization

Of course, users should also ensure that their cloud workloads run as cost-effectively as possible. Users should:

  • Measure the overall efficiency of their cloud.
  • Avoid paying for resources when they are not required.
  • Ensure that they are following cost optimization guidelines and are aware of the AWS tools available for tracking budgets and spend.

 

Why Get A Well-Architected Review?

The AWS cloud can save your organization time and money, all while boosting efficiency. Migrating from on-premises to a cloud-first environment delivers some of these benefits on its own, but users might not be able to tap into the maximum advantages of the AWS cloud until they ensure that their architecture is compliant with the Well-Architected Framework.

 

Schedule a Well-Architected Review with Innovative

During Innovative’s Well-Architected Review, an AWS expert will review the client’s desired workload, assess it against the Well-Architected Framework, and make recommendations.

As an AWS Premier Consulting Partner, Innovative’s team of experts can then provide hands-on optimizations to ensure that clients’ cloud infrastructure is in line with best practices.

Innovative Solutions Launches Cutting Edge Managed Service Offerings for Amazon Web Services Customers

Best-in-class managed services align to the needs of growing cloud businesses.

ROCHESTER, NY, April 5, 2021 – Innovative Solutions, a leading Amazon Web Services Premier Consulting Partner, announced today the launch of three new AWS Managed Cloud Services offerings. Unlike alternatives, these offerings are focused on providing customers with choices that fit their business needs as they leverage cloud-based services. Backed by a team of 100% AWS-certified cloud experts, the offerings let small and medium-sized businesses select the cloud support tier that aligns to their business needs.

Starting today, the three new offerings from Innovative Solutions are:

Innovative MCS Tier Offerings

The three managed service offerings include access to curated technology and tools that are fully integrated into AWS, including:

  • The Innovative iNOC for 24 x 7 x 365 support
  • New Relic, for application performance monitoring
  • CloudCheckr, for cost analytics, cost savings and optimization, security, and compliance
  • Cloud Storage Security, for cloud storage antivirus scanning
  • SecureCloudDB, for cloud database security monitoring
  • Skeddly, for cloud automation
  • PagerDuty, for alerting and escalations
  • Emergency cloud engineering support

 

Learn More
Innovative Solutions believes that every company is a technology company. As the fastest-growing AWS Premier Partner, Innovative Solutions helps customers in more than 240 cities throughout North America. With an army of cloud experts leveraging the Innovative Cloud Runbook™, Innovative Solutions gives businesses of every size the confidence to grow in the cloud.

Justin Copie, CEO

“Managing a cloud environment is complicated. Our entire business is designed to lessen the burden on the business owner’s shoulders and help them recognize the power of the cloud. Managed Cloud Services are the number one enabler to achieving this goal. More businesses have selected AWS than any other cloud provider, and hundreds of small and medium-sized businesses are selecting Innovative Solutions as their partner of choice to help them buy, optimize, and secure their AWS environments in the cloud.”

Is an AWS Well-Architected Review right for you?

Do you ever wonder if you’re realizing the full benefits of your IT infrastructure? Is your current infrastructure built in a way that will help achieve your business and technical outcomes? The AWS Well-Architected Framework can help your company create a more efficient and effective IT infrastructure, even if it’s not in an AWS environment.

The best way to ensure your workload is meeting best practices and understand the business impact of your architecture is through a Well-Architected Review (WAR). A WAR uses the Well-Architected Framework as a guideline to ensure you are building a secure, high-performing, resilient, and efficient infrastructure for your applications, so you can focus on scaling your business while your infrastructure scales with it.

You may be wondering, “Is it even worth it to perform a Well-Architected Review on my architecture?” You could easily spend countless business hours trying to understand if the WAR is right for you, but we will simplify this process by breaking down the four most common scenarios that we have seen while conducting WARs for our customers. This way, you can make this valuable decision sooner rather than later.

Is an AWS Well-Architected Review Right For You?

Choosing to do an AWS Well-Architected Review boils down to two factors: the intricacy of your workload and the depth of your AWS knowledge and expertise. Together, these factors create four common situations that your company may fall into, and each situation has its own outlook on whether you should follow through with a WAR.

Relatively Simple Workload, Little to Zero AWS Knowledge

In this category, customers may be using some of the most common AWS services (e.g., Amazon EC2, Amazon S3, Amazon RDS), have a lower AWS spend, and have few complexities in their infrastructure needs. If you have a simple workload and do not spend a substantial amount of your IT budget on your AWS workload, it’s not the end of the world if you don’t go through a WAR. Nonetheless, if you have plans to scale your infrastructure soon or do not know if your architecture is set up to optimize costs, you may still see a benefit from a WAR.

Relatively Simple Workload, Deep AWS Expertise

Businesses with relatively simple workloads and a higher knowledge of AWS do not necessarily need to consider a WAR. Although WARs provide substantial benefits, your internal team may have the expertise needed to conduct one in-house using the free documentation that AWS provides to the public. There is always an opportunity to find value in a WAR, but if you have confidence in your own cloud experts and your company’s cloud practice, you might want to keep this work internal.

Highly Complicated Workload, Deep AWS Expertise

In this scenario, your company may have a highly complex workload, but you have an experienced cloud team already in place to manage your infrastructure. In this case, a WAR would be optional. However, receiving a WAR from a WAR-certified AWS Partner Network (APN) partner can provide an extra set of eyes that will help to provide feedback on the architecture your experts have built. Ultimately, if you fall into this bucket, you may decide that your team has everything under control, or you may decide that a second look wouldn’t be such a bad idea after all.

Highly Complicated Workload, Little to Zero AWS Knowledge

Managing a complex infrastructure can be a strenuous process, especially if you don’t have the resources or skillset to handle it. If your company has a complicated workload and you don’t feel you have the know-how to ensure your infrastructure is running efficiently, you would greatly benefit from a WAR. AWS customers in this category commonly struggle to optimize their AWS costs and do not see the full potential of their AWS workload. After helping these customers through a WAR, we often identify other opportunities for them to utilize AWS and see additional business value. We recommend that these customers take advantage of a WAR; the findings may be worth their weight in gold.

All in all, a Well-Architected Review is not necessary in every scenario, but one can provide value no matter your business’s situation. Even if you have gone through a WAR within the last year, AWS recommends conducting these reviews on a recurring basis to address newer concerns with your architecture.

After reviewing these common scenarios, you may realize you want to conduct a WAR on your infrastructure. You may be thinking, “Who can help me through this process and help me see the benefits of a WAR now?” Good news: partners are here to help! AWS has a Well-Architected Partner Program through which AWS trains APN Consulting Partners on how to perform Well-Architected Reviews. Our company, along with others, holds this certification, and we can help you establish good architectural habits, eliminate risk, and respond faster to changes that affect designs, applications, and workloads. If you’re interested in conducting a WAR for your infrastructure, feel free to reach out to us by filling out the form below, or find another partner that can assist you with your needs.

 

 

Still have a question about AWS Well-Architected Reviews? Contact us to get your answer

The Value of an AWS Well-Architected Review

Is your cloud environment architected to meet your desired business and technical goals? Consider a formal evaluation of your cloud infrastructure with an AWS Well-Architected Review. Learn, measure, and build using architectural best practices to enhance and modernize your infrastructure. This assessment will help your business optimize and accelerate your AWS environment to meet your key business objectives. But what does the phrase “well-architected” mean?

What is an AWS Well-Architected Review (WAR)?

An AWS Well-Architected Review, or WAR, is an assessment built on a framework developed by AWS cloud architects to help create an efficient and effective infrastructure for applications running in the AWS environment. The framework is now used globally by AWS cloud architects to help customers increase the value of their AWS platform for their specific business needs.

AWS Well-Architected Reviews are based on the following five key pillars:

  • Operational Excellence
  • Security
  • Reliability
  • Performance Efficiency
  • Cost Optimization

These five key pillars are the foundation of your architecture. Just like buildings, when the foundation is not solid, structural problems can weaken the integrity of the building, leaving you at risk. Incorporating the pillars into your cloud architecture allows you to produce a stable and efficient foundation that can be easily built upon.

Not only do the five pillars allow you to focus on other aspects of software design, such as functional requirements, but they also provide a consistent approach for evaluating your infrastructure. Learn more about the 5 Pillars of an AWS Well-Architected Review.

 

What is the Value of an AWS Well-Architected Review?

Conducting a Well-Architected Review will help align your technology and business objectives. After this assessment, you will receive direct, actionable solutions to strengthen your foundation. These recommendations are highly valuable, and if you choose to proceed with the remediations, the benefits to your company will be clear. A WAR can provide value to your business in the following ways:

  • Cut down costs and maximize your company’s IT spend
  • Help leverage cloud technology to improve your cloud usage and modernize infrastructure
  • Address any concerns or questions surrounding security, reliability, and operations
  • Receive help in navigating the many services provided by AWS

 

How can an AWS Well-Architected Review Support Your Business?

A WAR can teach you how to achieve your business outcomes while cost optimizing in four key ways:

  • Right sizing your resources so you only pay for what you use
  • Choosing the right pricing model to meet your cost targets
  • Meeting changes in demand with cloud elasticity
  • Measuring, monitoring, and improving your usage and spending to ensure you are taking the most cost-effective approaches

 

Why Choose Innovative for an AWS Well-Architected Review?

Just like AWS, we are customer-obsessed in everything we do. We want to help customers maximize their AWS platform to get the most out of it. Our experts provide an efficient process to help clients create a roadmap to improve their infrastructure. To help drive confidence in your cloud decisions, we are committed to showing you relentless support. As an AWS Advanced Consulting Partner, we can take your company to the next level through modernizing and transforming your business and technology. We will show you how to harness the power of AWS to experience full business potential.

What Should I Do Next?

There is no better time than now to schedule your AWS Well-Architected Review. Make sure your business is running efficiently in a cost-optimized environment and you are leveraging the right services to meet your key business objectives.

Schedule Your Well-Architected Review

Why you should consider Infrastructure as Code

Infrastructure as Code (IaC) has revolutionized the way that infrastructure is provisioned. In short, IaC is defining your cloud infrastructure (Amazon VPC, subnet, Amazon EC2 instances, security groups, etc.) in a template file or in actual code.

Initially, you could only define the infrastructure in a template using JSON or YAML and then create a stack using AWS CloudFormation. Now, there is another option – the Cloud Development Kit (CDK) – that allows you to write code in common programming languages such as JavaScript and Python to define your cloud infrastructure. Under the hood, the CDK converts the code to an AWS CloudFormation template and then creates a stack from that. No matter which route you choose, IaC provides many benefits such as automation, repeatability, compliance-ready design, and the ability to leverage source control.

Automation

By defining your infrastructure as code with a service like AWS CloudFormation, you can easily build your entire infrastructure with the click of a button. Before cloud computing platforms like AWS, the infrastructure team would need to manually spin up each server, configure its settings and services, and install any needed software and packages. This was a manual, time-consuming process with a high risk of human error. By using AWS CloudFormation and its associated helper scripts such as cfn-init and cfn-signal, you can install and configure software packages as the infrastructure is provisioned, ensuring everything is built in the correct order.

AWS provides the Metadata section in AWS CloudFormation to define information that can be used to customize the setup of an instance. The AWS::CloudFormation::Init section under Metadata helps us declare information that we need to install and configure our instances. For example, we can automate the installation and configuration of a LAMP stack onto our Amazon EC2 instance. As seen below, we declare two configSets: Install and Configure. Under the Install configSet, we declare the packages that we want to install and the package manager we want to use to install them (yum in this case).
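
A minimal sketch consistent with this description (package names assume an Amazon Linux AMI; the AMI ID is a placeholder):

    WebServer:
      Type: AWS::EC2::Instance
      Metadata:
        AWS::CloudFormation::Init:
          configSets:
            # The two configSets named above, each running one config block
            Install:
              - InstallPackages
            Configure:
              - ConfigureServices
          InstallPackages:
            packages:
              yum:                   # the package manager we chose
                httpd: []            # Apache web server
                mariadb-server: []   # MySQL-compatible database
                php: []
                php-mysqlnd: []
          ConfigureServices:
            services:
              sysvinit:
                httpd:
                  enabled: true       # start on boot
                  ensureRunning: true
      Properties:
        ImageId: ami-0123456789abcdef0   # placeholder AMI ID
        InstanceType: t3.micro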

Further down in the Amazon EC2 resource definition, the UserData section is where we can define commands to run automatically on startup of an instance. In this case, we update the AWS CloudFormation bootstrap package and then run the cfn-init command, which looks at the AWS::CloudFormation::Init section where we defined the packages that we want to install. It passes in the name of the AWS CloudFormation stack, the name of the resource, the configSets that we want to run, and the region as command line parameters.

After the cfn-init command, there is another AWS CloudFormation helper script command called cfn-signal. This command receives the result (success or failure) of the cfn-init command and signals to the CreationPolicy whether the installation was successful. The timeout in the CreationPolicy section means that AWS CloudFormation will wait five minutes for a success signal. If it doesn’t receive a signal in that time period, AWS CloudFormation will stop the stack creation and mark it as failed.
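
Putting these pieces together, a sketch of the relevant parts of the Amazon EC2 resource (the AMI ID is a placeholder) might look like this:

    WebServer:
      Type: AWS::EC2::Instance
      CreationPolicy:
        ResourceSignal:
          Timeout: PT5M    # wait up to five minutes for a success signal
      Properties:
        ImageId: ami-0123456789abcdef0   # placeholder AMI ID
        InstanceType: t3.micro
        UserData:
          Fn::Base64: !Sub |
            #!/bin/bash -xe
            # Update the AWS CloudFormation bootstrap package
            yum update -y aws-cfn-bootstrap
            # Run the configSets declared under AWS::CloudFormation::Init,
            # passing the stack name, resource name, configSets, and region
            /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} \
              --resource WebServer --configsets Install,Configure \
              --region ${AWS::Region}
            # Signal the result (success or failure) of cfn-init
            # to the CreationPolicy above
            /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} \
              --resource WebServer --region ${AWS::Region}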

Repeatability

Once you have defined your infrastructure in an AWS CloudFormation template, you can repeatably create environments anytime. Here at Innovative Solutions, we have standard networking templates that can be used for any new project. This removes the human error involved with manually provisioning your infrastructure for each new project.

Compliance-ready

By default, an AWS CloudFormation stack allows update actions on all the underlying resources. To solve this, we can define a stack policy that ensures the resources in the AWS CloudFormation stack cannot be updated. There are also other tools, such as drift detection, to ensure no one is changing the underlying infrastructure. Ad hoc manual changes to the stack should never be permitted, because this could result in a non-compliant environment. Especially for a production environment, all changes should be run through the AWS CloudFormation template via a stack update.
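
A minimal sketch of such a stack policy, which denies update actions on every resource in the stack (it can be applied with the aws cloudformation set-stack-policy command or when creating the stack):

    {
      "Statement": [
        {
          "Effect": "Deny",
          "Action": "Update:*",
          "Principal": "*",
          "Resource": "*"
        }
      ]
    }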

Source control

Another great part of having your infrastructure defined as code is you can check it into source control just as you would with any code. This allows you and your team to be able to see the history of templates and the various changes that happen over time. Also, this allows your team to collaborate on the development of templates.

Organizing and managing templates between teams

When starting out with AWS CloudFormation you will probably put all your resources in one template. However, as your infrastructure gets more complex, this will become unmanageable. For example, a company has three teams working on a given application: a network team, an application development team, and a security team. Each team will have multiple resources that they need to provision for the application. Let’s say the network team needs to make a change to the VPC resource they have defined in the AWS CloudFormation template. If the teams are sharing one template, this could cause confusion and unnecessary overlap. To solve this issue, the best practice is to create three separate templates, one for each of the teams. This way each team can manage their own template without needing to check with and coordinate with the other teams before making changes to their resources.

Certainly, there will be resources that will need to be shared and referenced between the three templates. To solve this, we can use cross-stack references, which allow resources to be exported from one template and imported into another. For example, if the security team needs to reference the VPC defined in the network stack, it can do so by importing the VPC resource (if the network stack exported that VPC resource).

In the network stack, we need to export the ProdVPC resource:
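
A minimal sketch, assuming the VPC’s logical ID in the network stack is ProdVPC:

    Outputs:
      ProdVPCId:
        Description: ID of the production VPC, shared with other stacks
        Value: !Ref ProdVPC    # the VPC resource defined in this stack
        Export:
          Name: ProdVPC        # the name other stacks use to import it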

In the application stack, we import the VPC ID for use in defining the target group of our Elastic Load Balancer.
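
A matching sketch of the import side (the resource name and listener details are illustrative):

    Resources:
      AppTargetGroup:
        Type: AWS::ElasticLoadBalancingV2::TargetGroup
        Properties:
          Port: 80
          Protocol: HTTP
          VpcId: !ImportValue ProdVPC   # the value exported by the network stack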

Another best practice is to use nested stacks to re-use commonly used templates. Let’s say you have a network stack that is used in all the applications you create. Instead of defining the same network stack in each application template, you can make the network stack its own template, host it in Amazon S3, and then, wherever you need it, define a resource of type AWS::CloudFormation::Stack that points to the location of the network template.

Here is an example of what this looks like in a template:
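
A minimal sketch (the Amazon S3 URL is a placeholder):

    Resources:
      NetworkStack:
        Type: AWS::CloudFormation::Stack
        Properties:
          # Reusable network template hosted in Amazon S3 (placeholder URL)
          TemplateURL: https://s3.amazonaws.com/example-templates/network-stack.yaml
          TimeoutInMinutes: 10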

We are defining a CloudFormation stack that is referencing a template file that is stored in S3.

How does Innovative Solutions leverage IaC?

Innovative Solutions has been leveraging AWS CloudFormation for years because of the many benefits it provides for our organization. We have developed many different templates for networking, security, and others, that have been incrementally improved through the years. Having mature AWS CloudFormation templates at our disposal makes it easy to build infrastructure quickly and reliably. This allows us to save time and focus on the actual workload.

Our templates are stored in source control so they can be easily updated as services evolve. All past versions are tracked, and the team can collaborate on changes. For each project, we are able to leverage our AWS CloudFormation templates to easily deploy multiple identical stacks for each of our environments (dev, staging, production).

Cloud Development Kit

The Cloud Development Kit (CDK) is another excellent way to define your AWS infrastructure as code. In fact, CDK abstracts away a lot of complexity of the AWS CloudFormation template. It allows you to provision AWS resources in popular programming languages such as C#, Java, JavaScript, and TypeScript, instead of creating a separate template file written in JSON or YAML. Using the CDK also allows you to use programming logic (if statements/for loops) that developers are comfortable with to help provision infrastructure resources. Writing ten lines of code using the CDK can produce hundreds of lines of an AWS CloudFormation template.

When you run your CDK app (for example, via the cdk synth command), an AWS CloudFormation template is synthesized (created). This doesn’t create any resources; the cdk deploy command actually creates the stack and the underlying resources.

Below is a sample Python CDK application that creates an SQS queue and an SNS topic. The queue is added as a subscription to the SNS topic so that it will receive messages when they are pushed to the SNS topic.
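
A minimal sketch using the CDK v2 Python modules (construct IDs are illustrative):

    from aws_cdk import App, Stack
    from aws_cdk import aws_sns as sns
    from aws_cdk import aws_sns_subscriptions as subs
    from aws_cdk import aws_sqs as sqs
    from constructs import Construct

    class QueueWithTopicStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)
            queue = sqs.Queue(self, "MyQueue")
            topic = sns.Topic(self, "MyTopic")
            # Subscribe the queue so it receives messages published to the topic
            topic.add_subscription(subs.SqsSubscription(queue))

    app = App()
    QueueWithTopicStack(app, "QueueWithTopicStack")
    app.synth()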

As seen above, this is simple, easy-to-understand Python code. These few lines of code produce an AWS CloudFormation template that is roughly 150 lines long! The CDK provides an amazing level of abstraction that organizations can adopt quickly, if they haven’t already.

IaC has forever changed how we create virtual infrastructure. Once an organization learns how to leverage IaC, it will never go back to manually creating virtual servers and configuring all the settings and services associated with them. Not only is all this manual work extremely tedious, it also poses a high risk of human error. With the development of the CDK, there is less of a barrier to entry for leveraging IaC at your organization. You can find numerous sample templates on the AWS website. There is some up-front work involved with IaC, but once you are up and running you will appreciate the multitude of benefits that come with it.

 

Do you still have questions about Infrastructure as Code (IaC)?

Feel free to contact us; we’d love the opportunity to further discuss anything you have read.

Contact us for more information

Content Management Systems on Amazon Web Services (AWS)

A content management system (CMS) is an application that allows users to become authors of their own content. An administrator of a CMS site can add new pages, text, and files, and completely own the structure and content of their website without any backend access or development knowledge. CMS sites can be efficiently hosted and maintained on various services provided by AWS. Many of these services improve the SDLC of the application by securely storing application code, enabling frequent releases, providing highly available custom content, and making it possible to replicate environments using Infrastructure as Code.

AWS CodePipeline

AWS provides CI/CD tools that work seamlessly with CMS applications. It’s important for any application to have a well-defined release process and AWS CodePipeline streamlines the build and deploy steps. AWS CodePipeline can use AWS CodeCommit, GitHub, or Amazon S3 as sources. Many open source CMS solutions have their source checked into GitHub which makes tying these projects to AWS CodePipeline incredibly simple.

Developers can connect to GitHub or AWS CodeCommit through the AWS console under the AWS CodePipeline service, where they can select their source repository. They can then add build, deploy, custom, and manual approval actions as needed. Once in place, when a change is made to the source repository, the pipeline triggers automatically and all defined steps in the pipeline are executed.

Environment automation with AWS CloudFormation and AWS Elastic Beanstalk

AWS provides two services to quickly spin up new environments and projects with the click of a button: AWS CloudFormation and AWS Elastic Beanstalk. Both can be used to create environments for CMS sites, providing different levels of automation and environmental control. Often a new environment will need to be spun up quickly to provide a new testing or QA site. In other scenarios, a brand-new application may need to be created, but that application’s functionality overlaps with previously created CMS sites and just needs to be customized for a client. AWS CloudFormation allows a developer to create templates that describe, in YAML or JSON, the environment’s specific resources such as Amazon EC2 servers, Amazon S3 buckets, or security groups. These templates only need to be created once and can then be reused or modified to quickly create new environments. AWS CloudFormation allows complete control over the environment’s resources, whereas AWS Elastic Beanstalk manages more of the environment.

AWS Elastic Beanstalk requires just a few pieces of information about the type of application being created, and then automatically creates all necessary resources for the environment. There is less control over the resources created by an AWS Elastic Beanstalk application, but the speed in creating an entire application stack means less time developers need to spend provisioning and configuring resources by hand. Both environment creation methods decrease the potential for human error caused by a manual process.

Amazon EBS, Amazon EFS, Amazon S3, and Amazon FSx for Decoupled Site Asset Storage

CMS applications often allow users to upload custom files such as media or CSS. By default, most CMS frameworks store these files on local disk. AWS provides many storage options, each with benefits and drawbacks, to store these assets: Amazon EBS, Amazon EFS, Amazon S3, and Amazon FSx.

Amazon EBS (Elastic Block Store) has two main disk type options: SSD and HDD. An SSD disk type will provide faster performance than an HDD. Amazon EBS volumes can be attached to both Linux and Windows servers, and EBS is typically the most performant solution. However, an Amazon EBS volume can only be attached to one EC2 instance at a time, meaning it will not be usable for shared storage in auto-scaling scenarios. If a CMS site will get a lot of traffic and needs to scale to maintain site performance, Amazon EBS would not be a good choice for storing dynamic content. Amazon EBS provides a snapshot method of backing up content and restoring it to new Amazon EC2 instances.

Amazon EFS (Elastic File System) is similar to Amazon EBS but has the ability to be accessed by multiple Amazon EC2 instances. It is therefore useful when auto-scaling is needed for a heavily trafficked application. However, Amazon EFS cannot be mounted to Windows instances.

If auto-scaling is needed in a Windows environment, Amazon S3 (Simple Storage Service) and Amazon FSx are viable options. Amazon S3 differs in that it stores files as objects in buckets accessed via an API instead of leveraging a file system mount. Replication can also be configured on a bucket to copy items from one bucket to another, providing a method of backing up or syncing site assets across environments. In general, Amazon S3 will be the cheapest option compared to the alternatives. The biggest decision factor is that Amazon S3 likely requires significant application changes to leverage, since it can’t be mounted.

Amazon FSx for Windows File Server works similarly to Amazon EFS but for Windows Servers. It provides a managed storage solution that can be attached to multiple instances and also provides some additional functionality like Active Directory integration.

Conclusion

The services mentioned above work together to provide a stable and efficient environment that accommodates a CMS application. AWS CodePipeline can manage the release process and is already integrated with popular version control providers like GitHub. CMS applications can utilize AWS CloudFormation and AWS Elastic Beanstalk, which enable new environments to be created quickly. AWS provides flexibility when choosing the appropriate storage platform: Amazon EFS, Amazon EBS, Amazon S3, and Amazon FSx cover most storage scenarios when running a CMS.

 

Do you still have questions about custom development or CMS?

Feel free to contact us; we’d love the opportunity to further discuss anything you have read.

Contact us for more information

Learn more about our AWS Cloud Service offering

Why Innovative Leverages DevOps

Innovative Solutions is a mid-sized company, but we often encounter communication and coordination problems at an enterprise scale. Internally, we have multiple development teams, each comprising multiple members with a range of skill sets. Each team interacts with third-party vendors, providers, and clients, who often bring their own development teams with whom we collaborate. Quite often, clients are communicating directly with third parties as well. As the number of entities in this communication graph increases, the complexity of organizing and interacting grows, which requires structures and processes to be put in place to ensure efficient communication.

Over the years we have been employing and maturing our SDLC methodologies, following Agile practices, and incorporating the latest tools to help develop, deploy, and support our products and our clients. This has organically led us to leverage DevOps services driven by industry leaders including Amazon Web Services (AWS).

Innovative Solutions takes these ideas very seriously, and their adoption has helped us successfully navigate an increasingly complex ecosystem. In fact, Innovative takes this so seriously that we require any engagement to be working toward an end goal of leveraging DevOps processes and tools. We’ve seen time and time again that when our partners understand the value of laying a proper foundation, we all win.

Leveraging automated build and CI pipelines has taken a burden off developers, freeing them to spend more time creating rather than waiting to see if tests pass. We heavily use AWS CloudFormation to automate our infrastructure setup in a repeatable manner. This makes spinning up a temporary lower environment almost instantaneous: one click versus the weeks it took just a few years ago.

Advanced monitoring and alerting enables Innovative’s team to identify small problems before they become big ones. Tools like Amazon CloudWatch, AWS X-Ray, and Datadog provide visibility into systems unmatched by anything we had in the past. We now leverage logs and metrics that were previously discarded to identify areas of opportunity, so we can continually improve the customer experience while providing tangible value to our clients.

DevOps and Regulation

The complexities of interactions these days are exacerbated by ever-growing regulatory pressures. Innovative Solutions consistently partners with customers who must adhere to HIPAA, PCI, SOC, and other compliance frameworks. Innovative has utilized AWS DevOps tools to create processes and controls that make compliance and audits more secure and successful.

The same build and deploy pipelines that facilitate our rapid development cycles also provide immutable packages we can promote from environment to environment. This helps us ensure that no bad actors have the ability to tamper with code on its way to production.

We leverage AWS CodePipeline’s manual approvals ensuring releases have the appropriate sign-off before moving forward. This allows us to put in place appropriate controls and separation of duties.

AWS Config gives us the capability to be notified when any part of our infrastructure deviates from the policies we have defined. If this happens, AWS CloudTrail makes it easy to perform root-cause analysis and correct the problem quickly. Our applications are assessed by Amazon Inspector to identify any deviations from internal standards.

 

A Must-Have for Any Business

DevOps is not something off on the horizon. The methodologies and tools are mature enough that it should be considered as part of the standard SDLC. If you’re not already practicing DevOps, the time is now. DevOps is not a nice-to-have, but a must-have for any business serious about long-term software development.

 

Do you still have questions about DevOps?

Feel free to contact us; we’d love the opportunity to further discuss anything you have read.

Contact us for more information

Learn more about our AWS Cloud Service offering

Why Innovative uses a CI/CD pipeline

Here at Innovative Solutions we take the mantra “always be learning” to heart. It’s evident in our approach to changing the way in which we deploy our applications. A well-structured CI/CD pipeline improves many aspects of development. Over the years we’ve made massive headway in learning from our previous deployment woes and adopting a more continuous and agile approach.

Before delving into what we’ve changed, I’d like to give a brief explanation as to why we felt the need to make these changes: what were the pitfalls, and how did they impact us?

We’ve always had to be adaptable and agile with our deployment strategies; working with many clients who utilize a variety of technologies practically mandates it. But we found ourselves repeatedly running into the same problems.

Previously, one of our deployment strategies included manually combing through changes in our Git repositories to find updated files and copying them to distribution directories. Another deployment strategy required RDPing into jump boxes, then RDPing once again into each web server and pasting the build artifacts onto each. Then, we manually went into IIS to update the directory that the hosted server pointed at.

Tasks like these were manually intensive and took time away from development. These deployment strategies were prone to error and often introduced defects into the shipped product. The effort involved meant that releases were infrequent leading to a slower time to market. These deployment processes would leave developers scratching their heads and wondering if a certain feature wasn’t working or if it hadn’t made it into the release due to an error in the manual deployment.

We knew something had to change, and fast. We began to implement new deployment strategies focused around continuous delivery and integration. Here’s how:

How does Innovative use CI/CD?

Working with many clients and a variety of technologies gives us a unique opportunity to learn which deployment strategy best suits a client and their needs. As such, we’ve been able to customize each of our CI/CD pipelines around a client, their needs, and their technology stack.

When dealing with a large application, a fully integrated build server with robust testing capabilities is required. We have an application configured to use TeamCity which performs continuous integration on artifacts that are deployed to AWS infrastructure. AWS CI/CD infrastructure allows you to painlessly integrate with third-party solutions such as Jenkins or Travis CI.

For smaller CMS sites, we leverage a cloud-native CI/CD pipeline based entirely in AWS. All the developer has to do is push their code to AWS CodeCommit. This triggers an AWS CodePipeline to orchestrate the CI/CD process. AWS CodePipeline is ideal for organizing the build and deployment flow and allows for easy step-by-step editing and visualization of the process flow.

AWS CodePipeline instructs AWS CodeBuild to both build the package and run specified tests each time a commit is pushed to our AWS CodeCommit repository. This allows for quick and continual testing and immediate notification if a build or test fails.
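
For illustration, here is a minimal sketch of the buildspec.yml that AWS CodeBuild reads from the repository root (this sketch assumes a hypothetical Node.js project; the actual commands depend on the technology stack):

    version: 0.2
    phases:
      install:
        runtime-versions:
          nodejs: 18
      build:
        commands:
          - npm ci         # restore dependencies
          - npm test       # fail the build if any test fails
          - npm run build
    artifacts:
      files:
        - '**/*'
      base-directory: dist   # assumes the build outputs to dist/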

After all of the AWS CodeBuild tests have passed, we typically include a manual approval step in our AWS CodePipeline flow. As we work with several customers bound by regulations such as HIPAA, manual approvals help ensure that the code we’re deploying meets all process and control requirements.

AWS CodePipeline’s flexibility with manual approval steps and custom defined tests within the pipeline allow us to customize each CI/CD pipeline to specific client needs.

Some of our CMS sites are deployed to AWS Elastic Beanstalk. For these sites, ease of deployment and infrastructure management was key, and AWS Elastic Beanstalk fits those needs perfectly. AWS Elastic Beanstalk takes care of provisioning all of the underlying resources needed to run these sites, such as load balancing, auto scaling, and storage. Without the need to provision all these resources manually, we reduce the load on developers, who can focus on rolling out new features rather than worrying about infrastructure.

However, there are cases where you want more control over your deployment and AWS Elastic Beanstalk isn’t the best fit. AWS CodePipeline is flexible enough to accommodate these needs: AWS CodePipeline with AWS CodeDeploy allows you to continuously deploy code to Amazon EC2 instances that aren’t managed by AWS Elastic Beanstalk. This lets us add CI/CD pipelines for customers who are already using Amazon EC2 instances to host their applications. Leveraging AWS CodeDeploy allows us to easily integrate their specific deployment process into a CI/CD pipeline without changing any of their underlying infrastructure.

What has Innovative learned?

While there are upfront costs, strategically focused business stakeholders can’t afford to neglect the long-term benefits of CI/CD.

According to Accelerate: State of DevOps 2018: Strategies for a New Economy, high performing teams are deploying 46x more frequently with 1/7th the error rate, when compared to low performing teams.

CI/CD pipelines save time, effort, and the mental health of developers. Here at Innovative, our team has fully embraced CI/CD, and using AWS technologies has made adoption that much easier.

 

Do you still have questions about CI/CD?

Feel free to contact us; we’d love the opportunity to further discuss anything you have read.

Contact us for more information

Learn more about our AWS Cloud Service offering

Why monitoring and logging are crucial to cloud computing success

Gone are the days when monolithic applications ran on a single on-premises server. The current landscape of cloud computing and microservices has made many aspects of computing easier, but monitoring and logging are not among them. Instead of living on one web server, logs are now often highly distributed across many different systems.

With cloud computing and containerization becoming so popular, we often have short-lived resources that are spun up, serve their purpose, and are then destroyed. This makes logging even more challenging, because you need to capture logs and store them elsewhere before the resources are destroyed. Not only are the logs distributed across various systems, but there is also an increasing number of them.

Computing systems produce logs that can be used to give insight into the current system state. Most companies store log files because “that’s what you’re supposed to do.” However, the only time companies look at them is when a system fails, at which point they are already in panic mode trying to figure out what went wrong. Then they begin to sort through the various systems’ logs, trying to find the relevant ones. This becomes a nightmare when the logs are spread across multiple systems.

If you don’t have all your logs in a centralized place where you can efficiently sort and filter through them, you won’t be able to quickly troubleshoot problems. The information exists, but because there is no easy way to decipher it, you’re essentially looking into a black box.

Innovative Solutions’ legacy process for monitoring and logging

To see the logs that our web servers and databases were producing, we had to log into each of those systems individually. This was very time-consuming and inefficient, and we didn’t have a simple way to monitor our applications. We were reactive to issues reported by customers, rather than taking a proactive approach to identify and act on issues before they happened.

Current use of monitoring and logging leveraging AWS

One logging and monitoring pattern that we leverage today is to aggregate log files and metrics in third-party cloud-native monitoring platforms such as Datadog. Logs are collected in Amazon CloudWatch and then pulled into Datadog via the Datadog AWS integration. Agents running on compute instances also push logs into Datadog. This allows us to monitor and analyze our production environment in near real time.

Purposeful Logging

The goal for a logging system is not to collect logs like trading cards; it’s to use them to carry out automated actions and achieve high system visibility. Amazon CloudWatch provides log storage in a centralized location, so you don’t need to go searching across various systems. However, just storing logs in a centralized location isn’t enough. Amazon CloudWatch provides services that allow you to monitor your systems and take action based on events and alarms. Amazon CloudWatch events and alarms integrate with many other AWS services such as Auto Scaling Groups, Amazon SNS, Amazon SQS, AWS CodePipeline, AWS Lambda, and many more.

Monitoring in Amazon CloudWatch

With Amazon CloudWatch, you can track system metrics for your instances (e.g., CPU utilization) and display them on a dashboard. This allows you to see the health of your application without needing to dig through thousands of log files. We can check our dashboards to make sure the operation of our systems is nominal. You can see an example dashboard below that shows the healthy host count, consumed RCUs and WCUs, and incoming log events. A simple dashboard like this can give you a quick idea of how your systems are performing with minimal effort.

Amazon CloudWatch Alarms

Amazon CloudWatch alarms enable us to scale up and down based on thresholds for metrics identified as application bottlenecks. In this example, I created an Amazon CloudWatch alarm that watches the average CPU utilization metric for all the instances in an Auto Scaling Group. If the average CPU utilization goes above 60%, an alarm is triggered, which sends a message to an SNS topic and adds an instance to the Auto Scaling Group.
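
Roughly the same alarm can be created from the AWS CLI; here is a sketch (the region and account ID in the topic ARN are placeholders):

    aws cloudwatch put-metric-alarm \
      --alarm-name awsec2-devops-competency-CPU-Utilization \
      --namespace AWS/EC2 \
      --metric-name CPUUtilization \
      --dimensions Name=AutoScalingGroupName,Value=devops-competency \
      --statistic Average \
      --period 300 \
      --evaluation-periods 1 \
      --threshold 60 \
      --comparison-operator GreaterThanThreshold \
      --alarm-actions arn:aws:sns:us-east-1:123456789012:Default_CloudWatch_Alarms_Topic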

This shows the creation of the Amazon CloudWatch alarm which sends a notification to the ‘Default_CloudWatch_Alarms_Topic’ SNS topic.

Then we added the auto scaling action so that our Auto Scaling Group adds one instance when the alarm (awsec2-devops-competency-CPU-Utilization) is triggered.

You can see the graph of CPUUtilization, with the red horizontal line representing the 60% alarm threshold. The blue line represents the CPUUtilization of the Amazon EC2 instances in the Auto Scaling Group. When the alarm threshold is met, the alarm is triggered and the subsequent actions run. To test triggering the alarm, I logged into the Amazon EC2 instance and created and ran a Python script that simulates high CPU load.

As you can see below, we have set up two actions for this CloudWatch alarm: a message to the Amazon SNS topic ‘Default_CloudWatch_Alarms_Topic’, and an auto scaling action that adds one instance to the Auto Scaling Group we specified (‘devops-competency’).

After the alarm was triggered, the message was sent to the SNS topic which then sent an email to the subscribers of that topic. I set my email as a subscriber to the topic and received the following email:

Having the appropriate people notified when an alarm is triggered is nice, but you also want to pair that with an appropriate action that is triggered automatically, such as scaling up your Auto Scaling Group. As seen in the notification below, after the alarm was triggered, the capacity of the Auto Scaling Group was increased from one to two instances.

CloudWatch Events

Amazon CloudWatch provides many types of events. One example is the events created upon state transitions in AWS CodePipeline.

When you create a pipeline via AWS CodePipeline, you are given the option to use Amazon CloudWatch Events as the change detection option in the source stage. This means that when I push my code to the source repository, the pipeline automatically starts and runs through all the subsequent stages.

You can also configure Amazon CloudWatch events to watch your AWS CodePipeline and receive notifications based on state changes. I created a CloudWatch event rule that would detect when the AWS CodePipeline Execution State changed to a FAILED state.
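
The event pattern for such a rule matches on the pipeline’s execution state:

    {
      "source": ["aws.codepipeline"],
      "detail-type": ["CodePipeline Pipeline Execution State Change"],
      "detail": {
        "state": ["FAILED"]
      }
    }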

After you specify an event source, you can then specify the target of what you want to happen after a rule is matched. In this case I set an Amazon SNS topic (called ‘codepipeline-failed’) as the target:

Then I created an AWS Lambda function that would send a notification to a specified Slack channel, saying that an AWS CodePipeline stage has failed. The members of the Slack channel will see this and can take the appropriate actions to figure out what went wrong.

I also set the AWS Lambda function as a subscriber to the Amazon SNS topic so that when a message is sent to the Amazon SNS topic the AWS Lambda function will be automatically triggered.
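
A minimal sketch of such a function, assuming the Slack incoming-webhook URL is supplied through a hypothetical SLACK_WEBHOOK_URL environment variable:

    import json
    import os
    import urllib.request

    def handler(event, context):
        # SNS wraps the CloudWatch Events payload in the message body
        message = json.loads(event["Records"][0]["Sns"]["Message"])
        detail = message.get("detail", {})
        text = "AWS CodePipeline '{}' entered state {}.".format(
            detail.get("pipeline", "unknown"), detail.get("state", "FAILED")
        )
        # Post to the Slack channel via an incoming webhook
        # (the webhook URL is an assumption, read from an environment variable)
        request = urllib.request.Request(
            os.environ["SLACK_WEBHOOK_URL"],
            data=json.dumps({"text": text}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(request)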

You can see the corresponding message in the Slack channel below:

By using Amazon CloudWatch events, we now have a better integrated CI/CD pipeline. If a developer pushes code to our AWS CodeCommit repository and any stage of the AWS CodePipeline fails, our team will be automatically notified in our Slack channel.

Do you still have questions about monitoring and logging?

Feel free to contact us; we’d love the opportunity to further discuss anything you have read.

Contact us for more information

Learn more about our AWS Cloud Service offering

Managing Microsoft patching for your EC2 instances

Managing Microsoft updates manually or through WSUS can be challenging in large environments. AWS Systems Manager allows you to manage all your AWS EC2 infrastructure from a single pane of glass.

Within this single pane of glass, you can view your EC2 inventory, patch baselines, and compliance against those baselines, and run ad hoc scans and patch installs. Streamlining the update process with maintenance windows and automation allows for a repeatable, stable monthly Microsoft update process.

At a high level, here are the steps to set up this service:

1.  Create an IAM role with the AmazonSSMFullAccess policy (or the more narrowly scoped AmazonSSMManagedInstanceCore policy) to allow Systems Manager to manage your EC2 instances. The SSM Agent, which Systems Manager uses to perform these functions, comes pre-installed on Amazon-provided EC2 AMIs.

2.  Add the newly created IAM role to your EC2 instances.

3.  In a short time, you will begin to see the systems check into Inventory under Systems Manager.

4.  Create your patch baseline. You can use the default, or create one and set it as the new default.

5.  Create a schedule so the machines are patched automatically. If you want to patch different instances at different times, consider using tags to set up different schedules.

After configuration, Patch Manager will use Run Command to call the AWS-RunPatchBaseline document to evaluate which patches should be installed on target instances according to each instance’s operating system type, either on demand or during the defined schedule (maintenance window).
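
For example, an ad hoc compliance scan can be kicked off from the AWS CLI (the tag key and value are illustrative):

    aws ssm send-command \
      --document-name "AWS-RunPatchBaseline" \
      --targets "Key=tag:PatchGroup,Values=Production" \
      --parameters "Operation=Scan"

Changing Operation=Scan to Operation=Install applies the missing patches rather than just reporting them.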

Have A Question?

Innovative Solutions is an Advanced Consulting Partner with expertise in Microsoft Workloads. Innovative is a service delivery partner for Windows on EC2 and part of the Service Delivery Program.

Learn more about our cloud services

Contact us to start the conversation
