The Value of an AWS Well-Architected Review

Is your cloud environment architected to meet your desired business and technical goals? Consider a formal evaluation of your cloud infrastructure with an AWS Well-Architected Review. Learn, measure, and build using architectural best practices to enhance and modernize your infrastructure. This assessment will help your business optimize and accelerate your AWS environment to meet your key business objectives.  But what does the phrase “well-architected” mean?

What is an AWS Well-Architected Review (WAR)?

An AWS Well-Architected Review, or WAR, is an assessment based on a framework developed by AWS cloud architects to help create efficient and effective infrastructure for applications running in the AWS environment. The framework is now used globally by AWS cloud architects to help customers increase the value of their AWS platform for their specific business needs.

AWS Well-Architected Reviews are based on the following five key pillars:

  • Operational Excellence
  • Security
  • Reliability
  • Performance Efficiency
  • Cost Optimization

These five key pillars are the foundation of your architecture. Just like buildings, when the foundation is not solid, structural problems can weaken the integrity of the building, leaving you at risk. Incorporating the pillars into your cloud architecture allows you to produce a stable and efficient foundation that can be easily built upon.

Not only do the five pillars allow you to focus on other aspects of software design, such as functional requirements, but they also provide a consistent approach to evaluating your infrastructure. Learn more about the 5 Pillars of an AWS Well-Architected Review

 

What is the Value of an AWS Well-Architected Review?

Conducting a Well-Architected Review will help align your technology and business objectives. After the assessment, you will receive direct, actionable recommendations to strengthen your foundation. These recommendations are highly valuable, and if you choose to proceed with the remediations, the benefits to your company are clear. A WAR can provide value to your business in the following ways:

  • Cut costs and maximize your company’s IT spend
  • Leverage cloud technology to improve your cloud usage and modernize your infrastructure
  • Address concerns and questions surrounding security, reliability, and operations
  • Navigate the many services provided by AWS

 

How can an AWS Well-Architected Review Support Your Business?

A WAR can teach you how to achieve your business outcomes while optimizing costs in four key ways:

  • Right sizing your resources so you only pay for what you use
  • Choosing the right pricing model to meet your cost targets
  • Meeting changes in demand with cloud elasticity
  • Measuring, monitoring, and improving your usage and spending to ensure you are taking the most cost-effective approaches

 

Why Choose Innovative for an AWS Well-Architected Review?

Just like AWS, we are customer-obsessed in everything we do. We want to help customers get the most out of their AWS platform. Our experts provide an efficient process to help clients create a roadmap for improving their infrastructure. To help drive confidence in your cloud decisions, we are committed to showing you relentless support. As an AWS Advanced Consulting Partner, we can take your company to the next level by modernizing and transforming your business and technology. We will show you how to harness the power of AWS to realize your full business potential.

What Should I Do Next?

There is no better time than now to schedule your AWS Well-Architected Review. Make sure your business is running efficiently in a cost-optimized environment and that you are leveraging the right services to meet your key business objectives.

Schedule Your Well-Architected Review

Why you should consider Infrastructure as Code

Infrastructure as Code (IaC) has revolutionized the way that infrastructure is provisioned. In short, IaC means defining your cloud infrastructure (Amazon VPCs, subnets, Amazon EC2 instances, security groups, etc.) in a template file or in actual code.

Initially, you could only define the infrastructure in a template using JSON or YAML and then create a stack using AWS CloudFormation. Now, there is another option – the Cloud Development Kit (CDK) – that allows you to write code in common programming languages such as JavaScript and Python to define your cloud infrastructure. Under the hood, the CDK converts the code to an AWS CloudFormation template and then creates a stack from that. No matter which route you choose, IaC provides many benefits such as automation, repeatability, compliance-ready design, and the ability to leverage source control.

Automation

By defining your infrastructure as code with a service like AWS CloudFormation, you can easily build your entire infrastructure with the click of a button. Before cloud computing platforms like AWS, the infrastructure team would need to manually spin up each server, configure its settings and services, and install any needed software and packages. This was a manual, time-consuming process with a high risk of human error. By using AWS CloudFormation and its associated helper scripts, such as cfn-init and cfn-signal, you can install and configure software packages as the infrastructure is provisioned, ensuring everything is built in the correct order.

AWS provides the Metadata section in AWS CloudFormation to define information that can be used to customize the setup of an instance. The AWS::CloudFormation::Init section under Metadata lets us declare the information we need to install and configure our instances. For example, we can automate the installation and configuration of a LAMP stack on our Amazon EC2 instance. As seen below, we declare two configs, Install and Configure, grouped under a configSet. Under the Install config, we declare the packages that we want to install and the package manager we want to use to install them (yum in this case).
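A minimal sketch of that Metadata section (the WebServer resource name, the InstallAndRun configSet name, and the package list are illustrative, not taken from the original template):

    WebServer:
      Type: AWS::EC2::Instance
      Metadata:
        AWS::CloudFormation::Init:
          configSets:
            InstallAndRun:          # run the Install config, then Configure
              - Install
              - Configure
          Install:
            packages:
              yum:                  # packages installed with the yum package manager
                httpd: []           # Apache web server
                php: []
                mariadb-server: []  # MySQL-compatible database
          Configure:
            services:
              sysvinit:
                httpd:
                  enabled: true     # start Apache on boot
                  ensureRunning: true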

Further down in the Amazon EC2 resource definition, the UserData property is where we define commands to run automatically when an instance starts. In this case, we update the AWS CloudFormation bootstrap package and then run the cfn-init command, which reads the AWS::CloudFormation::Init section where we defined the packages we want to install. We pass in the name of the AWS CloudFormation stack, the logical name of the resource, the configSet we want to run, and the region as command-line parameters.

After the cfn-init command, there is another AWS CloudFormation helper script called cfn-signal. This command takes the exit status (success or failure) of the cfn-init command and signals to the CreationPolicy whether the installation succeeded. The timeout in the CreationPolicy section tells AWS CloudFormation to wait five minutes for a success signal. If it doesn’t receive a signal in that period, AWS CloudFormation stops the stack creation and marks it as failed. Both pieces are sketched below.
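Continuing the sketch, the CreationPolicy and UserData on the same illustrative WebServer resource might look like this:

    WebServer:
      Type: AWS::EC2::Instance
      # Metadata section from above omitted
      CreationPolicy:
        ResourceSignal:
          Timeout: PT5M             # wait up to five minutes for a success signal
      Properties:
        # ImageId, InstanceType, and networking properties omitted for brevity
        UserData:
          Fn::Base64: !Sub |
            #!/bin/bash -xe
            # update the AWS CloudFormation bootstrap package
            yum update -y aws-cfn-bootstrap
            # install and configure packages declared under AWS::CloudFormation::Init
            /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} \
              --resource WebServer --configsets InstallAndRun --region ${AWS::Region}
            # report cfn-init's exit status to the CreationPolicy
            /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} \
              --resource WebServer --region ${AWS::Region}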

Repeatability

Once you have defined your infrastructure in an AWS CloudFormation template, you can repeatably create environments at any time. Here at Innovative Solutions, we have standard networking templates that can be used for any new project. This removes the human error involved in manually provisioning your infrastructure for each new project.

Compliance-ready

By default, an AWS CloudFormation stack allows update actions on all the underlying resources. To prevent unwanted changes, we can define a stack policy that ensures specified resources in the AWS CloudFormation stack cannot be updated (see the sketch below). There are also tools such as drift detection to verify that no one is changing the underlying infrastructure outside of CloudFormation. Ad hoc manual changes to the stack should never be permitted, because they could result in a non-compliant environment. Especially for a production environment, all changes should be run through the AWS CloudFormation template via a stack update.
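A minimal sketch of such a stack policy, which allows updates to everything except one protected resource (the logical ID ProductionDatabase is illustrative):

    {
      "Statement" : [
        {
          "Effect" : "Allow",
          "Action" : "Update:*",
          "Principal" : "*",
          "Resource" : "*"
        },
        {
          "Effect" : "Deny",
          "Action" : "Update:*",
          "Principal" : "*",
          "Resource" : "LogicalResourceId/ProductionDatabase"
        }
      ]
    }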

Source control

Another great part of having your infrastructure defined as code is that you can check it into source control just as you would any application code. This allows you and your team to see the history of templates and how they change over time. It also allows your team to collaborate on the development of templates.

Organizing and managing templates between teams

When starting out with AWS CloudFormation, you will probably put all your resources in one template. However, as your infrastructure gets more complex, this becomes unmanageable. For example, suppose a company has three teams working on a given application: a network team, an application development team, and a security team. Each team has multiple resources that they need to provision for the application. Let’s say the network team needs to make a change to the VPC resource defined in the AWS CloudFormation template. If the teams are sharing one template, this could cause confusion and unnecessary overlap. To solve this issue, the best practice is to create three separate templates, one for each team. This way, each team can manage its own template without needing to coordinate with the other teams before making changes to its resources.

Certainly, there will be resources that need to be shared and referenced between the three templates. To solve this, we can use cross-stack references, which allow values to be exported from one template and imported into another. For example, if the security team needs to reference the VPC defined in the network stack, it can do so by importing the VPC ID (provided the network stack exported it).

In the network stack, we export the ID of the ProdVPC resource:
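(A minimal sketch; the export name NetworkStack-ProdVPCId is illustrative.)

    Outputs:
      ProdVPCId:
        Description: ID of the production VPC, shared with other stacks
        Value: !Ref ProdVPC
        Export:
          Name: NetworkStack-ProdVPCId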

In the application stack, we import the VPC ID for use in defining the target group of our Elastic Load Balancer.
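A matching sketch on the importing side, using Fn::ImportValue to populate the target group’s VpcId (the names continue the illustration above):

    AppTargetGroup:
      Type: AWS::ElasticLoadBalancingV2::TargetGroup
      Properties:
        Port: 80
        Protocol: HTTP
        VpcId: !ImportValue NetworkStack-ProdVPCId  # value exported by the network stack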

Another best practice is to use nested stacks to re-use commonly used templates. Let’s say you have a network stack that is used in all the applications you create. Instead of defining the same network resources in each application template, you can make the network stack its own template, host it in Amazon S3, and then, wherever you need it, define a resource of type AWS::CloudFormation::Stack that points to the location of the network template.

Here is an example of what this looks like in a template:
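(The bucket name, parameter, and timeout below are illustrative.)

    Resources:
      NetworkStack:
        Type: AWS::CloudFormation::Stack
        Properties:
          # reusable network template hosted in Amazon S3
          TemplateURL: https://s3.amazonaws.com/example-templates-bucket/network-template.yaml
          Parameters:
            EnvironmentName: production
          TimeoutInMinutes: 10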

We define a CloudFormation stack resource that references a template file stored in Amazon S3.

How does Innovative Solutions leverage IaC?

Innovative Solutions has been leveraging AWS CloudFormation for years because of the many benefits it provides for our organization. We have developed many different templates, for networking, security, and other needs, that have been incrementally improved over the years. Having mature AWS CloudFormation templates at our disposal makes it easy to build infrastructure quickly and reliably. This allows us to save time and focus on the actual workload.

Our templates are stored in source control so they can be easily updated as services evolve. All past versions are tracked, which makes collaboration easy. For each project, we are able to leverage our AWS CloudFormation templates to deploy multiple identical stacks, one for each of our environments (dev, staging, production).

Cloud Development Kit

The Cloud Development Kit (CDK) is another excellent way to define your AWS infrastructure as code. In fact, the CDK abstracts away much of the complexity of AWS CloudFormation templates. It allows you to provision AWS resources in popular programming languages such as C#, Java, JavaScript, TypeScript, and Python, instead of creating a separate template file written in JSON or YAML. Using the CDK also lets you use the programming logic (if statements, for loops) that developers are comfortable with to help provision infrastructure resources. Writing ten lines of code using the CDK can produce hundreds of lines of an AWS CloudFormation template.

When you run your CDK app, an AWS CloudFormation template is synthesized (created). This doesn’t create any resources. The cdk deploy command actually creates the stack and the underlying resources.

Below is a sample Python CDK application that creates an SQS queue and an SNS topic. The queue is added as a subscription to the SNS topic so that it will receive messages when they are pushed to the SNS topic.
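A minimal CDK v1-style Python sketch of such an application (the stack and construct names are illustrative):

    from aws_cdk import core
    from aws_cdk import aws_sns as sns
    from aws_cdk import aws_sns_subscriptions as subs
    from aws_cdk import aws_sqs as sqs

    class QueueTopicStack(core.Stack):
        def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
            super().__init__(scope, id, **kwargs)

            # create the SQS queue
            queue = sqs.Queue(self, "MyQueue",
                              visibility_timeout=core.Duration.seconds(300))

            # create the SNS topic
            topic = sns.Topic(self, "MyTopic")

            # subscribe the queue to the topic so published messages land in the queue
            topic.add_subscription(subs.SqsSubscription(queue))

    app = core.App()
    QueueTopicStack(app, "queue-topic-stack")
    app.synth()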

As seen above, this is simple, easy-to-understand Python. A few lines of code like these expand into a CloudFormation template roughly 150 lines long! The CDK provides an amazing level of abstraction that organizations can adopt quickly, if they haven’t already.

IaC has forever changed how we create virtual infrastructure. Once an organization learns how to leverage IaC, it will never go back to manually creating virtual servers and configuring all the settings and services associated with them. Not only is that manual work extremely tedious, it also carries a high risk of human error. With the development of the CDK, there is less of a barrier to entry to leveraging IaC at your organization, and you can find numerous samples on the AWS website. There is some up-front work involved with IaC, but once you are up and running you will appreciate the multitude of benefits that come with it.

 

Do you still have questions about Infrastructure as Code (IaC)?

Feel free to contact us, we’d love the opportunity to further discuss anything you have read.

Contact us for more information

Content Management Systems on Amazon Web Services (AWS)

A content management system (CMS) is an application that allows users to become authors of their own content. An administrator of a CMS site can add new pages, text, and files, and completely own the structure and content of their website without any backend access or development knowledge. CMS sites can be efficiently hosted and maintained on various services provided by AWS. Many of these services improve the software development lifecycle (SDLC) of the application by securely storing application code, enabling frequent releases, providing highly available custom content, and allowing environments to be replicated using Infrastructure as Code.

AWS CodePipeline

AWS provides CI/CD tools that work seamlessly with CMS applications. It’s important for any application to have a well-defined release process, and AWS CodePipeline streamlines the build and deploy steps. AWS CodePipeline can use AWS CodeCommit, GitHub, or Amazon S3 as sources. Many open source CMS solutions have their source checked into GitHub, which makes tying these projects to AWS CodePipeline incredibly simple.

Developers can connect to GitHub or AWS CodeCommit through the AWS console under the AWS CodePipeline service, where they can select their source repository. They can then add build, deploy, custom, and manual approval actions as needed. Once in place, a change to the source repository automatically triggers the pipeline, and all defined steps are executed.

Environment automation with AWS CloudFormation and AWS Elastic Beanstalk

AWS provides two services to quickly spin up new environments and projects with the click of a button: AWS CloudFormation and AWS Elastic Beanstalk. Both can be used to create environments for CMS sites, providing different levels of automation and environmental control. Often a new environment needs to be spun up quickly to provide a new testing or QA site. In other scenarios, a brand-new application may need to be created whose functionality overlaps with previously created CMS sites and just needs to be customized for a client. AWS CloudFormation allows a developer to create templates that describe, in YAML or JSON, the environment’s specific resources such as Amazon EC2 servers, Amazon S3 buckets, or security groups. These templates only need to be created once and can then be reused or modified to quickly create new environments. AWS CloudFormation allows complete control over the environment’s resources, whereas AWS Elastic Beanstalk manages more of the environment.

AWS Elastic Beanstalk requires just a few pieces of information about the type of application being created, and then automatically creates all necessary resources for the environment. There is less control over the resources created by an AWS Elastic Beanstalk application, but the speed in creating an entire application stack means less time developers need to spend provisioning and configuring resources by hand. Both environment creation methods decrease the potential for human error caused by a manual process.

Amazon EBS, Amazon EFS, Amazon S3, and Amazon FSx for Decoupled Site Asset Storage

CMS applications often allow users to upload custom files such as media or CSS. By default, most CMS frameworks store these files on local disk. AWS provides many storage options, each with benefits and drawbacks, to store these assets: Amazon EBS, Amazon EFS, Amazon S3, and Amazon FSx.

Amazon EBS (Elastic Block Store) has two main disk type options: SSD and HDD. An SSD disk type will provide faster performance than an HDD. Amazon EBS volumes can be attached to both Linux and Windows servers, and EBS is typically the most performant solution. However, an Amazon EBS volume can only be attached to one EC2 instance at a time, meaning it cannot be used as shared storage in auto-scaling scenarios. If a CMS site will get a lot of traffic and needs to scale to maintain site performance, Amazon EBS would not be a good choice for storing dynamic content. Amazon EBS provides snapshots as a way of backing up content and restoring it to new Amazon EC2 instances.

Amazon EFS (Elastic File System) is similar to Amazon EBS but can be accessed by multiple Amazon EC2 instances. It is therefore useful when auto-scaling is needed for a heavily trafficked application. However, Amazon EFS cannot be mounted on Windows instances.

If auto-scaling is needed in a Windows environment, Amazon S3 (Simple Storage Service) and Amazon FSx are viable options. Amazon S3 differs in that it stores files as objects in buckets accessed via an API, instead of leveraging a file system mount. Replication can also be configured on a bucket to copy items from one bucket to another, providing a method of backing up or syncing site assets across environments. In general, Amazon S3 will be the cheapest option compared to the alternatives. The biggest decision factor is that Amazon S3 likely requires significant application changes, since it cannot be mounted as a file system.

Amazon FSx for Windows File Server works similarly to Amazon EFS but for Windows Servers. It provides a managed storage solution that can be attached to multiple instances and also provides some additional functionality like Active Directory integration.

Conclusion

The services mentioned above work together to provide a stable and efficient environment that accommodates a CMS application. AWS CodePipeline can manage the release process and is already integrated with popular version control providers like GitHub. CMS applications can utilize AWS CloudFormation and AWS Elastic Beanstalk, which enable new environments to be created quickly. AWS provides flexibility when choosing the appropriate storage platform: Amazon EFS, Amazon EBS, Amazon S3, and Amazon FSx cover most storage scenarios when running a CMS.

 

Do you still have questions about custom development or CMS?

Feel free to contact us, we’d love the opportunity to further discuss anything you have read.

Contact us for more information

Learn more about our AWS Cloud Service offering

Why Innovative Leverages DevOps

Innovative Solutions is a mid-sized company, but we often encounter communication and coordination problems at an enterprise scale. Internally, we have multiple development teams, each composed of members bringing a range of skill sets. Each team interacts with third-party vendors, providers, and clients, who often bring their own development teams with whom we collaborate. Quite often, clients are communicating directly with third parties as well. As the number of entities in this communication graph increases, the complexity of organizing and interacting grows, requiring structures and processes to be put in place to ensure efficient communication.

Over the years we have been employing and maturing our SDLC methodologies, following Agile practices, and incorporating the latest tools to help develop, deploy, and support our products and our clients. This has organically led us to leverage DevOps services driven by industry leaders including Amazon Web Services (AWS).

Innovative Solutions takes these ideas very seriously, and their adoption has helped us successfully navigate an increasingly complex ecosystem. In fact, Innovative takes this so seriously that we require every engagement to work toward an end goal of leveraging DevOps processes and tools. We’ve seen time and time again that when our partners understand the value of laying a proper foundation, we all win.

Leveraging automated build and CI pipelines has taken a burden off developers, freeing them to spend more time creating rather than waiting to see if tests pass. We heavily use AWS CloudFormation to automate our infrastructure setup in a repeatable manner. This makes spinning up a temporary lower environment almost instantaneous, with just one click versus the weeks it took just a few years ago.

Advanced monitoring and alerting enable Innovative’s team to identify small problems before they become big ones. Tools like Amazon CloudWatch, AWS X-Ray, and Datadog provide visibility into systems beyond anything we had in the past. We now leverage logs and metrics that were previously discarded to identify areas of opportunity, so we can continually improve the customer experience while providing tangible value to our clients.

DevOps and Regulation

The complexities of today’s interactions are exacerbated by ever-growing regulatory pressures. Innovative Solutions consistently partners with customers who must adhere to HIPAA, PCI, SOC, and other compliance requirements. Innovative has utilized AWS DevOps tools to create processes and controls that make compliance and audits more secure and successful.

The same build and deploy pipelines that facilitate our rapid development cycles also provide immutable packages we can promote from environment to environment. This helps us ensure that no bad actors have the ability to tamper with code on its way to production.

We leverage AWS CodePipeline’s manual approvals ensuring releases have the appropriate sign-off before moving forward. This allows us to put in place appropriate controls and separation of duties.

AWS Config gives us the ability to be notified when any part of our infrastructure deviates from the policies we have defined. If this happens, AWS CloudTrail makes it easy to perform root-cause analysis and correct the problem quickly. Our applications are assessed by Amazon Inspector to identify any deviations from internal standards.

 

A Must-Have for Any Business

DevOps is not something off on the horizon. The methodologies and tools are mature enough that it should be considered as part of the standard SDLC. If you’re not already practicing DevOps, the time is now. DevOps is not a nice-to-have, but a must-have for any business serious about long-term software development.

 

Do you still have questions about DevOps?

Feel free to contact us, we’d love the opportunity to further discuss anything you have read.

Contact us for more information

Learn more about our AWS Cloud Service offering

Why Innovative uses a CI/CD pipeline

Here at Innovative Solutions we take the mantra “always be learning” to heart. It’s evident in our approach to changing the way in which we deploy our applications. A well-structured CI/CD pipeline improves many aspects of development. Over the years we’ve made massive headway in learning from our previous deployment woes and adopting a more continuous and agile approach.

Before delving into what we’ve changed, I’d like to give a brief explanation of why we felt the need to make these changes: what were the pitfalls, and how did they impact us?

We’ve always had to be adaptable and agile with our deployment strategies; working with many clients who utilize a variety of technologies practically mandates it. But we found ourselves repeatedly running into the same problems.

Previously, one of our deployment strategies included manually combing through changes in our Git repositories to find updated files and copying them to distribution directories. Another deployment strategy required RDPing into jump boxes, then RDPing once again into each web server and pasting the build artifacts onto each. Then we manually went into IIS to update the directory that the hosted server pointed at.

Tasks like these were manually intensive and took time away from development. These deployment strategies were prone to error and often introduced defects into the shipped product. The effort involved meant that releases were infrequent, leading to a slower time to market. These deployment processes would leave developers scratching their heads, wondering whether a feature was broken or had simply never made it into the release due to an error in the manual deployment.

We knew something had to change, and fast. We began to implement new deployment strategies focused around continuous delivery and integration. Here’s how:

How does Innovative use CI/CD?

Working with many clients and a variety of technologies gives us a unique opportunity to learn which deployment strategy best suits a client and their needs. As such, we’ve been able to customize each of our CI/CD pipelines around a client, their needs, and their technology stack.

When dealing with a large application, a fully integrated build server with robust testing capabilities is required. We have an application configured to use TeamCity which performs continuous integration on artifacts that are deployed to AWS infrastructure. AWS CI/CD infrastructure allows you to painlessly integrate with third-party solutions such as Jenkins or Travis CI.

For smaller CMS sites, we leverage a cloud-native CI/CD pipeline based entirely in AWS. All the developer has to do is push their code to AWS CodeCommit. This triggers an AWS CodePipeline to orchestrate the CI/CD process. AWS CodePipeline is ideal for organizing the build and deployment flow and allows for easy step-by-step editing and visualization of the process flow.

AWS CodePipeline instructs AWS CodeBuild to both build the package and run specified tests each time a commit is pushed to our AWS CodeCommit repository. This allows for quick and continual testing and immediate notification if a build or test fails.

After all of the AWS CodeBuild tests have passed, we typically include a manual approval step in our AWS CodePipeline flow. As we work with several customers bound by regulations such as HIPAA, manual approvals help ensure that the code we’re deploying meets all process and control requirements.

AWS CodePipeline’s flexibility with manual approval steps and custom defined tests within the pipeline allow us to customize each CI/CD pipeline to specific client needs.

Some of our CMS sites are deployed to AWS Elastic Beanstalk. For these sites, ease of deployment and infrastructure management was key, and AWS Elastic Beanstalk fits those needs perfectly. AWS Elastic Beanstalk takes care of provisioning all of the underlying resources needed to run these sites, such as load balancing, auto scaling, and storage. Without the need to provision all these resources manually, we reduce the load on developers, who can focus on rolling out new features instead of worrying about infrastructure.

However, there are cases where you want more control over your deployment and AWS Elastic Beanstalk isn’t the best fit. Here, AWS CodePipeline is flexible enough to accommodate those needs. AWS CodePipeline with AWS CodeDeploy allows you to continuously deploy code to Amazon EC2 instances that aren’t managed by AWS Elastic Beanstalk. This lets us add CI/CD pipelines for customers who are already using Amazon EC2 instances to host their applications. Leveraging AWS CodeDeploy allows us to integrate their specific deployment process into a CI/CD pipeline without changing any of their underlying infrastructure.

What has Innovative learned?

While there are upfront costs, strategically focused business stakeholders can’t afford to neglect the long-term benefits of CI/CD.

According to Accelerate: State of DevOps 2018: Strategies for a New Economy, high-performing teams deploy 46x more frequently with 1/7th the error rate when compared to low-performing teams.

CI/CD pipelines save time and effort and preserve the mental health of developers. Here at Innovative, our team has fully embraced CI/CD, and using AWS technologies has made adoption that much easier.

 

Do you still have questions about CI/CD?

Feel free to contact us, we’d love the opportunity to further discuss anything you have read.

Contact us for more information

Learn more about our AWS Cloud Service offering

Why monitoring and logging are crucial to cloud computing success

Gone are the days when monolithic applications ran on a single on-premises server. The current landscape of cloud computing and microservices has made many aspects of computing easier, but monitoring and logging is not one of them. Instead of living on one web server, logs are now often highly distributed across many different systems.

With cloud computing and containerization becoming so popular, we often have short-lived resources that are spun up briefly and then destroyed once they have served their purpose. This makes logging even more challenging, because you need to capture and store the logs elsewhere before the resources are destroyed. Not only are the logs distributed across various systems, but there is also an increasing number of them.

Computing systems produce logs that can give insight into the current system state. Most companies store log files because “that’s what you’re supposed to do.” However, the only time companies look at them is when a system fails, at which point they are already in panic mode trying to figure out what went wrong. Then they begin to sort through the various systems’ logs, trying to find the relevant ones. This becomes a nightmare when the logs are spread across multiple systems.

If you don’t have all your logs in a centralized place where you can efficiently sort and filter through them, you won’t be able to troubleshoot problems quickly. The information exists, but because there is no easy way to decipher it, you’re essentially looking into a black box.

Innovative Solutions’ legacy process for monitoring and logging

To see the logs our web servers and databases were producing, we had to log into each web server and database individually. This was very time-consuming and inefficient. We didn’t have a simple way to monitor our applications. We were reactive to issues reported by customers, rather than proactively identifying and acting on issues before they happened.

Current use of monitoring and logging leveraging AWS

One logging and monitoring pattern that we leverage today is to aggregate log files and metrics in third-party cloud-native monitoring platforms such as Datadog. Logs are collected in Amazon CloudWatch and then pulled into Datadog via the Datadog AWS integration. Agents running on compute instances also push logs into Datadog. This allows us to monitor and analyze our production environment in near real time.

Purposeful Logging

The goal for a logging system is not to collect logs like trading cards; it’s to use them to carry out automated actions and achieve high system visibility. Amazon CloudWatch provides log storage in a centralized location, so you don’t need to go searching across various systems. However, just storing logs in a centralized location isn’t enough. Amazon CloudWatch provides services that allow you to monitor your systems and take action based on events and alarms. Amazon CloudWatch events and alarms integrate with many other AWS services such as Auto Scaling Groups, Amazon SNS, Amazon SQS, AWS CodePipeline, AWS Lambda, and many more.

Monitoring in Amazon CloudWatch

With Amazon CloudWatch you can track system metrics for your instances (e.g. CPU utilization) and display them on a dashboard. This allows you to see the health of your application without needing to dig through thousands of log files. We check our dashboards to make sure our systems are operating nominally. A simple dashboard showing, for example, healthy host count, consumed RCUs and WCUs, and incoming log events can give you a quick view of how your systems are performing with minimal effort.

Amazon CloudWatch Alarms

Amazon CloudWatch alarms enable us to scale up and down based on thresholds for metrics identified as application bottlenecks. In this example, I created an Amazon CloudWatch alarm that watches the average CPUUtilization metric for all the instances in an Auto Scaling Group. If the average CPU utilization goes above 60%, the alarm is triggered, which sends a message to an SNS topic and adds an instance to the Auto Scaling Group.
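An equivalent alarm can be created from the CLI; a minimal sketch, with a placeholder account ID and region in the SNS topic ARN:

    aws cloudwatch put-metric-alarm \
      --alarm-name awsec2-devops-competency-CPU-Utilization \
      --namespace AWS/EC2 \
      --metric-name CPUUtilization \
      --dimensions Name=AutoScalingGroupName,Value=devops-competency \
      --statistic Average \
      --period 300 \
      --evaluation-periods 1 \
      --threshold 60 \
      --comparison-operator GreaterThanThreshold \
      --alarm-actions arn:aws:sns:us-east-1:123456789012:Default_CloudWatch_Alarms_Topic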

This creates the Amazon CloudWatch alarm and points its notification action at the ‘Default_CloudWatch_Alarms_Topic’ SNS topic.

Then we added the auto scaling action so that our Auto Scaling Group adds one instance when the alarm (awsec2-devops-competency-CPU-Utilization) is triggered.

In the alarm’s CloudWatch graph, a horizontal line represents the 60% CPUUtilization threshold, and a second line tracks the CPUUtilization of the Amazon EC2 instances in the Auto Scaling Group. When the threshold is crossed, the alarm is triggered and the configured actions run. To test the alarm, I logged into an Amazon EC2 instance and ran a Python script that simulates high CPU load.

For this CloudWatch alarm, we have set up two actions: a message to the Amazon SNS topic ‘Default_CloudWatch_Alarms_Topic’, and an auto scaling action that adds one instance to the Auto Scaling Group we have specified, ‘devops-competency’.

After the alarm was triggered, the message was sent to the SNS topic, which then sent an email to the subscribers of that topic. I set my email address as a subscriber to the topic and received an email notification describing the alarm state change.

Having the appropriate people notified when an alarm is triggered is nice, but you also want to pair that with an appropriate automated action, such as scaling up your Auto Scaling Group. After the alarm was triggered, the capacity of the Auto Scaling Group was increased from one instance to two.

CloudWatch Events

Amazon CloudWatch provides many types of events. One example is the events created upon state transitions in AWS CodePipeline.

When you create a pipeline via AWS CodePipeline you are given the option to use Amazon CloudWatch Events as a change detection option in the source stage. This means that when I push my code to the source repository, the pipeline automatically starts and runs through all the subsequent stages.

You can also configure Amazon CloudWatch events to watch your AWS CodePipeline and receive notifications based on state changes. I created a CloudWatch event rule that would detect when the AWS CodePipeline Execution State changed to a FAILED state.
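The event pattern for such a rule follows the standard shape of CodePipeline execution events:

    {
      "source": ["aws.codepipeline"],
      "detail-type": ["CodePipeline Pipeline Execution State Change"],
      "detail": {
        "state": ["FAILED"]
      }
    }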

After you specify an event source, you can specify the target of what you want to happen when a rule is matched. In this case, I set an Amazon SNS topic (called ‘codepipeline-failed’) as the target.

Then I created an AWS Lambda function that would send a notification to a specified Slack channel, saying that an AWS CodePipeline stage has failed. The members of the Slack channel will see this and can take the appropriate actions to figure out what went wrong.

I also set the AWS Lambda function as a subscriber to the Amazon SNS topic so that when a message is sent to the Amazon SNS topic the AWS Lambda function will be automatically triggered.
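A minimal sketch of such a function, assuming a Slack incoming webhook (the webhook URL is a placeholder you would supply):

    import json
    import urllib.request

    # placeholder: your Slack incoming-webhook URL
    SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T0000/B0000/XXXXXXXX"

    def lambda_handler(event, context):
        # the CloudWatch event is delivered inside the SNS message body
        detail = json.loads(event["Records"][0]["Sns"]["Message"])["detail"]
        text = "AWS CodePipeline '{}' entered state: {}".format(
            detail["pipeline"], detail["state"])
        # post the message to the Slack channel via the webhook
        request = urllib.request.Request(
            SLACK_WEBHOOK_URL,
            data=json.dumps({"text": text}).encode("utf-8"),
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(request)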

The corresponding message then appears in the Slack channel.

By using Amazon CloudWatch events, we now have a better integrated CI/CD pipeline. If a developer pushes code to our AWS CodeCommit repository and any stage of the AWS CodePipeline fails, our team will be automatically notified in our Slack channel.

Do you still have questions about monitoring and logging?

Feel free to contact us, we’d love the opportunity to further discuss anything you have read.

Contact us for more information

Learn more about our AWS Cloud Service offering

Managing Microsoft patching for your EC2 instances

Managing Microsoft updates manually or through WSUS can be challenging in large environments. AWS Systems Manager allows you to manage all your AWS EC2 infrastructure from a single pane of glass.

Within this single pane of glass, you can view your EC2 inventory, patch baselines, and compliance against those baselines, and run ad hoc scans and patch installs. Streamlining the update process with maintenance windows and automation brings repeatability and stability to the monthly Microsoft update cycle.

From a high level, here are the steps to set up this service:

1.  Create an IAM role with the AmazonSSMFullAccess policy to allow Systems Manager to manage your EC2 instances. The SSM Agent, which Systems Manager uses to perform these functions, is pre-installed on Amazon-provided EC2 AMIs.

2.  Add the newly created IAM role to your EC2 instances.

3.  In a short time, you will begin to see the systems check into Inventory under Systems Manager.

4.  Create your patch baseline. You can use the default, or create your own and set it as the new default.

5.  Create a schedule so the machines are patched automatically. If you want to patch different instances at different times, consider using tags to set up different schedules.

After configuration, Patch Manager uses Run Command to invoke the AWS-RunPatchBaseline document, which evaluates which patches should be installed on each target instance according to its operating system, either on demand or during the defined schedule (maintenance window).
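For example, an ad hoc compliance scan can be started through Run Command; a minimal sketch, assuming instances tagged with an illustrative Patch Group value:

    aws ssm send-command \
      --document-name "AWS-RunPatchBaseline" \
      --targets "Key=tag:Patch Group,Values=Production" \
      --parameters "Operation=Scan"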

Have A Question?

Innovative Solutions is an Advanced Consulting Partner with expertise in Microsoft Workloads. Innovative is a service delivery partner for Windows on EC2 and part of the Service Delivery Program.

Learn more about our CLOUD services

Contact us to START THE conversation

Desktop as a Service in AWS

DaaS in AWS – A perfect harmony of rapid deployment and IT Management

In most organizations, the IT department is responsible for deploying desktops to end users, a routine task that can be optimized from both a cost and a time standpoint.

With Amazon WorkSpaces, deploying desktops via the GUI (up to 20 at a time) or via the CLI (up to 25 at a time) can be accomplished very quickly for an organization of any size. You can use Amazon WorkSpaces to provision either Windows or Linux desktops in just a few minutes and quickly scale to provide thousands of desktops to workers across the globe.

You can pay either monthly or hourly, just for the WorkSpaces you launch, which helps you save money when compared to traditional desktops and on-premises VDI solutions.

Amazon WorkSpaces helps you eliminate the complexity in managing hardware inventory, OS versions and patches, and Virtual Desktop Infrastructure (VDI), which helps simplify your desktop delivery strategy. With Amazon WorkSpaces, your users get a fast, responsive desktop of their choice that they can access anywhere, anytime, from any supported device.

With the end of Windows 7 support in 10 months, there has never been a better time for organizations to migrate their managed IT desktops to Windows 10.

From a high level, here are the steps to get started with WorkSpaces, all of which can be accomplished within an hour.

1.  Create an AWS account and set it up following best practices.

2.  Create the VPC, subnets, and any additional networking. For this step, using a CloudFormation template can save time and reduce the potential for provisioning errors.

3.  When setting up the VPC, configure it with one public subnet and two private subnets, with a NAT gateway providing outbound access for the private subnets.

4.  Amazon WorkSpaces uses a directory, such as Simple AD, AD Connector, or AWS Managed Microsoft AD, to authenticate users. Users access their WorkSpaces using a client application from a supported device or, for Windows WorkSpaces, a web browser, and they log in with their directory credentials. The login information is sent to an authentication gateway, which forwards the traffic to the directory for the WorkSpace. After the user is authenticated, streaming traffic is initiated through the streaming gateway.

Note: You can also run Microsoft AD on a standard EC2 instance and manage it yourself, but strongly consider using a managed AWS service to reduce costs and overall management overhead.

5.  Client applications use HTTPS over port 443 for all authentication and session-related information, and port 4172 for pixel streaming to the WorkSpace and for network health checks. Each WorkSpace has two elastic network interfaces (ENIs) associated with it: an ENI for management and streaming (eth0) and a primary ENI (eth1). The primary ENI has an IP address provided by your VPC, from the same subnets used by the directory, which ensures that traffic from your WorkSpace can easily reach the directory. Access to resources in the VPC is controlled by the security groups assigned to the primary ENI.

6.  The AWS documentation includes an architectural diagram that illustrates how all of these components fit together within WorkSpaces.

7.  To create a WorkSpace from the CLI, run the create-workspaces command with a JSON input file. This example creates a WorkSpace for user jimsmith in the specified directory, from the specified bundle:

    aws workspaces create-workspaces --cli-input-json file://create-workspaces.json

This is the contents of the create-workspaces.json file:

Input:

    {
      "Workspaces": [
        {
          "DirectoryId": "d-906732325d",
          "UserName": "jimsmith",
          "BundleId": "wsb-b0s22j3d7"
        }
      ]
    }

Output:

    {
      "PendingRequests": [
        {
          "UserName": "jimsmith",
          "DirectoryId": "d-906732325d",
          "State": "PENDING",
          "WorkspaceId": "ws-0d4y2sbl5",
          "BundleId": "wsb-b0s22j3d7"
        }
      ],
      "FailedRequests": []
    }

Reference link – https://docs.aws.amazon.com/cli/latest/reference/workspaces/create-workspaces.html

Have a question?

Innovative Solutions is an Advanced Consulting Partner with expertise in Microsoft Workloads. Innovative is a service delivery partner for Windows on EC2 and part of the Service Delivery Program.

Learn more about our CLOUD services

Contact us to START THE conversation

Migrating to AWS

May Be Easier Than You Think

Many companies are migrating their workloads to AWS as-is to take advantage of the capabilities of the cloud. Moving your existing workload allows you to retire aging hardware that is prone to failures.

Microsoft Windows is typically supported for roughly 10 years, meaning the hardware is usually due for a refresh before you are forced to go through the process of reinstalling and migrating the applications.

AWS has a variety of tools and automation to move an on-premises workload to AWS. One of the most common tools is AWS Server Migration Service (SMS). There are no additional fees to use AWS SMS; you pay the standard fees for the S3 buckets, EBS volumes, and data transfer used during the migration process, and for the EC2 instances that you run.

From a high level, here are the steps:

1.  Create an AWS account and set it up following best practices.

2.  Create the VPC, subnets, and any additional networking. For this step, using a CloudFormation template saves a lot of time.

3.  Review the SMS requirements and create the necessary permissions. Here again, a CloudFormation template will save time.

4.  Set up the SMS appliance on VMware or Hyper-V.

5.  Create a replication job and monitor its progress. For Windows instances, you can choose to bring your own licenses or use pay-as-you-go licensing through AWS.

6.  Once the job is finished, you can create new instances from the AMIs generated by SMS. This is typically done twice: once to test, and a second time after a final sync completes during the cutover window.

AWS SMS allows you to migrate your existing on-premises workloads to AWS with little to no downtime. CloudFormation allows you to automate some of the steps to eliminate mistakes and quickly deploy the basic resources.

Have A Question?

Innovative Solutions is an Advanced Consulting Partner with expertise in Microsoft Workloads. Innovative is a service delivery partner for Windows on EC2 and part of the Service Delivery Program.

Learn more about our CLOUD services

Contact us to START THE conversation

SQL 2008 End of life

Choosing the best path when migrating unsupported Microsoft SQL 2008 to AWS

On July 9, 2019, support for SQL Server 2008 and 2008 R2 will end. After this date, Microsoft will no longer provide product enhancements (service packs/CUs) or security updates for these products.

There are two main ways to run SQL Server on AWS:

  • Amazon RDS for SQL Server – A fully managed service that offers several versions of SQL Server while offloading database administration tasks like managing backups, detecting failures and recovering, and much more.
  • Amazon EC2 running Windows Server and SQL Server – This option offers the flexibility to run a database while controlling the underlying operating system. You can run it for as much, or as little, time as you need, with complete control over the Windows settings.

SQL Server Versions

SQL Server comes in a variety of versions, so it is important to compare the features to make sure you are running the right one. Picking too large a feature set could saddle your business with unnecessary licensing costs. View a high-level comparison of the different versions

Instance Types

AWS offers a variety of instance types for both EC2 and RDS. Picking the right instance type can also save you money. More importantly, matching the instance type and size to the workload will improve performance. Learn more about: EC2 instance types and RDS instance types

Innovative Solutions is an Advanced AWS Consulting Partner and Solution Provider helping SMBs migrate to AWS. Our team of certified cloud architects has the expertise in both Microsoft and AWS to build scalable, reliable, and cost-effective services designed specifically for each business’s needs.

Have a question or want to learn more?

Learn more about the end-of-life for SQL 2008 + Microsoft Server 2008 – Click Here

Learn more about our AWS services – Click Here

Contact us today to get the conversation started – Click Here
