
I thought I would kick off 2022 with a look at one of the new and interesting things announced late last year by AWS. The Well-Architected Framework, which previously consisted of five pillars, now has a sixth.

As Datacom helps customers with Well-Architected Reviews, questions come up. In this case, someone asked me during a discussion with a customer whether we could include Sustainability in their review, and at the time AWS had only just announced it. So it made me think:

What is the Sustainability Pillar?

In short, the Sustainability Pillar is an attractive idea for those who run Cloud workloads and worry about their environmental impact: green businesses or government departments that want to do the right thing by the environment (where possible) and minimise their carbon footprint. The Sustainability Pillar brings a clear understanding of how a customer's use of the Cloud can still have an environmental impact they will want to control.

Design Decisions Relevant for Sustainability

For those who, like me, have spent a lot of time understanding and mastering the Well-Architected Framework, you might see some overlap between designing for Sustainability and Cost Optimisation. Some considerations are similar:

  • Right-size workloads – Don't make the old data centre mistake of designing 100% for peak load even though you might only need it 1% of the time. That leaves CPU and other resources being consumed wastefully. From a cost point of view it adds cost, and from a sustainability point of view it increases your footprint in the AWS data centres (AWS may need more resources, such as power, to handle that compute in their infrastructure).
  • Scale workloads – As I said above, there is a good amount of overlap here between sustainability and cost. The general design principle for sustainability is to manage your workload footprint, and scaling is very important here, as running workloads when they are not needed is impactful to both cost and the environment. If your workload has a predictable pattern, scale accordingly. If it has an unpredictable pattern, ensure the appropriate dynamic scaling capabilities are in place. If your workloads are under-utilised, automatically scaling them in minimises the impact on the environment (and cost).
  • Use AWS Managed Services – Anyone who knows me or listens to me talk geek-speak would know that I am a big advocate for moving off Infrastructure as a Service (IaaS) where possible. The Sustainability Pillar has considerations for the use of managed services, as shared AWS managed services (such as Serverless/Fargate/File Transfer) reduce the power that would be required to deliver the same service if it were run on EC2.
  • Keep modernising EC2 and other service types – AWS releases new generations of EC2 roughly every 18 months, and with them come improved performance and other benefits. In some cases improved efficiency will also allow you to shrink your EC2 footprint, because you might be able to consolidate instances.
  • Use regions close to end users – This is not so relevant for me in Australia yet due to data sovereignty (a second region is expected to open in Melbourne in 2024), but in North America or Europe, with multiple regions, you can reduce your environmental impact by choosing regions where AWS has renewable energy projects. AWS uses renewable energy in a number of countries to offset its power consumption, and the general locations of these projects are public (see: Amazon Around the Globe). From my point of view in Australia, even though there is only one region, it is already located near renewable energy projects, which means using ap-southeast-2 meets the spirit of the Sustainability Pillar.
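The right-sizing and scaling ideas above can be sketched as a simple check: look at average utilisation and flag instances that could drop a size (or need a bigger one). A minimal illustration in Python — the thresholds and the instance-size ladder are my own assumptions for the example, not AWS guidance:

```python
# Toy right-sizing check: flag instances whose average CPU utilisation
# suggests moving down (or up) an instance size. Thresholds and the
# size ladder are illustrative assumptions, not AWS recommendations.

SIZE_LADDER = ["t3.small", "t3.medium", "t3.large", "t3.xlarge"]

def rightsize_recommendation(instance_type: str, avg_cpu_percent: float) -> str:
    """Return a suggested instance type based on average CPU utilisation."""
    idx = SIZE_LADDER.index(instance_type)
    if avg_cpu_percent < 20 and idx > 0:
        return SIZE_LADDER[idx - 1]       # under-utilised: step down a size
    if avg_cpu_percent > 80 and idx < len(SIZE_LADDER) - 1:
        return SIZE_LADDER[idx + 1]       # running hot: step up a size
    return instance_type                  # already about right

print(rightsize_recommendation("t3.medium", 12.0))  # t3.small
```

In practice you would feed this from real utilisation data (e.g. CloudWatch CPU metrics) rather than a hard-coded figure, but the decision logic is the same.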


Designing Data Solutions for Sustainability

As I mentioned above, when I started to look at how to architect data-handling solutions in AWS factoring in the Sustainability Pillar, I found significant overlap between cost management and sustainability. Whilst the goals may differ, a simplified approach can achieve both.

SUS 4:  How do you take advantage of data access and usage patterns to support your sustainability goals?

The guts of this question is lifecycle management and utilisation of storage. Pretty much all storage is in scope. In the past I found many customers over-provisioned EBS volume capacity (block storage) because gp2 had a fixed IOPS rate (3 IOPS per GB); with the introduction of gp3 this problem is solved. If you want higher storage performance you no longer need to over-provision capacity or buy Provisioned IOPS storage to get it.

This means you no longer need to over-provision storage, so both cost and sustainability benefit.
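The gp2 over-provisioning problem is easy to see with a bit of arithmetic. Because gp2 ties baseline IOPS to capacity, hitting an IOPS target could mean buying far more storage than the data needed; gp3 decouples the two. A quick sketch (the data size and IOPS target are made-up figures for illustration):

```python
# gp2 ties baseline IOPS to capacity (3 IOPS per GB), so hitting an IOPS
# target often meant provisioning far more storage than the data needed.
# gp3 decouples IOPS from capacity. Figures below are illustrative.

GP2_IOPS_PER_GB = 3

def gp2_capacity_for_iops(target_iops: int) -> int:
    """GB of gp2 you had to provision to reach a baseline IOPS target."""
    return -(-target_iops // GP2_IOPS_PER_GB)  # ceiling division

data_needed_gb = 200
target_iops = 9000
gp2_gb = gp2_capacity_for_iops(target_iops)
wasted_gb = gp2_gb - data_needed_gb

print(f"gp2: provision {gp2_gb} GB to get {target_iops} IOPS "
      f"({wasted_gb} GB of unused capacity)")
# With gp3 you would provision just the 200 GB and set the IOPS directly.
```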

AWS best practice also outlines that you should use shared storage where possible to reduce waste. With Linux-based operating systems and several managed services such as ECS, File Transfer, etc., this means using Elastic File System (EFS) for performant data, or S3 where performance isn't an issue. Shared storage is valuable because it gives you a single source of truth, which means duplicated files are no longer a burden. Both of these storage solutions support lifecycling data to cheaper (less performant) tiers when it is no longer needed for frequent access. Lifecycle management of data is critical in the Sustainability Pillar: the question above really wants to see data moved to the appropriate storage tier, which has both a sustainability and a cost benefit, and data that is no longer required deleted.
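The kind of lifecycle management this question is driving at can be expressed as an S3 lifecycle rule: tier data down as access cools, then expire it. A sketch of the rule document you would apply with boto3's `put_bucket_lifecycle_configuration` — the prefix, day counts, and bucket name are assumptions for the example, not recommendations:

```python
# Sketch of an S3 lifecycle rule: transition data to cheaper storage
# classes as access cools, then delete it. Prefix, day counts, and the
# bucket name are illustrative assumptions.

lifecycle_config = {
    "Rules": [
        {
            "ID": "tier-down-and-expire",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access
                {"Days": 90, "StorageClass": "GLACIER"},      # archive
            ],
            "Expiration": {"Days": 365},  # delete data no longer required
        }
    ]
}

# Applied with boto3 (not run here):
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-bucket", LifecycleConfiguration=lifecycle_config)
```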

Designing Compute Solutions for Sustainability

AWS is a cloud service provider that really puts an emphasis on using compute power ONLY when you need to.

Let's look at what the sustainability question asks:


SUS 5:  How do your hardware management and usage practices support your sustainability goals?

Anyone who knows me and works with me will know I am a stickler for pushing customers to keep updating and modernising their approach to hardware management. I believe the following key principles apply (particularly around EC2):

  • Keep your workloads on current-generation hardware (or at worst, n-1). AWS consistently improves hardware efficiency, and you may even be able to downsize. As an example, for one customer's Linux appliances I moved from a t2.medium to a t3.small and noticed no performance loss.
  • Scale, scale, scale – I am a big believer that when you are not using capacity you should take it out to reduce cost. My old team leader really stuck the “Pets vs Cattle” idea in my head, and I think it's right on point. If your application architecture is designed appropriately, you should be able to blow away EC2 instances without issue. This is key for auto scaling: scale in when demand is low and scale out when it's high.
  • Scale up – One way I think you can benefit in designing sustainably is to reduce the number of EC2 instances and instead use a larger instance type (instead of two or three smaller ones). This will reduce your compute impact on the environment, as a larger instance requires less power than multiple smaller ones making up the same resource capacity. This of course requires that you are right-sizing the workloads, but if you have two instances running at 70% CPU you might be better off with one of the next size up. Ensure that any changes of this nature are tested appropriately.
  • Where possible, leverage AWS Managed Services. I will soon be writing a blog on a plan I am putting together for a customer (I'll link it when I do) to replace a Linux-based SFTP solution in a highly available architecture with the AWS File Transfer service (made possible through improvements AWS has made to the service architecture, such as EFS support).
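The "scale up" reasoning above can be sketched as a quick feasibility check: would the combined load of several small instances fit on one instance of roughly double the capacity, with headroom to spare? A minimal illustration — the 80% headroom ceiling and the "next size up is 2x capacity" assumption are my own simplifications:

```python
# Toy check for consolidating several small instances onto one instance
# of the next size up. Assumes the next size up has ~2x the capacity and
# uses an 80% headroom ceiling; both are illustrative simplifications.

def fits_one_larger(count: int, avg_cpu_percent: float,
                    headroom_ceiling: float = 80.0) -> bool:
    """True if `count` instances at `avg_cpu_percent` average CPU would
    fit on a single instance with twice the per-instance capacity."""
    total_load = count * avg_cpu_percent      # load in small-instance units
    larger_capacity = 2 * 100.0               # next size up ~= 2x capacity
    projected_cpu = total_load / larger_capacity * 100.0
    return projected_cpu <= headroom_ceiling

print(fits_one_larger(2, 70.0))  # two at 70% -> 70% on the larger: True
print(fits_one_larger(3, 70.0))  # three at 70% -> 105%: False
```

As the bullet says, treat a check like this as a starting point and test any consolidation properly before committing to it.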

Of course, when considering these items, it is important to ensure you are still maintaining reliability (a multi-AZ approach).
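One way to keep that multi-AZ reliability while scaling in aggressively is to pin the Auto Scaling group's minimum to one instance per Availability Zone. A sketch of the request shape for boto3's `create_auto_scaling_group` — all names and subnet IDs are placeholders, and nothing is sent to AWS here:

```python
# Scaling in shouldn't undermine reliability: keep at least one instance
# per Availability Zone. Sketch of the request shape for boto3's
# autoscaling create_auto_scaling_group; names/IDs are placeholders
# and the call itself is left commented out.

subnet_ids = ["subnet-aaa111", "subnet-bbb222", "subnet-ccc333"]  # one per AZ

asg_request = {
    "AutoScalingGroupName": "example-asg",
    "LaunchTemplate": {"LaunchTemplateName": "example-template",
                       "Version": "$Latest"},
    "MinSize": len(subnet_ids),        # never scale in below one per AZ
    "MaxSize": 12,
    "DesiredCapacity": len(subnet_ids),
    "VPCZoneIdentifier": ",".join(subnet_ids),  # spread across the AZs
}

# boto3.client("autoscaling").create_auto_scaling_group(**asg_request)  # not run
print(asg_request["MinSize"])  # 3
```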


That's all for this blog (a lengthy one, perhaps), but I plan to revisit this topic considerably throughout 2022 and incorporate it into my DevOps architecture blogs as they come up.