There’s a metaphor that has been applied to everything from the human body to rotational motion (in physics) to customer relationship management. It’s the humble bucket, that basic tool you find in most households that helps to hold or carry liquids and other materials. Buckets have been used as a symbol to describe how we can add to or take away from our capacity to handle things, as well as the vessels we use – literally or figuratively – to contain our issues, emotions, basic functions, etc.
Buckets can also apply to IT, where they represent containers or zones into which data is divided. In Amazon Web Services (AWS) environments, Amazon Elastic Compute Cloud (EC2) provides secure, resizable compute capacity in the cloud. EC2 is designed to make web-scale cloud computing easier for developers by delivering a simple web service interface that allows users to obtain and configure capacity with minimal friction. For businesses using AWS cloud services, EC2 offers reliable, scalable infrastructure on demand that can increase or decrease capacity within minutes, as opposed to hours or days.
But choosing a virtual machine within AWS environments is not as easy as just deciding on the number of virtual CPUs and the amount of memory you need. Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. “Instance types” comprise varying combinations of CPU, memory, storage, and networking capacity, giving users the flexibility to select the right mix of resources for their particular applications.
There are numerous “instance classes” available, each with specific characteristics and quirks. If your requirements dictate a machine with high-performance memory or large storage throughput, the choice of instance class is obvious. Often, however, it is not quite that straightforward for general purpose machines. That’s why AWS provides two separate classes for this: T class and M class (we’ll get into the definitions in a minute). At first glance they look similar, but the differences are significant and could have big repercussions on your monthly bill.
M class machines are intended for processor workloads that remain relatively high for long periods of time, whereas T class machines are designed for applications where the processor workload bursts periodically. Although both instance classes are designed for general purpose workloads, the hourly cost for T class instances reflects this intended usage.
If you have an application where it is known that the CPU use will maintain a high average, then M class machines provide the best option. However, if your application has high CPU use only for short periods of time, then T class machines can provide a significant cost savings. The AWS Cost Calculator can help you more closely determine your costs.
T class instances are designed to provide a baseline level of CPU performance with the ability to burst to a higher level when required by your workload. This is achieved through the concept of CPU credits. Essentially, credits accrue at a constant rate: in every hour the CPU runs below its defined baseline threshold, the credit balance grows, while in every hour it runs above the threshold, the balance is spent down. There is a maximum number of CPU credits that can be accumulated, and any earned above this limit are discarded. Each machine type within the class has its own values for the credit earn rate, the baseline threshold, and the maximum CPU credit limit.
Here’s where the metaphor comes in: the concept of CPU credits can be represented as a bucket. A dripping tap fills the bucket at a constant rate (defined by the EC2 instance type). During usage, actual CPU activity drains the bucket: the higher the CPU activity, the faster the outflow. As long as there is water in the bucket (available CPU credits), all is good; but what happens when the bucket is empty? EC2 instances offer two modes for this: Standard and Unlimited.
1. Standard Mode
In Standard mode, whenever an instance has exhausted its supply of CPU credits, AWS will automatically throttle CPU utilization down to the baseline threshold. On some of the smaller instances this can be as low as 10%, which could clearly cause performance issues for an application. As utilization drops below the baseline threshold, credits will again accrue, allowing the machine to burst again.
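To make the mechanics concrete, here is a minimal Python sketch of the credit bucket in Standard mode. The earn rate, baseline, and bucket size below are hypothetical values for illustration, not the figures for any specific AWS instance type:

```python
# Illustrative simulation of the CPU-credit "bucket" for a burstable
# instance in Standard mode. All numbers are hypothetical placeholders.

EARN_RATE = 6.0      # credits earned per hour (the dripping tap)
BASELINE = 0.10      # baseline utilization; note EARN_RATE == 60 * BASELINE
MAX_CREDITS = 144.0  # bucket capacity; credits earned beyond this are discarded

def simulate(hourly_utilization, credits=MAX_CREDITS):
    """Track the credit balance hour by hour. One credit = one vCPU-minute
    at 100%, so an hour at utilization u costs 60*u credits."""
    history = []
    for wanted in hourly_utilization:
        credits += EARN_RATE - 60 * wanted
        actual = wanted
        if credits < 0:
            # Coarse approximation: if the bucket runs dry during the
            # hour, treat the whole hour as throttled to the baseline.
            actual = BASELINE
            credits = 0.0
        credits = min(credits, MAX_CREDITS)   # overflow drips away
        history.append((actual, round(credits, 1)))
    return history

# A workload that idles at 5%, then bursts to 80% for several hours:
profile = [0.05] * 3 + [0.80] * 4
for hour, (util, balance) in enumerate(simulate(profile, credits=100)):
    print(f"hour {hour}: ran at {util:.0%}, {balance} credits left")
```

Running the sketch shows the balance climbing slowly during the idle hours, draining quickly during the burst, and the instance pinned at the baseline once the bucket is empty.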
2. Unlimited Mode
Unlimited mode introduces the concept of Surplus CPU credits. Should the CPU credit pool be depleted while the CPU still runs above the threshold, the machine will spend Surplus credits. When the machine later drops below the baseline threshold, earned CPU credits repay the Surplus credits first. The concept of Surplus CPU credits allows AWS to balance out the average CPU utilization over a 24-hour period.
There’s a catch though: at the end of any 24-hour period, any surplus CPU credit balance is billed at a flat rate per vCPU per hour.
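As a sketch, the 24-hour Unlimited-mode accounting might look like the following. The per-vCPU-hour rate and the credits-to-vCPU-hour conversion are assumptions for illustration; check current AWS pricing for the real numbers:

```python
# Rough sketch of Unlimited-mode surplus accounting over a 24-hour window.
# Both constants below are assumed values, not official AWS figures.

CREDITS_PER_VCPU_HOUR = 60   # one credit = one vCPU-minute at 100%
SURPLUS_RATE = 0.05          # assumed flat charge per vCPU-hour of surplus

def unlimited_day(hourly_spend, hourly_earn):
    """Return (unpaid surplus credits, extra charge) for one 24h period.
    Earned credits repay surplus before anything is billed."""
    balance = 0.0  # positive = credits banked, negative = surplus owed
    for spend, earn in zip(hourly_spend, hourly_earn):
        balance += earn - spend
    surplus = max(0.0, -balance)             # surplus left unpaid at day end
    charge = round((surplus / CREDITS_PER_VCPU_HOUR) * SURPLUS_RATE, 4)
    return surplus, charge
```

For example, an instance that earns 6 credits an hour but spends 10 every hour of the day ends the period 96 credits in surplus, and that shortfall is what gets billed at the flat rate.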
So, which should you choose?
There’s no simple answer to this; the choice depends entirely on how your workload will perform, and what price point you want to achieve. Within the burstable class, each instance has different baseline thresholds and CPU credit earn rates, which means a generic calculation is not possible.
A workload with a high average CPU utilization (above the instance’s breakeven point) is better suited to an M class machine with a fixed resource cost. A workload whose average utilization remains below the baseline will benefit from the lower cost of a T class machine. CPU utilization sitting between the baseline and the breakeven point results in a variable cost that depends on overall usage.
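A rough way to find your own breakeven is to compare the T class price plus expected surplus charges against the M class flat price. Every number in this sketch (prices, baseline, vCPU count, surplus rate) is a placeholder, not actual AWS pricing:

```python
# Back-of-the-envelope breakeven between a T class instance in Unlimited
# mode and an M class instance. All figures are hypothetical; substitute
# real values from the AWS pricing pages.

T_HOURLY = 0.0416    # assumed T class on-demand price per hour
M_HOURLY = 0.0960    # assumed M class on-demand price per hour
SURPLUS_RATE = 0.05  # assumed charge per vCPU-hour of surplus credits
BASELINE = 0.20      # assumed T class baseline utilization (20%)
VCPUS = 2

def monthly_cost_t(avg_utilization, hours=730):
    """T class cost: base price plus surplus charges for sustained
    utilization above the baseline (a steady-state approximation)."""
    overage_vcpu_hours = max(0.0, avg_utilization - BASELINE) * VCPUS
    return hours * (T_HOURLY + overage_vcpu_hours * SURPLUS_RATE)

def monthly_cost_m(hours=730):
    return hours * M_HOURLY

for util in (0.10, 0.20, 0.40, 0.60, 0.80):
    t, m = monthly_cost_t(util), monthly_cost_m()
    cheaper = "T" if t < m else "M"
    print(f"avg {util:.0%}: T=${t:.2f}  M=${m:.2f}  -> {cheaper} class")
```

With these placeholder numbers the T class wins at low average utilization and the M class wins once sustained utilization pushes surplus charges past the fixed price; the crossover point shifts with the real prices and baselines of the instances you compare.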
For Sungard Availability Services (Sungard AS) customers, the NextGen Managed AWS team is available to work through these scenarios with you to determine the best initial placement for each of your workloads, and what your potential cost savings could be.