
Reducing SQS Costs: Strategies and Pricing Insights

Did you know?

AWS SQS can handle massive message throughput: standard queues offer nearly unlimited throughput, while FIFO queues can process up to 3,000 messages per second with batching, giving you the flexibility to run a separate queue for every messaging workload you manage.

Amazon Simple Queue Service (SQS) is a managed message queuing service that technical professionals and developers use to send, store, and retrieve messages of various sizes asynchronously. Within the IT Management category, it holds a 5.27% market share with 9,733 customers across 10 countries.

In this blog post, we will explore SQS pricing and effective strategies to minimize AWS SQS costs without compromising performance, offering insights to optimize business expenditures.

AWS SQS Pricing 

The tables below provide a comprehensive overview of Amazon SQS pricing, covering request costs, metered charges, and data transfer rates, so you can see exactly where SQS spend comes from and optimize it effectively.

1. Request Pricing (per Million Requests)

Pricing varies by AWS Region; the figures below are representative.

Monthly Request Volume                           Standard Queues    FIFO Queues
First 1 million requests/month                   Free               Free
From 1 million to 100 billion requests/month     $0.40              $0.50
From 100 billion to 200 billion requests/month   $0.30              $0.40
Over 200 billion requests/month                  $0.24              $0.35

2. Metered Charges

  • API Actions: Every Amazon SQS action counts as a request.
  • FIFO Requests: API actions for sending, receiving, deleting, and changing visibility of messages from FIFO queues are charged at FIFO rates; all other API requests are charged at standard rates.
  • Contents of Requests: A single request can include from 1 to 10 messages, with a maximum total payload of 256 KB.
  • Size of Payloads: Each 64 KB chunk of a payload is billed as 1 request (e.g., an API action with a 256 KB payload is billed as 4 requests).
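
As a quick illustration of the 64 KB rule above, the helper below is a minimal sketch (not an official AWS utility) that estimates how many billable requests a single API action generates from its total payload size:

import math

def billed_requests(payload_bytes: int) -> int:
    """Each started 64 KB chunk of the payload is billed as one request."""
    chunk = 64 * 1024
    return max(1, math.ceil(payload_bytes / chunk))

print(billed_requests(256 * 1024))  # 256 KB payload -> billed as 4 requests
print(billed_requests(10 * 1024))   # 10 KB payload  -> billed as 1 request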

3. Data Transfer Pricing

Type                  Category                          Pricing
Data Transfer IN      All data transfer in              $0.00 per GB
Data Transfer OUT     First 10 TB / month               $0.09 per GB
                      Next 40 TB / month                $0.085 per GB
                      Next 100 TB / month               $0.07 per GB
                      Greater than 150 TB / month       $0.05 per GB

Factors Affecting SQS Costs 

When considering the cost structure of AWS SQS (Simple Queue Service), several key factors play a significant role in determining the overall expenses.

  • Number of Requests: The primary cost driver in AWS SQS. Optimizing request frequency can help manage and reduce overall expenses.
  • Data Transfer Costs: Expenses related to the volume of data moved in and out of SQS.
  • Message Size: Payloads are billed in 64 KB chunks, so larger messages translate into more billable requests.
  • Long Polling: Reduces the number of empty ReceiveMessage responses, which lowers the total request count.
  • Retention Period: Influences costs by determining how long messages are kept in the queue.

The pie chart below visualizes the distribution of these cost factors, providing a clear perspective on their relative contributions.

Pie Chart Demonstrating the Factors of AWS SQS Costs and Pricing

Strategies to Reduce SQS Costs

1. Implementing Long Polling

Diagram Illustrating Long Polling Implementation to Optimize SQS Costs

Long polling is a technique used to reduce the number of empty responses (or "polls") when checking for new messages in a queue. Instead of continuously querying the queue for messages, which can result in many "empty" responses (indicating no new messages), long polling allows the SQS service to wait and hold the connection open for a specified period. If a message arrives during this time, SQS immediately sends it back in the response. If no message arrives before the timeout, SQS returns an empty response after the specified wait time.

You can configure the 'ReceiveMessageWaitTimeSeconds' attribute when creating or updating an SQS queue. Here's how you can do it using the AWS SDK for Python (Boto3):

import boto3

# Create SQS client
sqs = boto3.client('sqs', region_name='your-region')

# Create a queue with long polling enabled (wait time of up to 20 seconds)
response = sqs.create_queue(
    QueueName='your-queue-name',
    Attributes={
        'ReceiveMessageWaitTimeSeconds': '20'  # long polling wait time in seconds
    }
)
queue_url = response['QueueUrl']
print(f"Queue created with long polling enabled: {queue_url}")
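
If the queue already exists, long polling can also be enabled by updating its attributes, or requested per call with 'WaitTimeSeconds'. Here is a minimal sketch along those lines (the region and queue URL are placeholders):

import boto3

sqs = boto3.client('sqs', region_name='your-region')
queue_url = 'your-queue-url'

# Enable long polling on an existing queue
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={'ReceiveMessageWaitTimeSeconds': '20'}
)

# Or request long polling for a single receive call, overriding the queue default
response = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20  # hold the connection open for up to 20 seconds
)
messages = response.get('Messages', [])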

2. Batch Operations

Diagram Illustrating Batching Operations to reduce SQS Costs

Batch operations in Amazon SQS, such as 'SendMessageBatch', 'DeleteMessageBatch', and 'ChangeMessageVisibilityBatch', offer a cost-effective strategy by reducing the number of API requests made to the service. Because SQS pricing is primarily based on the volume of requests, consolidating multiple operations into a single batch request directly lowers costs. For instance, instead of sending or deleting messages one by one, you can bundle up to 10 messages into a single batch operation. This minimizes the network overhead of multiple requests and streamlines message management, making workflows both simpler and cheaper to run.
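
As an illustration, the sketch below (region, queue URL, and message bodies are placeholders) sends 10 messages in one 'SendMessageBatch' call instead of 10 separate 'SendMessage' calls:

import boto3

sqs = boto3.client('sqs', region_name='your-region')
queue_url = 'your-queue-url'

# Bundle up to 10 messages into a single request; each entry needs a batch-unique Id
entries = [
    {'Id': str(i), 'MessageBody': f'order event {i}'}
    for i in range(10)
]
response = sqs.send_message_batch(QueueUrl=queue_url, Entries=entries)

# Batch APIs report per-message failures, so check them explicitly
if response.get('Failed'):
    print('Some messages failed to send:', response['Failed'])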

Scenario - You need to process 100 messages in Amazon SQS with operations such as sending, deleting, and changing message visibility. Performing these operations individually versus using batch operations has a significant impact on the number of billable requests.

Let's assume the standard-queue rate of $0.40 per million API requests and compare 300 individual requests against 30 batch requests (10 messages per batch):

Metric/Operation          Individual Requests   Batch Requests   Cost per Million Requests   Total Cost (Individual)   Total Cost (Batch)   Net Savings
SendMessage               100                   10               $0.40                       $0.000040                 $0.000004            $0.000036 (90% saving)
DeleteMessage             100                   10               $0.40                       $0.000040                 $0.000004            $0.000036 (90% saving)
ChangeMessageVisibility   100                   10               $0.40                       $0.000040                 $0.000004            $0.000036 (90% saving)
Total API Requests        300                   30               $0.40                       $0.000120                 $0.000012            $0.000108 (90% saving)

By using batch operations, you reduce the number of API requests from 300 to 30, a 90% reduction in request costs for processing these 100 messages. The absolute amounts are tiny at this scale, but the same 90% reduction holds at any volume: at 300 million requests per month, batching brings the bill from $120 down to $12.

3. Scale Down Consumers 

Scaling down consumers in Amazon SQS is crucial for cost efficiency as it reduces unnecessary ReceiveMessage requests. By aligning the number of consumer instances with actual message volume, especially during periods of reduced activity, you minimize SQS costs associated with request charges. Implementing automation to scale consumers based on queue metrics ensures optimal resource utilization and helps avoid paying for unused capacity, thereby maximizing cost savings in SQS operations.

Here's an example of how you might use AWS CLI commands to scale down the number of consumer instances in an Auto Scaling Group (ASG):

# Define your Auto Scaling Group name
ASG_NAME="your-auto-scaling-group-name"

# Set the desired capacity to scale down to
DESIRED_CAPACITY=1  # Set to the number of instances you want after scaling down

# Update Auto Scaling Group to set desired capacity
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name $ASG_NAME \
    --desired-capacity $DESIRED_CAPACITY
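
To automate this rather than scaling down by hand, one option is a scheduled scaling action that tracks your known peak and off-peak windows; scaling policies driven by queue metrics such as ApproximateNumberOfMessagesVisible are another. The Boto3 sketch below assumes a hypothetical group name, instance counts, and UTC schedule:

import boto3

autoscaling = boto3.client('autoscaling', region_name='your-region')

# Scale down to 5 instances every evening at 22:00 UTC (off-peak),
# assuming the group's MinSize/MaxSize allow these capacities
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName='your-auto-scaling-group-name',
    ScheduledActionName='scale-down-off-peak',
    Recurrence='0 22 * * *',
    DesiredCapacity=5,
)

# Scale back up to 15 instances every morning at 06:00 UTC (peak)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName='your-auto-scaling-group-name',
    ScheduledActionName='scale-up-peak',
    Recurrence='0 6 * * *',
    DesiredCapacity=15,
)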

1. Scenario: Order Processing for eCommerce

"eShopMart Inc.," an online retail company, manages customer orders using a microservices architecture. These services are hosted on Amazon EC2 instances and rely on Amazon SQS for processing orders reliably and efficiently.

eShopMart Inc. processes an average of 150,000 order messages during peak hours and 30,000 messages during off-peak hours daily. Initially, they ran 15 EC2 instances continuously to consume messages from the SQS queue, ensuring timely order processing.

To optimize their costs, eShopMart decided to scale down the number of consumer instances during off-peak hours.

2. Current Costs

  • Consumers: 15 instances running around the clock.
  • Daily Requests: 1,296,000 (assuming each instance issues roughly one ReceiveMessage request per second: 15 × 86,400 = 1,296,000).
  • Cost per Million Requests: $0.40.
  • Monthly Cost Calculation: 1,296,000 requests/day ÷ 1,000,000 × $0.40 × 30 days = $15.55.

3. Optimized Costs

  • Peak Hours (8 hours): 15 instances (unchanged).
  • Off-Peak Hours (16 hours): 5 instances.
  • Daily Requests: 720,000 (15 × 3,600 × 8 + 5 × 3,600 × 16 = 432,000 + 288,000).
  • Cost per Million Requests: $0.40.
  • Monthly Cost Calculation: 720,000 requests/day ÷ 1,000,000 × $0.40 × 30 days = $8.64.

4. Savings

  • Monthly Savings: $15.55 (current cost) - $8.64 (optimized cost) = $6.91.
  • Annual Savings: $6.91 × 12 months = $82.92.

By scaling down the number of consumer instances during off-peak hours, eShopMart Inc. can save approximately $82.92 annually, ensuring they maintain cost efficiency while still providing reliable order processing during peak and off-peak hours.
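
The arithmetic above can be reproduced with a short script, under the same assumption of roughly one ReceiveMessage request per instance per second:

# Monthly SQS request cost, assuming ~1 ReceiveMessage request per instance per second
PRICE_PER_MILLION = 0.40

def monthly_cost(daily_requests: int, days: int = 30) -> float:
    return daily_requests / 1_000_000 * PRICE_PER_MILLION * days

current = monthly_cost(15 * 3600 * 24)                    # 15 instances, 24 hours -> $15.55
optimized = monthly_cost(15 * 3600 * 8 + 5 * 3600 * 16)   # 15 for 8 h, 5 for 16 h -> $8.64
print(round(current, 2), round(optimized, 2), round(current - optimized, 2))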

4. Remove Queues That Are No Longer Needed

To reduce AWS SQS costs, start by identifying unused queues. Review CloudWatch metrics such as NumberOfMessagesSent, NumberOfMessagesReceived, and ApproximateNumberOfMessagesDelayed; low or zero activity over a sustained period suggests a queue may no longer be needed. You can also check each queue's LastModifiedTimestamp attribute to see when its configuration last changed, and analyze cost reports in the AWS Billing and Cost Management dashboard to spot queues with minimal traffic but ongoing request costs. Once you have identified queues that appear unused, verify that they are not required for any active processes or integrations before deleting them.

To remove a specific SQS queue, use the AWS CLI command:

aws sqs delete-queue --queue-url <queue-url>
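
To make this review easier, a short script can surface candidates before anything is deleted. The sketch below is a heuristic rather than an official AWS tool, and the region is a placeholder:

import boto3

sqs = boto3.client('sqs', region_name='your-region')

# Print a basic activity snapshot for every queue in the account/region
for queue_url in sqs.list_queues().get('QueueUrls', []):
    attrs = sqs.get_queue_attributes(
        QueueUrl=queue_url,
        AttributeNames=['ApproximateNumberOfMessages', 'LastModifiedTimestamp']
    )['Attributes']
    print(queue_url, attrs['ApproximateNumberOfMessages'], attrs['LastModifiedTimestamp'])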

5. Choosing Between SQS Standard and FIFO Queues

Illustration depicting SQS Standard vs SQS FIFO to reduce costs

SQS offers two queue types, standard and FIFO, each tailored to different application needs. SQS Standard Queues prioritize high throughput and scalability, making them ideal for applications where message order is less critical; they handle large message volumes efficiently, reducing the need for additional queues and the costs that come with them. SQS FIFO Queues, on the other hand, guarantee strict message ordering and exactly-once processing, which is crucial for applications such as financial transactions that require precise event sequencing. However, FIFO queues have lower throughput, carry a higher per-request price, and may require more resources to handle high message volumes. By aligning your queue choice with your application's requirements for message order and throughput, you can optimize both performance and cost.

Here's a table highlighting the key features of both SQS Standard and FIFO queues to help choose them efficiently:

Feature                     SQS Standard Queues                                           SQS FIFO Queues
Message Ordering            Best-effort; occasional out-of-order delivery possible        Strict message ordering guaranteed
Exactly-Once Processing     Not guaranteed (at-least-once delivery)                       Guaranteed
Throughput                  Nearly unlimited                                              Up to 3,000 TPS with batching (300 TPS without batching)
Scalability                 Highly scalable                                               Less scalable due to lower throughput
Suitability                 Applications where exact message order is not critical        Applications requiring strict message ordering
Use Case Examples           Web applications, systems handling large volumes of data      Financial transactions, event-driven workflows
Cost Considerations         Lower per-request price ($0.40 per million after free tier)   Higher per-request price ($0.50 per million) and lower throughput
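
The choice also shows up at queue-creation time: a FIFO queue's name must end in '.fifo' and its 'FifoQueue' attribute must be set when it is created. A minimal Boto3 sketch with placeholder names:

import boto3

sqs = boto3.client('sqs', region_name='your-region')

# Standard queue: high throughput, best-effort ordering
sqs.create_queue(QueueName='orders-standard')

# FIFO queue: strict ordering and exactly-once processing
sqs.create_queue(
    QueueName='orders.fifo',
    Attributes={
        'FifoQueue': 'true',
        'ContentBasedDeduplication': 'true'  # deduplicate using a hash of the message body
    }
)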

6. Setting Up Dead-Letter Queue Retention

Configuring a longer retention period for your Amazon SQS dead-letter queue (DLQ) compared to your main queue helps reduce costs by ensuring messages have enough time for reprocessing or review before being deleted. This prevents premature message loss and reduces the need for costly retransmissions. For example, if your main queue retains messages for 3 days, setting your DLQ retention to 7 days ensures ample time for troubleshooting without losing valuable data too soon. 

Here's an example of how to set the retention period for a dead-letter queue using AWS CLI commands:

# Create a dead-letter queue with a retention period of 7 days
aws sqs create-queue --queue-name MyDeadLetterQueue --attributes VisibilityTimeout=60,MessageRetentionPeriod=604800

 

In this command:

`MessageRetentionPeriod=604800` specifies a retention period of 604800 seconds (equivalent to 7 days).

 Adjust the `MessageRetentionPeriod` value as needed based on your application's requirements and the expected processing time for messages in your queues.
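
A dead-letter queue only takes effect once it is attached to a source queue through a redrive policy. The Boto3 sketch below uses placeholder queue URLs and a hypothetical maxReceiveCount of 5 (messages move to the DLQ after five failed receives):

import boto3
import json

sqs = boto3.client('sqs', region_name='your-region')
main_queue_url = 'your-main-queue-url'
dlq_url = 'your-dead-letter-queue-url'

# Look up the dead-letter queue's ARN
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=['QueueArn']
)['Attributes']['QueueArn']

# Attach the DLQ to the main queue via a redrive policy
sqs.set_queue_attributes(
    QueueUrl=main_queue_url,
    Attributes={
        'RedrivePolicy': json.dumps({
            'deadLetterTargetArn': dlq_arn,
            'maxReceiveCount': '5'
        })
    }
)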

7. Using the Amazon SQS Message Deduplication ID

Using the Amazon SQS message deduplication ID helps save costs by preventing duplicate messages from being enqueued and processed unnecessarily in FIFO queues. For example, in an application handling order processing, setting a deduplication ID based on a unique order ID ensures that duplicate order messages are discarded within the deduplication interval, avoiding the redundant receives, deletes, and downstream processing they would otherwise trigger.

Here's an example of how to send a message with a deduplication ID using AWS SDK for Python (Boto3):

import boto3
import json

# Initialize SQS client
sqs = boto3.client('sqs', region_name='us-east-1')

# Example message with a deduplication ID based on a unique order ID
message = {
    'OrderID': '123456',
    'CustomerName': 'John Doe',
    'OrderDetails': '...',
}

# Send message with a deduplication ID (FIFO queues only)
response = sqs.send_message(
    QueueUrl='https://sqs.us-east-1.amazonaws.com/123456789012/MyQueue.fifo',
    MessageBody=json.dumps(message),
    MessageDeduplicationId='unique_order_id_123456',  # duplicates with the same ID are discarded
    MessageGroupId='order_group'  # messages in the same group are delivered in order
)

However, there are scenarios where this approach might not be applicable or beneficial:

  • Standard Queues: SQS deduplication is only available for FIFO queues, not standard queues.
  • Non-idempotent Operations: Deduplication may lead to inconsistent results or missed operations.
  • Complex Logic: Built-in deduplication might be insufficient for complex cases.
  • Latency: Deduplication adds processing overhead, which could affect real-time applications.
  • Intentional Duplicates: Some scenarios require processing duplicate messages.

In these cases, SQS deduplication might not be suitable or applicable for cost optimization.

Conclusion 

In conclusion, optimizing AWS SQS for cost efficiency involves implementing effective strategies such as batch operations, configuring long polling, choosing between SQS standard and FIFO queues based on application needs, and utilizing features like message deduplication IDs and dead-letter queue retention. These practices not only help minimize operational expenses by reducing the number of API requests and optimizing resource allocation but also ensure high-performance message queuing tailored to specific workload requirements. By adopting these best practices, businesses can achieve significant cost savings while maintaining robust and scalable messaging solutions on AWS.

References

1. 6sense: AWS SQS Stats

2. Amazon SQS Pricing | Message Queuing Service | AWS

3. Amazon SQS Best Practices - Amazon Simple Queue Service

4. Understand Amazon SQS billing and how to reduce costs | AWS re:Post

5. AWS Well-Architected Best Practices - SQS

FAQs

1. What is Amazon SQS optimized for?

Amazon SQS is optimized for reliable, scalable, and cost-effective message queuing. It ensures reliable delivery of messages between distributed application components.

2. What is the size of a billed unit in SQS with respect to the message payload?

Billing is based on 64 KB chunks of the message payload: each 64 KB chunk (or part of one) counts as one request. For example, a payload of up to 64 KB is billed as one request, while a 256 KB payload is billed as four.

3. What happens when SQS is full?

An SQS queue has no fixed limit on the number of stored messages, but there are quotas on in-flight messages (approximately 120,000 for standard queues and 20,000 for FIFO queues). Once a quota is reached, consumers cannot receive additional messages until in-flight messages are deleted or their visibility timeouts expire, so it's important to monitor queue metrics and keep consumers processing messages promptly.

4. What is the main advantage of using Amazon SQS?

The main advantage of using Amazon SQS is its reliability and scalability. It decouples the components of an application so they can run independently, handling any volume of messages without losing them or requiring other services to be available.

5. What is the maximum timeout for SQS?

The maximum wait time for long polling, which holds a ReceiveMessage connection open while waiting for new messages, is 20 seconds; this reduces the number of empty responses and optimizes retrieval efficiency. By comparison, the maximum visibility timeout for a message is 12 hours.
