AWS Storage Optimisation

10 Reasons Why S3 Intelligent-Tiering Doesn’t Always Work

PRINCIPLE-FIRST

The right solution, used in the wrong situation, can create more problems than it solves.

S3 Intelligent-Tiering is known for automating cost savings by moving data between storage tiers based on usage. But here’s the thing—while it works great in some scenarios, it can fall short or even cause issues in others.

From small object sizes that incur hidden costs to predictable access patterns that don’t align with its design, Intelligent-Tiering isn’t always the smartest choice for every workload.

Hidden Pitfalls Behind Using S3 Intelligent-Tiering

In this article, we’ll walk you through 10 reasons why S3 Intelligent-Tiering doesn’t always work, helping you avoid common pitfalls and make storage decisions that are as efficient as they are effective.

1. Small Object Sizes

Objects smaller than 128 KB are not tiered or monitored, and they are always charged at the Frequent Access tier rates. While there is no monitoring or automation charge for these objects, their static pricing can make Intelligent-Tiering less effective for workloads with many small objects, as they don’t benefit from tier transitions.
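To gauge your exposure, you can measure how much of a bucket falls under the 128 KB threshold before enabling Intelligent-Tiering. Here is a minimal sketch using boto3, with my-bucket as a placeholder name:

```python
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

SMALL_OBJECT_LIMIT = 128 * 1024  # 128 KB: below this, objects never tier

small, total = 0, 0
for page in paginator.paginate(Bucket="my-bucket"):  # placeholder bucket
    for obj in page.get("Contents", []):
        total += 1
        if obj["Size"] < SMALL_OBJECT_LIMIT:
            small += 1

print(f"{small}/{total} objects ({small / max(total, 1):.1%}) are below 128 KB")
```

If a large share of objects land below the threshold, most of the bucket will sit permanently at Frequent Access rates regardless of how rarely it is read.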

2. High Monitoring Costs

Intelligent-Tiering charges a per-object monitoring fee, which can become expensive when managing millions or billions of objects. For data with low value or infrequent access, these monitoring fees can outweigh any potential savings from transitioning objects to cheaper tiers.
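A back-of-envelope comparison makes the trade-off concrete. The rates below are illustrative only (check current AWS pricing for your region); the point is that for very large counts of smallish objects, the monitoring fee can exceed even the best-case tiering savings:

```python
# Illustrative figures only; substitute your object counts and current
# regional prices before drawing conclusions.
objects = 500_000_000        # number of monitored objects
avg_size_gb = 0.0002         # ~200 KB average object size

monitoring_fee = 0.0025      # USD per 1,000 objects per month (example rate)
standard_rate = 0.023        # USD per GB-month, S3 Standard (example rate)
ia_rate = 0.0125             # USD per GB-month, Infrequent Access tier (example rate)

monthly_monitoring = objects / 1_000 * monitoring_fee
best_case_savings = objects * avg_size_gb * (standard_rate - ia_rate)

print(f"Monitoring: ${monthly_monitoring:,.0f}/month")
print(f"Savings if every object reached IA: ${best_case_savings:,.0f}/month")
```

With these example numbers, monitoring costs about $1,250 per month while even perfect tiering saves only about $1,050: a net loss before a single object is ever archived.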

3. Frequent Data Access

Objects that are accessed frequently remain in the Frequent Access tier, preventing transitions to lower-cost tiers such as Infrequent Access or Archive Instant Access. Each access resets the 30-day inactivity clock, so objects read at least once a month never leave the Frequent Access tier yet still incur the monitoring fee.
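A quick way to check whether this applies to you is to look at how your Intelligent-Tiering data is actually distributed across tiers. The sketch below uses S3's daily CloudWatch storage metrics; the StorageType values follow AWS's documented dimension names (verify them for your account), and my-bucket is a placeholder:

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

# Intelligent-Tiering tier storage types, per AWS's S3 CloudWatch metrics docs
TIERS = {
    "IntelligentTieringFAStorage": "Frequent Access",
    "IntelligentTieringIAStorage": "Infrequent Access",
    "IntelligentTieringAIAStorage": "Archive Instant Access",
}

for storage_type, label in TIERS.items():
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/S3",
        MetricName="BucketSizeBytes",
        Dimensions=[
            {"Name": "BucketName", "Value": "my-bucket"},  # placeholder
            {"Name": "StorageType", "Value": storage_type},
        ],
        StartTime=now - timedelta(days=2),
        EndTime=now,
        Period=86400,            # daily storage metrics
        Statistics=["Average"],
    )
    points = resp["Datapoints"]
    latest = max(points, key=lambda p: p["Timestamp"])["Average"] if points else 0
    print(f"{label}: {latest / 1e9:.1f} GB")
```

If nearly everything sits in Frequent Access month after month, the monitoring fee is buying you very little.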

4. Short Data Lifecycles

Intelligent-Tiering’s cost benefits rely on long-term storage, as objects need time to transition through the tiers. Data with short lifecycles (e.g., temporary logs or transient datasets) does not remain in storage long enough to justify the tiering and monitoring fees.
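If the data is short-lived anyway, a plain expiration rule often beats tiering. A minimal sketch, assuming boto3 and a placeholder bucket and prefix:

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-temp-logs",
                "Filter": {"Prefix": "logs/"},  # placeholder prefix
                "Status": "Enabled",
                "Expiration": {"Days": 7},  # delete instead of tiering
            }
        ]
    },
)
```

Data that is gone in a week never lives long enough to reach a cheaper tier, so deleting it on schedule is the whole optimization.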

5. Predictable Access Patterns

If access patterns are well-defined and consistent, manually selecting appropriate storage classes like S3 Standard, Standard-IA, or Glacier can result in better cost optimization. Intelligent-Tiering automates tier transitions, which might be unnecessary and costly for predictable workloads.
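For predictable patterns, explicit lifecycle transitions do the same job without the per-object monitoring charge. A sketch of such a rule, where the bucket name and prefix are placeholders and the day counts should match your actual access pattern:

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "predictable-cooldown",
                "Filter": {"Prefix": "reports/"},  # placeholder prefix
                "Status": "Enabled",
                "Transitions": [
                    # Cool down on a fixed, known schedule
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```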

6. Static or Consistent Access Patterns

When data has consistent access patterns over time, such as always being accessed or rarely accessed, Intelligent-Tiering offers no significant advantage. Static data with infrequent access can be stored directly in lower-cost storage classes, bypassing the need for monitoring fees.
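Rarely accessed data can skip tiering entirely and be written straight into a cheaper class at upload time. A sketch, with placeholder bucket, key, and payload:

```python
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="my-bucket",                    # placeholder
    Key="archive/2023-snapshot.tar.gz",    # placeholder
    Body=b"example archive payload",       # placeholder payload
    # Glacier Instant Retrieval: low storage cost, no monitoring fee
    StorageClass="GLACIER_IR",
)
```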

7. Challenges with Versioned Buckets

In versioned buckets, lifecycle transitions into Intelligent-Tiering can sweep up every version of an object, and each version is monitored and billed separately. Old, noncurrent versions rarely benefit from automatic tiering and are often cheaper stored directly in Glacier or Deep Archive.
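A sketch of the alternative: route noncurrent versions straight to Deep Archive with a NoncurrentVersionTransitions rule (bucket name is a placeholder):

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-versioned-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-versions",
                "Filter": {},  # empty filter: applies to the whole bucket
                "Status": "Enabled",
                "NoncurrentVersionTransitions": [
                    # 30 days after becoming noncurrent, go to Deep Archive
                    {"NoncurrentDays": 30, "StorageClass": "DEEP_ARCHIVE"}
                ],
            }
        ]
    },
)
```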

8. Lack of Manual Control Over Tiers

Intelligent-Tiering doesn’t allow users to manually move objects between tiers or set tier-specific versioning or lifecycle policies. This lack of flexibility can be a drawback for use cases requiring precise control over storage and transitions.
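Outside Intelligent-Tiering you retain that control: an in-place copy can force an object into a specific storage class whenever you choose, something the tiering engine decides for you otherwise. A sketch, with placeholder bucket and key:

```python
import boto3

s3 = boto3.client("s3")
# Copying an object onto itself with a new StorageClass rewrites it
# into that class on demand.
s3.copy_object(
    Bucket="my-bucket",                   # placeholder
    Key="data/cold-report.parquet",       # placeholder
    CopySource={"Bucket": "my-bucket", "Key": "data/cold-report.parquet"},
    StorageClass="GLACIER",
    MetadataDirective="COPY",             # keep existing metadata
)
```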

9. Inefficiency for Expiring Data

For objects with a known expiration date (e.g., temporary files, backups, or cache data), Intelligent-Tiering’s automatic tiering doesn’t have enough time to deliver cost benefits. Such objects may be better suited for other storage classes with no monitoring fees.
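As with reason 4, an expiration rule is usually the better fit here; this variant expires by object tag instead of prefix, which suits expiring data scattered across many paths (tag key and value are placeholders):

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-tagged-temp-data",
                # Expire anything tagged as temporary, wherever it lives
                "Filter": {"Tag": {"Key": "lifetime", "Value": "temporary"}},
                "Status": "Enabled",
                "Expiration": {"Days": 14},
            }
        ]
    },
)
```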

10. Complex Lifecycle and Management Requirements

Intelligent-Tiering can complicate lifecycle management in scenarios requiring specific handling of versioning or delete markers.

For example:

  • Expiring the current version of an object in a versioned bucket does not remove any data; it makes that version noncurrent and places a delete marker as the new current version, adding complexity.
  • Emptying a versioned bucket requires expiring current versions and permanently deleting noncurrent versions and delete markers, which is simpler in non-versioned setups (a sketch follows this list).
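A minimal sketch of what emptying a versioned bucket actually involves, assuming boto3 and a placeholder bucket name; every object version and every delete marker must be deleted explicitly:

```python
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_object_versions")

for page in paginator.paginate(Bucket="my-versioned-bucket"):  # placeholder
    # Both real versions and delete markers must go.
    entries = page.get("Versions", []) + page.get("DeleteMarkers", [])
    if not entries:
        continue
    s3.delete_objects(
        Bucket="my-versioned-bucket",
        Delete={
            "Objects": [
                {"Key": e["Key"], "VersionId": e["VersionId"]} for e in entries
            ]
        },
    )
```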

S3 Intelligent-Tiering can be a great tool, but as we’ve seen, it’s not always the right fit. Whether it’s small object sizes, frequent access patterns, or short-lived data, there are situations where it might cost more or add unnecessary complexity.

The takeaway? It’s not about using every tool—it’s about using the right tool for the job. Take the time to understand your data, its patterns, and your storage needs. That way, you can make smarter decisions and avoid paying for features you don’t actually need.

Sometimes, simple is better. And knowing when to skip Intelligent-Tiering might just save you more in the long run.

