PRINCIPLE-FIRST
The right solution, used in the wrong situation, can create more problems than it solves.
S3 Intelligent-Tiering is known for automating cost savings by moving data between storage tiers based on usage. But here’s the thing—while it works great in some scenarios, it can fall short or even cause issues in others.
From small object sizes that incur hidden costs to predictable access patterns that don’t align with its design, Intelligent-Tiering isn’t always the smartest choice for every workload.
In this article, we’ll walk you through 10 reasons why S3 Intelligent-Tiering doesn’t always work, helping you avoid common pitfalls and make storage decisions that are as efficient as they are effective.
Objects smaller than 128 KB are not tiered or monitored, and they are always charged at the Frequent Access tier rates. While there is no monitoring or automation charge for these objects, their static pricing can make Intelligent-Tiering less effective for workloads with many small objects, as they don’t benefit from tier transitions.
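If you're not sure how much of a bucket sits under that cutoff, a quick scan can tell you before you commit. Below is a minimal boto3 sketch; the bucket name is a placeholder and 128 KB is the documented cutoff.

```python
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

small, total = 0, 0
# Count objects that fall under the 128 KB Intelligent-Tiering cutoff.
for page in paginator.paginate(Bucket="my-bucket"):  # placeholder bucket name
    for obj in page.get("Contents", []):
        total += 1
        if obj["Size"] < 128 * 1024:
            small += 1

if total:
    print(f"{small}/{total} objects ({small / total:.0%}) are under 128 KB "
          "and will always be billed at Frequent Access rates.")
```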
Intelligent-Tiering charges a per-object monitoring fee, which can become expensive when managing millions or billions of objects. For data with low value or infrequent access, these monitoring fees can outweigh any potential savings from transitioning objects to cheaper tiers.
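A rough back-of-envelope calculation makes the trade-off concrete. The figures below are illustrative (roughly the published us-east-1 rates at the time of writing); check current AWS pricing before relying on them.

```python
# Illustrative numbers; verify against current AWS pricing for your region.
objects = 100_000_000        # 100 million objects
avg_size_gb = 0.0002         # ~200 KB average object size
monitoring_per_1k = 0.0025   # USD per 1,000 monitored objects per month
standard_gb = 0.023          # USD per GB-month, Frequent Access / S3 Standard
ia_tier_gb = 0.0125          # USD per GB-month, Infrequent Access tier

monitoring_cost = objects / 1_000 * monitoring_per_1k
# Best case: every single object drops to the Infrequent Access tier.
best_case_savings = objects * avg_size_gb * (standard_gb - ia_tier_gb)

print(f"Monitoring: ${monitoring_cost:,.0f}/month")
print(f"Best-case tiering savings: ${best_case_savings:,.0f}/month")
# With many smallish objects, monitoring alone can exceed the savings.
```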
Objects that are accessed frequently remain in the Frequent Access tier, preventing transitions to lower-cost tiers such as Infrequent Access or Archive Instant Access. This limits the cost-saving potential, especially if most files are read within 30 days of storage.
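One way to check whether data is actually leaving the Frequent Access tier is CloudWatch's daily S3 storage metrics. A sketch follows; the bucket name is a placeholder, and the StorageType dimension values shown are the ones AWS documents for the Frequent and Infrequent Access tiers.

```python
import boto3
from datetime import datetime, timedelta, timezone

cw = boto3.client("cloudwatch")

def tier_bytes(storage_type, bucket="my-bucket"):  # placeholder bucket name
    """Most recent daily BucketSizeBytes datapoint for one access tier."""
    resp = cw.get_metric_statistics(
        Namespace="AWS/S3",
        MetricName="BucketSizeBytes",
        Dimensions=[
            {"Name": "BucketName", "Value": bucket},
            {"Name": "StorageType", "Value": storage_type},
        ],
        StartTime=datetime.now(timezone.utc) - timedelta(days=2),
        EndTime=datetime.now(timezone.utc),
        Period=86400,
        Statistics=["Average"],
    )
    points = resp["Datapoints"]
    return max(points, key=lambda p: p["Timestamp"])["Average"] if points else 0.0

frequent = tier_bytes("IntelligentTieringFAStorage")
infrequent = tier_bytes("IntelligentTieringIAStorage")
print(f"Frequent Access: {frequent / 1e9:.1f} GB, "
      f"Infrequent Access: {infrequent / 1e9:.1f} GB")
```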
Intelligent-Tiering’s cost benefits rely on long-term storage, as objects need time to transition through the tiers. Data with short lifecycles (e.g., temporary logs or transient datasets) does not remain in storage long enough to justify the tiering and monitoring fees.
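For data like this, a plain expiration rule is usually the cheaper answer. A minimal sketch, assuming temporary logs live under a tmp/ prefix (the bucket name and prefix are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Expire short-lived objects instead of paying to monitor and tier them.
# Note: this call replaces the bucket's entire lifecycle configuration.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-temporary-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "tmp/"},  # hypothetical prefix
                "Expiration": {"Days": 7},
            }
        ]
    },
)
```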
If access patterns are well-defined and consistent, manually selecting appropriate storage classes like S3 Standard, Standard-IA, or Glacier can result in better cost optimization. Intelligent-Tiering automates tier transitions, which might be unnecessary and costly for predictable workloads.
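For example, if data is hot for its first month and then read only occasionally until it can be archived, fixed transitions cover that pattern without any monitoring charge. A sketch with placeholder names:

```python
import boto3

s3 = boto3.client("s3")

# Deterministic transitions for a known pattern: 30 days in Standard,
# Standard-IA until day 90, then Glacier Flexible Retrieval.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "predictable-cooldown",
                "Status": "Enabled",
                "Filter": {"Prefix": "reports/"},  # hypothetical prefix
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```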
When data has a consistent access pattern over time, whether it's read constantly or hardly ever, Intelligent-Tiering offers no significant advantage. Static, rarely accessed data can be stored directly in a lower-cost storage class, avoiding the monitoring fees entirely.
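New writes can simply set the storage class up front, and existing cold objects can be rewritten in place with a copy. A sketch; the bucket, key, and chosen classes are placeholders, and objects over 5 GB would need a multipart copy instead.

```python
import boto3

s3 = boto3.client("s3")

# New writes: pick the storage class up front, with no monitoring fee attached.
s3.put_object(
    Bucket="my-bucket",                 # placeholder
    Key="archive/2023-report.parquet",  # hypothetical key
    Body=b"...",
    StorageClass="STANDARD_IA",
)

# Existing objects: a self-copy rewrites the object into the new class.
s3.copy_object(
    Bucket="my-bucket",
    Key="archive/2023-report.parquet",
    CopySource={"Bucket": "my-bucket", "Key": "archive/2023-report.parquet"},
    StorageClass="GLACIER_IR",  # Glacier Instant Retrieval
    MetadataDirective="COPY",
)
```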
In versioned buckets, every object version is stored, monitored, and tiered independently, so noncurrent versions left in Intelligent-Tiering keep accruing monitoring charges even though they are rarely, if ever, read again. Old or noncurrent versions are usually better sent straight to Glacier or Deep Archive with a noncurrent-version lifecycle rule.
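If old versions exist only for rollback or compliance, a noncurrent-version rule can send them straight to a deep archive class. A minimal sketch with placeholder names:

```python
import boto3

s3 = boto3.client("s3")

# Send noncurrent versions straight to Deep Archive after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-versioned-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-versions",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # whole bucket
                "NoncurrentVersionTransitions": [
                    {"NoncurrentDays": 30, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```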
Intelligent-Tiering doesn't allow users to manually move objects between its access tiers or attach lifecycle rules to individual tiers; the only real knob is opting objects into the optional archive tiers. This lack of flexibility can be a drawback for use cases requiring precise control over storage and transitions.
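For comparison, this is essentially the entire configuration surface: choosing whether, and after how many days, objects opt into the optional archive tiers. A sketch with placeholder names:

```python
import boto3

s3 = boto3.client("s3")

# The only per-bucket knob: opt a set of objects into the optional archive tiers.
s3.put_bucket_intelligent_tiering_configuration(
    Bucket="my-bucket",  # placeholder
    Id="archive-opt-in",
    IntelligentTieringConfiguration={
        "Id": "archive-opt-in",
        "Status": "Enabled",
        "Filter": {"Prefix": "cold/"},  # hypothetical prefix
        "Tierings": [
            {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
            {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
        ],
    },
)
```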
For objects with a known expiration date (e.g., temporary files, backups, or cache data), Intelligent-Tiering’s automatic tiering doesn’t have enough time to deliver cost benefits. Such objects may be better suited for other storage classes with no monitoring fees.
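When the lifetime is known at write time, a tag plus a tag-filtered expiration rule handles cleanup with no tiering involved. A sketch; the tag name, key, and bucket are all hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Tag objects with a known lifetime when they are written...
s3.put_object(
    Bucket="my-bucket",           # placeholder
    Key="cache/session-42.json",  # hypothetical key
    Body=b"{}",
    Tagging="retention=30d",
)

# ...and expire everything carrying that tag after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-tagged-temp-data",
                "Status": "Enabled",
                "Filter": {"Tag": {"Key": "retention", "Value": "30d"}},
                "Expiration": {"Days": 30},
            }
        ]
    },
)
```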
Intelligent-Tiering can also complicate lifecycle management in scenarios requiring specific handling of versioning or delete markers. For example, in a versioned bucket you still have to write and maintain separate lifecycle rules to expire noncurrent versions and clean up expired object delete markers, and those rules now sit alongside transitions that Intelligent-Tiering performs on its own schedule.
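A minimal sketch of the kind of housekeeping rule that still has to be written and maintained separately (the bucket name and retention period are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Housekeeping Intelligent-Tiering does not do for you in a versioned bucket:
# delete stale noncurrent versions and clean up expired object delete markers.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-versioned-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "version-housekeeping",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # whole bucket
                "NoncurrentVersionExpiration": {"NoncurrentDays": 90},
                "Expiration": {"ExpiredObjectDeleteMarker": True},
            }
        ]
    },
)
```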
S3 Intelligent-Tiering can be a great tool, but as we’ve seen, it’s not always the right fit. Whether it’s small object sizes, frequent access patterns, or short-lived data, there are situations where it might cost more or add unnecessary complexity.
The takeaway? It’s not about using every tool—it’s about using the right tool for the job. Take the time to understand your data, its patterns, and your storage needs. That way, you can make smarter decisions and avoid paying for features you don’t actually need.
Sometimes, simple is better. And knowing when to skip Intelligent-Tiering might just save you more in the long run.
Strategic use of SCPs saves more cloud cost than one can imagine. Astuto does that for you!