AWS tries to lure users to its cloud via storage ease of use

At its re:Invent 2024 conference this week, AWS rolled out five new storage tools and services with the goal of making its offerings easier to work with, if not cheaper or better. Given how incremental the differences among the major enterprise cloud environments are today, that may be enough.

Analysts generally praised the rollouts, saying that the company is trying to make AWS a little easier to use.

“For sure, what AWS is announcing simplifies the life of enterprise IT. Not only that, these should deliver some level of cost savings – both directly and indirectly,” said Matt Kimball, VP principal analyst for Moor Insights & Strategy. “Looking at this holistically, AWS is delivering updates across the data management/storage stack, from ingest to making data useful and usable to management.”

Brent Ellis, a senior analyst with Forrester, also found a lot to like in the rollouts. He said that the new services, not unexpectedly, try to get enterprises to align more closely with AWS, which should make it more difficult to move to a competing cloud platform later.

“Is it vendor lock-in or a trusted partnership? Depends on how you look at it,” Ellis said.

The key announcements included:

Amazon FSx Intelligent-Tiering

This is an AWS attempt to whittle down cloud costs at the enterprise level.

“The new storage class is priced 85% lower than the existing SSD storage class and 20% lower than traditional HDD-based deployments on premises, and brings full elasticity and intelligent tiering to NAS data sets,” explained AWS Evangelist Jeff Barr. “Your data moves between three storage tiers (Frequent Access, Infrequent Access, and Archive) with no effort on your part, so you get automatic cost savings with no upfront costs or commitments.”

The three tiers are based on data use: the Frequent Access tier holds data that has been accessed within the last 30 days; data that has not been accessed for 30 to 90 days is stored in the Infrequent Access tier, at a 44% cost reduction from Frequent Access; and the Archive tier holds data that has not been accessed for 90 or more days, at a 65% cost reduction from Infrequent Access.

Kimball applauded AWS for the intelligent tiering in FSx, and for its support for OpenZFS. “Cost savings meets simplicity for enterprise users,” he said. “The whole notion of migrating data and having to manage tiering is time consuming and resource intensive. Which means cost, cost, cost. In this data era, enterprise IT organizations really don’t know what data is critical for feeding models or deep analytics. So the ability to find a more affordable model that can intelligently provision and place data makes life much simpler.”

“Expanding intelligent tiering in FSx is a big plus, especially for businesses looking for ways to reduce EC2-based file servers where they have few optimization tools,” Ellis added. “Updates to storage optimized EC2 instances are a plus for companies with growing low-latency data needs.”
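For teams that provision file systems programmatically, the new class mostly shows up as an option at creation time. Here is a minimal sketch using the AWS SDK for JavaScript v3; the “INTELLIGENT_TIERING” storage type value, the omission of a fixed storage capacity (capacity is elastic in this class), and the subnet IDs are assumptions based on the announcement rather than confirmed API details.

```typescript
// Sketch: creating an FSx for OpenZFS file system on the Intelligent-
// Tiering storage class with the AWS SDK for JavaScript v3. The
// "INTELLIGENT_TIERING" StorageType value, the omitted StorageCapacity
// (elastic in this class), and all IDs are assumptions/placeholders.
import { FSxClient, CreateFileSystemCommand } from "@aws-sdk/client-fsx";

async function main() {
  const fsx = new FSxClient({ region: "us-east-1" });
  const { FileSystem } = await fsx.send(
    new CreateFileSystemCommand({
      FileSystemType: "OPENZFS",
      StorageType: "INTELLIGENT_TIERING", // assumed new storage class value
      SubnetIds: ["subnet-aaaa1111", "subnet-bbbb2222"], // placeholders
      OpenZFSConfiguration: {
        DeploymentType: "MULTI_AZ_1",
        PreferredSubnetId: "subnet-aaaa1111", // placeholder
        ThroughputCapacity: 1280, // MB/s; throughput is still provisioned
      },
    }),
  );
  console.log("Created:", FileSystem?.FileSystemId);
}

main().catch(console.error);
```

Note that only storage placement is automated; throughput is still sized explicitly, which is where the remaining tuning decisions live.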

Storage Browser for Amazon S3

The Storage Browser is an open source UI component that developers can add to web apps to facilitate data interactions on S3. AWS said the frontend component allows users to “browse, upload, download, copy, and delete data from Amazon S3 based on their specific permissions, which you control using AWS identity and security services or custom managed solutions.”

The idea, AWS said, is to make it easier for developers to grant access to data without requiring users to be familiar with S3’s structure or commands.

Ellis said he liked the storage browser, finding it “really interesting, and [it] has a lot of possibilities for web applications, mobile apps, and shared app storage for distributed teams. Basically, a flexible OneDrive/SharePoint type functionality.”
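The component ships as part of the Amplify UI React libraries. A minimal sketch of embedding it follows, assuming an already-deployed Amplify backend whose generated amplify_outputs.json defines the bucket and the per-user access rules; the exact setup (auth mode, styles import) may vary by project.

```tsx
// Minimal sketch: embedding Storage Browser in a React app. Assumes an
// Amplify backend has been deployed and that amplify_outputs.json
// (generated by Amplify) defines the S3 bucket and access rules.
import { Amplify } from "aws-amplify";
import { StorageBrowser } from "@aws-amplify/ui-react-storage";
import "@aws-amplify/ui-react/styles.css";
import outputs from "./amplify_outputs.json";

Amplify.configure(outputs);

export default function FileArea() {
  // Renders browse/upload/download/copy/delete controls scoped to
  // whatever S3 prefixes the signed-in user is authorized to touch.
  return <StorageBrowser />;
}
```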

AWS Data Transfer Terminals 

AWS is creating physical offices — the first are in New York and Los Angeles — with high-speed connections into AWS. The connections won’t be as fast as dedicated enterprise links, but they are designed for partners who periodically need to move bulk data. For example, if a movie producer in LA has crews in New York City generating large datasets of video or animation, the crews can bring those files to a terminal and move them into the cloud faster.

The physical locations strategy is an interesting one, Ellis said, adding that Amazon’s choice of New York and Los Angeles as the first two cities suggests a focus on two critical verticals: media/entertainment and financial institutions.

The sites will deliver speeds “several orders of magnitude faster than a T3,” Ellis said. 

AWS said that users will have to bring the data to the centers on equipment capable of handling high-speed data transfers. Specifically, those requirements call for “a transceiver type 100G LR4 QSFP, an active IP auto configuration (DHCP) and up-to-date software/transceiver drivers.”

“If you have a device storing petabytes of data, this is not an onerous request,” Ellis said. But, he added, “I am less impressed by the data transfer terminal [because] it seems really limiting and only valuable for select customers local to the transfer terminals.”
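Whatever the terminal’s raw line rate, throughput from a single machine still depends on the client doing parallel, multipart transfers. Below is a sketch of the kind of generic S3 upload tooling a customer would bring; it is not terminal-specific, and the bucket, key, and file path are placeholders.

```typescript
// Sketch: pushing a large file to S3 with concurrent multipart uploads,
// the pattern that actually saturates a high-speed link. Bucket, key,
// and file path are placeholders.
import { createReadStream } from "node:fs";
import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";

async function main() {
  const upload = new Upload({
    client: new S3Client({ region: "us-east-1" }),
    params: {
      Bucket: "example-ingest-bucket",       // placeholder
      Key: "dailies/2024-12-04/reel-01.mov", // placeholder
      Body: createReadStream("./reel-01.mov"),
    },
    queueSize: 16,              // parts uploaded in parallel
    partSize: 64 * 1024 * 1024, // 64 MiB parts
  });

  upload.on("httpUploadProgress", (p) => {
    console.log(`uploaded ${p.loaded} of ${p.total ?? "?"} bytes`);
  });

  await upload.done();
}

main().catch(console.error);
```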

Storage optimized Amazon EC2 I7ie instances

This new storage optimized offering delivers “up to 120 TB of low latency NVMe storage and 5th generation Intel Xeon Scalable Processors with an all-core turbo frequency of 3.2 GHz,” and offers the highest storage density available in the cloud today, according to AWS. It is designed to support I/O intensive workloads that need a high degree of random IOPS: NoSQL databases, distributed file systems, search engines, data warehouses, and analytics.

Kimball pointed to the I7ie announcement as a good example of what AWS is trying to deliver.

“The performance improvements AWS touts are pretty impressive, with 50% lower I/O latency and 65% less variability translating into faster, predictable performance that will certainly deliver application improvements,” Kimball said. “This leads to a markedly better TCO/ROI [total cost of ownership/return on investment] equation. More directly is that 20% better price-performance number that the company quotes. This is a more direct and measurable number that should be compelling for any enterprise.”

AWS also said that the instances, now generally available, feature a larger L3 cache and increased memory bandwidth, noting that “the VP2INTERSECT instruction (part of AVX-512) accelerates machine learning and graph processing workloads” and that Advanced Matrix Extensions (AMX) “increase deep learning training and inferencing performance.”
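Trying the instances requires nothing new beyond the instance type string in an ordinary EC2 launch call. A minimal sketch with the AWS SDK for JavaScript v3; the i7ie.2xlarge size name and the AMI ID are placeholders chosen for illustration.

```typescript
// Sketch: launching a storage optimized I7ie instance. The size name
// "i7ie.2xlarge" and the AMI ID are assumptions/placeholders; the local
// NVMe instance store volumes come with the instance type itself.
import { EC2Client, RunInstancesCommand } from "@aws-sdk/client-ec2";

async function main() {
  const ec2 = new EC2Client({ region: "us-east-1" });
  const result = await ec2.send(
    new RunInstancesCommand({
      ImageId: "ami-0123456789abcdef0", // placeholder AMI
      InstanceType: "i7ie.2xlarge",     // assumed size name
      MinCount: 1,
      MaxCount: 1,
    }),
  );
  console.log("Launched:", result.Instances?.[0]?.InstanceId);
}

main().catch(console.error);
```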

Amazon EC2 I8g instances powered by AWS Graviton4 processors and 3rd gen AWS Nitro SSDs

AWS paints the EC2 I8g instances as “the first instance type to use third-generation AWS Nitro SSDs. These instances offer up to 22.5 TB local NVMe SSD storage with up to 65 percent better real-time storage performance per TB and 60 percent lower latency variability compared to the previous generation I4g instances.”

The instances also feature more than triple the EBS bandwidth offered by the previous generation of storage optimized instances, AWS said. “This accelerates just about every I/O-intensive use case, and is especially helpful when hydrating an in-memory database or caching server.”
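Before committing an I/O-heavy workload, it is worth confirming what a given size actually provides (local disk layout, NVMe support, EBS limits) directly from the EC2 API. A hedged sketch follows; the i8g.4xlarge size name is assumed for illustration.

```typescript
// Sketch: inspecting an I8g size's local NVMe storage and EBS limits
// via the EC2 DescribeInstanceTypes API. "i8g.4xlarge" is an assumed
// size name used for illustration.
import { EC2Client, DescribeInstanceTypesCommand } from "@aws-sdk/client-ec2";

async function main() {
  const ec2 = new EC2Client({ region: "us-east-1" });
  const { InstanceTypes } = await ec2.send(
    new DescribeInstanceTypesCommand({ InstanceTypes: ["i8g.4xlarge"] }),
  );
  for (const t of InstanceTypes ?? []) {
    console.log(t.InstanceType, {
      localStorageGB: t.InstanceStorageInfo?.TotalSizeInGB,
      disks: t.InstanceStorageInfo?.Disks,
      nvme: t.InstanceStorageInfo?.NvmeSupport,
      ebsBaselineMbps: t.EbsInfo?.EbsOptimizedInfo?.BaselineThroughputInMBps,
    });
  }
}

main().catch(console.error);
```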

Focus on customer experience

Kimball saw the absence of any reference to Google and Microsoft in the announcements as part of the strategy.

“I don’t see this as a competitive play, necessarily. I see these announcements as being more focused on the customer experience at AWS, and removing the barriers between on-prem and off-prem for the enterprise in what are arguably the most critical areas: data management and cost,” he said. “Managing and using data across applications and AI has to be seamless for the modern enterprise. It has to be easy and cost effective, and it has to deliver value faster. This is what I see AWS doing across the board.”

The new offerings “highlight the ongoing AWS goal of reducing the friction for customers to migrate their on-prem data to the cloud,” agreed Jim Hare, a distinguished VP analyst at Gartner. “And making it simpler and easier for business users to access and manage data without the need for IT.”

Still, he said, “migrating applications and data to the cloud does not always result in lower TCO or a positive ROI. Often, moving a workload will not solve a business problem without the additional investment to modernize it. Enterprises should choose to rearchitect their applications and migrate the data when the cloud benefits will justify the investment in budget, time, and resources.”

Source: Network World
