This week, Amazon Web Services (AWS) kicked off its tenth re:Invent conference, an event where it typically announces the biggest changes to the cloud computing industry's dominant platform. This year's news includes faster chips, more ambitious artificial intelligence, more developer-friendly tools, and even a bit of quantum computing for those who want to explore its ever-growing potential.
Amazon is working to lower costs by boosting the performance of its hardware. Its new generation of machines powered by the third generation of AMD's EPYC processors, the M6a, is touted as offering a 35% boost in price/performance over the previous generation of M5a machines built with second-generation EPYC chips. They'll be available in sizes ranging from two virtual CPUs with 8GB of RAM (m6a.large) up to 192 virtual CPUs and 768GB of RAM (m6a.48xlarge).
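For teams that provision capacity programmatically, adopting the new family should mostly amount to changing an instance type string. A minimal boto3 sketch, with placeholder AMI, key, and region values:

```python
# A minimal sketch of launching one of the new M6a instances with boto3.
# The AMI ID and region below are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="m6a.large",         # 2 vCPUs, 8GB RAM
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```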
AWS also notes that the chips will boast always-on memory encryption and rely on custom circuitry for faster encryption and decryption. The feature is a nod to users who worry about sharing hardware in the cloud and, perhaps, exposing their data.
The company is also rolling out the second generation of its ARM-based Graviton processors and marrying them with a fast GPU, the NVIDIA T4G Tensor Core. These new machines, known as the G5g, also promise to offer lower prices for better performance. AWS estimates that some game streaming loads, for instance, will be 30% cheaper on these new chips, a better price point that may encourage more game developers to move their computation to the cloud. The GPU on the chips could also be attractive to machine learning scientists training models, who will also value the better performance.
This price sensitivity is driving the development of tools that optimize hardware configuration and performance. A number of companies are marketing services that manage cloud instances and watch for over-provisioned machines. Amazon expanded its own Compute Optimizer tool to include more extensive metrics that can flag resources that aren't being used efficiently. They're also extending the historical record to three months so that peaks that may appear at the end of months or quarters will be detectable.
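In boto3 terms, surfacing those flags is a single call. A sketch, assuming Compute Optimizer has already been opted into for the account:

```python
# A sketch of pulling rightsizing recommendations from Compute Optimizer;
# assumes the account has already opted in to the service.
import boto3

optimizer = boto3.client("compute-optimizer", region_name="us-east-1")

resp = optimizer.get_ec2_instance_recommendations()
for rec in resp["instanceRecommendations"]:
    # finding is e.g. OVER_PROVISIONED, UNDER_PROVISIONED, or OPTIMIZED
    print(rec["instanceArn"], rec["finding"])
```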
In addition to addressing price-performance ratios, Amazon is looking to please developers by simplifying the process of building and running more complex websites. A number of the announcements focus on enhancing tools that automate many of the small tasks that take up developer resources.
For instance, Amazon says the new version of EventBridge, the service used to knit together websites by passing messages tied to events, is wired directly to the S3 data storage layer, so changes to data or its associated metadata will automatically trigger events. The new version also offers enhanced filtering, which is designed to make it simpler to spin up smarter code.
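The wiring takes two steps: turn on EventBridge delivery for a bucket, then filter the resulting events. A boto3 sketch, where the bucket name, rule name, and key prefix are placeholders:

```python
# A sketch of the new S3-to-EventBridge integration with boto3.
import json
import boto3

s3 = boto3.client("s3")
events = boto3.client("events")

# Step 1: ask S3 to send object-level events to EventBridge.
s3.put_bucket_notification_configuration(
    Bucket="my-example-bucket",
    NotificationConfiguration={"EventBridgeConfiguration": {}},
)

# Step 2: match only object-created events under a given key prefix.
pattern = {
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
    "detail": {
        "bucket": {"name": ["my-example-bucket"]},
        "object": {"key": [{"prefix": "uploads/"}]},
    },
}
events.put_rule(Name="on-upload", EventPattern=json.dumps(pattern))
```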
Developers who base their workloads on containers will find deployments a bit faster because AWS is adding a pull-through cache for public container images in the Elastic Container Registry. This will simplify and speed up the work of deploying code built on top of these public images. Amazon also anticipates that it could improve security by providing a more trustworthy path for the code.
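Setting up the cache is a one-time rule creation. A boto3 sketch, where the repository prefix is just an example:

```python
# A sketch of creating a pull-through cache rule pointing at the
# public ECR registry; the local prefix name is an example.
import boto3

ecr = boto3.client("ecr", region_name="us-east-1")

ecr.create_pull_through_cache_rule(
    ecrRepositoryPrefix="ecr-public",      # local prefix for cached images
    upstreamRegistryUrl="public.ecr.aws",  # upstream public registry
)
```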
There is also a greater emphasis on helping developers find the best way to use AWS. Code reviews, for instance, can now rely upon AIs trained to spot the security leaks that occur when developers inadvertently include passwords or other secrets in publicly accessible locations. This new part of AWS's CodeGuru tool will catch some of the most embarrassing security lapses that have bedeviled companies using AWS in the past. The tool works with AWS's own repository, CodeCommit, as well as other popular version-tracking services like Bitbucket and GitHub.
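Hooking a repository up to CodeGuru Reviewer is a single association call, after which its analyses, including the new secrets detection, run as part of code reviews. A minimal boto3 sketch with a placeholder repository name:

```python
# A sketch of associating a CodeCommit repository with CodeGuru Reviewer;
# the repository name is a placeholder.
import boto3

reviewer = boto3.client("codeguru-reviewer", region_name="us-east-1")

resp = reviewer.associate_repository(
    Repository={"CodeCommit": {"Name": "my-service-repo"}}
)
print(resp["RepositoryAssociation"]["State"])
```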
AWS is also opening up its reference model for a modern AWS app, the so-called Well-Architected Framework. Development teams will now be able to add their own custom requirements as lenses, making it simpler to extend the AWS model to conform to their internal best practices.
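Custom lenses are defined as JSON documents and imported through the Well-Architected API. A rough boto3 sketch; the lens body here is abbreviated and the schema version string is an assumption, so consult the custom-lens format specification before relying on it:

```python
# A rough sketch of importing a custom lens with boto3. The lens JSON
# is abbreviated and its schemaVersion value is an assumption.
import json
import uuid
import boto3

wa = boto3.client("wellarchitected", region_name="us-east-1")

lens = {
    "schemaVersion": "2021-11-01",  # assumed version string
    "name": "internal-best-practices",
    "description": "Our team's internal review checklist.",
    "pillars": [],  # pillar and question definitions go here
}

wa.import_lens(
    JSONString=json.dumps(lens),
    ClientRequestToken=str(uuid.uuid4()),
)
```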
Finally, AWS is offering a chance to hit the fast-forward button and experiment with the next generation of technology. Its RoboMaker service, first launched in 2018, lets users create simulations of robots working and exploring, so companies adding autonomy to their assembly lines and factories can test algorithms. At the conference, Amazon introduced a new set of features that simulate not just single robots but fleets cooperating to finish a job. This new layer, called IoT RoboRunner, relies upon a task manager to organize the workflow, which can be specified as Lambda functions.
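As a purely hypothetical illustration of what such a workflow step might look like, here is a minimal Lambda handler; the event fields (robotId, destination) are invented for illustration and do not reflect a documented IoT RoboRunner payload:

```python
# A hypothetical sketch of a fleet-workflow step as a Lambda function.
# The event fields are invented, not a documented RoboRunner payload.
def handler(event, context):
    robot_id = event["robotId"]
    destination = event["destination"]
    # ... dispatch the robot here, e.g. via a fleet-management API ...
    return {"robotId": robot_id, "status": "DISPATCHED", "target": destination}
```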
For those with an eye toward the deepest part of the future, where quantum computers may dominate, AWS is expanding and simplifying its cloud quantum offering, called Braket. Users can write quantum algorithms and rent time on quantum processors without long-term commitment. This week, AWS announced that the Braket service can now run quantum algorithms as hybrid jobs. After the software is created using a local simulator, it can be handed off to AWS, which will allocate time on a quantum processor and store the results in an S3 bucket. For now, there's no integration with cost-saving tools like Compute Optimizer, but if quantum computing grows more successful, it's certain to be announced at a future edition of re:Invent.
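For developers curious what that workflow looks like, here is a minimal sketch using the Amazon Braket SDK. The Bell-state circuit and the module names are illustrative, and the device ARN targets Braket's SV1 managed simulator rather than a physical quantum processor:

```python
# A sketch of the Braket workflow: prototype a circuit on the local
# simulator, then hand the same code to AWS as a hybrid job.
from braket.circuits import Circuit
from braket.devices import LocalSimulator

# Prototype locally: a two-qubit Bell state.
bell = Circuit().h(0).cnot(0, 1)
print(LocalSimulator().run(bell, shots=1000).result().measurement_counts)

# Hand off to AWS as a hybrid job; results land in an S3 bucket.
from braket.aws import AwsQuantumJob

job = AwsQuantumJob.create(
    device="arn:aws:braket:::device/quantum-simulator/amazon/sv1",
    source_module="algorithm_script.py",  # placeholder script with the job code
    entry_point="algorithm_script:main",  # placeholder entry point
)
```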