Amazon.com Inc.’s cloud division, Amazon Web Services (AWS), is launching new designs aimed at improving data center efficiency and easing the strain its facilities place on the electrical grid. The updates include advanced cooling techniques, alternative fuel options for backup generators, and an improved server rack layout. Some of these components have already been implemented, with additional features set to debut as new data centers open. This initiative responds to the energy-intensive nature of the server farms that power on-demand computing services.
AWS launches new designs for data center efficiency
AWS plans to invest approximately $75 billion in capital expenditures this year, much of it directed toward servers, chips, and supporting systems such as cooling. The investment reflects AWS’s commitment to addressing energy usage while expanding its AI infrastructure. At its upcoming re:Invent conference, the company is expected to introduce its latest custom-designed chips, including advanced AI products that will compete with established offerings from Nvidia.
New cooling systems for AI servers
One of the most significant updates is the shift to liquid cooling systems for AWS’s AI servers. This technology is essential for maintaining optimal performance in high-powered chips from Nvidia and AWS’s homegrown Trainium devices. AWS emphasizes that the liquid cooling integration is flexible, allowing for both air and liquid cooling in a single system. This multimodal design is intended to maximize performance and efficiency across various workloads, addressing the unique demands of AI applications.
Furthermore, AWS is pursuing a simplified approach to electrical distribution and mechanical designs for its servers. The company says this strategy could raise infrastructure availability to 99.9999% and reduce the number of server racks susceptible to electrical disturbances by up to 89%. The gain likely comes from minimizing conversions between AC and DC power, which typically introduce energy losses.
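To put the 99.9999% figure (“six nines”) in perspective, availability percentages map directly to expected downtime per year. A minimal sketch of that arithmetic:

```python
# Convert an availability percentage into expected annual downtime.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000 seconds

def annual_downtime_seconds(availability_pct: float) -> float:
    """Expected downtime per year for a given availability percentage."""
    return SECONDS_PER_YEAR * (1 - availability_pct / 100)

print(annual_downtime_seconds(99.9999))  # "six nines": ~31.5 seconds/year
print(annual_downtime_seconds(99.99))    # "four nines": ~52.6 minutes/year
```

In other words, moving to six nines leaves only about half a minute of expected downtime per year, which is why even small reliability gains at this level are hard-won.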
These new cooling systems and streamlined designs aim to support a sixfold increase in rack power density over the next two years, with additional growth anticipated thereafter. By incorporating AI into its operational strategies, AWS is employing predictive analytics to optimize server rack positioning, thereby further reducing stranded power that is allocated but never drawn.
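AWS has not disclosed how its placement analytics work. Purely as an illustration of the kind of optimization involved, a greedy heuristic that assigns the largest racks first to the power feed with the most remaining headroom can cut stranded capacity; every name and number below is hypothetical, not AWS’s actual method.

```python
# Hypothetical sketch: greedy placement of server racks onto power feeds
# to minimize stranded (allocated-but-unused) capacity. This is NOT AWS's
# algorithm; it only illustrates the general technique.

def place_racks(rack_draws_kw, feed_capacities_kw):
    """Assign each rack to the feed with the most remaining headroom.

    Returns (placement, remaining): a dict of feed index -> rack draws,
    and the leftover headroom per feed. Raises if a rack cannot fit.
    """
    remaining = list(feed_capacities_kw)
    placement = {i: [] for i in range(len(remaining))}
    # Place the largest racks first -- a standard greedy bin-packing order.
    for draw in sorted(rack_draws_kw, reverse=True):
        best = max(range(len(remaining)), key=lambda i: remaining[i])
        if remaining[best] < draw:
            raise ValueError(f"no feed can host a {draw} kW rack")
        remaining[best] -= draw
        placement[best].append(draw)
    return placement, remaining

# Four racks, two 60 kW feeds: the greedy order balances the load so
# each feed ends with 10 kW of headroom instead of one feed overflowing.
placement, leftover = place_racks([40, 25, 25, 10], [60, 60])
```

A production system would fold in predicted (not just rated) power draw, thermal constraints, and failure domains, but the core idea of packing load to avoid stranded capacity is the same.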
Ian Buck, Nvidia’s vice president of hyperscale and high-performance computing, said that advanced liquid cooling solutions will efficiently cool AI infrastructure while minimizing energy usage. The two companies are working closely to refine the rack design specifically for liquid cooling applications, which is expected to benefit shared customers significantly.
Prasad Kalyanaraman, AWS’s vice president of Infrastructure Services, stated that these improvements are critical strides toward increasing energy efficiency and modularity.
“AWS continues to relentlessly innovate its infrastructure to build the most performant, resilient, secure, and sustainable cloud for customers worldwide,” stated Kalyanaraman. “These data center capabilities represent an important step forward with increased energy efficiency and flexible support for emerging workloads. But what is even more exciting is that they are designed to be modular, so that we are able to retrofit our existing infrastructure for liquid cooling and energy efficiency to power generative AI applications and lower our carbon footprint.”
Featured image credit: Amazon