How AWS’s $75B plan uses water to cool AI data centers

AWS plans to invest approximately $75 billion in capital expenditures this year, primarily directed toward server and chip upgrades, including cooling systems

by Kerem Gülen
December 3, 2024
in News, Artificial Intelligence

Amazon.com Inc.’s cloud division, Amazon Web Services (AWS), is launching new designs aimed at enhancing data center efficiency to mitigate the increasing demand on the electrical grid. The updates include advanced cooling techniques, alternative fuel options for backup generators, and an improved server rack layout. Some of these components have already been implemented, with additional features set to debut as new data centers open. This initiative responds to the energy-intensive nature of server farms that power on-demand computing services.

AWS launches new designs for data center efficiency

AWS plans to invest approximately $75 billion in capital expenditures this year, primarily directed toward server and chip upgrades, including cooling systems. The investment reflects AWS’s commitment to addressing energy usage while enhancing its AI infrastructure. At its upcoming re:Invent conference, the company is expected to introduce its latest custom-designed chips, including advanced AI products that will compete with established offerings from Nvidia.

New cooling systems for AI servers

One of the most significant updates is the shift to liquid cooling systems for AWS’s AI servers. This technology is essential for maintaining optimal performance in high-powered chips from Nvidia and AWS’s homegrown Trainium devices. AWS emphasizes that the liquid cooling integration is flexible, allowing for both air and liquid cooling in a single system. This multimodal design is intended to maximize performance and efficiency across various workloads, addressing the unique demands of AI applications.


Furthermore, AWS is pursuing a simplified approach to electrical distribution and mechanical design for its servers. This strategy could raise infrastructure availability to 99.9999% and cut the number of server racks susceptible to electrical disturbances by up to 89%. The improvement likely comes from minimizing AC-to-DC power conversions, which typically cause energy losses.
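To put the six-nines figure in perspective, here is a minimal back-of-the-envelope sketch in Python (an illustration, not anything AWS publishes) that converts an availability percentage into expected downtime per year.

```python
# Back-of-the-envelope illustration (not AWS's methodology): translate an
# availability percentage into expected downtime per year.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def annual_downtime_minutes(availability: float) -> float:
    """Expected minutes of downtime per year for a given availability fraction."""
    return (1.0 - availability) * MINUTES_PER_YEAR

for label, availability in [("three nines", 0.999),
                            ("five nines", 0.99999),
                            ("six nines", 0.999999)]:
    seconds = annual_downtime_minutes(availability) * 60
    print(f"{label} ({availability:.4%}): ~{seconds:,.0f} seconds of downtime per year")
```

At six nines, that works out to roughly half a minute of downtime per year, versus nearly nine hours at three nines.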


These new cooling systems and streamlined designs are intended to support a sixfold increase in rack power density over the next two years, with further growth anticipated thereafter. By incorporating AI into its operations, AWS is also using predictive analytics to optimize server rack positioning, further reducing energy waste from underutilized power.
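The article does not describe how AWS’s predictive placement works, but the general idea of reducing stranded power through smarter rack placement can be sketched with a toy first-fit packing heuristic. The rack draws and row power budget below are hypothetical numbers, and the greedy heuristic is a stand-in for illustration, not AWS’s algorithm.

```python
# Toy illustration only: place racks into rows with a fixed power budget and
# measure stranded (committed but unused) capacity. Rack draws and the row
# budget are made-up numbers; this is not AWS's placement system.

from typing import List

def stranded_power_kw(rack_draws_kw: List[float], row_budget_kw: float,
                      sort_first: bool) -> float:
    """First-fit placement; returns total unused capacity across opened rows."""
    draws = sorted(rack_draws_kw, reverse=True) if sort_first else list(rack_draws_kw)
    rows: List[float] = []  # remaining capacity of each opened row
    for draw in draws:
        for i, remaining in enumerate(rows):
            if draw <= remaining:
                rows[i] -= draw  # rack fits in an existing row
                break
        else:
            rows.append(row_budget_kw - draw)  # open a new row for this rack
    return sum(rows)

racks = [12, 30, 24, 42, 6, 18, 48]  # hypothetical per-rack draws (kW)
budget = 60.0                        # hypothetical per-row power budget (kW)
print("arrival-order placement, stranded kW:", stranded_power_kw(racks, budget, sort_first=False))
print("draw-sorted placement, stranded kW:  ", stranded_power_kw(racks, budget, sort_first=True))
```

In this made-up example, placing racks in arrival order opens four 60 kW rows and strands 60 kW of capacity, while sorting by draw first packs the same racks into three rows with nothing stranded.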

Ian Buck, Nvidia’s vice president of hyperscale and high-performance computing, said that advanced liquid cooling solutions will cool AI infrastructure efficiently while minimizing energy use. The two companies are working closely to refine the rack design for liquid cooling, which is expected to benefit their shared customers.

Prasad Kalyanaraman, AWS’s vice president of Infrastructure Services, stated that these improvements are critical strides toward increasing energy efficiency and modularity.

“AWS continues to relentlessly innovate its infrastructure to build the most performant, resilient, secure, and sustainable cloud for customers worldwide,” stated Kalyanaraman. “These data center capabilities represent an important step forward with increased energy efficiency and flexible support for emerging workloads. But what is even more exciting is that they are designed to be modular, so that we are able to retrofit our existing infrastructure for liquid cooling and energy efficiency to power generative AI applications and lower our carbon footprint.”


Featured image credit: Amazon

Tags: aws, data center

