
How increasing power and advanced-cooling techniques are converging for AI, Supercomputing and Cloud Data Centers

In the AI domain, brute-force processing power is needed to tackle some of the most compute-intensive challenges of the modern era. This is being achieved by deploying increasingly high-power processors and ever-larger memory resources in clustered architectures that reduce the latency between onboard compute engines.

This is driving innovation from companies like Cerebras, whose recently announced Wafer Scale Engine (WSE) is widely regarded as the most powerful processor in AI today. Comprising 84 processing cells that span an entire wafer yet function as a single chip, the WSE dramatically reduces the latency associated with traditional singulated, socket-based chip architectures.

Rated at a massive 15kW, an order of magnitude greater than legacy processors, the WSE also requires an advanced power architecture in which power is delivered uniformly to each cell at extremely high currents.
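To put those current levels in perspective, here is a rough back-of-the-envelope estimate in Python. The ~0.8V core-rail voltage is an assumed, illustrative value only, not a figure quoted by Cerebras or Vicor:

# Rough current estimate for a 15kW wafer-scale processor.
# The 0.8V core voltage is an assumption for illustration only.
total_power_w = 15_000        # WSE power rating cited above
core_voltage_v = 0.8          # assumed core-rail voltage
num_cells = 84                # processing cells on the wafer

total_current_a = total_power_w / core_voltage_v
per_cell_current_a = total_current_a / num_cells

print(f"Total current: ~{total_current_a:,.0f} A")        # ~18,750 A
print(f"Per-cell current: ~{per_cell_current_a:,.0f} A")  # ~223 A

Under these assumptions, the wafer as a whole would draw on the order of tens of thousands of amps, which is why a power architecture that delivers current uniformly to each cell becomes essential.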

Read more about how Vicor is helping Cerebras achieve new levels of processing power

[Image: Vertical power delivery]