Cray Adds AMD EPYC Processors to CS500 Cluster Supercomputers
by Anton Shilov on April 19, 2018 4:00 PM EST
Cray this week announced plans to offer AMD EPYC-based CS500 cluster supercomputers later this year. The Cray CS500 clusters will be based on ultra-dense 2-way servers, each featuring up to 64 cores, various storage options, and high-speed network connectivity. In addition, Cray will offer 2U 2-way AMD EPYC-powered systems supporting up to 4 TB of memory.
The Cray CS-series supercomputers are built using ultra-dense dual-socket nodes packed into 2U chassis. The CS-series can scale up to 11,000 nodes to provide the right performance and memory capacity for target applications. The AMD-based CS500 systems come with Cray’s software programming environment and libraries that can take advantage of the EPYC processors and their features to maximize performance. An optimized programming environment is a big deal because AMD’s server CPUs have historically suffered from the lack of optimized software.
The Cray CS500 nodes based on AMD EPYC 7000-series processors are dual-socket machines with two PCIe 3.0 x16 slots, eight DDR4 memory channels/slots per socket, and a choice of SSD or HDD storage options. Four such nodes fit into a 2U chassis, which is then placed into a cabinet. The two aforementioned PCIe slots per node can host two 100 GbE network cards, providing up to 200 Gb/s of network connectivity.
For workloads that demand very large amounts of memory, Cray will offer dual-socket 2U CS500 nodes powered by two AMD EPYC 7000 CPUs and featuring 16 DDR4 DIMM slots per socket, thus supporting up to 4 TB of memory per box.
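The 4 TB figure can be sanity-checked with simple arithmetic. The article does not state the module size, so the sketch below assumes 128 GB LRDIMMs (the largest modules commonly available in 2018); the function name is illustrative, not from any Cray tooling:

```python
# Back-of-the-envelope check of the memory capacities quoted above.
# Assumption: 128 GB per DIMM (not stated in the article or press release).
GB_PER_DIMM = 128

def node_memory_tb(sockets, dimm_slots_per_socket, gb_per_dimm=GB_PER_DIMM):
    """Total memory per node in TB (1 TB = 1024 GB)."""
    return sockets * dimm_slots_per_socket * gb_per_dimm / 1024

# Standard compute node: 2 sockets x 8 DIMM slots per socket
compute_node = node_memory_tb(sockets=2, dimm_slots_per_socket=8)

# Large-memory node: 2 sockets x 16 DIMM slots per socket
large_node = node_memory_tb(sockets=2, dimm_slots_per_socket=16)

print(compute_node)  # 2.0 TB per standard node
print(large_node)    # 4.0 TB per large-memory node, matching the article
```

With these assumptions, the large-memory configuration lands exactly on the quoted 4 TB per box, which suggests the two node types differ in DIMM slots per channel rather than in module size.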
Cray plans to make its AMD EPYC-based CS500 supercomputers available in summer 2018. Prices will depend on actual configurations, which will be disclosed when the systems become available later this year.
- More EPYC Servers: Dell Launches 1P and 2P PowerEdge for HPC and Virtualization
- Microsoft Announces Azure VMs with Dual 32-core AMD EPYC CPUs
- HPE Unveils ProLiant DL385 Gen10: Dual Socket AMD EPYC
- Dissecting Intel's EPYC Benchmarks: Performance Through the Lens of Competitive Analysis
- AMD Announces Wider EPYC Availability and ROCm 1.7 with TensorFlow Support
- AMD's Future in Servers: New 7000-Series CPUs Launched and EPYC Analysis
phoenix_rizzen - Thursday, April 19, 2018
I'm confused on the RAM.
A 2U chassis holds 4 motherboards.
Each motherboard holds 1 CPU socket and 8 RAM sockets.
Each RAM socket can hold 128 GB.
128 GB * 8 * 4 = 4 TB
Shouldn't the dual-socket motherboard option then support 8 TB of RAM per 2U chassis?
Or are they limited to 64 GB DIMMs?
The article is very choppily written and jumps between the two configurations without really expanding on what each configuration can actually hold.
sbosio - Friday, April 20, 2018
Should be 16 TB per box, not per 2U chassis. The motherboard is dual socket, 8 slots per socket, 256GB per slot, so 2*8*256 = 16 TB per box. Four boxes fit in each 2U chassis, and I guess multiple 2U chassis fit into a cabinet, though it doesn't say how many.
sbosio - Friday, April 20, 2018
Sorry, I'm confused too, please disregard the above comment (and miscalculation).
repoman27 - Friday, April 20, 2018
The article is a bit difficult to parse, but then again, so was the press release. There are two different node configurations. For each 2U chassis you can have either 4 compute nodes, or a single node that supports large memory configurations and GPU accelerators. Although the press release did not specifically say this, I'm guessing the compute nodes support 2 sockets x 8 channels x 1 DIMM per channel x 128 GB per LRDIMM = 2 TB per node, whereas the large memory configuration nodes would support 2 DIMMs per channel for 4 TB per node. So twice the memory per node, but half as much per 2U chassis overall. Original press release here: https://globenewswire.com/news-release/2018/04/18/...
Also, I believe the article here misinterprets the networking situation. There are two PCIe 3.0 x16 slots per compute node for networking cards (so probably HHHL). These could be used for a solution like the Mellanox ConnectX-6 which provides dual 200GbE or HDR InfiniBand ports, but features a 32-lane PCIe 3.0 bus split into 2x x16. Product briefs here: http://www.mellanox.com/related-docs/prod_adapter_... and here: http://www.mellanox.com/related-docs/whitepapers/W...
Infy2 - Friday, April 20, 2018
But can it run Crysis?
jospoortvliet - Friday, April 20, 2018
Yes, raytracing it real time ;-)
zepi - Thursday, April 26, 2018
But who cares of 5sec input lag anyway if they can have 60fps raytraced?
anoldnewb - Tuesday, April 24, 2018
How well will it run Crysis