I found a great deal on eBay for a rather unique server in great condition. It appeared to be new old stock with an upgraded configuration. Afterwards, I got curious and did some searching, but found essentially no coverage of this server anywhere. So here is my overview/review.
Intro
The ASUS RS620SA-E10-RS12 is an extremely high-density 2U six-node EPYC SP3 server supporting EPYC Rome/Milan. According to its press release, it is the world's first 2U6N server, and to be fair, I haven't been able to find any other 2U6N servers. (Link)(Archive)
Server Exterior
The front of the chassis hosts two 1U nodes and a cage for twelve 2.5-inch SAS/SATA/NVMe drives, two per node. The caddies are not toolless. The bays are wired as NVMe internally, with a Broadcom SAS3008 handling the conversion to SAS/SATA.
ASUS RS620SA-E10-RS12 Front
Each rack ear is equipped with five buttons:
2 buttons for easy-access power control of the rear nodes
3 buttons for locating server nodes (each toggles an LED on its respective node)
Additionally, each rack ear has a pull handle, which is extremely useful for servicing and maintenance, as the server is quite heavy; the handles also help avoid accidental button presses.
ASUS RS620SA-E10-RS12 Front
The rear of the chassis provides access to the redundant power supplies and the other four server nodes.
ASUS RS620SA-E10-RS12 Rear
This particular unit is equipped with a 2200W 80+ Platinum Delta PSU. According to the QVL, this chassis can also be equipped with a 3000W 80+ Titanium GOSPOWER unit.
ASUS RS620SA-E10-RS12 Power Supply
A rather interesting feature of this chassis is its built-in slide-out cable management channel, which routes cables from the two front nodes back to the rear.
ASUS RS620SA-E10-RS12 Cable Management
My particular unit's cable management channel was equipped with two SFP28 male-to-female cables.
ASUS RS620SA-E10-RS12 Custom SFP28 Cables
These cables are quite interesting, as I have been unable to find any similar male-to-female SFP28 cables. Additionally, I don't know whether they come stock, as my unit does not appear to be the base configuration.
Chassis Internals
Chassis Backplane
There are four 80mm 16K RPM 6-pin hot-swappable Delta fans in this system. Also visible are the CB8LX12G-R2H-B NVMe-to-SAS/SATA boards for the backplane. Interestingly, there are three boards, one for every two nodes. Another notable detail is the pair of midplane boards, which handle the rear and front nodes respectively.
ASUS RS620SA-E10-RS12 Backplane
Node Overview
External
Each node has a power button, a VGA port, two USB 3.2 Gen 1 ports, a dedicated IPMI port, and a 1GbE LAN port. The nodes also have a very useful and notable feature: a seven-segment POST code display.
ASUS RS620SA-E10-RS12 Node IO (Top IPMI, Bottom LAN)
Internal
Each node has a single SP3 socket with eight memory channels supporting DDR4-3200, for up to 4096GB using 512GB LRDIMMs. Note: the node pictured only has two DIMMs installed, as I was using it for testing and the rest of my RAM was deployed elsewhere.
ASUS RS620SA-E10-RS12 Unpopulated Node
ASUS RS620SA-E10-RS12 Populated Node
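As a side note, if you ever want to check DIMM population without pulling a node, dmidecode lists every slot the board exposes. A minimal sketch, assuming a Linux host with dmidecode installed (run as root):

```python
#!/usr/bin/env python3
"""Count populated DIMM slots via dmidecode. Assumes Linux + root."""
import subprocess

out = subprocess.run(
    ["dmidecode", "-t", "memory"],
    capture_output=True, text=True, check=True,
).stdout

# Every "Memory Device" block reports a Size line; empty slots read
# "Size: No Module Installed".
sizes = [ln.strip() for ln in out.splitlines() if ln.strip().startswith("Size:")]
populated = [s for s in sizes if "No Module Installed" not in s]

print(f"{len(populated)} of {len(sizes)} DIMM slots populated")
for s in populated:
    print("  " + s)
```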
As for expansion slots, each node has a PCIe Gen4 x16 OCP 3.0 slot for networking, a PCIe Gen4 x16 half-length low-profile slot through a riser card (since the nodes do not supply any additional PCIe power, cards are limited to 75W), and finally a riser card with two M.2 slots, each supporting drives up to 22110 over either SATA or a PCIe Gen4 x4 link.
ASUS RS620SA-E10-RS12 M.2 Riser Card Side 1
ASUS RS620SA-E10-RS12 M.2 Riser Card Side 2
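If you want to confirm what link a riser-mounted card or M.2 drive actually negotiated, lspci reports both the slot's capability and the live link status. A small sketch, again assuming a Linux host (the device address is a placeholder; substitute one from your own lspci output):

```python
#!/usr/bin/env python3
"""Print PCIe link capability vs. negotiated status for one device."""
import re
import subprocess

DEVICE = "01:00.0"  # placeholder PCI address; find yours with plain `lspci`

out = subprocess.run(
    ["lspci", "-s", DEVICE, "-vv"],
    capture_output=True, text=True, check=True,
).stdout

# LnkCap is what the device supports; LnkSta is what was negotiated.
# A healthy Gen4 x4 M.2 link should show "Speed 16GT/s, Width x4".
for line in out.splitlines():
    if re.search(r"Lnk(Cap|Sta):", line):
        print(line.strip())
```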
Misc
It is interesting to see a physical TPM module instead of firmware TPM (fTPM). The VGA output also comes from a header rather than being soldered to the board.
ASUS RS620SA-E10-RS12 VGA and TPM
Also pictured below are the node backplane connectors.
ASUS RS620SA-E10-RS12 Node Backplane Connector
Management
Each node has an ASPEED AST2500 BMC running ASUS's management solution, which appears to be based on MegaRAC SP-X. Note the LANNCSI jumper to the right of the AST2500, which controls whether management is accessible from the Gigabit i210 LAN or the OCP NIC.
ASUS RS620SA-E10-RS12 Node ASPEED AST2500
I won't review or show the IPMI interface, as it has been extensively covered by STH here.
Note: I have had a couple of issues with managing this server.
I have been unable to access the IPMI through the dedicated IPMI port. I have only been able to get it to work via a setting that shares IPMI access over the regular LAN port; a quick in-band check of the BMC's network configuration is sketched after these notes. (I fully acknowledge it may very well be PEBCAK.)
The BIOS is horribly slow when physically connected to the VGA output, to the point where I can actually see the interface update line by line. This is a particular problem when moving across menu tabs, as each tab renders sequentially until you reach the one you want. Notably, this does not happen when accessing the BIOS via the IPMI iKVM.
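For anyone debugging a similar IPMI problem: once an OS is booted, you can inspect the BMC's network configuration in-band and confirm what address and source (static vs. DHCP) it thinks it has. A rough sketch using ipmitool, assuming the Linux IPMI kernel drivers (ipmi_si and ipmi_devintf) are loaded and that the LAN settings live on channel 1, which may differ on this board:

```python
#!/usr/bin/env python3
"""Dump the BMC's LAN configuration in-band via ipmitool (run as root).

Assumptions: ipmitool installed, IPMI kernel drivers loaded, and the
LAN settings on channel 1 -- the channel number may differ per board.
"""
import subprocess

CHANNEL = "1"  # assumed channel; try other numbers if this one is empty

result = subprocess.run(
    ["ipmitool", "lan", "print", CHANNEL],
    capture_output=True, text=True,
)
# Interesting fields: "IP Address", "IP Address Source", "MAC Address".
print(result.stdout or result.stderr)
```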
Performance
Now let's talk about performance numbers: ...I don't have any. I can't stress test this system in any meaningful scenario.
This system is designed for high-density usage, but I can't run more than four nodes, as I don't have access to a 16A outlet.
For those curious how I run this server without 16A despite the PSU calling for a 16A C19 cable: I use a C19-to-C14 cable and never run more than three or four nodes at once.
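The back-of-the-envelope math, under my own assumptions (230V mains, the usual 10A rating on a C14 coupler, and roughly 94% PSU efficiency for 80+ Platinum), looks like this:

```python
# Rough headroom math for running the 2200W PSU off a C19-to-C14 cable.
# All the constants below are my assumptions, not ASUS specifications.
VOLTAGE = 230          # volts (assumed 230V circuit)
C14_RATING_A = 10      # amps; what a C13/C14 coupler is rated for
PSU_OUTPUT_W = 2200    # rated DC output of the Delta PSU
EFFICIENCY = 0.94      # assumed 80+ Platinum efficiency at load

cable_limit_w = VOLTAGE * C14_RATING_A     # ~2300W through the C14 cable
psu_input_w = PSU_OUTPUT_W / EFFICIENCY    # ~2340W wall draw at full load

print(f"C14 cable limit:        {cable_limit_w:.0f} W")
print(f"PSU input at full load: {psu_input_w:.0f} W")
# Full load would (just) exceed what the cable is rated for, hence
# capping the chassis at three or four active nodes.
```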
Each node was seemingly designed for GPU inference or significant CPU compute. But, simply put, I don't have the CPU or GPU hardware to test a node properly and do it justice, as such hardware is expensive. If I ever get the hardware to give it a proper shot, I may update this blog accordingly.
Power Usage/Efficiency
Sorry, I can't test this either. None of my PDUs have power monitoring, and I can't use a standard power meter because of the C19-to-C14 cable. At release, ASUS did claim the world's highest SPECpower ranking for energy efficiency for a high-density server powered by AMD® EPYC™ 7002 processors. (Link)(Archive) Although I can't verify this, I don't see any reason why it would be wrong given the careful wording; still, time has passed, and a newer Rome/Milan server may have since taken the title.
Conclusion
This is a very interesting, if obscure, server. It is the first of its kind (2U6N), and perhaps the last. ASUS has transitioned back to 2U4N for its latest generations (Link); I strongly suspect the reason is the much larger SP5 socket. Another possible factor, in my opinion, is that the RS620SA may not have sold particularly well, given the lack of BIOS/BMC updates since 2022 while other ASUS EPYC servers released in the same timeframe have kept receiving them.
Despite this, I am very happy with my purchase. I got it for a relative bargain, and it is perfect for my needs: a capable server platform on which I can spin up nodes as easily as I can spin up VMs. That is particularly useful as I increasingly experiment with interesting 75W PCIe cards; this system is essentially six workstations with different pieces of hardware in a tiny 2U form factor, which I can spin up or down over the network at any time.
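For what it's worth, that spin-up/spin-down workflow is nothing exotic: it is plain IPMI power control against each node's BMC. A minimal sketch, assuming ipmitool is installed and IPMI-over-LAN is enabled; the node addresses and credentials below are placeholders:

```python
#!/usr/bin/env python3
"""Power individual nodes on/off over the network via their BMCs.

Addresses and credentials are placeholders; assumes ipmitool is
installed and IPMI-over-LAN is enabled on each node's BMC.
"""
import subprocess
import sys

# Hypothetical addressing scheme: one BMC per node.
NODES = {f"node{i}": f"10.0.0.{10 + i}" for i in range(1, 7)}

def power(node: str, action: str) -> None:
    """Run `chassis power <action>` (status/on/off/soft) against a node."""
    subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", NODES[node],
         "-U", "admin", "-P", "changeme",  # placeholder credentials
         "chassis", "power", action],
        check=True,
    )

if __name__ == "__main__":
    power(sys.argv[1], sys.argv[2])  # e.g. `python nodectl.py node3 on`
```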