
ijustpostwhenimhi

prev: https://imgur.com/gallery/1cSP6JB
Got around to doing some performance and power draw tests.
Results:
Disks asleep idle: 36.5 W
Disks asleep iperf3: 39.5 W
Disks awake idle: 65 W
Disks awake net file transfer: 68.5 W
These are SAS HDDs so take that into account for power draw while they're spinning.
Samba network file sharing:
Samba upload from array: 435 MB/s (3.48 Gb/s)
Samba download to array: 422 MB/s (3.37 Gb/s)
Samba upload from RAM to SSD: 604 MB/s (4.83 Gb/s)
Samba download to RAM from SSD: 665 MB/s (5.32 Gb/s)
iperf3 upload: 6.44 Gb/s
iperf3 download: 7.26 Gb/s
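For anyone checking the math on the figures in parentheses, Gb/s is just MB/s multiplied by 8 (bits per byte) and divided by 1000 (mega to giga). A minimal sketch of that conversion:

```python
# Convert the quoted Samba throughput figures from MB/s to Gb/s
# (decimal units: 1 MB = 10^6 bytes, 1 Gb = 10^9 bits).
results_mb_s = {
    "upload from array": 435,
    "download to array": 422,
    "upload from RAM to SSD": 604,
    "download to RAM from SSD": 665,
}

for name, mb_s in results_mb_s.items():
    gb_s = mb_s * 8 / 1000  # bytes -> bits, then mega -> giga
    print(f"Samba {name}: {mb_s} MB/s = {gb_s:.2f} Gb/s")
```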
I'm very pleased with the performance of the PLX switch, even with only one Gen 3 lane for the upstream port.
The PCIe testing phase is complete at this point. I have a 12-bay rack mount SAS enclosure coming tomorrow. So next on the menu is to test the enclosure, and then 3D print a case (or cases) for everything.
Requested Tags:
@StackMySwitchUp

NOYLL
Looks like a serious upgrade to the imgur servers
Hawkgirl203
I have no idea what the post is about. I don’t understand any comments. Oooh a cat!
c3l3x
In the end, do you think this will cost less than a NAS from QNAP?
ijustpostwhenimhi
After looking into QNAP, the TS-1655 would be the closest equivalent for drive capacity and connectivity. I see one for $1755. I am up to $379 (Pi, Pi accessories, PCIe components, 10GbE NIC, SAS controller, SAS enclosure), but the TS-1655 has an 8-core CPU vs my 4, plus upgrade options like 25GbE, full-speed NVMe, and up to 128GB RAM, so it's tough to compare. The base TS-1655 with 2.5GbE has similar read/write speeds to my build, and is rated at 68W in sleep mode and 104.65W for typical operation.
StackMySwitchUp
I would like to add that one could build a more traditional PC for a NAS out of second-hand parts and get a faster build that draws less. This one is just way more fun.
ijustpostwhenimhi
Yup ;) Depending on how the budget looks in the end, I'd be curious to run head-to-head tests with an x86 build. Same HBA, NIC, and backplane.
StackMySwitchUp
Have you configured jumbo frames? You might get even more out of it. You've got ~8 Gb/s, which should be plenty for maxed-out transfers unless your CPU gets bogged down.
My own adventure took a twist; I've been running a Proxmox cluster and wondering why I only got 1 Gb/s. Unplugged my management NIC and now I have a broken cluster with 10Gb, even though the cluster should run on the fast NICs. Gonna pick up the remaining 64GB of my 128GB of RAM today and fix that dumb cluster this weekend.
Keep em coming!
ijustpostwhenimhi
Yup, 9000 MTU, going through a MikroTik CRS309. When I was on the Pi 4 that helped a lot with SMB transfers, so I always enable them.
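If anyone wants to sanity-check that jumbo frames survive the whole path (NIC, switch, and the far end) rather than just being set locally, here's a rough sketch; the interface name and target address are placeholders, not values from this build:

```python
import subprocess
from pathlib import Path

IFACE = "eth0"           # placeholder interface name
TARGET = "192.168.1.10"  # placeholder address of the NAS

# Current MTU of the local interface (Linux sysfs).
mtu = int(Path(f"/sys/class/net/{IFACE}/mtu").read_text())
print(f"{IFACE} MTU: {mtu}")

# Ping with "do not fragment" and a payload sized for a 9000-byte MTU
# (8972 data + 8 ICMP header + 20 IP header = 9000). If any hop only
# passes 1500-byte frames, this fails instead of silently fragmenting.
res = subprocess.run(
    ["ping", "-c", "3", "-M", "do", "-s", "8972", TARGET],
    capture_output=True, text=True,
)
print("jumbo path OK" if res.returncode == 0 else "path does not pass 9000-byte frames")
```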
StackMySwitchUp
Your choice in networking hardware is impeccable
[deleted]
[deleted]
ijustpostwhenimhi
- signed me
amp99
That sounds like a really high idle draw. I wonder if there's an ASPM issue with the Pi/ARM.
ijustpostwhenimhi
Possibly. When I was getting the SAS controller to work, I disabled ASPM; besides not being the solution, it didn't make a difference in power draw. Having it enabled did cause the 10GbE NIC to initialize as a Gen 2 device, though I didn't test whether it ramped up to Gen 3 during use.
amp99
As for the NIC, you should be able to tell via lspci output. See here: https://www.reddit.com/r/homelab/comments/ybrdsn/downgraded_pcie_lane_speedwidth "LnkCap" shows the maximum capability of the device, and "LnkSta" shows the current state. When the device is running at a lower link speed than it's capable of, it should be denoted with "(downgraded)" in the LnkSta output.
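If you'd rather script that check than eyeball the output, here's a rough sketch that parses lspci -vv and flags any link running below its advertised capability (run as root so lspci can read the capability registers; exact output wording varies by lspci version):

```python
import re
import subprocess

out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout

device, cap = None, None
for line in out.splitlines():
    if line and not line[0].isspace():
        device, cap = line, None  # new device header, e.g. "01:00.0 Ethernet controller: ..."
    elif "LnkCap:" in line:
        cap = re.search(r"Speed ([\d.]+)GT/s.*?Width x(\d+)", line)
    elif "LnkSta:" in line and cap:
        sta = re.search(r"Speed ([\d.]+)GT/s.*?Width x(\d+)", line)
        # Flag it if the current speed/width differs from the capability,
        # or if lspci itself marked the line as downgraded.
        if sta and (sta.groups() != cap.groups() or "downgraded" in line):
            print(f"{device}\n  {line.strip()}")
```

Note that a device sitting in a low-power link state (like the GPU example below) will also show up here, so a mismatch isn't automatically a problem; it just tells you where to look.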
amp99
For info, you can also see this behaviour on Windows if you use software such as HWiNFO to monitor the "PCIe Link Speed" of your GPU. When the GPU isn't under much load it runs at the lowest available link speed (to save power); when you put a demand on the GPU (e.g. by playing games or using hardware-accelerated video playback/browsing), the link speed ramps up. In the attached image, it looks like mine spends most of its time at 5 GT/s.
StackMySwitchUp
I think because @op is measuring at the socket, the efficiency of the PSU is a factor.
amp99
BTW, just for reference, here's some data for one of my old (2015) x86 servers. It shows 20W AC input power (from the wall) and 18W DC output power (to the server), so a ~2W loss.
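The loss figure is just AC in minus DC out, and efficiency is the ratio of the two; a worked example with those numbers:

```python
# PSU efficiency from wall-side (AC) and server-side (DC) measurements.
ac_in_w = 20.0   # measured at the wall
dc_out_w = 18.0  # delivered to the server
efficiency = dc_out_w / ac_in_w  # 0.90 -> ~90% efficient at this load point
loss_w = ac_in_w - dc_out_w      # ~2 W lost in the PSU
print(f"efficiency: {efficiency:.0%}, loss: {loss_w:.1f} W")
```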
amp99
Even with a really low-efficiency PSU, that wouldn't account for such a high power draw. As mentioned in their reply, they disabled ASPM, meaning the controller(s) are running in full power mode constantly, which is ballooning the draw. That SAS controller uses ~10W on its own, for example: https://docs.broadcom.com/doc/BC00-0448EN#page=3
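For anyone following along at home, here's a rough sketch of how to check whether ASPM is actually in effect, both the kernel's global policy and the per-device state (standard Linux sysfs and lspci; exact wording varies by kernel and lspci version, and the per-device dump needs root):

```python
import subprocess
from pathlib import Path

# Global ASPM policy; the active one is shown in [brackets],
# e.g. "default performance [powersave] powersupersave".
policy = Path("/sys/module/pcie_aspm/parameters/policy").read_text().strip()
print(f"kernel ASPM policy: {policy}")

# Per-device state: lspci -vv prints "ASPM Disabled" or e.g.
# "ASPM L1 Enabled" in each device's LnkCtl line.
out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout
device = None
for line in out.splitlines():
    if line and not line[0].isspace():
        device = line
    elif "LnkCtl:" in line and "ASPM" in line:
        print(f"{device}\n  {line.strip()}")
```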
ijustpostwhenimhi
To clarify, I did not see a power draw difference with ASPM enabled for the SAS controller. I will further test the NIC with ASPM when I get back to working on it and get back to you. The PSU is an EVGA 500GD 80+ Gold.
amp99
If you didn't see a difference, that means it wasn't working (properly) when enabled. I'm out walking now but will post more details later.
ijustpostwhenimhi
Even with pcie_aspm=force, there is no difference in power draw, and the link stays at 8 GT/s for both the HBA and the NIC.
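For reference, the per-device link speed can also be read straight from sysfs without lspci; a small sketch, where the PCI addresses are placeholders rather than the actual addresses of the HBA and NIC in this build:

```python
from pathlib import Path

# current_link_speed / max_link_speed are standard PCIe sysfs attributes.
# 8.0 GT/s = Gen 3, 5.0 GT/s = Gen 2, 2.5 GT/s = Gen 1.
DEVICES = {
    "HBA": "0000:01:00.0",  # placeholder PCI address
    "NIC": "0000:02:00.0",  # placeholder PCI address
}

for name, addr in DEVICES.items():
    dev = Path("/sys/bus/pci/devices") / addr
    cur = (dev / "current_link_speed").read_text().strip()
    top = (dev / "max_link_speed").read_text().strip()
    width = (dev / "current_link_width").read_text().strip()
    print(f"{name}: {cur} (max {top}), width x{width}")
```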