Note: while you are here, click the "Subscribe" button in the bottom corner to get updates when this VCG entry changes. Also note this uses the inbox driver, with the newest supported firmware being HPK5, and it's certified for ESA.
While I"m here I'll check what that firmware version fixed. Looks fairly serious from a stability basis so I'll make sure to use HPE's HSM + vLCM to patc this drive to the current firmware. support.hpe.com/hpesc/public/dโฆ
Next up let's look at P48896-B21: 1 HPE ProLiant DL360 Gen11 8SFF x4 U.3 Tri-Mode Backplane Kit
So this is the drive cage you NEED to run NVMe in a DL360, as it gives 4 PCI-E lanes to each drive (vs. the cheaper basic one that is only x1 and only supports SATA in pass-through).
Looking at the QuickSpecs, there are a few other drive cage options to consider for vSAN ESA:
LFF 3.5'': Can't do NVMe pass-through.
24G x1 NVMe/SAS U.3: Can't do NVMe pass-through, and frankly will underperform with NVMe drives even if used for RAID.
20 EDSFF: Supported for ESA.
LFF Server: Not supported for vSAN ESA.
Rambling out loud, I think E3 form factor stuff is a better play in the long term as it allows more density. 2.5'' SFF is really going to end up as a legacy option that shouldn't be needed for greenfield builds.
Note the E3 config will support 300TB, 2x the SFF ones.
(Please go 100Gbps networking if you're doing something that dense!)
We do have an NS204i-u, but that is only for a pair of M.2 boot devices (and a GREAT idea for boot; stop doing SD card and boot-from-SAN weird stuff!). This WILL NOT and cannot be used with the larger SFF or E3 format drives (and that's a good thing!).
Next up, what's missing. There is NOT a RAID controller (model names generally start with MR or SR). If there's one of these, the NVMe drives could potentially be cabled to it (and that's bad, and not supported by vSAN ESA).
Per the quickspecs:
"Includes Direct Access cables and backplane power cables. Drive cages will be connected to Motherboard (Direct Attach) if no Internal controller is selected. Direct Attach is capable of supporting all the drives (SATA or NVMe)."
Now I'll note this BOM only supports 8 drives, but if you're willing to give up the optical drive or the front USB/DisplayPort, there is a way to get 2 more cabled in:
One other BOM review item. They went 4 x 25Gbps. If you don't already have 25Gbps TOR switches I would honestly go 2 x 100Gbps. It's about 20% more cost all in with cables and optics, but it's 2x the bandwidth and the rack will look prettier.
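Back-of-napkin math on that (the relative cost is a made-up placeholder just to show the shape of the argument; plug in your actual quotes):

```python
# 4 x 25GbE vs 2 x 100GbE per host. The relative cost figures are made-up
# placeholders for illustration; use your actual quotes.
options = {
    "4 x 25GbE":  {"ports": 4, "gbps_per_port": 25,  "relative_cost": 1.0},
    "2 x 100GbE": {"ports": 2, "gbps_per_port": 100, "relative_cost": 1.2},  # ~20% more all-in
}
for name, o in options.items():
    aggregate = o["ports"] * o["gbps_per_port"]
    per_cost = aggregate / o["relative_cost"]
    print(f"{name}: {aggregate} Gbps aggregate, {per_cost:.0f} Gbps per unit of cost")
```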
There's also not an Intel VROC license/config item on here. This is a "software(ish)" RAID option for NVMe. We don't need/want this for vSAN ESA. In theory there might be a way to use it for a boot device, but use the NS204i-u controller instead for now.
In general, talk to your HPE solution architects; their quoting tools should be able to help (HPE has always had really good channel tools). If possible, start with a ReadyNode/vSAN ESA option to lock out bad choices.
Thanks to Dan R for providing me some insight into this.
I'm sure @plankers already noticed the lack of a TPM.
It's now embedded, and disabled if your server is going to China.
I'm glad HPE stopped making this a removable option.
@plankers Another point, for anyone playing with the new memory Tiering, you also are going to want that cabled this way, as that feature is not supported through a RAID controller either.
• • •
I got a quote from <Redacted> and the regular server of the same exact build as the ReadyNode was $10k more. The parts/BOM are 100% identical aside from the vSAN ReadyNode identifier in the one quote?
Customers get really angry when they discover servers they bought are not supported by the newest vSphere/OS release:
A 🧵
Warning, it may get 🌶️, and anger some vendors…
Here's how to get the best bang for buck when buying servers:
Buy the newest CPU generation. Yes, someone may sell you an Ice Lake (2021) in 2024. No, you shouldn't buy it. Even for a discount. Emerald Rapids will get you 3 more years of microcode updates and chances at OS/HV support.
Paying 20% less on a CPU may sound attractive ("it's cheaper, I don't need the speed!"). But pushing back a refresh 3 years is huge for TCO.
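A toy example of why the discount math doesn't work out (all numbers are made up; the point is the supported years, not the prices):

```python
# Toy comparison: discounted last-gen host vs. current-gen host.
# All numbers are illustrative placeholders, not vendor pricing.
server_cost = 40_000           # hypothetical all-in cost of a current-gen host
cpu_discount = 0.20 * 10_000   # ~20% off a hypothetical $10k CPU line item
last_gen_cost = server_cost - cpu_discount

current_gen_years = 8   # assumed years of OS/hypervisor support left
last_gen_years = 5      # assumed: ~3 fewer years before support forces a refresh

print(f"Current gen: ${server_cost / current_gen_years:,.0f} per supported year")
print(f"Last gen:    ${last_gen_cost / last_gen_years:,.0f} per supported year")
```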
Me: Why do you think it's slow?
Ohhhh You see LOW IOPS on the back end!
Let's address this one...
IOPS is not a unit of good or bad performance. It's a measure of how much a system is being used. In a vacuum, seeing this number low doesn't actually tell us anything about how good transactional application performance is.
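One way to see it: by Little's Law, IOPS is roughly outstanding IO divided by latency, so a lightly threaded app on a fast array shows "low" IOPS while responding great. A quick sketch with made-up numbers:

```python
# Little's Law: throughput ~= concurrency / latency.
def iops(outstanding_ios, latency_ms):
    return outstanding_ios / (latency_ms / 1000.0)

# Hypothetical numbers for illustration only.
print(iops(outstanding_ios=2, latency_ms=0.5))   # ~4,000 IOPS at 0.5 ms: app feels fast
print(iops(outstanding_ios=64, latency_ms=20))   # ~3,200 IOPS at 20 ms: users are complaining
```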
Let's talk about RFPs for server hosts, and why I hate them but how you can do VMware related RFPs better...
When I see some AWFUL design or purchasing decisions, I don't always know why, but I can generally assume a terribly written, no-context RFP was involved. Let's try to fix that!
1. Buy TPMs, and TPM 2.0...
I don't think TPM 1.2 has been relevant since vSphere 6.0 or something.
Useful for config encryption, host attestation, Native Key Provider key caching, and tons of other stuff, and it costs ~$50.
It's Tax Season and for my Tech Worker friends who sold RSUs last year, I need to BEG YOU not to just blindly copy your 1099B/C from @etrade into TurboTax unless you want to overpay the IRS 🧵
On the 1099B/C from your broker you'll see $0 Assigned as the cost basis for RSUs at vest. You'll also find the Box 12 comment "Basis not reported" which is the root of the issue. This is why the cost incorrectly says zero.
RSUs are taxed as regular income, and you will see them show up on your pay stub, as well as called out in Box 14 on the W-2. IF YOU'RE LOOKING FOR YOUR ADJUSTED COST BASIS, YOU CAN FIND IT UNDER "Stock Plan >> My Account >> Gains & Losses >> Download >> Download Supplement PDF".
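To make it concrete, here's a toy example of what leaving the basis at $0 does versus using the adjusted basis from the supplement (numbers are made up; the vest-price FMV was already taxed as W-2 income):

```python
# Why copying the broker's $0 basis overstates your capital gain.
# Numbers are made up; use the figures from the Supplement PDF for real filings.
shares_sold = 100
vest_price = 50.0   # per-share FMV at vest (already taxed as W-2 income)
sale_price = 60.0   # per-share price when you sold

proceeds = shares_sold * sale_price
gain_zero_basis = proceeds - 0                        # what the raw 1099-B implies
gain_adjusted = proceeds - shares_sold * vest_price   # the correct capital gain

print(f"With $0 basis:       ${gain_zero_basis:,.0f} reported as gain (overpaying)")
print(f"With adjusted basis: ${gain_adjusted:,.0f} reported as gain")
```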
It's time for a ๐งตon Boot devices. No we are not talking about SD cards, instead we are going to talk about encryption and security of boot devices!
One trend lately has been to use PCI-E attached RAID controllers for a pair of M.2 SATA/NVMe devices that boot the server. Example: Dell BOSS (great option!). One challenge is that these controllers often lack encryption support.
So first off: do you even need to worry about this? What is the attack surface of an ESXi boot device?