It would be kind of awkward to adapt a new and fast NVMe drive to a clunky old SATA controller. M.2 conversions would typically not have the physical space required for any active conversion circuitry, and it would be more expensive than buying a SATA drive. If you've got a full 2.5" bay, you can get native 2.5" consumer SATA SSDs up to 16TB... which is more than I want to read/write at SATA3 speeds. And if you want to take advantage of fast storage, you can just skip the whole SATA controller and use PCIe.
In an enterprise environment, nobody is really hooking up fast new storage to old slow storage controllers. They are either maintaining old systems, where they will use the legacy storage technologies, or they are deploying entirely new systems.
I looked into this as an option because (similar to the author, from the sounds of it[A]):
1. I already had a 2.5" hotswap setup
2. 2.5" 8TB SSDs are 4x as expensive as 8TB NVMEs.
A: https://utcc.utoronto.ca/~cks/space/blog/tech/NVMeOvertaking...
The problem, as the author points out in the link, is that there is simply no storage tier sitting between SATA HDDs (cheapest, ~0.15 Euro/TB) and NVMe (~0.45 Euro/TB).
Technically, SATA SSDs ought to fill this spot as the cheaper alternative, but their prices are nearly as high as M.2 (often only 5% cheaper). If SATA sat in the 0.30 Euro/TB range, it would have been a great alternative to HDD-based storage.
And you can go really crazy with bifurcation (4x/4x/4x/4x) and then convert each of those new M.2 slots into 6 SATA ports (ASM controllers are cheap and work great). Plop: 24 SATA ports for about 4W of power draw.
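A quick sanity check on that fan-out (a sketch; ASM1166-class parts with a PCIe 3.0 x2 uplink are assumed, since the comment only says "ASM controller"):

    # Hypothetical fan-out math: one x16 slot bifurcated into 4 M.2 slots,
    # each holding a 6-port SATA controller (ASM1166-style: PCIe 3.0 x2 uplink).
    m2_slots = 4
    ports_per_controller = 6
    sata3_mb_s = 600                     # SATA3: 6 Gb/s with 8b/10b encoding
    gen3_lane_mb_s = 985                 # PCIe 3.0: 8 GT/s with 128b/130b encoding

    total_ports = m2_slots * ports_per_controller        # 24 SATA ports
    uplink_per_slot = 2 * gen3_lane_mb_s                 # ~1970 MB/s shared by 6 ports
    demand_per_slot = ports_per_controller * sata3_mb_s  # 3600 MB/s worst case
    print(total_ports, uplink_per_slot, demand_per_slot)

So the 24 ports are real, but each group of 6 shares roughly 2 GB/s of uplink: fine for HDDs, oversubscribed if all six SSDs stream at full SATA3 speed at once.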
But nobody is going to pay for SATA SSDs when they can just buy M.2 for the same price. So the result is that the SATA SSD market is "kind of dying", and manufacturers look at it the way HDD manufacturers came to look at 2.5" drives.
It does not make sense for SATA SSDs to be priced 1/3 cheaper than NVMe, as they use the same NAND flash as NVMe drives. NAND flash is a commodity, so doing artificial segmentation with it is tricky. Controllers, I imagine, are cheaper, since SATA is capped at 6Gbps (much lower than NVMe), but even if a SATA controller were half the price of a top-end NVMe one, the controller would need to constitute 2/3 of the SSD's BOM cost to enable a price reduction like that, which it does not.
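Spelling out that back-of-the-envelope argument:

    # For halving the controller price to cut the whole SSD price by 1/3,
    # the controller must be 2/3 of the bill of materials.
    target_saving = 1 / 3        # desired discount on the whole drive
    controller_discount = 0.5    # "SATA controller at half the NVMe price"
    required_bom_share = target_saving / controller_discount
    print(required_bom_share)    # 0.666... -- far above any realistic controller share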
SATA tops out at 600MB/s, meaning you can use cheaper, denser NAND.
The cheapest NAND used on NVMe drives is the same NAND used on the cheapest SATA drives.
> 2. 2.5" 8TB SSDs are 4x as expensive as 8TB NVMEs.
Huh?
870 QVO 8 TB SSD SATA III 2.5 inch: $629.99
Which is in the same range as M.2 ones.
Sure, you are getting gouged if you try to buy it on Amazon, but then... just don't buy it on Amazon?
https://www.samsung.com/us/computing/memory-storage/solid-st...
And that is the core of the problem. You're buying a storage medium that is a lot slower in sequential read/write, and on randoms too. But you're paying the price of the superior M.2 NVMe drives.
SATA SSDs need to sit somewhere between HDDs and NVMe, but they are priced at the same level as NVMe.
Another issue: just like with 2.5" drives, manufacturers really only focus on specific form factors. It's going to be 3.5", U.2/U.3, and now NVMe NAS solutions. But do you see any 2.5"/SATA solutions?
I mean, the only thing I remember seeing is the Synology DiskStation DS620slim, which is now a roughly 5-year-old product. And still expensive as hell. Nobody makes any SATA products.
The market is now a ton of Chinese brands / mini-PC makers offering 4-, 5-, and 6-slot NVMe products. Even with only PCIe 3.0 x1 per slot, they are faster than SATA SSDs, and they benefit from massively better randoms and lower latency.
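The bandwidth claim checks out on paper:

    # One PCIe 3.0 lane vs one SATA3 link, after encoding overhead:
    pcie3_x1_mb_s = 8e9 * (128 / 130) / 8 / 1e6   # 8 GT/s, 128b/130b -> ~985 MB/s
    sata3_mb_s = 6e9 * (8 / 10) / 8 / 1e6         # 6 Gb/s, 8b/10b -> 600 MB/s
    print(pcie3_x1_mb_s / sata3_mb_s)             # ~1.64x, before NVMe's latency/queueing wins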
I'd love to shove a ton of SATA SSDs in a system instead of HDDs, but the prices need to land somewhere in the middle between HDDs and NVMe per TB. Not the same as NVMe.
Yes, when you connect expensive NAND to a slow SATA interface, you get something expensive and slow. That's a fundamental issue with the choice of technology, not an issue with the market. That's why nobody is doing this with new deployments.
> But you do see any 2.5" / SATA solutions?
> Nobody makes any SATA products.
Because there is no demand for it. 2.5" HDDs stopped growing in size 10 years ago. If you need capacity you use 3.5" HDDs; if you need speed you use M.2 or 2.5" SATA SSDs. And 2.5" SATA HDDs... what could these be used for?
There is no capacity with 2.5" HDDs: it's 1TB CMR drives x the number of bays, i.e. 5TB raw for the DS620slim, 4TB raw for the TS-410E.
Or you risk it all and use 4/5TB SMR HDDs for ~20-25TB of raw capacity, and now you can't use RAID.
So what's the point?
> but the prices need to be somewhere in the middle of HDD/NVMEs per TB
... and why a SATA SSDs should be cheaper than the NVMe ones? Especially if they are both use the same flash modules?
https://www.qnap.com/en/product/ts-410e/specs/hardware
Yeah, there is very little reason to want to do this. It may become more of an issue if new SATA/SAS drives stop being produced while those legacy systems still need maintaining - but at that point something based on an FPGA (as was suggested in another comment) is probably going to be more economical than a company bothering to spin an ASIC for the purpose. But I don't see SATA drives dying out for a fair while yet.
Usually the tail end of legacy systems is adequately supported by new-old-stock for quite a while.
e.g. 3.5" floppies are 40 years old, were obsolete 25 years ago, went out of production 15 years ago, and are just about ready to deplete their stock today. Yes, there are flash-to-floppy adapters, and a similar thing may happen for SATA, but we may not see that as a necessity until 2050.
For what it is worth, M.2 NAS devices with 4-6 slots can be had for ~$210 USD / ~$290 CAD or less: https://www.jeffgeerling.com/blog/2025/mini-nases-marry-nvme...
> Since (M.2) NVMe to USB adapters exist, protocol conversion is certainly possible, and since such adapters are surprisingly inexpensive, presumably there's enough demand to drive down the price of the underlying controller chipsets.
> (These chipsets are, for example, the Realtek RTL9210B-CG or the ASMedia ASM3242.)
The NVMe to USB adapters aren't converting the NVMe protocol to another disk access protocol. They are USB3-connected PCIe endpoints, which allow the PCIe NVMe drive to connect to the host as an NVMe device.
This isn't equivalent to the protocol conversion the author is seeking, which would accept SATA commands on one end and translate them to NVMe on the other end. I would actually call that SATA drive emulation, not protocol conversion, as SATA and NVMe aren't 1:1 such that you can convert SATA commands into NVMe commands and vice versa.
> They are USB3-connected PCIe endpoints, which allow the PCIe NVMe drive to connect to the host as an NVMe device
No?
https://us1.discourse-cdn.com/flex001/uploads/framework3/ori...
This is an RTL9210B NVMe enclosure. It's a UAS device (and I believe it supports BOT too.)
I have not examined this in much detail but I believe these converter ICs are actually rather powerful SoCs with PCIe host, SATA host, and USB device peripherals. The existence of firmware (several hundred KB!) for them is further evidence of this fact.
I don't believe this is true. Something like Thunderbolt could allow this, but the cheap, commonly available ones "simply" appear as a USB disk and can't use any of the NVMe-specific features like HMB, can't be secure-erased with nvme-cli, etc. Lack of HMB support is a particularly painful limitation these days, as DRAM isn't available on most new drives.
> there also doesn't currently seem to be any high capacity M.2 SATA SSDs
I have a high capacity M.2 SATA in my computer. It's 4TB which I think qualifies for high. I bought it because I found out about that empty slot in my computer and wanted to fill it, not because of a particular need. Having a rare part in my computer gives me an indescribable sense of joy. And don't worry it's entirely used for extra redundancy so I won't lose data even if it dies.
The author uses a number larger than 4TB as "high" (source https://utcc.utoronto.ca/~cks/space/blog/tech/NVMeOvertaking..., where they lament that SATA SSDs stop at 4TB).
They don't stop at 4TB though... consumer SATA drives go up to 16TB. And I've heard there are even larger enterprise drives. https://www.amazon.com/dp/B09FJFMQBB
From the article I linked: "...capacities larger than that only available from a few specialty vendors at eye-watering prices"
From what I can tell, this was true when it was written, but it has changed since, with the Samsung QVO 8TB SATA now at only a roughly 25% markup compared to an 8TB QLC M.2 2280 drive. When I searched last winter it was 2-4x as expensive as soon as you crossed the 4TB mark, with 4TB and under being similarly priced between SATA and NVMe drives.
That's the reality of low-volume products, they're expensive. Most people dropping that much cash on NAND will just drop the cash to plug it into something that can access it quickly. If you want big slow storage for cheap, just get a spinning disk.
Random IOPS through a SATA interface is orders of magnitude faster on NAND vs a spinning disk. A SATA SSD and a spinning disk might as well be two completely different categories of device for any non-sequential workload.
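For a rough sense of scale (ballpark spec-sheet figures, not measurements):

    # Random 4K reads: a 7200 RPM disk pays a mechanical latency per I/O,
    # a SATA SSD does not.
    hdd_latency_ms = 8.5 + 4.2          # average seek + half a rotation at 7200 RPM
    hdd_iops = 1000 / hdd_latency_ms    # ~80 random IOPS
    sata_ssd_iops = 90_000              # common 4K random-read spec for SATA SSDs
    print(sata_ssd_iops / hdd_iops)     # ~1000x: three orders of magnitude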
If you need NAND for the performance, why use a SATA controller? NVMe is more available, faster, and cheaper.
How do you put 16 NVMe drives on a system with 24 total PCIe lanes?
You can do it with PCIe switches, but you shouldn't. If you need 16 NVMe drives, you need to put them in an appropriate host system.
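The lane arithmetic makes the point; the x8 switch uplink below is just an illustrative assumption:

    # 16 NVMe drives at x4 each vs a 24-lane desktop platform:
    lanes_needed = 16 * 4       # 64 lanes
    lanes_available = 24
    print(lanes_needed, lanes_available)   # 64 vs 24: no native fit

    # With a PCIe switch hanging all drives off, say, an x8 uplink:
    oversubscription = lanes_needed / 8    # 8:1 if every drive transfers at once
    print(oversubscription)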
To properly design a computing solution, first you define the requirements, and then you select components with specifications that will fulfill your requirements.
If you try to work backwards, you are destined for failure. Dropping big cash on 16 high-capacity SSDs just to ham-jam them into an old system is a really, really dumb idea, especially if you're concerned about IOPS.
Tri-mode HBA/RAID, e.g. https://www.broadcom.com/products/storage/host-bus-adapters/...
> And don't worry it's entirely used for extra redundancy so I won't lose data even if it dies.
Just make sure to still back up the data!
You could do this on an FPGA dev board with the right connectors. Might be a nice project for someone with the time.
Writing a SATA-to-NVMe adapter is a nonsensical endeavor. It is like writing a RoCE-to-HTTPS adapter. Makes no sense at all. And I'm not even talking about voltages etc.
Any NVMe disk can be connected even over PCIe 3.0 x1, so there is plenty of capability on the DESKTOP computers he is "managing".
And given what he is writing and how he is writing it, it is unbelievable that he does not seem to understand what a SAS expander is, etc.
No, it makes as much sense as USB-NVMe, which does exist; and as the article mentions, so does the other direction (a PCIe SATA controller in the shape of an M.2 SSD), but there's just not enough of a market for NVMe-SATA yet. They're just block device protocols, and the conversion between them is well-defined.
A bidirectional example is IDE/SATA, for which plentiful cheap adapters in both directions (one IC automatically detects its role) exist; IDE host to SATA device, or SATA host to IDE device.
For another "directional" example, it's worth noting that SATA to MMC/(micro/mini)SD(HC/XC)/TF adapters exist which let you use those cards (often multiple, even in RAID!) as a SATA drive, but the opposite direction, exposing a SATA drive as an SD card, does not (yet).
> No, it makes as much sense as USB-NVMe
I don't think that's true. USB is accessible in lots of places where SATA and PCIe are not, e.g. as external connectors. Yes, eSATA is a thing, but how often do you have eSATA without being able to use USB or PCIe?
Or in other words, SATA->NVMe would at best serve users unwilling to upgrade their legacy racks while USB->NVMe has plenty of non-legacy use cases.
Oddly enough, I bought a bunch of M.2 format adapter things from the overseas fleamarket. One includes 9 SATA ports in a 2280 form factor. I've also seen PCIe x8/x16 expansion boards that connect via M.2.
If I had transfinite funds, I would make a video about taking a dual-socket motherboard+CPU combination with the most PCIe lanes and connecting the maximum number of GPUs via Thunderbolt 4 hubs and enclosures, PCIe bifurcation cards, and M.2-to-PCIe adapters (whichever method maximizes GPU count), all powered by many PSUs.
I think you're looking for something called a "tri-mode HBA." They run about $200 on eBay, and as you mentioned, the M.2 to U.2, U.3, etc. side is passive (minus power).
That's not really an M.2 NVMe-to-SATA converter, though. That's just something that can take PCIe lanes and either give them to NVMe drives through a PCIe multiplexer or convert the PCIe lanes to SAS or SATA. It also isn't passive; there's a lot of processing going on to make it happen.
What a surprisingly insightful post
Yeah, it explains why I can't buy one of these. Now, what I am puzzled by is why none of the various PCIe (standard form factor) cards that offer M.2 NVMe slots on them will work with any of my older computers; they have PCIe lanes to spare, but I suppose the system firmware itself just doesn't know what to do with them.
Maybe I need to use a little sata SSD as /boot?
Adapting a PCIe slot to a single M.2 slot should always work, so long as the SSD is actually using the PCIe lanes and not the SATA port that M.2 can also support.
Depending on how old the motherboard, or UEFI, is - it may not be able to directly boot from NVMe. Years ago I modified the UEFI on my Haswell-era board to add a DXE to support NVMe boot. You shouldn’t need to do that to see the NVMe device within your OS though.
As the other reply mentioned, if you want to run multiple SSDs on a cheap adapter your platform needs to support bifurcation, but if it doesn’t support bifurcation hope is not all lost. PCIe switches have become somewhat cheaper - you can find cards based on the PEX8747 for relatively little under names like PE3162-4IL. The caveat here is that you’re limited to PCIe 3.0, switches for 4.0+ are still very expensive.
Author has an article on that too[1]
To explain what I learned on my own:
If you want to (for example) put 4 NVMe drives (4 lanes each) in a 16x slot, then you need two things:
1. The 16x slot actually needs to have 16 lanes (on consumer motherboards there is only one slot like this, and on many boards the second 16x slot shares those lanes, so it will need to be empty).
2. You need to configure the PCIe controller to treat that slot as 4 separate slots (this is called PCIe bifurcation).
For recent Ryzen CPUs, the first 16x slot usually goes directly to the CPU, and the CPU supports bifurcation, so (assuming your BIOS allows enabling bifurcation; most recent ones do) all you need to do is figure out which PCIe lanes go to which slots (the motherboard manual will have this).
If you aren't going to use the integrated graphics, you'll need a 16x slot for your GPU. This will at best have 4 lanes, and on most motherboards (all 800-series chipset motherboards from all major manufacturers) it will be multiplexed over the chipset, so now your GPU is sharing bandwidth with e.g. USB and Ethernet, which seems less than ideal (I've not benchmarked it; maybe someone else has?).
In the event that you want to do 4x NVMe in a 16x slot, I found that the "MSI X670E ACE" has a 4x slot that does not go through the chipset, and so does the "ASRock B650M Pro X3D RS WiFi"; either of those should work with a 9000-series Ryzen CPU.
ThreadRipper CPUs have like a gajillion PCIe lanes, so there shouldn't be any issues there.
I have also been told that there are some (expensive) PCIe cards that present as a single PCIe device to the host. I haven't tried them.
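For a sense of what bifurcation buys you, a quick sketch (assuming a Gen4 x16 slot; the numbers scale with PCIe generation):

    # 4x/4x/4x/4x bifurcation just regroups lanes; no switch chip is involved.
    groups = [4, 4, 4, 4]
    assert sum(groups) == 16
    gen4_lane_gb_s = 16 * (128 / 130) / 8   # 16 GT/s, 128b/130b -> ~1.97 GB/s
    per_drive_gb_s = 4 * gen4_lane_gb_s     # ~7.9 GB/s for each of the 4 drives
    print(round(per_drive_gb_s, 1))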
1: https://utcc.utoronto.ca/~cks/space/blog/tech/PCIeBifurcatio...
I have an older workstation with no NVMe sockets. You can find 4 socket NVMe PCIe cards that let you bifurcate a 16x PCIe socket into four 4x NVMe sockets. Great way to make a local flash ZFS pool.
https://www.ebay.com/sch/i.html?_nkw=Asus+Hyper+M.2
I still have to boot off a SATA SSD.
> suppose the system firmware itself just doesn't know what to do with them.
Anything since socket 1155 can work, and even Westmere/Nehalem should work too, except you don't really want a system that old.
I have a 4TB M.2 drive in an x1 slot in my GA-Z77X; it works fine.
Not a boot drive, and I didn't bother to check whether the BIOS supports booting from NVMe, though a quick search says support for that started with the Z97 chipset/socket 1150.
> use a little sata SSD as /boot
For Linux you can even use some USB thumbdrive, especially a small profile like Kingston DataTraveler Micro G2.
https://www.aorus.com/motherboards/ga-z77x-ud3h-rev-10/Key-F...
Honestly, I don't quite get why they have a problem here.
That chassis sports a proper 16-port SAS backplane so they can just use... SAS drives?
Sure, SAS 7.68TB drives cost a bit more than some shit like an 870 QVO 8TB SATA drive, but:
you will have at least 12Gbps instead of 6Gbps of bandwidth so your storage would be faster;
you will not have a shitshow of STP so your storage would be faster;
you will not drop to USB-thumbdrive speeds if you exhaust the SLC cache, so your storage would be faster;
you will have a better DWPD (1 vs 0.3) so your storage would be faster for a longer time.
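For scale, here is what that DWPD gap means over a typical 5-year warranty (a sketch using the 7.68TB capacity from above):

    # Drive Writes Per Day -> total terabytes written over the warranty period:
    capacity_tb, warranty_years = 7.68, 5
    for dwpd in (1.0, 0.3):
        tbw = dwpd * capacity_tb * 365 * warranty_years
        print(dwpd, round(tbw))   # 1.0 -> ~14016 TB; 0.3 -> ~4205 TB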
But okay, even if you don't go the SAS way... I'm again not sure what is going on here, but besides the 870 EVO (desktop SATA QLC shit) there are the Kingston DC600M, Solidigm D3-S4520, and Samsung PM893, which are enterprise SATA drives, and they cost only 10% more than the 870 EVO (and only 10% less than the Kioxia PM6-R SAS).
Oh, by the way: don't do U.2 in 2025 and later. It will bite you later.
I don't understand this title.