I want more storage, but the 4TB one is just too expensive. I plan to buy 1TB, use this card, and just add more SSDs if I need them

  • PCI-E x16 to M.2 adapter?

    your board needs to support x4/x4/x4/x4 bifurcation though

    you also need to have an x16 slot that has the full 16 lanes

    There are cards that have a built in switch so you don't need bifurcation. They get expensive though.

    And this card looks cheap

    it's a very simple card tbh. just a passive card with electrical wiring to connect the nvme. the bifurcation happens on the computer side not on the card itself.

    I've been using this exact one with two NVMe drives for about a year now without any issues so far. Again, I'm using these for game installs, and nothing important is stored on them, so I don't mind losing them if a drive fails for some reason

    Yeah I have the QNAP QM2-4P in my Unraid server and it was $288 CAD after taxes earlier this year.

    Just looked out of curiosity and it's currently listed for $389+tax.

    Damned good card tho.

    I'd never heard of this card, then I googled it to see the price here in Brazil: 5000 fucking reais, or 922 USD. This country is a mess.

    just get a used one for 50-100$ easy

    Assuming you were building on a consumer desktop platform, it might've just been worth buying a board with bifurcation, which is where this gets to be an annoying mess, because lots of board manufacturers won't tell you whether a board has it or not.

    This one is an actual server - HP DL380 Gen9. I use it as my media server.

    It said it supported bifurcation specifically on the second PCIe slot, so I enabled it and bought the Asus Hyper M.2 card first, but I couldn't get it to work in any of the three PCIe slots. So I disabled bifurcation and just bought the QNAP card instead, and it's been working just fine that way.

    Was $150 more but it works.

    yup this card doesn't have that

    Can you tell me how to tell that? Recently tried the Asus 4 slot card and my mobo wouldn't detect the 4 drives.

    If you got a link for one, that would be amazing.

    Thank you, I had no idea these existed. I appreciate your help!

    And basically any consumer desktop board is anemic when it comes to PCIe. So say goodbye to GPU bandwidth if you want this in your main PC.

    Good thing the budget GPUs are all 8 lane now

    Even then it's rare to find any consumer motherboard with a full x16 slot plus a second x16-size slot wired for x8. I looked, and the only ones I found were server grade.

    My 3570K build from 2012 has 3 x16 slots. What happened?

    3 x16 slots running at x16 or one running x16 and two running x4?

    huh, yeah, your board has an x16, an x8 and an x4 slot. You don't see that for a reasonable price nowadays outside of the very expensive high end boards, so not bad. Would make a good home lab board.

    I believe the consumer grade CPUs from that period only had 16 lanes to give, so you either ran one card at 16, two at 8, or two at 4 with one at 8.

    So yeah you could run an 8 lane device alongside your GPU, but the GPU's link would be cut down as a result.

    There may have been a couple chipset connected slots that would generally perform slower than CPU connected ones, but wouldn't cut into the available lanes on the CPU. Idea being those slots are for small things like a USB or network card without losing GPU performance.

    My 3570K build from 2012 has 3 x16 slots. What happened?

    A few things together added up.

    1. Few people used SLI due to costs and driver problems. Low demand led to Nvidia killing it and AMD followed along. This essentially killed any remaining demand for a second x16 slot.

    2. Everything went onboard / built-in: sound, networking, USB, storage, etc. Nobody has a printer, USB storage largely disappeared, etc. For 60+% of consumers, expansion slots are an anachronism, and gamers only need one slot for a GPU and one for storage.

    3. PCIE 5.0 happened. High speed signaling is expensive af to implement. There's always been a large crowd of "digital is ones and zeroes, so you just need a wire and nothing else" types, around since the DVI/HDMI days, who will argue to the death that cable (and trace) quality has zero effect, but physics always wins. The lack of demand for high speed interconnects at the consumer level (because, frankly, even 4.0 isn't really a bottleneck for consumers) also means you can't really leverage economy of scale for stuff like redrivers. And with so many other functions / circuits on the motherboard, it also means extra care (labor) and extra layers (cost) to design a motherboard that supports multiple PCIE 5.0 expansion slots. There's a reason ATX12VO hasn't had much traction outside of OEM office machines.

    4. Broadcom and Nvidia happened. They basically bought up other companies and patent portfolios related to PCIE switching and signaling, making it that much harder for other companies to manufacture the other chips required for PCIE 5.0 expansion slots. Broadcom especially fully intended to kill the cheap consumer market and focus on enterprise to squeeze corporate spending.

    So overall, expansion slots are an endangered feature for consumer boards. Enterprise still uses it because enterprise actually uses I/O. With consumers, the fastest we need is the GPU and that can actually still work fine with 4.0 for a couple more years. (cue the "But my 5090" crowd who thinks everyone should be doing 200+ fps at 4k today)

    Isn't the 9060 16 lanes?

    even a 5090 only loses a few % of performance on PCIe 3.0x16/4.0x8/5.0x4 for gaming though

    on a cpu slot. if you move it to a chipset slot it tends to be a different experience.

    Why would anyone put a 5090 on a chipset/southbridge slot

    Because only the top slot is x16, so you HAVE to move the GPU down if you want to use all 4 NVMe slots on this one

    on a b550 even an rx570 4gb in a chipset slot causes enough issues to get usb, wifi and sata disconnections, so it's not an issue limited to high end cards. in fact it's even more of an issue on lower end cards with less vram as there's a lot more swapping to system ram.

    Wait wtf? Please can you explain or give me a link or something? I have an Asrock B550 pro4 with an RTX 3080 as the main card and RX 5600XT as the secondary lossless scaling frame gen card. And I just put this together a week ago so I'm not really experienced with the setup yet

    https://preview.redd.it/rbyuev5z4z6g1.jpeg?width=1920&format=pjpg&auto=webp&s=994eae5b745a243b0c4ac6d606bdd3f098427aa1

    with lossless scaling on the pcie bus, you're just streaming the video output to the rx5600xt. this isn't the same as running a game directly on the rx5600xt.

    as far as i can tell, you're using the chipset 4x gen3 lanes. if you run a game directly on the rx5600xt, you're likely to observe some issues. if you're running some 4k 120fps (about 24gbps) on the rtx 3080 and send it to the rx 5600xt for some frame gen, you might actually notice some effects of the limited chipset bandwidth (32gbps)
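
    rough numbers if you want to sanity check that (back-of-the-envelope sketch; assumes uncompressed 8-bit RGB frames and ignores protocol overhead):

        # uncompressed 4K 120 fps stream vs a PCIe 3.0 x4 chipset link
        width, height, fps = 3840, 2160, 120
        bytes_per_pixel = 3                        # 8-bit RGB, no alpha
        stream_gbps = width * height * bytes_per_pixel * fps * 8 / 1e9
        pcie3_x4_gbps = 4 * 8.0 * (128 / 130)      # 4 lanes x 8 GT/s, 128b/130b encoding
        print(f"video stream: {stream_gbps:.1f} Gb/s")    # ~23.9 Gb/s
        print(f"pcie 3.0 x4:  {pcie3_x4_gbps:.1f} Gb/s")  # ~31.5 Gb/s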

    if you're running some 4k 120fps (about 24gbps) on the rtx 3080 and send it to the rx 5600xt for some frame gen, you might actually notice some effects of the limited chipset bandwidth (32gbps)

    Yeah, this part I already knew, in r/losslessscaling there's actually a cool guide about slot speeds, the 3.0x4 can handle around 80fps of one-way data in 4k. It's fine since I'm only using the slot for incoming data and not sending anything back, the 5600XT handles the frame gen and output.

    So the wifi or usb issues would only arise if I used the bottom gpu for actual gaming, right? That's all right then

    Are we not assuming it's going into a second x16 slot?

    cpus usually have an x16 for the gpu, an x4 for m.2 and an x4 for the chipset.

    https://preview.redd.it/pugeo33fky6g1.jpeg?width=925&format=pjpg&auto=webp&s=73508345f3f5f303beb0257bcccbc35b886b7db0

    a few (usually higher end) boards will split the x16 into 2 slots with x8. this often works very well.

    otherwise, there's one extra x4 and if it goes to a slot, it does ok. frame rates aren't much affected, the only noticeable thing is some pop-in and longer load times.

    if that extra x4 is dedicated to anything else (the majority of affordable boards), the card ends up going thru a slot on the chipset, which on the majority of affordable boards is gen4. but that link isn't dedicated to the gpu, it's shared with everything else: lan/wifi, sata, usb. it could already be at 50% capacity, leaving the equivalent of 2 lanes. and it gets much worse, because it's shared, priorities need to be managed, and a chipset does a fairly poor job at this, so your storage and internet can cut out right when the gpu hits the link the hardest, like when it loads a game. you can actually crash your pc from timeouts on essential things.

    Example: B650 GAMING PLUS WIFI, the most popular board on pc part picker.

    PCI_E1 Gen PCIe 4.0 supports up to x16 (From CPU)
    PCI_E2 Gen PCIe 3.0 supports up to x1 (From Chipset)
    PCI_E3 Gen PCIe 4.0 supports up to x4 (From Chipset)
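
    To put rough numbers on that shared chipset link, here's a toy sketch (the demand figures are made up, and a real chipset doesn't split bandwidth this cleanly, which is exactly why things can drop out):

        # everything behind the chipset shares one ~PCIe 4.0 x4 uplink to the CPU
        UPLINK_GBS = 7.9                      # one direction, overhead ignored
        demand = {                            # hypothetical load while a game is loading
            "gpu in chipset slot": 6.0,       # GB/s of asset streaming
            "nvme ssd": 3.0,
            "2.5G lan": 0.3,
            "usb devices": 0.4,
        }
        scale = min(1.0, UPLINK_GBS / sum(demand.values()))
        for dev, want in demand.items():
            print(f"{dev:>20}: wants {want:.1f} GB/s, gets ~{want * scale:.1f} GB/s")
        # ~9.7 GB/s wanted vs ~7.9 available: everything on the chipset slows down at once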

    If I do that and the M.2 becomes PCIe x4, this means that the performance of the M.2 becomes approximately 30%, because it is supposed to be PCIe x16 to be 100%, right?

    M.2 only uses 4 lanes. Not 16.

    Well in this instance you're not using one device that requires 16 lanes. You're splitting a x16 slot into four individual x4 connections. If you put that card in a x4 slot you'd only see one SSD since the other three would not be physically wired to anything.

    For it to work how you're describing you would need an adapter card with a PCIe switch on it or some such. My knowledge on those is limited other than they're expensive.

    But realistically you would probably never use all 4 ssd’s at max bandwidth? Two sounds probable if one is OS/app and the other contains data that needs such bandwidth.

    I am partly posting this comment to be disproven as I am interested in usecases.

    Except you wouldn't have your OS on an SSD on this, but on an M.2 slot that's directly on your motherboard. I'd wager all new motherboards have at a minimum 1 M.2 slot on them.

    Why vote down? I am asking a question

    Why vote down? I am asking a question

    I didn't vote, but personally, I can say it really irked me to see you guess 30% for what I'm assuming to be 4 out of 16 lanes. That's straightforward arithmetic picked up in 3rd or 4th year of public schooling. There shouldn't be a need to guess, let alone guess wrong.

    Also, don't take downvotes personally. People vote for multiple reasons. In this case, it's likely they downvoted your comment for just being plain wrong, so it gets buried and others who don't know better don't need to see it.

    Wouldn't make much difference in gaming anyway.

    I had a flagship 4790k and evga classified motherboard ( the big dog) and it didn’t support it and only has like 12 pcie lanes total. I now have an asus prime and an 11900k that also doesn’t work properly with the included m.2 card adapter unless you put the gpu in the x8 slot at the bottom and the m.2 card in the first slot.

    Does it simply not work without bifurcation present, or does it switch between the drives? If it's just for large media storage I won't mind a performance drop.

    Does it simply not work without bifurcation present, or does it switch between the drives?

    PCIE is fairly flexible when it comes to lane allocation. You can essentially think of it as a root lane that does the initial connection / negotiation with your CPU or chipset. So at minimum, all PCIE devices get x1 to work with, and the actual speed is negotiated down to the lowest common denominator. After that, if it's, say, a GPU, it asks for another 15 lanes and, if they're available, the controller allocates them. If not, the GPU tries to negotiate for another 7, and so on until it and the controller agree. Worst case, it only gets that 1 lane it started with.

    What bifurcation does is it allows a second (or more) device to use those extra lanes in the slot. So if the controller supports bifurcation at the 8th lane, it means you can have a second PCIE device use that as their first lane and start negotiating for the other lanes that follow it.

    Most CPUs support at minimum x8/x8 bifurcation, which means the lower 8 lanes go to one device and the higher 8 lanes to another. Some CPUs will allow x8/x4/x4, which means the lower 8 go to one device, the next 4 to another, and the highest 4 to a third. So on and so forth.

    If your CPU does not support bifurcation, it means that 8th lane in the middle cannot be used by a second device to negotiate access to the controller. It will always be reserved for the device that connects on the first lane of the slot.

    So overall, it means if your CPU does not support bifurcation, with an adapter like this that simply connects traces to 4 M.2 NVME cards, the only drive the controller will see is the first. The other 3 drives connect to lanes 4-15, which the controller still treats as belonging to device 1, so it won't do anything except wait for device 1 to send data on them, which it never will, because device 1 is only physically connected to the first 4 lanes.
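
    Here's a toy sketch of that in code (purely illustrative; the fixed lane groups and the "a drive is only seen if its first lane lines up with the start of a link" rule are a simplification of what the root complex actually does):

        # toy model of why a passive quad-M.2 card needs x4/x4/x4/x4 bifurcation:
        # each drive on the card is hard-wired to a fixed group of 4 lanes in the slot
        drive_wiring = {1: range(0, 4), 2: range(4, 8), 3: range(8, 12), 4: range(12, 16)}

        def visible_drives(bifurcation_mode):
            """bifurcation_mode: lane widths the controller splits the x16 slot into."""
            link_starts, lane = set(), 0
            for width in bifurcation_mode:
                link_starts.add(lane)     # a new link (new device) can begin at this lane
                lane += width
            return [d for d, lanes in drive_wiring.items() if lanes[0] in link_starts]

        print(visible_drives([16]))           # no bifurcation -> [1], only the first drive
        print(visible_drives([8, 4, 4]))      # x8/x4/x4       -> [1, 3, 4]
        print(visible_drives([4, 4, 4, 4]))   # x4/x4/x4/x4    -> [1, 2, 3, 4]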

    If it's just for large media storage I won't mind a performance drop.

    If you want to use NVME for (relatively) low speed storage, you need to get a PCIE switch. PCIE is like ethernet in that you can switch / route traffic to different ports / slots. (similar to bifurcation)

    QNAP makes a card (QM2-4P) that is essentially a PCIE switch, which lets you connect 4-8x NVME drives to an x8 PCIE 3.0 slot, and there are a bunch on Ali and Amazon that use the same PLX8747 (because this concept used to be common for servers). The reason it's only PCIE 3.0 is because Broadcom bought up PLX, the company that owned most of the patents for high speed PCIE switch technology, and jacked up pricing. So the most affordable PCIE switch chips are the older pre-Broadcom designs. Not a problem for slower warm/cold storage (and, frankly, most consumer peripherals, too). PCIE 4.0 and faster switch chips are prohibitively expensive for consumers, but if you're lucky, you can find obsoleted server gear and adapt it for home use. (They don't use M.2, so you'll need adapters.)

    Cheap ones like this one don't work without bifurcation; more expensive ones that have a PCIe switch do, but they affect performance slightly

    I meant: will the card only show one M.2, or simply not show any? Or will it show all the drives, just without simultaneous operations? I can live with the latter, just like I can use USB hubs to get multiple drives sharing a total max of 10 Gbps

    The latter requires a PCI-E multiplexer, which this doesn't have. So most likely you would see no drives at all, maybe one at best. If you unplugged all drives but the first indexed one, you'd probably see at least that. Never would your computer register all 4. This is only really usable if you have 16× PCI-E lanes with bifurcation support, that's it.

    You can get a card like this with a PCI-E multiplexer that doesn't require bifurcation and can work well on just 4 PCI-E lanes, though more is better, but they cost about 7× more than this.

    Thanks, that cleared up a lot of doubts I've had for a long time. One question though: M.2 uses 4 lanes, so does that automatically multiplex/allocate within them if needed? Like on the ROG Ally I had seen a couple of people trying to use the single M.2 slot for both an eGPU and a USB port (for primary storage) with an adapter. Would the GPU and USB each use the 4 lanes in a concurrent, context-switching fashion, or would they get two lanes each? Would a PCIe x4 (x1?) slot with 4 lanes behave the same way?

    Automatic multiplexing like you described doesn't exist. Such an adapter can work either through bifurcation or by using a PCI-E multiplexer chip. If it's using bifurcation, they would get 2 lanes each. If it's using a multiplexer chip, they would each link at 4 lanes but with a combined bandwidth limit of 4 lanes.
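
    A rough way to picture the difference (illustrative sketch only; the PCIe 3.0 per-lane figure is an assumption, and real switch chips arbitrate per transaction rather than handing out fixed shares):

        # two devices behind one 4-lane port (PCIe 3.0, ~0.985 GB/s per lane)
        LANE_GBS = 0.985

        def bifurcated(port_lanes=4, devices=2):
            # static split: each device gets its own fixed x2 link, always
            return {f"device{i}": (port_lanes // devices) * LANE_GBS for i in range(devices)}

        def switched(demand_gbs, port_lanes=4):
            # each device links at the full x4 width, but the upstream x4 link is shared
            upstream = port_lanes * LANE_GBS
            scale = min(1.0, upstream / sum(demand_gbs.values()))
            return {dev: want * scale for dev, want in demand_gbs.items()}

        print(bifurcated())                              # ~1.97 GB/s each, no matter what
        print(switched({"egpu": 3.5, "usb ssd": 1.0}))   # both scaled to fit ~3.94 GB/s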

    Bifurcation consumes basically no power, meaning minuscule cooling requirements, and is dirt cheap. Many multiplex chips consume enough power that they often need decent cooling and typically add 100€ to final product price for products like these. Not to mention that products with multiplexer chips are just rare. Given that I'd wager that what you have seen is almost certainly using bifurcation. Though no way to tell for certain what type of adapter it is without knowing which adapter they actually used.

    One of the more efficient uses of a multiplexer chip is when they are on the motherboard itself. Used to be common on top-end motherboards when SLI/CFX was trendy as it's needed there. Combined with bifurcation you can get some very nice behavior as you get a shit-ton of lanes you can use for bifurcation. Though given that consumer demand for it has mostly disappeared and HEDT/server already has a ton of CPU lanes they are very rare these days. That said, it wouldn't help in your m.2 splitting case as you still only have access to 4 PCI-E lanes on that port.

    Also yes, this has nothing to do with m.2, it's just PCI-E in general. m.2 is just a connector type, the signal is the same when used for PCI-E.

    Without bifurcation you’ll only see the first ssd

    I have something similar, but my Z490 board supports only x8/x4/x4, so I can use 3 SSDs, in slots 1, 3 and 4

    And a cpu with at least 32 lanes to have that and a gpu at full bandwidth.

    Either bifurc or a PCI-E switch (that will get damn hot). Surprisingly the best support for bifurc I've had is my AliExpress special boards for X99, plus my TR4 board. I assume HEDT and server gear has better support for such features.

    Lots of long words in there lad. We’re naught but humble pirates..

    I think if I remember right the most you get out of these adapters is 4x per drive, but one of the m.2 slots on the motherboard gets a full 16x lane.

    Just make sure you don't pay for over spec drives which wouldn't get their full speed on 4x.

    Mostly found in workstation Boards! Yay….

    Greatest technicians that ever lived

    Buys a thread-ripper.

    You can also use x8 lanes and plug in only 2 SSDs instead of all 4; I ran that kind of setup with an Asus card until this week.

    i mean ur not really gonna be using all of them at the same time. realistically its gonna be 1 and sometimes 2 at a time.

  • The downsides are:

    1- Your motherboard must support bifurcation of the main 16-lane PCIe slot. Check the manual and the BIOS itself to make sure the option is actually there, then Google bifurcation plus your motherboard name to make sure there aren't years of forum threads of people unable to make it work.
    2- You can't run a real graphics card at the same time, unless you have a lot of pci-e lanes. Like did you buy a threadripper and a really expensive motherboard? If not, you won't have your graphics card running at full speed, and it won't be on the main slot.

    Damn, there’s a lot to consider. I thought it only required a full ATX mobo

    If you really want 2-4 SSDs on your PCIe slots, look for ASM2812 and ASM2824 cards on AliExpress. These are switch cards that let you run up to 4 x4 drives on x1/x4/x8 slots (check their specs on the ASMedia website).

    But once you check the prices on AliExpress, you'll realise it is cheaper to buy a 4tb ssd than to create your own. People use these cards when they are already using 4TB ssds to create 16TB cards.

    On consumer boards you see the x16 slot, but that's just an illusion as under the hood they're X8, X4 and X1 on the lower slots.

    If you only care about storage ("archiving") I don't think it would matter. Just like you can have multiple USB to M.2 enclosures.

    Modern GPUs don't really need 16 lanes, especially with PCIe 4.0/5.0

    If boards were generally built with an 8-lane slot right below, then it might not be so bad. But usually you will be seriously downgrading your GPU if you don't put it in the GPU (16 lane) slot.

    Absolutely not an issue for any PCIE 5.0 GPU that isn't hitting its VRAM cap.

    5.0 x8 is equal to 4.0 x16 and pretty much the only card that actually gets bottlenecked by that difference is the 5090 and it's still single digit loss.
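
    If you want the raw numbers behind that, a quick sketch (assumes 128b/130b encoding and ignores protocol overhead):

        # approximate one-direction PCIe bandwidth in GB/s
        GT_PER_S = {3: 8, 4: 16, 5: 32}        # transfer rate per lane by generation

        def bandwidth_gbs(gen, lanes):
            return GT_PER_S[gen] * (128 / 130) / 8 * lanes

        for gen, lanes in [(3, 16), (4, 16), (4, 8), (5, 8), (5, 4)]:
            print(f"PCIe {gen}.0 x{lanes:<2} ~ {bandwidth_gbs(gen, lanes):.1f} GB/s")
        # 3.0 x16 ~ 15.8, 4.0 x16 ~ 31.5, 4.0 x8 ~ 15.8, 5.0 x8 ~ 31.5, 5.0 x4 ~ 15.8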

    Usually there is another x8 isn't there? Or I guess it's not always direct to the CPU. I guess that could matter.

    Usually there is another x8 isn't there?

    Normally that 16 eats up so many lanes that there isn't an x8- I know some motherboards will downgrade the 16x to 8x to support a second 8x, whereas others will just leave it at 16x and then have a 4x.

    5.0 x8 is equal to 4.0 x16 and pretty much the only card that actually gets bottlenecked by that difference is the 5090 and it's still single digit loss.

    Certainly true. The losses are more sizable for other uses of compute. Many people reading this will be going from pci-e v4 16x to pci-e v4 4x though, just because of how most motherboards implement all those other slots, and that can be noticed on cards that aren't totally top of the line.

    Yeah I suppose most people looking at an M.2 expansion probably aren't rocking a top of the line GPU/mobo combo in the first place

    We're talking about a couple percentage points, pretty much unnoticeable

    don't some of them use a pcie switch thing that makes it work on a x1 x2 x4 or x8 slot?

    There are ones that work on smaller slots, but they don't support all the drives. Amazon has a zillion things that hook to a pci-e x4 slot and have two SSDs- but you read the fine print and only one of them is M.2, the other is some SATA thing and you have to run a SATA cable from the card to the mobo.

    Basically the rule is, one x4 slot gives you one M.2 SSD, and that actually works everywhere. Anything else, I mean, good luck.

    x1 also works but u dont get full speed

    What are you guys doing that requires more than 16GB/s anyways? Like damn, you're bottlenecked by the nand r/w not pcie even at x1 lmao

    x299 finally squeaks out a win


    Why can't you run a graphics card at the same time? Do motherboards share PCIe lanes between slots?

    You'll plug a graphics card into the second slot and it won't be running at 16x. At best it'll have four pci-e lanes normally, so the card will work, and four lanes is still a lot, but it won't be really pumping data the way it could.

    This all goes out the window if you have some specialty threadripper or xeon or something with a large number of pci-e lanes and a motherboard that supports it- but that's not a common use case for gaming.

    so a 5900X checks out?

    Sure all of AMDs recent top level desktop chips have at least 24 usable lanes, including that one, but the motherboard is the bigger question. You're still gonna be giving up that very important 16 lane pci-e port and not using a graphics card there.

    PCIe lanes are just used for data transfer. That doesn't generally affect GPU performance much.

    A GPU typically has everything it needs in VRAM, and communication with the rest of the PC is extremely limited. So for the most part, it wouldn't affect GPU performance "at all". Like it's not "computing slower". Mostly it would be highly problematic when stuff gets swapped between RAM and VRAM a lot, which means you ran out of VRAM in the first place and the performance is already terrible. That's A LOT worse then. But normally I couldn't even swear it affects texture load times, as those are typically loaded from mass storage anyway.

    What about something like the Godlike?

  • Poke around your BIOS and see if you can find the 4x4x4x4 bifurcation option as one of the bifurcation selections. I'm pretty sure they don't offer it up if it's not supported by that slot. I have one and it seems to work just fine on my boards that say 4x4x4x4 under the bifurcation dropdown in bios.

    Can you tell me what your motherboard is? Just for reference

    Yes, the Asus TUF Gaming X570 WIFI and also, if I am remembering correctly, the ASRock B550 mATX Steel Legend.

    My current Asus Prime B650M-A CSM in my NAS supports it, but its manual and webpage would lead you to believe otherwise. It's on a supported list now that someone has put together. I can't remember if it was my current NAS build or my last one, with the TUF Gaming X570, where I was using it, but I'm pretty dang sure the current Asus Prime B650M-A CSM works fine too.

    EDIT: None of my fancy intel boards seem to allow it. They will do 8x4x4x sometimes... so you could run three drives in there I guess.

    I have an asus 590g and an 11900k and it requires you to put your gpu in an x4 slot at the bottom and use the traditional gpu slot 4x bifurcation, and then it still doesn’t work right.

    Yea, AMD seems to give a little more love to the pcie slot 1 4x4x4x4 config. OWC sells some nice multi nvme pcie cards that just switch so dang fast you don't even need bifurcation. They are a big jump in price though.

  • While many Redditors ask about bottlenecks in PC builds, this is the legit bottleneck

    Why? If it's in a full-fat 16-lane slot then all 4 SSDs get their x4.

  • Just get the 4TB. You would need a high end motherboard/chipset to get these to run without splitting the x16 from your GPU. Also, SSDs are predicted to be the next thing to go up in price after RAM. What you'd pay for a 1TB by then could very well buy a 2TB/4TB drive right now.

  • I can't believe that no one has answered your question. The side with the gold connectors is the downside.

  • As long as your PC actually gives 16x to the slot and your motherboard supports bifurcation, it's great, otherwise it's a bit of a bottleneck on the PCI lanes

  • Your motherboard needs to support bifurcation, also you have to make sure the slot you plug this card into is an actual x16.

  • Nothing really, unless your PC doesn't have the slots/lanes to support this type of card.

  • For this board in particular, there's not really much downside to the board itself since all drives get x4 lanes. But your motherboard will need to support x4/x4/x4/x4 bifurcation. Which these days is hit or miss outside of ASRock. Gigabyte sometimes enables it on their expensive boards. MSI are scattershot, sometimes you find it on cheap boards, sometimes not on high end boards. And ASUS largely don't give a fuck anymore.

  • If you need more storage, just getting a 4TB SSD will likely be cheaper. 1TB SSDs go for around 80-100 and a 4TB is 300-350. If you don't have any more M.2 slots then get a SATA SSD; they go for similar prices. They're slower, but just for storage they'll be fine

    Edit: also, most motherboards come with SATA ports, so if you can't afford a 4TB outright then get four 1TB SATA SSDs, mount as many as you can, and double-sided-tape the rest

  • Start saving for the 4Tb version

  • its the side with the small yellow pins on >.<

  • Board must support bifurcation. Most basic boards do not

  • Onboard PLX? I work with thousands of these at work.

    The major issue here is the single point of failure. The microcontroller on the carrier will be susceptible to heat related failure if you’re working it hard. Make sure to keep it cool.

  • So disappointed in the availability of PCIe lanes. I would have been fine giving up one or even two M.2 slots for another PCIe x16. And nearly every board now has built-in wifi. My PC is the one thing I don't want on wifi.

    I'd rather motherboards shipped with cards like this instead of integrated M.2 slots; currently there's potential to overlook checking all the M.2 slots if you need to swap/give away/RMA the motherboard.

    If you RMA a motherboard and you forget to remove your m.2 before sending it… I hate to say that’s more of a you problem than a motherboard design problem.

  • Check out the JayzTwoCents video on PCIe bifurcation

  • Use a brand-name one and there's basically no downsides besides using up a precious x16 slot.

    I find it inconvenient to juggle multiple drives and if you use a quality one you're spending $50-$60 so at that point just get a quality 2TB drive and see if it works for you.

  • I've used the same card with my X470 board. I had the second PCIe slot bifurcated to x4+x4 to get two drives to work (which also means my GPU is currently running at PCIe x8). You should be able to run x4+x4+x4+x4 in the first PCIe slot if your motherboard supports it, but I think you'd have to use your iGPU

  • You must use a full x16 PCIe slot to make it work, after setting it from x16 to a x4/x4/x4/x4 config in the BIOS. Simply plug this card into the x16 slot and move your GPU to the x4 slot. I'm assuming you're using a standard desktop ATX/mATX motherboard, which typically has both an x16 and an x4 PCIe slot.

    1. I like having a video card

    2. I don't really need NVME speeds for all my storage and SATA is cheaper and easier

    3. I'm not buying a more expensive motherboard/rig for that

  • Problem is that you will never get a quad card to run with full lanes enabled unless you use onboard graphics only. Essentially this replaces your GPU connection-wise; there are just not enough lanes on common end-user hardware.

    On my board, for instance, I've got 3 unused PCIe x1 connectors. Got myself 3 single add-in cards for 3 additional M.2s.

  • Rather than that, get a nas with nvme slots.

    It's not the best solution, but it's better than fitting 4 NVMe on a board. My 20 port NAS has 4 x NVMe slots.

  • Go for a NAS RAID setup, it's better in the long term. Pick cheaper options (SATA or even enterprise mechanical 7200 RPM drives) if all you want is storage. The route you proposed requires a bit of tinkering plus more money, and you could even lose bandwidth, making the drives pointless.

  • Because you need 32 pcie lanes for that to work and have a graphics card

  • Just save for the 4TB drive.

    Or if you only need the storage, get a cheap 4TB hard drive and keep the things you truly need fast access to in your main SSD.

  • I just use this. https://www.amazon.com/dp/B0BVBPH5FG Sure it's not as convenient as having 4 drives in your system at all times, but I use SSDs like microSDs. I'm lucky enough to work in an IT dept and all computers get their SSDs removed before going to ewaste. We usually break them into a few pieces or hammer the NAND chips themselves, but whenever we need one, we just grab what we want. They are all 256GB in size so you aren't getting huge drives, but if you aren't using it for your Steam library or storing game recordings, 256GB will take care of many needs.

    There's better solutions than this

    I use this https://www.amazon.com/dp/B096LMGTNG and fit this https://www.amazon.com/dp/B0CH84PBD9 to fit in 3 old SSDs. The only annoying thing is this dock needs 5V power through a separate USB-C port. I could understand requiring it if an actual 2.5 inch HDD was present, but nah, it's a hard lock and uses up an extra port from my laptop or primary dock or power bank. I had come across mPCIe to 2x M.2 M-key adapters but could never get them to work over USB to mPCIe. It did work internally, but needed two cables and was pointless

    I actually have something like the first link connected to my MacMini.

  • I had this same problem. As others have said, these cards can be expensive, can require a lot of PCIe bandwidth that most affordable modern motherboards don't have, and often require the motherboard to support bifurcation. The best solution I found to this problem was the Glotrends PA20; it only requires an x4 PCIe slot, it can run 2 SSDs, and it does the bifurcation for you. Plus, it's not that expensive.

  • Wtf i legit thought this was a joke at first. I did not know this was a thing 

    They are more common than most people think.

    The Asus Z690 ROG Strix Gaming-E motherboard came with one as an accessory. Some other Asus boards as well. You used to be able to find them a dime a dozen because so many people didn't use them.

    On a Z690 ROG Strix Gaming-E you could use it to split your lower GEN4 PCIEx16 slot into x4/x4, instead of the normal plain x4 the motherboard allowed at the slot. Letting you use 2 PCIE 4.0 x4 NVMEs.

    There is a second version of the ROG one which does PCIE 5.0 x4/x4 straight to the CPU depending on the motherboard, which is extremely useful for content creation. That is, if you do not mind going x8 to your GPU in that case, which barely impacts performance.

    Asus sells a lot of 4 NVME slot versions that are of probably more use to the workstation crowd than the gaming crowd.

    https://preview.redd.it/djjvk4gk2x6g1.jpeg?width=1200&format=pjpg&auto=webp&s=e5a4def977f908a1144a81cb162c6cd36f7b0e70

    pci is neat, you can do all sorts of unusual things with it. there's a pretty big market for weird pci adapters on aliexpress

    In this case, it's not that weird. I actually have something similar that came with the motherboard I got for my wife.

    Yeah it's literally just a way to connect one device to another. Just so happens that that is usually a graphics card

    I do kinda know that. I got one that gave me a lightning port... but it was useless because it only ran off integrated graphics instead of my dGPU

    I was pretty fucking pissed about that one lol

    The main downside is if you want to use a dGPU you’re nerfing tf out of it. So, don’t do this unless you have a very specific need lol.

    "Nerfing the dGPU"

    1. That entirely depends on the motherboard and how lanes are configured for the chipset/CPU.
    2. OP has a 7600 XT; an x8 PCIe Gen4/Gen5 link would be more than sufficient regardless, as gaming isn't going to saturate shit. The 4090 or 5090 won't either. It's not even a problem. Relax with this rhetoric.

    Oi.

  • Make sure your board has bifurcation. Also, keep in mind that each one is x4 lanes, and you need a proper x16 slot for it to work. I saw a video where only 2 worked because he put it in a 8x slot.

  • It requires a mobo that has another full x16 slot (most new consumer mobos only come with 1, max) and you prolly also need the mobo to support PCIe lane bifurcation (which is rare for consumer mobos). You could get a Xeon or TR to meet those requirements if you also need that x16 slot for your GPU.

  • You have to make sure you get the right card though. Most cheaper 4-slot M.2 cards require the motherboard to support bifurcation, and it has to support x4/x4/x4/x4 mode; some boards support bifurcation but don't support that mode. The ones that don't need bifurcation are going to want your kidney, or maybe just half of it at least.

  • Why not just save up for the higher capacity drive. Or use SATA drives. Do you need the high bandwidth?

  • It doesn't come with SSD card 😂

  • tbh, Yeah, but if you're going for cheaper options, bifurcation might be the way to go. Just have to check compatibility…

  • No downsides really.. people are overreacting. There's no way you're going to max out all your lanes or perceive any slowness.

    Check your manual and there's likely a slot that's separate from the one your GPU uses as well. (Chipset vs. CPU connection)

    With the way most chipset connected PCIe slots are wired (1, 2 or 4 PCIe lanes), buying a cheap quad M.2 adapter makes no sense, as there aren't enough lanes to connect the other 3 slots.

    If someone wants to use one of these cards and a GPU on a consumer platform with the smallest loss in performance, they'll need a board that has dual x16 slots connected to the CPU, and supports x4/x4 bifurcation on the second slot.

    Even then, only two slots on the quad M.2 adapter will work.

    The available solutions to use all four slots on that adapter are:

    A. Stick it in the primary x16 slot and use integrated graphics.

    B. Stick it in the primary x16 slot and the GPU in one of the chipset PCIe slots.

  • I use PCIe x1 slots for NVMe SSDs. It does not interfere with the GPU, but limits the SSD to 200-300 MB/s. Why do you need 4 SSDs?

  • I'd say save the hassle and go for direct mobo-connected 2TB drives. Not just for the convenience of not having to deal with bifurcation etc, but also because 1TB drives are slower than 2+ TB drives: even if both are the exact same spec of drive, the 2TB capacity one will be faster in read and write speeds compared to the 1TB model.

  • Downsides? 4 drives are 4 times more expensive than 1 drive. Quick math, not rocket surgery.

  • I'll add one thing. Bifurcation is just setting two specific bits in the BIOS. Those two bits live in the processor, so it's not only necessary to have the option in the BIOS, it's also processor dependent. Usually consumer grade processors only have an 8+4+4 split. And you can "enable" it by shorting 2 processor pads to ground. So technically you can use it on a normal motherboard, but you have to use the top PCIe slot, where there are usually 16 PCIe lanes.

  • Buy some SATA SSDs if you don't require blazing fast speeds; you should have 2 to 4 SATA ports in addition to your NVMe slots.

  • Totally agree! You might end up with plenty of drives, but if the card can’t keep up, it’s kinda pointless!!

  • Totally! If you're not careful, you might end up with slower speeds than expected. Just keep that in mind!

    Sometimes you need low profile storage due to not enough physical space and don't care about the speed.

  • make sure your board supports PCIe bifurcation or else only one of them will work

  • Not much tbh. Your motherboard needs to support bifurcation, and some only support it on the 16x slot, so you may need to move your GPU to a different slot.

  • Other than the rising cost?

  • It takes up pcie lanes

  • They require an actually wired-up x16 slot. Many 2nd PCIe slots on motherboards are only wired up for an x4 or x8 connection, which would only let the first one or two drives work. Your motherboard also needs to support PCIe bifurcation or else only the first slot will work no matter what.

    If you do meet all the requirements tho, there isn't really a downside. It's 4 full speed M.2 ports from there on. Like an extension cable, the card has to be rated for the PCIe generation you are trying to use, so Gen4 drives in a Gen4 x16 slot need a Gen4 card.

  • Better to just get the highest capacity drive you can, even 8TB. eBay has some cheaper open box ones if you look, but still, there's always going to be a slight price increase.

    You can also get a couple x4 single expansion cards for your free pci-e slots which you should have 2 or 3 of.

  • I have a smaller one for a single m.2 SSD and it blocks one of the three GPU fans, so yeah that is the only downside I have noticed

  • Most desktop CPUs don't have enough PCIe lanes for this and a x16 GPU.

  • You need a spare x16 slot, which you won’t have unless you have an X299 or Threadripper

  • Did this on a dual xeon server.  It rocks.

  • I suggest you wait with the storage if you can.

  • You'll lose some speed. You'll want to think about heat management. It's not a terrible trade-off if your PCI-E slot can handle the throughput.

  • https://preview.redd.it/06mq8ptpqz6g1.jpeg?width=3000&format=pjpg&auto=webp&s=74ae1226805ac9be8ccd6d3fd698ea743af45570

    I use 4x4 cards in my servers and RIG.

    They are super cool, but the downside is that you need to have a motherboard that can support 4x4x4x4 bifurcation. Most consumer motherboards will only do 8x8 at most. And most prosumer motherboards that do are ungodly expensive and use CPUs that are equally expensive 😅

    I run a 3rd Gen 7r13 Epyc in my main desktop and these things are awesome for giving me resilient storage arrays that I can run in hard parity or mirrors.

    This Asus Hyper M2 Gen5 unit ran me $85 and replaced my aging Asus Gen3 card

  • I have two, use neither. Their width and height creates a huge wall that restricts the intake fans in the bottom of the case, raising temps. I moved to PCIE 3.0 U.2 drives in that box and I’m saving them for a future NAS build that doesn’t require hot GPUs

  • SSDs get slower as they fill up. You'll need to leave a buffer of free space in each one, as opposed to just one.

  • You need the motherboard to have something called bifurcation

    I haven't found a way to know before buying it; it's almost random, sometimes a board has it and sometimes it doesn't, and it's not tied to the chipset, because I have seen entry level chipsets with that feature.

    Otherwise, you can get one of those cards with something called a "switch", which allows you to connect it to any motherboard whether it has bifurcation or not

  • The problem is your plan is a dumb idea. Just buy a 4TB. This is for people who want max storage and MAYBE a select few trying to squeeze some life out of some old SSDs, but that feels like a lot of trouble.

  • Blocks part of the airflow to the GPU

  • Finding enough pcie lanes and a motherboard that also implements bifurcation properly.

  • Buy a plx card instead

  • If you want to use a graphics card at the same time as this, you need a CPU with 36 PCIe lanes, which basically locks you into a Threadripper platform. I don't know of any AM5 boards that support x8 (for the GPU) + x16 (x4/x4/x4/x4 bifurcated) for this. And I don't understand why you need this, to be honest; you can get motherboards with 4-5 NVMe slots and you can populate all of them while still being able to use a GPU with at the very least an x8 connection.

    Your PCIe lanes are one of the most expensive and limited resources in your PC; wasting them on 1TB drives is very counterproductive. You can get an 8TB drive for less than what you'd need to spend to run both the GPU and that expansion card properly.

    I think you'd need to address why you need more storage. If you are storing media, you can just build a NAS with 10-30 TB of capacity from old hardware and high-ish capacity disks, slap a 2.5 Gbps or faster NIC in there and then you can access your files from anywhere.

    If you are storing games, consider SATA SSDs instead. You get 4-8 SATA ports from your motherboard, plus you can add an x4 add-in card for extra ports, set up a RAID array across them, and you get basically the same performance as you'd get from NVMe. There are only a handful of games that take advantage of NVMe drives, and even then, the difference is not that huge.

  • All depends on CPU and MOBO.

    Many modern workstation CPUs support it. Think Xeon or Ryzen. Check the specs. Make sure you get a motherboard that has enough PCIe slots to actually utilize the PCIe lanes too.

    I've done this at work so I have a high end GPU, a GPU running GRAID for my raid adapter and a liqid honeybadger card giving me a 28TB drive that hits 1M IOPS and 28GB/s transfers. Lol.

    Keep in mind I also am connected directly to a fiber switch with another full lane PCIE card so you need a large tower and board. There's heat considerations too with all the M2 drives in some cases as well.

    Do your research and plan ahead as if you don't have the lanes your throughput will be half or 1/4 if anything is off.

    Especially with a GRAID card as the GPU for raid and the drive carrier both need all the lanes.

    Took a while to figure that out when I first got the cards.

  • Can this be upgraded with copper knives and bearskins?

  • buy an HDD for storage... what do you need a 4TB SSD for?

  • You're not gonna have enough lanes to bifurcate in a secondary x16 slot. You'll usually only get x8. It's not gonna run all 4 NVMe drives.

  • the downside is the side with the connector you put into your mainboard. the side you screw into your case is the upside.

  • There’s no downside. Don’t install a graphics card so you can use a vertical mount to flex your storage

  • No real downsides other than you'll lose an x16 slot.

    Speed is fine, stability is fine, and easier to put cooling on the SSD's

  • You will need to buy a workstation motherboard and a CPU that can handle 2 PCIe x16 Gen 4-5 slots at full x16, or your PCIe x16 will run as x8 at Gen 3-4 half speed and cap the speed of the SSDs. But remember, on normal motherboards the slot will run at the slowest speed of whatever you're using in it.

  • For this to work it either needs some kind of RAID controller so the mobo recognizes it as one drive, or the board needs to be able to split the x16 into x4/x4/x4/x4. In either case it's a hella big and fast pool of storage. If that kind of speed (multiple Gbit/s) isn't needed, it's better to use a normal RAID.

  • You are broke after buying it