[HN Gopher] Building a fast all-SSD NAS on a budget
       ___________________________________________________________________
        
       Building a fast all-SSD NAS on a budget
        
       Author : walterbell
       Score  : 41 points
       Date   : 2022-07-26 06:46 UTC (1 day ago)
        
 (HTM) web link (www.jeffgeerling.com)
 (TXT) w3m dump (www.jeffgeerling.com)
        
       | [deleted]
        
       | sorenjan wrote:
       | > Every minute of 4K ProRes LT footage (which is a very
       | lightweight format, compared to RAW) is 3 GB of space
       | 
       | Do you really need to save all of that footage? I would think
       | keeping the pro res footage for the current projects on the
       | workstation and reencoded archive video on a NAS would be
       | sufficient. I'm not a video professional, but I suspect it's easy
       | to fall in the trap of thinking that you need to save everything
       | in highest possible quality in case you need it later, but what
       | are the realistic chances of that? If you end up needing some old
        | footage again, AV1-coded 4K or even HEVC 1080p would probably be
        | just fine. The final results are YouTube videos, after all.
       | 
       | I know he mentions editing from it, but that's enough space for
       | more than a week of pro res video.
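        | 
        | As a rough sketch of that math (assuming ~40TB of raw capacity
        | for the build and the article's 3 GB/min figure):

```python
# Footage-capacity estimate: 3 GB/min for 4K ProRes LT (figure from the
# article) against ~40 TB of raw capacity (an assumption; redundancy
# and filesystem overhead reduce the usable amount).
gb_per_min = 3
array_tb = 40
minutes = array_tb * 1000 / gb_per_min  # decimal TB -> GB
hours = minutes / 60
print(f"~{hours:.0f} hours (~{hours / 24:.1f} days) of footage")
```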
        
       | gigatexal wrote:
        | Anyone know of any core-level work on ZFS where there's an
        | effort to audit the code base for speedups, given the big
        | design differences between spinning rust and SSDs?
        
         | mastax wrote:
          | I do know that if you have very fast NVMe SSDs (>6000MB/s or
          | so), ZFS currently can't deliver their full performance, due
          | to time spent memcpying to/from the ARC[0].
         | Direct IO support could eventually alleviate this[1].
         | 
         | [0]: https://github.com/openzfs/zfs/issues/8381
         | 
         | [1]: https://github.com/openzfs/zfs/pull/10018
        
         | jsmith99 wrote:
         | Oracle themselves have been selling all flash ZFS appliances
         | for a long time so I imagine this is a development focus.
        
           | mbreese wrote:
           | I doubt any Oracle SSD performance enhancements will make it
           | into OpenZFS though.
        
       | CharlesW wrote:
       | In this case "on a budget" means $4,329. That's reasonable if it
       | speeds up billable work, but sadly the cost puts it a bit out of
       | reach for my home office.
        
         | PaywallBuster wrote:
          | the budget option is < $800?
        
           | magicalhippo wrote:
           | Still quite a lot. All you really need is an old i7 and a
           | 10GbE Mellanox Connect-X2 or Connect-X3 card from eBay for
           | $10-20.
        
             | CTDOCodebases wrote:
             | I agree with your message but it's hard to find an i7 that
             | supports ECC RAM and the Mellanox Connect-X has lost
             | support in modern distros.
             | 
              | Best bet is to pick up an old HP Z620 or find someone who
              | is upgrading their old Xeon homelab. Generally it's a
              | choice between cheap, quiet, and energy efficient, and
              | you can only pick two of those.
        
         | dylan604 wrote:
         | "I edit videos non-stop nowadays."
         | 
          | For a video editor, at least one who's been around long
          | enough to remember DAS and SAN solutions, $4,300 for 40TB of
          | edit-capable storage is cheap.
         | 
         | Perspective is everything.
        
       | jeffbee wrote:
       | I would have been pretty tempted to build this with a W480 Xeon
       | platform having 2x thunderbolt ports. Conceivably that could have
       | broken through the 1GB/s ceiling the article is seeing with 10g
       | ethernet.
        
         | mbreese wrote:
         | 1.1GB/sec throughput is pretty good over a 10Gb/sec network.
         | That's 88% saturation. Right?
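          | 
          | A quick check of that arithmetic (a sketch; the 1.1 GB/s and
          | 10 Gb/s figures are the ones from this thread):

```python
# 1.1 GB/s of measured throughput on a 10 Gb/s link:
# multiply by 8 bits/byte, then divide by the line rate.
throughput_gbps = 1.1 * 8
saturation = throughput_gbps / 10
print(f"{throughput_gbps:.1f} Gb/s -> {saturation:.0%} of line rate")
```

          | Ethernet framing and TCP/IP overhead eat a few percent of
          | the line rate, so 1.1 GB/s is close to the practical ceiling.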
        
       | liuliu wrote:
        | TBH, not sure spending $3500 on 40TB of SSD storage makes
        | sense vs. ~$800 on rotating disks at the same capacity. You
        | could put $200 on top of that for a 2TB NVMe SSD as cache.
        | 
        | The reason to question this is that 40TB seems small if you
        | want a NAS for a small video editing studio, and for personal
        | use you're probably not going to need more than a 2TB working
        | set paged in at any given moment.
        
         | jjcm wrote:
          | Possibly for smaller projects, but for anything remotely
          | sizable, 2TB is likely not going to cut it. 5K ProRes is 1TB
          | for every 30min of footage, which means you're only getting
          | an hour out of a 2TB drive.
         | 
          | Storage needs for any pro video workflow get very large,
          | very quickly.
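          | 
          | Sketching that out, using the 1 TB per 30 min figure above:

```python
# 5K ProRes at ~1 TB per 30 min of footage => 2 TB/hour.
tb_per_hour = 2.0
for capacity_tb in (2, 40):
    hours = capacity_tb / tb_per_hour
    print(f"{capacity_tb} TB holds ~{hours:.0f} h of footage")
```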
        
         | rektide wrote:
          | For comparison, $16/TB is pretty awesome[1], which for that
          | budget would be about 217TB, ~5.5x the capacity.
         | 
         | [1]
         | https://diskprices.com/?locale=us&condition=new&capacity=12-...
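          | 
          | The comparison, sketched (taking the ~$3,500 drive spend
          | mentioned upthread as the budget):

```python
# HDD capacity for the same spend, at the $16/TB figure cited
# from diskprices.com (an assumption; spot prices move around).
budget_usd = 3500
hdd_usd_per_tb = 16
hdd_tb = budget_usd / hdd_usd_per_tb
print(f"~{hdd_tb:.0f} TB of HDD, ~{hdd_tb / 40:.1f}x the 40 TB of SSD")
```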
        
         | sorenjan wrote:
          | I'm not familiar with NAS file systems. Is it fairly
          | straightforward to use hard drives with SSDs as a transparent
          | cache, and make it look like a single file share?
        
         | walrus01 wrote:
          | Personally, if I had to do this I would go with rotating
          | disks for bulk storage in a NAS, and something like two 2TB
          | to 4TB NVMe SSDs in a proper video editing workstation
          | motherboard, directly attached to the PCIe 4.0 bus.
         | 
         | This will be considerably faster for working with "immediate"
         | needs of video files rather than over a 10GbE network.
         | 
          | like, a difference of 900MB/s over the network vs 2500MB/s
          | with local sequential reads/writes on an NVMe SSD on the same
          | motherboard.
        
       | aetherspawn wrote:
       | At this speed I'm thinking you're probably going to bottleneck on
       | the network/switch.
       | 
       | Ubiquiti have a cheap fiber optic switch you could try. You could
       | also try a 2x 10G SFP+ configuration, which would give you 20
       | Gbps (but only 10Gbps per client).
        
       | alexk307 wrote:
       | Huge fan of Jeff's work on YouTube! Highly recommend checking it
       | out if this blog interests you
        
       | walrus01 wrote:
        | I would be extremely cautious about using any consumer-grade
        | TLC or quad-level-cell SSD in a "NAS" for serious purposes,
        | because of well-known write lifespan issues.
       | 
        | There's a reason a big price difference exists between a
        | quad-level-cell 2TB SSD and an expensive enterprise-grade one
        | with a much higher TBW (terabytes-written-before-dead) rating.
       | 
       | This might look cool but check back in a few years and see how
       | much of the drives' cumulative write lifespan is worn out.
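        | 
        | Back-of-the-envelope endurance math (hypothetical numbers;
        | check the actual drives' TBW rating and your real write rate):

```python
# Endurance estimate. 1200 TBW is a typical rating for a consumer
# 2 TB TLC drive, and 0.5 TB/day of sustained writes is an assumed
# workload; both are placeholders, not figures from the build.
tbw_rating_tb = 1200
daily_writes_tb = 0.5
years = tbw_rating_tb / daily_writes_tb / 365
print(f"~{years:.1f} years until the rated endurance is exhausted")
```

        | Under a mostly-read workload the daily-write figure drops and
        | the drives last far longer, which is why the WORM question
        | below matters.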
       | 
       | I also cannot even _imagine_ spending $4000+ on a home file
       | server /NAS with copper only 10GbE NIC and it not having at least
       | one 10G SFP+ interface network card.
       | 
        | Okay, so he wants it to be tiny? But in a home environment the
        | major problems are power consumption and noise, so you can
        | often go with a well-ventilated 4U rackmount case for a full
        | size ATX motherboard, which is roughly the size of a midtower
        | PC case turned on its side.
       | 
        | This lets you use motherboards with enough PCI-E 3.0 x8 slots
        | for at least one dual-port Intel SFP+ 10G NIC, which are very,
        | very cheap on eBay these days.
        
         | hatware wrote:
          | This is definitely an engineering disaster. Sometimes we get
          | so caught up in how to do something that we never ask
          | ourselves whether we should.
        
         | nichch wrote:
         | I was thinking the same thing, but wouldn't these be okay if
         | his workload is mainly WORM?
        
         | mbreese wrote:
          | TrueNAS used to be designed to boot off of smaller SATA DOMs
          | that were used only for boot, so they were effectively WORM.
          | At least, that was the case a few years ago. Everything the
          | server wrote went either to a RAM disk or was spread out
          | amongst the RAID drives (as a separate partition, which has
          | its own issues, but still).
         | 
         | I had assumed this is what he was using the TLC SSD for. If
         | that's the case, so long as there isn't much writing to it, it
         | should be fine.
        
       | rr888 wrote:
        | Has anyone tried to replace a NAS with a cloud service? If you
        | have gig internet it should keep up, but I'm not sure if
        | Dropbox etc. can.
        
         | hatware wrote:
         | Usually you go the other way with that.
        
       | karmicthreat wrote:
        | I just went through getting the parts for my own NAS. All SSD
        | was way overkill for my needs, so I ended up going with
        | spinning disks and a SLOG cache. I kept waffling about which
        | motherboard to use, but I ended up with an X470D4U motherboard
        | with a Ryzen 7 4700GE, which brings the TDP down to 35W. I
        | wanted this to be kind of quiet. I will put a 10Gb network
        | card on it eventually.
       | 
       | Maybe not THE BEST (tm) choices. But I was getting bad decision
       | paralysis choosing parts.
        
       | neilv wrote:
       | That's a neat 2U case design, and will fit in some very shallow
       | wall-mount network switch cabinets.
       | 
       | For installing outside of a machine room/closet/center, if you're
       | using 2U of height, you might also fit a PSU with a larger and
       | quieter fan, since all the Flex PSUs I've had come with
       | noticeably loud fans. (I replace them with Noctuas, but it isn't
       | a fun kind of soldering, IMHO.)
       | 
       | The components from the build would also fit in a Supermicro 1U
       | short-depth chassis, especially if you can go a little deeper in
       | your cabinet. (My new K8s server got a used Supermicro 1U chassis
       | for ~$60 shipped, including a PSU. In the photo on
       | https://www.neilvandyke.org/kubernetes/ , it's the 1U immediately
       | below the 4U.)
        
       ___________________________________________________________________
       (page generated 2022-07-27 23:00 UTC)