About jcarmi04

  • Rank
    Advanced Member


  1. @Frank1940 thanks for posting. I'm running an RFS to XFS conversion in the background to see if this finally licks my issue(s). Regarding the SMART reports, I did a quick scan and didn't notice anything too alarming, but I would obviously lean on you/others for recommendations as to what might get me prepared for failures. (Most of the 196+ rows are "Old Age" and aren't reporting anything crazy; I've posted my row-5 values below that "may" look a bit wonky.) Regarding 6 drives for 25 TB vs. 13 for 24 TB: I WISH, and eventually will. I've just been using unRAID since approx. 2010 and purchased what was available then. Hence, I have wayyyyyy too many 1 TB HDDs kicking around my place without a purpose. Here are my higher WORST/THRESHOLD ratios. With the exception of disk4 (Toshiba 5 TB), all others are WD drives, so those values may be "normal"!?
     disk3    5  Reallocated sector count  0x0033  200  200  140  Pre-fail  Always  Never  0
     disk4    5  Reallocated sector count  0x0033  100  100  050  Pre-fail  Always  Never  0
     disk5    5  Reallocated sector count  0x0033  200  200  140  Pre-fail  Always  Never  0
     disk8    5  Reallocated sector count  0x0033  200  200  140  Pre-fail  Always  Never  0
     disk9    5  Reallocated sector count  0x0033  200  200  140  Pre-fail  Always  Never  0
     disk11   5  Reallocated sector count  0x0033  200  200  140  Pre-fail  Always  Never  0
     disk13   5  Reallocated sector count  0x0033  200  200  140  Pre-fail  Always  Never  0
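The rows above can be pulled straight from the drives with `smartctl`; a minimal sketch, assuming the disks show up as `/dev/sda` through `/dev/sdn` (the device range is an assumption — adjust it to your controller layout):

```shell
#!/bin/sh
# Print the sector-health SMART attributes (IDs 5, 197, 198) for each disk.
# The /dev/sd[a-n] glob is an assumption -- adjust for your system.
for dev in /dev/sd[a-n]; do
    [ -e "$dev" ] || continue
    echo "== $dev =="
    # smartctl -A prints the vendor attribute table;
    # awk keeps only the reallocated/pending/uncorrectable rows.
    smartctl -A "$dev" | awk '$1 == 5 || $1 == 197 || $1 == 198'
done
```

A nonzero RAW value on attribute 5 or 197 is the usual early warning; the VALUE/WORST/THRESHOLD scale is vendor-specific, which is consistent with the WD drives reporting 200/200/140 and the Toshiba 100/100/050 while both have a raw count of 0.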
  2. Thanks @bjp999! If it makes sense, I can update to the latest version of unRAID. I don't want to bite off too much, but I don't think there'd be a downside to doing this. (I wouldn't have to rebuild parity, if I'm remembering correctly, right?!)
  3. @jonathanm I was reading that converting from RFS to XFS can be finicky; is this at all accurate (@bjp999)? I would happily choose the easiest option at this point...
  4. @bjp999 I kinda figured you'd recommend that, so I've been locating the drives. Both had previously been unRAID disks, so I'll plan to add both the 2 and 3 TB drives to the array and format them XFS. Should I copy files between both XFS disks to test this out, or just unload the 5 TB to these (and then format the 5 TB as XFS)? Any other thoughts or recommendations? I'm thinking I'd run into rsync issues trying to go from a single 5 TB to a 2 + 3 TB pair, so I might have to copy stuff manually...!?
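Splitting one 5 TB disk across a 2 TB and a 3 TB disk doesn't need a single rsync run; copying top-level directories one at a time and watching free space avoids the "doesn't fit" problem. A sketch, assuming unRAID's usual /mnt/diskN disk shares — the disk numbers and the `Movies` directory name are hypothetical:

```shell
#!/bin/sh
# Sketch: move one directory tree from the full 5 TB disk to a smaller XFS disk.
# SRC/DEST paths and the directory name are assumptions -- adjust to your array.
SRC=/mnt/disk4        # the full 5 TB RFS disk
DEST=/mnt/disk12      # hypothetical 3 TB disk already formatted XFS

# -a preserves permissions, times, and ownership; trailing slash on the
# source copies the directory's contents rather than nesting it.
rsync -av "$SRC/Movies/" "$DEST/Movies/"

# Dry-run verification pass before deleting anything from the source:
# with --checksum and -n this should list no files if the copy is complete.
rsync -avn --checksum "$SRC/Movies/" "$DEST/Movies/"

df -h "$DEST"   # check remaining space before starting the next tree
```

Only after the dry run comes back empty would you remove the tree from the source disk, then repeat for the next directory, switching destinations as each disk fills.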
  5. @bjp999 4.86 TB. I'm pretty full up:
     1: 1.88 of 2 TB
     2: 2.95 of 3 TB
     3: 1.49 of 2 TB
     4: 4.86 of 5 TB
     5: 1.17 of 2 TB
     6: 1.90 of 2 TB
     7: 2.97 of 3 TB
     8: 1.34 of 2 TB
     9: 2.94 of 3 TB
     10: 2.68 of 3 TB
     11: 1.64 of 2 TB
     I haven't been able to move stuff around for a long time to free things up better...
  6. Thanks @Frank1940 ! Will be reading up on it today...
  7. @bjp999 Thanks...catch you later. Happy Father's Day!
  8. @bjp999 I do have some slow access times (seems like the server is getting choked, but there's no reason it should), but I think my main failures occur on writes. Since all of my disks are, in fact, over half full, should I proceed as follows for testing:
     1. Add a new RFS-formatted drive to the array (I think I have a 1, 2, and 3 TB available I could use for testing)
     2. Copy files to the drive and watch performance
     3. Add a new XFS-formatted drive to the array (will have to purchase a 5 TB)
     4. Copy files to the drive and watch performance
     I'm trying to understand the relevance of an RFS disk being over half full and whether to include steps 1 and 2 or exclude them. Also, I could format a 1, 2, or 3 TB as XFS and either replace the steps above or add new steps 3 and 4, bumping the others to 5 and 6. *My largest drive is the 5 TB parity, and Disk 4 is also 5 TB.
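For the "copy files and watch performance" steps, a repeatable sequential-write measurement per test disk gives cleaner numbers than eyeballing a file copy. A sketch with `dd` — the target path and sizes are assumptions:

```shell
#!/bin/sh
# Rough sequential-write benchmark against one array disk.
# TARGET is an assumption -- point it at the disk share under test.
TARGET=/mnt/disk13/speedtest.bin

# Write 1 GiB; oflag=direct bypasses the page cache so the disk itself
# is measured. If direct I/O is unsupported, use conv=fsync instead.
dd if=/dev/zero of="$TARGET" bs=1M count=1024 oflag=direct 2>&1 | tail -1

rm -f "$TARGET"
```

Running the same command against the RFS and XFS test disks makes the half-full-ReiserFS effect (if any) show up as a throughput difference rather than a feeling.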
  9. @bjp999 I haven't noticed predictable or consistent problems (either single-disk share or multi-disk share), but I can run through a few tests to rule things in/out if you think it would help.
  10. @bjp999 v6.1.9. Thanks. RFS for all disks except the cache, which is BTRFS. I actually preclear all disks on a separate box, so that unfortunately won't factor in.
  11. @bjp999 I'm assuming RFS, as I've precleared all drives with @Joe L.'s old preclear utility. How can I check? I reckon that's gonna be a BEAR to redo all drives....
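On the "How can I check?" question: the filesystem type of each mounted array disk is recorded in /proc/mounts (field 2 is the mount point, field 3 the filesystem type), so no redo of the preclear is needed just to find out. A minimal sketch:

```shell
#!/bin/sh
# List the filesystem type of every mounted /mnt/disk* array disk.
awk '$2 ~ /^\/mnt\/disk/ {printf "%-12s %s\n", $2, $3}' /proc/mounts
```

On a pre-conversion array this prints `reiserfs` for each data disk; after conversion the same command shows `xfs`.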
  12. @Frank1940 It is a Share that includes Disks 1, 5, 8, 10, and 11 and hosts my movies for Plex/Kodi.
      Allocation: High-water
      Min free: 0
      Split: Auto split any dir as req
      Inc: See above
      Ex: None
      Cache: No
  13. I just double-checked, and all copies I did today were from User shares, not Disk shares. (In the past I may have done differently, but I have not in a while.) Any thoughts, tests, etc. I can try?
  14. Just figured I'd round it out with: 7. MC (Midnight Commander) "d1" to the original multi-disk share: SUCCESS. UGH!
  15. Well, I have no idea what's going on... I just created a couple of single-disk shares (d1 is on the M/B (Disk 1); d9 and d10 are on the SuperMicro card (Disks 9 and 10)).
      1. Win10 multi-disk share (Disks 1, 5, 8 (M/B), 10 and 11 (SM)) to "d1": SUCCESS
      2. Win10 "d1" to "d9": FAIL
      3. Win10 "d1" to "d10": SUCCESS
      4. MC "d1" to "d9": SUCCESS
      5. Win10 "d1" to "d9": SUCCESS
      6. Win10 "d1" to the original multi-disk share: FAIL
      The speeds aren't fast, but right now I'm going for operational. Log attached and server details below.
      2x 5 TB (Toshiba Parity and disk)
      2x 2 TB (Hitachi)
      2x 3 TB (Seagate)
      4x 2 TB (WD)
      1x 3 TB (Toshiba)
      1x 3 TB (WD)
      1x 250 GB (Crucial Cache)
      1x 4 GB (JD FireFly Flash)
      1x ASRock Z87 Extreme M/B
      1x Intel i5-4570 @ 3.2 GHz CPU
      4x 8 GB Kingston RAM
      1x Corsair 650 W PSU
      2x AMD Radeon HD Video Cards (VMs)
      1x SuperMicro AOC-USAS2-L8i (8 port)
      unRAID Server Pro v6.1.9
      HVM: Enabled
      IOMMU: Enabled
      Docker: Installed
      Docker Containers:
      - BTSync (gfjardim/btsync:latest) (config on Cache; data stored to a Share)
      - Plex (limetech/plex:latest) (config on Cache; data accessed via Shares)
      VMs:
      - 1x Win7 (using Cache only)
      - 1x Win10 (using Cache only)
      tower118-diagnostics-20170617-0643.zip
Copyright © 2005-2017 Lime Technology, Inc. unRAID® is a registered trademark of Lime Technology, Inc.