itimpi

  1. You should be good to go and do whatever you would normally do. Parity is maintained during the repair (as long as the array was started in Maintenance mode at the time), so it is not necessary to run a check. However, periodic parity checks are good housekeeping, so it might be time to do one as part of your normal process?
  2. Running with the -L option typically works fine and repairs without any data loss. I have also found that a mount from the command line normally works even though it fails from the unRAID level. I put together a little script for myself that does a mount/umount on each drive and then runs xfs_repair without needing the -L option.
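A minimal sketch of that kind of mount/umount-then-repair loop (this is not itimpi's actual script; it assumes array devices appear as /dev/md1, /dev/md2, … as on a stock unRAID array, and that /mnt/checkfs exists as a scratch mount point; DRY_RUN=1, the default here, only prints the commands):

```shell
#!/bin/sh
# Sketch: mounting replays the XFS journal, after which xfs_repair
# no longer needs the -L (zero-log) option.
# DRY_RUN=1 (default) prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

for dev in /dev/md[0-9]*; do
    [ -e "$dev" ] || continue              # glob matched nothing
    run mount -t xfs "$dev" /mnt/checkfs   # replay the journal
    run umount /mnt/checkfs
    run xfs_repair "$dev"                  # clean log, so no -L needed
done
```

Run it once with DRY_RUN=1 to review the commands before letting it loose on real devices.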
  3. Try toggling it off, rebooting, and then setting it on again. There was another report earlier today where the cache disk setting showed as on but was not actually being used; toggling the setting cleared the issue.
  4. It sounds as if using the cache disk for Shares has not been enabled under Settings>>Global Share Settings.
  5. Have you got the cache disk enabled under Settings->Global Share Settings?
  6. If you install the Unassigned Devices plugin, it provides a nice GUI-based way of doing it from the unRAID side.
  7. I have found that with 6.4 rc7a if I do a New Config and leave all disks set to the default of ‘auto’ then the first time I start the array all disks show as unmountable. Stopping and restarting the array resolves the problem and they now all show as XFS. I find this behaviour completely reproducible. Whether it was there in an earlier release I am not sure as it is not often that I do a New Config. It looks like there must be some sort of race condition between the code that detects the format type and the mount operation? Although the array stop/restart is an easy workaround I could see this leading to some users panicking and therefore taking inappropriate action that might lead to data loss.
  8. You are asking for a very unusual combination of capabilities! You normally control mover via the share settings and tell it to ignore folders by using the cache=no setting, but you are wanting a strange combination of cache=yes and mover ignoring that setting on the folder. The vast majority of people want mover to run without any manual intervention, and for that the current settings are adequate. Asking for an option to disable mover via the GUI is reasonable, but anything more is (I think) beyond what most people want. Have you tried setting cache=only on the ‘downloads’ share and then manually copying the files to a ‘downloads’ folder on particular disks, thus bypassing the User Share settings? The way unRAID works, I think these files would still be part of the ‘downloads’ share for read purposes, because the cache settings only control writing and not reading. Whether this is practical in your particular scenario I am not sure. Having said that, LimeTech have stated that they intend to enhance the mover tool in the future. I believe, for instance, they are thinking of it supporting most (if not all) of the functionality of the unBalance plugin. You could at least register what you want as a Feature Request to see if it can be incorporated into that work.
  9. 1) This is not an option, and yours seems rather an unusual requirement, so I would think it unlikely to be added. 2) Others have replaced the script with their own customised version; there is no plugin for this, though. 3) Although you cannot disable it completely via the GUI, you could set it to run only monthly. I am not sure if it is possible to manually edit a configuration file to pick a value (e.g. 32) for a day-of-the-month that cannot occur. Another possibility is to add an entry to your ‘go’ file that renames the mover file, which stops it running, although that is a bit heavy-handed as it also stops you running it manually.
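The ‘go’ file rename in option 3) might look something like this (the mover path here is an assumption based on a stock install, so check where yours lives before using it):

```shell
#!/bin/sh
# Hypothetical addition to /boot/config/go: rename mover so the
# scheduler can no longer invoke it. Note this also blocks running
# mover manually until it is renamed back.
MOVER=${MOVER:-/usr/local/sbin/mover}   # assumed stock location
if [ -x "$MOVER" ]; then
    mv "$MOVER" "$MOVER.disabled"
fi
```

Renaming it back (or removing the line and rebooting) restores normal mover behaviour.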
  10. The problem is that the process of assigning the disk will have caused unRAID to realise the disk was not partitioned correctly for unRAID, so it will have rewritten the partition table to conform to what unRAID expects. The rest of the disk (including your data) is still there unchanged. To get the data back you will need to use one of the many disk recovery tools for Windows that can find where the partition should be and set it back to what it used to be.
  11. That is not a recursive option so it will only set the permissions on the top level folders. Rebooting will ensure they are reset to defaults as unRAID is unpacked into RAM each time it loads. Having said that I doubt having those permissions on the top level folders matters - it is their contents that matter and they will be unchanged.
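The recursive/non-recursive distinction above can be seen with a throwaway demo (temporary directory, not real unRAID paths):

```shell
#!/bin/sh
# Show that plain chmod changes only the named folder, while chmod -R
# also changes everything beneath it.
demo=$(mktemp -d)
mkdir -p "$demo/share/sub"
touch "$demo/share/sub/file"

chmod 700 "$demo"/*                     # non-recursive: only 'share'
before=$(stat -c %a "$demo/share/sub/file")

chmod -R 700 "$demo"/*                  # recursive: the file changes too
after=$(stat -c %a "$demo/share/sub/file")

echo "file perms before=$before after=$after"
rm -rf "$demo"
```

With a typical umask the file starts at 644, is untouched by the first chmod, and becomes 700 only after the -R pass.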
  12. You could also follow the Parity Swap procedure, which allows you, in one operation, to replace a parity disk with a (new) larger one and use the old parity disk as the replacement data disk. This is not an uncommon operation. It actually has two phases: the first copies the old parity information to the new parity disk, and the second rebuilds the failed drive's data onto the old parity disk. The big advantage is that there is no requirement to have a spare unused drive the size of the failed data drive.
  13. I believe there is a bug in the current release (fixed for the next one) where a disk is not included in the free space calculation until the top level folder for the share has been created on that disk.
  14. Did you select SeaBIOS or OVMF when creating the VM? Whichever you selected, I think you may need the other one.
  15. A point to remember is that every ‘write’ operation to a data drive under unRAID actually involves 4 I/O operations (reads from parity and target to establish the ‘before’ state, followed by writes to parity and target). Therefore there is always more than one write running in parallel. That is why using a cache disk improves the perceived speed: it is a single write to the cache (without parity being involved) and there is no need for the initial read.
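The arithmetic behind those 4 operations can be shown with a toy XOR example (single parity, with small integers standing in for disk blocks; this is an illustration of the read-modify-write scheme, not unRAID's actual driver code):

```shell
#!/bin/sh
# Read-modify-write under single parity:
#   new parity = old parity XOR old data XOR new data
# so one logical write costs 2 reads followed by 2 writes.
old_data=10       # read 1: 'before' contents of the target block
old_parity=12     # read 2: 'before' contents of the parity block
new_data=15
data=$new_data                                    # write 1: target disk
parity=$(( old_parity ^ old_data ^ new_data ))    # write 2: parity disk
echo "updated parity: $parity"   # prints: updated parity: 9
```

A cache-disk write skips all of this: one plain write, no reads, no parity update until mover later migrates the file to the array.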
Copyright © 2005-2017 Lime Technology, Inc. unRAID® is a registered trademark of Lime Technology, Inc.