cnrd

Posts

  1. Any way to check this without the webui? I can't remember when I last updated it. My primary goal right now is gaining access to the webui (hopefully without having to reboot).

     EDIT: Not sure if it's relevant, but I found this in /tmp/plugins/unassigned.devices.plg:

         <!ENTITY version "2017.03.23">
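     For reference, this is how I'm checking it from the console (a minimal sketch; I'm assuming the flash copy of the plugin lives under /boot/config/plugins, which may differ per setup):

         $ grep -m1 'ENTITY version' /tmp/plugins/unassigned.devices.plg
         $ grep -m1 'ENTITY version' /boot/config/plugins/unassigned.devices.plg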
  2. I have a problem with the plugin (I'm trying to explain the steps I took below).

     I had a CIFS share mounted using Unassigned Devices; the share is on another machine in the house. That machine was offline when I tried to unmount the share, which in some weird way caused the following problems:

       • I can still connect to unRAID using SSH, but $ ls /mnt/disks hangs and I'm unable to exit out of the command.
       • Connecting to any of my shares using Samba causes a hang on the machine accessing the share, and an error message saying the share is unavailable.
       • I'm unable to access the webui.

     I got the first two problems fixed by running the following command:

         $ umount -f -t cifs -l /mnt/disks/name_of_share

     If I ran it without -l (lazy) I got this error:

         $ umount -t cifs -f /mnt/disks/name_of_share
         umount: /mnt/disks/name_of_share: target is busy
                 (In some cases useful info about processes that use
                  the device is found by lsof(8) or fuser(1).)

     The output from fuser:

         $ fuser -l /mnt/disks/name_of_share
         HUP INT QUIT ILL TRAP ABRT IOT BUS FPE KILL USR1 SEGV USR2 PIPE ALRM TERM STKFLT CHLD CONT STOP TSTP TTIN TTOU URG XCPU XFSZ VTALRM PROF WINCH IO PWR SYS UNUSED

     But I'm still unable to access the webui. I suspect this is caused by a hang in Unassigned Devices after trying to unmount an "offline" share. Does anyone know how to restart the plugin, or something else that will bring back the webui without having to reboot?

     Pretty sure this is the relevant part of the logfile (names are changed):

         May 29 12:32:18 Mount SMB/NFS command: mount -t cifs -o rw,nounix,iocharset=utf8,_netdev,file_mode=0777,dir_mode=0777,username=this_is_a_username,password=and_a_password '//SOME/SAMBA_SHARE' '/mnt/disks/name_of_share'
         May 29 12:32:18 Successfully mounted '//SOME/SAMBA_SHARE' on '/mnt/disks/name_of_share'.
         May 29 12:32:18 Defining share 'name_of_share' on file '/etc/samba/unassigned-shares/name_of_share.conf'
         May 29 12:32:18 Adding share 'name_of_share' to '/boot/config/smb-extra.conf'
         May 29 12:32:18 Reloading Samba configuration...
         May 29 12:32:18 Directory '/mnt/disks/name_of_share' shared successfully.
         May 29 12:32:18 Device '//SOME/SAMBA_SHARE' script file not found. 'ADD' script not executed.
         Jun 01 14:31:31 Removing Remote SMB share '//SOME/SAMBA_SHARE'...
         Jun 01 14:31:31 Device '//SOME/SAMBA_SHARE' script file not found. 'REMOVE' script not executed.
         Jun 01 14:31:31 Unmounting Remote SMB Share '//SOME/SAMBA_SHARE'...
         Jun 01 14:31:31 Unmounting '//SOME/SAMBA_SHARE'...
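     In case it helps anyone later, here's the kind of guard I wish I'd had in place (a minimal sketch only; SHARE and SERVER are placeholders taken from the sanitized log above, and I haven't tested this against a real outage):

         #!/bin/bash
         # Check that the SMB server answers before touching the mount;
         # if it doesn't, force a lazy unmount instead of letting umount hang.
         SHARE='/mnt/disks/name_of_share'   # placeholder mount point
         SERVER='SOME'                      # placeholder SMB server hostname

         if ping -c1 -W2 "$SERVER" >/dev/null 2>&1; then
             # Server is up: a normal unmount should succeed.
             umount -t cifs "$SHARE"
         else
             # Server is down: detach the mount so nothing blocks on dead I/O.
             echo "Server offline; forcing a lazy unmount"
             umount -f -l -t cifs "$SHARE"
         fi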
  3. Hi, I'm trying to set up a watch folder that will run a command whenever I add a file to that folder over SMB. I got it somewhat working, but I still have some problems with timing (or multiple runs on the same file).

     The script is supposed to convert any MKV or AVI file I add to the folder into an MP4 using Don Melton's transcode-video script (for which a docker container has already been created). The idea is that I can just convert my DVDs to MKV files and dump them onto the server for automatic processing.

     This is what I currently have:

         inotifywait -m -r convert_movies -e close_write |
         while read path action file; do
             echo "The file '$file' appeared in directory '$path' via '$action'"
             if [[ $file =~ .*\.(mkv|avi) ]]; then
                 #docker run --rm -it -v /mnt/user/Misc/convert_movies:/data ntodd/video-transcoding transcode-video --crop detect --mp4 "$(echo $path | sed 's/convert_movies\///g')$file"
                 echo "File $file was written to $(echo $path | sed 's/convert_movies\///g')$file, converting..."
             fi
         done

     I commented out the docker run command to test that the script actually works. The problem is that every time I add a file, I get multiple "close_write" triggers:

         The file 'test.mkv' appeared in directory 'convert_movies/test/' via 'CLOSE_WRITE,CLOSE'
         File test.mkv was written to test/test.mkv, converting...
         The file 'test.mkv' appeared in directory 'convert_movies/test/' via 'CLOSE_WRITE,CLOSE'
         File test.mkv was written to test/test.mkv, converting...

     I'm not really sure how to avoid this, or if there is something better than inotifywait for the trigger. I would really appreciate it if someone could help me get this working correctly.
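     One way around the duplicate triggers might be to remember which paths have already been queued (a sketch only; "seen" is a name I made up, and I haven't verified this against SMB's exact write pattern):

         #!/bin/bash
         # Dedupe repeated CLOSE_WRITE events: each path is queued at most
         # once per run of the watcher.
         declare -A seen
         while read -r filepath; do
             [[ $filepath =~ \.(mkv|avi)$ ]] || continue
             if [[ -z ${seen[$filepath]} ]]; then
                 seen[$filepath]=1
                 echo "Queueing '$filepath' for conversion"
                 # docker run --rm -v /mnt/user/Misc/convert_movies:/data ntodd/video-transcoding \
                 #     transcode-video --crop detect --mp4 "${filepath#convert_movies/}"
             fi
         done < <(inotifywait -m -r convert_movies -e close_write --format '%w%f')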
  4. I just want to be sure that I set the split levels correctly. My movies are organized as Movies (share)/genre/moviename/moviefiles.* and I'm setting it to split level 2. Does that mean that all files in the folder "moviename" end up on the same drive, and that movies of the same genre do not necessarily reside on the same drive? (Which, as I understand it, would be level 1?)
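     To make sure I'm picturing it right (this is just my understanding of how split level 2 would allocate, so treat the layout below as an assumption):

         Movies/                  share root
         ├── Action/              level 1: genre folders can be split across drives
         │   ├── MovieA/          level 2: everything inside a movie folder stays on one drive
         │   │   ├── MovieA.mkv
         │   │   └── MovieA.srt
         │   └── MovieB/          (may end up on a different drive than MovieA)
         └── Comedy/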
  5. Thanks, I'm pretty sure that's what I'm doing:

       1. Mounted the USB 3.0 drive (currently containing all the data).
       2. Created a share called Share which includes all drives.
       3. rsync -avh /mnt/disks/[NAMEOFUSB30]/ /mnt/user/Share/
  6. I finally got the system up and running, and I'm ready to copy the files over. Now I have another question: is there a path somewhere in the file system that shows the array as just one pool of space? Since I'm trying to copy about 7TB onto an array of 2 x 4TB drives, I'd like to just point the copy destination somewhere and let it run.
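     For what it's worth, here's how I'm picturing it (a sketch; I'm assuming the user share path /mnt/user/Share pools the space of every disk included in the share):

         $ df -h /mnt/user/Share    # should report the combined free space of the included disks
         $ rsync -avh --progress /mnt/disks/[NAMEOFUSB30]/ /mnt/user/Share/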
  7. The drives are currently formatted as NTFS. So if unRAID supports mounting NTFS in read-only mode (I'm guessing it does), then I'm just planning on using rsync to copy all of the files over to the array. Or is there an easier/better way to do it?
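     Roughly what I have in mind (just a sketch: I'm assuming an ntfs-3g driver is available on the console, and /dev/sdX1 is a placeholder for the real partition):

         $ mkdir -p /mnt/ntfs_usb
         $ mount -t ntfs-3g -o ro /dev/sdX1 /mnt/ntfs_usb    # read-only, so the source drive can't be modified
         $ rsync -avh --progress /mnt/ntfs_usb/ /mnt/user/Share/
         $ umount /mnt/ntfs_usb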
  8. Yeah, I know. Most of the data contained on the drives is "just" media that would take me a lot of time to re-rip and name. All important data (pictures, documents and so on) is backed up to multiple different off-site solutions. Really, I just want the parity to (in some cases) let me restore data faster than ripping, converting and naming files. (Also way less manual labor.)
  9. Yes, I do have online backups, but downloading 8 TB of data would take way more time than just copying it over a USB 3.0 connection. Just to be sure: having one parity drive would "save" me from downloading all of my data back in case a single drive goes bad (and two failed drives in case I had 2 parity drives), right? (i.e. faster recovery time). Thanks for confirming :-)
  10. Hi, I'm currently in the process of planning my unRAID server. I currently have 1x 8 TB WD Red and 2x 4 TB WD Green, which are all just running as USB drives connected to a laptop that I'm planning to switch out for a Skylake-based system. Two questions:

      First: I currently have a little under 8 TB of data distributed across the drives (I would be able to fit all the data I need to keep on the 8 TB drive). When setting up unRAID, would the following be possible?

        1. Copy all important data to the 8 TB drive.
        2. Add the two 4 TB drives to the unRAID "array" (giving me about 8 TB of storage space).
        3. Copy all data from the 8 TB drive to the "array" using USB.
        4. Add the 8 TB drive as a parity drive for the "array".

      Doing it this way, I would avoid having to purchase yet another 8 TB drive just for holding the data temporarily.

      Second: How much power would my PSU need for each drive? I'm planning on a 550W 80+ Gold (EVGA SuperNOVA, never had problems with those). Thanks
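      For the PSU question, my back-of-the-envelope math (the figures are rough numbers from typical 3.5" drive spec sheets, so treat them as assumptions):

          Spin-up peak:   ~2.0 A @ 12 V + ~0.7 A @ 5 V ≈ 28 W per drive
          All 3 drives:   3 × 28 W ≈ 84 W peak at power-on
          After spin-up:  ~5-8 W per drive, so ~25 W total

      So the drives would be a small slice of a 550 W budget; the CPU (and any GPU) will dominate.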