rinseaid

  1. Sorry to reply in English; I've been reading through the translated thread, as I've had the same issues with NFSv4 on Unraid 6.11.5. I could get it to work intermittently by remounting the path manually, but after rebooting the PBS server it would go back to error 13 (permission denied). However, I was able to get NFSv3 working reliably in my setup with the following configuration.

     I adjusted the fstab entry on my PBS server to force the use of NFSv3:

     # <file system> <mount point> <type> <options> <dump> <pass>
     unraid.lan:/mnt/user/pbs /mnt/unraid nfs defaults,nfsvers=3 0 0

     The NFS export in Unraid needs to be set to Private, with options similar to these (adjust the CIDR for your network):

     192.168.0.0/24(rw,sec=sys,insecure,anongid=34,anonuid=34,all_squash)

     Note: I'm not sure all of those options are strictly necessary, but they worked for me. I also needed to change ownership of the pbs directory on the Unraid server:

     chown -R 34:34 /mnt/user/pbs

     With all of the above I was able to create a datastore, and it persisted across a reboot. It's an inelegant solution, and I really wish I could get it to work with NFSv4, but I've spent enough time on this to move on for now.

     Update: After playing around with this some more, the good news is that I was able to switch back to NFSv4 with an already-created datastore in an Unraid share. I adjusted the NFS export options as follows:

     192.168.0.0/24(rw,sec=sys,insecure,anongid=100,anonuid=99,all_squash)

     Then I changed permissions on the PBS share so it's owned by nobody:users (99:100):

     chown -R 99:100 /mnt/user/pbs

     After this I was still able to create new backups and restore old ones from PBS. For reference, here's my updated fstab entry:

     # <file system> <mount point> <type> <options> <dump> <pass>
     unraid.lan:/mnt/user/pbs /mnt/unraid nfs defaults 0 0

     The PBS server is reading and writing the datastore over the NFSv4 protocol:

     root@pbs01:~# mount | grep unraid
     unraid.lan:/mnt/user/pbs on /mnt/unraid type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.xxx,local_lock=none,addr=192.168.0.xxx)

     root@pbs01:~# ls -la /mnt/unraid
     total 8
     drwxrwxrwx 1 99   users     60 Dec 18 13:53 .
     drwxr-xr-x 5 root root    4096 Dec 18 20:50 ..
     drwxr-x--- 1 99   users 524288 Dec  8 12:33 .chunks
     drwxr-xr-x 1 99   users     48 Dec 18 14:00 ct
     -rw-r--r-- 1 99   users    297 Dec 18 00:01 .gc-status
     drwxr-xr-x 1 99   users     54 Dec 18 14:00 host
     -rw-r--r-- 1 99   users      0 Dec  8 12:25 .lock
     drwxr-xr-x 1 99   users     42 Dec 18 14:02 vm

     Permissions on the Unraid server:

     root@unraid:/mnt/user# ls -la pbs
     total 4
     drwxrwxrwx 1 nobody users     60 Dec 18 13:53 ./
     drwxrwxrwx 1 nobody users      6 Dec 18 17:26 ../
     drwxr-x--- 1 nobody users 524288 Dec  8 12:33 .chunks/
     -rw-r--r-- 1 nobody users    297 Dec 18 00:01 .gc-status
     -rw-r--r-- 1 nobody users      0 Dec  8 12:25 .lock
     drwxr-xr-x 1 nobody users     48 Dec 18 14:00 ct/
     drwxr-xr-x 1 nobody users     54 Dec 18 14:00 host/
     drwxr-xr-x 1 nobody users     42 Dec 18 14:02 vm/

     The bad news is that this same configuration won't work for a new NFS share. I tried to create one using the options listed above and couldn't reproduce my initial success. I just hope the existing datastore keeps working...
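     A quick way to confirm the all_squash mapping is behaving as expected is to create a file from the client and check its ownership on the server. This is just a sketch assuming the mounts above; the test file name is arbitrary:

     # On the PBS client: create a test file on the NFS mount
     touch /mnt/unraid/squash-test

     # On the Unraid server: confirm it was squashed to 99:100 (nobody:users)
     ls -ln /mnt/user/pbs/squash-test

     # Clean up from the client
     rm /mnt/unraid/squash-test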
  2. Hey - great plugin, much appreciated for all the work from both @Squid and @hugenbdd. I wanted the ability to move files based on a minimum number of hard links. My use case: I hard link all downloaded torrent files, and I want to keep seeding torrents (and their hard-linked media) on my cache pool. In combination with the exclude file list (adding the directory my torrents download to), any media file with fewer than 2 links will be moved to the array. I made the necessary adjustments to add this functionality and have attached the patch here in case you want to incorporate it into the plugin; a standalone sketch of the underlying test follows below. Note that the lowest setting is '2' links, since all files have at least 1 link; setting it to 1 would therefore find no eligible files to move. movertuning.patch
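     For anyone curious what the patch keys on, the test is just each file's hard-link count; a minimal sketch using find (the path is an example, not from the plugin):

     # List regular files on the cache with fewer than 2 hard links,
     # i.e. files whose torrent-side hard link no longer exists
     find /mnt/cache/media -type f -links -2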
  3. Just installed this Docker and loving it, but I'm having issues getting the pyrocore tools to work. When I add these commands to rtorrent.rc, rTorrent never loads, and the watchdog script keeps repeating "Failed to start rTorrent, skipping initialisation of ruTorrent Plugins...". If I open a shell into the container and attach to the tmux session, I see the console message that Pyroscope loaded, but rTorrent restarts shortly thereafter. Anyone else have this issue, or can point me in the right direction?

     Edit: I figured out what was happening and worked around it for now. rTorrent-PS takes quite a lot longer to start with the PS extensions loaded - at least in my environment, with a large number of torrents in the session. While it would eventually come up (4+ minutes), the rtorrent.sh script kept trying to restart rTorrent and would delete the rtorrent.lock file in the session directory - which would in turn prevent things like rtcontrol from running. I modified /home/nobody/rtorrent.sh to increase retry_wait=1 to retry_wait=30, which allows everything to come up. I'm running everything from NVMe SSDs on a 10th-gen i7 with plenty of RAM and no obvious chokepoints, so I'm not sure of a permanent fix for this. I do have a ton of hash checks running (importing from Transmission), so maybe I'll wait that out and see if it's the issue - I'll update here on what happens.

     Update: after all hash checks finished, the issue continues. For now I am just updating the retry_wait value via a script on container startup (sketch below). It seems I'm the only one with the issue, so it's probably not worth trying to accommodate, but I'll leave this all here in case anyone else runs into it in the future.
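     For reference, the startup tweak is a one-line substitution; something like this, assuming retry_wait is set exactly as it is in my copy of /home/nobody/rtorrent.sh:

     # Give rTorrent-PS more time between watchdog restart attempts
     sed -i 's/retry_wait=1$/retry_wait=30/' /home/nobody/rtorrent.sh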
  4. Unfortunately I'm not sure, as it's been years since I used this, and I don't recall seeing those log entries. Either zed or the unRAID notification system may have changed since I last used this. Sorry I can't be of more help.
  5. Hello, I haven't used ZFS on unRAID in quite some time, so I'm not sure if it still works, but here's roughly what you'd run in the terminal based on the instructions I provided.

     nano /boot/config/go

     This opens the nano text editor. If there's anything in the file, use the cursor keys to navigate to the bottom, then copy and paste the below (in the unRAID web terminal you can paste by right-clicking and selecting paste):

     #Start ZFS Event Daemon
     cp /boot/config/zfs-zed/zed.rc /usr/etc/zfs/zed.d/
     /usr/sbin/zed &

     Press CTRL+X, then type 'y' and press Enter. This saves the file and exits nano.

     Type the below to create the /boot/config/zfs-zed/ directory:

     mkdir /boot/config/zfs-zed/

     And finally, copy the default zed.rc file into the new directory:

     cp /usr/etc/zfs/zed.d/zed.rc /boot/config/zfs-zed/

     You would then use nano to modify the zed.rc file if desired, to use unRAID notifications or configure email notifications:

     nano /boot/config/zfs-zed/zed.rc

     Find the two lines mentioned (ZED_EMAIL_PROG and ZED_EMAIL_OPTS) and modify as required; an excerpt of the stock settings is below. Reboot after the above changes for the settings to take effect.

     Side note: you can also turn on SMB sharing of the flash drive from the unRAID web GUI (Main -> Boot Device -> Flash -> SMB security settings -> Export: yes, Security: public), which allows you to access /boot/config over SMB - e.g. \\yourserver\flash\config - and then use whatever text editor you like to modify these files.
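     For reference, the relevant stock zed.rc lines look something like this - exact defaults vary by OpenZFS version, and the @SUBJECT@/@ADDRESS@ placeholders are substituted by zed itself:

     # /boot/config/zfs-zed/zed.rc (excerpt)
     ZED_EMAIL_ADDR="root"
     ZED_EMAIL_PROG="mail"
     ZED_EMAIL_OPTS="-s '@SUBJECT@' @ADDRESS@"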
  6. For anyone else seeing this: I had these errors in my logs as well, and found that there were orphaned .cfg files in /boot/config/shares for shares that no longer existed. After deleting them, the errors went away. A quick way to spot the orphans is sketched below.
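     Assuming the stock layout where each share has a matching directory under /mnt/user, something like this will list config files with no corresponding share:

     # Flag share .cfg files whose share directory no longer exists
     for cfg in /boot/config/shares/*.cfg; do
       share="$(basename "$cfg" .cfg)"
       [ -d "/mnt/user/$share" ] || echo "orphaned: $cfg"
     done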
  7. I've been happy with 2 of these boards, both running the i7-8700 CPU - one with a PCI-E GPU and one without. The performance for me has been great. One thing I wanted to note: I recently updated one of the boards to unRAID 6.7.2, and the server started segfaulting/hard-crashing a lot. I applied BIOS update 1.0c and so far, so good - though I'm not sure whether it was just a coincidence.
  8. Quick update for anyone else looking into this board: with BIOS revision 1.0a, activating the iGPU disabled any external GPU, which would no longer show up in lspci. After applying BIOS 1.0b, this now works. So it's possible to have the BMC VGA adapter for iKVM, the iGPU for hardware transcoding/passthrough, and at least one additional PCI-E GPU (I've only tested a single GPU).
  9. That's a Linux kernel option, so unfortunately I'm not sure how you would achieve this with a Windows server.
  10. I had a feeling the whole time that I was probably reinventing the wheel... but once I started I was determined to find a way 😂. A few posts back in this thread I posted the output of smartctl -A from a SAS drive. Totally agreed that it's a better approach to get data directly. Derived data is usually a recipe for disaster. Just takes a while to come to a boil.
  11. I am using the i7-8700 in my build, so you'll be good with that CPU. As long as you use the BIOS settings and syslinux.cfg listed in this thread, you will be able to expose the iGFX to unRAID, and should then be able to map the devices into the Emby Docker (a sketch of the mapping is below). This has been explained in other threads, so I'm just confirming that it should work with this specific hardware configuration.
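      The device mapping itself is just exposing the host's /dev/dri nodes to the container. A sketch in plain docker run terms rather than the unRAID template - the container name and image are examples:

      # Pass the Intel iGPU render nodes through for hardware transcoding
      docker run -d --name emby \
        --device /dev/dri:/dev/dri \
        emby/embyserver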
  12. Warning: I've never written anything in PHP before, but I modified the get_highest_temp function to read the temp from /var/local/emhttp/disks.ini. This allows the script to detect the temp of both SAS and SATA drives.

      function get_highest_temp($hdds) {
          global $hddignore;
          $ignore = array_flip(explode(',', $hddignore));
          $highest_temp = 0;
          // disks.ini is maintained by emhttp and contains a temp="NN" line for every drive
          $lines = explode("\n", file_get_contents('/var/local/emhttp/disks.ini'));
          $pattern = '/^temp="([0-9]+)"/';
          foreach ($hdds as $serial => $hdd) {
              if (array_key_exists($serial, $ignore)) {
                  continue;
              }
              // Find the line naming this device; its temp="..." line sits 6 lines below
              $line_number = 0;
              for ($line = 0; $line < count($lines); $line++) {
                  if (strpos($lines[$line], $hdd) !== false) {
                      $line_number = $line + 6;
                  }
              }
              // Guard against a missing or unparseable temp line
              if (preg_match($pattern, $lines[$line_number], $tempnum)) {
                  $temp = (int) $tempnum[1];
                  $highest_temp = max($temp, $highest_temp);
              }
          }
          debug("Highest temp is {$highest_temp}ºC");
          return $highest_temp;
      }
  13. Ok, did some further digging, and I think I found the issue preventing the script from reading my drive temperatures. The output of smartctl -A -n standby /dev/sdx on a SAS drive is different from that of a SATA drive, and thus the script can't parse the temperature. Here's sample output from a SAS drive:

      smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.18.20-unRAID] (local build)
      Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

      === START OF READ SMART DATA SECTION ===
      Current Drive Temperature:     37 C
      Drive Trip Temperature:        60 C

      Manufactured in week 14 of year 2015
      Specified cycle count over device lifetime:  50000
      Accumulated start-stop cycles:  95
      Specified load-unload count over device lifetime:  600000
      Accumulated load-unload cycles:  823
      Elements in grown defect list: 0

      Vendor (Seagate) cache information
        Blocks sent to initiator = 5494811087863808
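      For what it's worth, pulling the temperature out of the SAS-style output only takes a small parse; a sketch (the device path is an example):

      # SAS drives report "Current Drive Temperature: NN C" instead of a SMART attribute table
      smartctl -A -n standby /dev/sdb | awk '/Current Drive Temperature/ {print $4}'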
  14. I can confirm that Plex hardware transcoding continues to work with the above settings. Very happy with my setup now.