About ken-ji

  1. Maybe your workplace is intercepting those ports; they correspond to common internet services that many workplaces do not want employees to access directly.
  2. Do you have any PC that's wired? When your only client is wireless, the issue becomes nearly impossible to troubleshoot. As a possible alternative, try plugging the tower directly into the router and see if you still have issues.
  3. Yes. Typically you will only need rw or ro
  4. Well, by default the shares are still mountable; they will just be read-only. The rules allow you to grant write access to specific machines.
  5. NFS has no concept of passwords for security; it relies on network ACLs and filesystem ACLs. Security = Secure makes everything read-only. Security = Private requires a rule to allow a machine to read and write (cf. the linked post). A rule can also be set to block access. Finally, the mounting machine sends the accessing user ID and group ID to the server, which then performs a regular filesystem ACL check to determine which files and directories you can read or write.
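     As a sketch (assuming standard NFS exports-style rule syntax; the addresses are placeholders, adjust to your LAN), a Private share's rule granting one machine read/write and the rest of the subnet read-only might look like:

     ```
     192.168.1.100(rw) 192.168.1.0/24(ro)
     ```

     The server then still applies the filesystem ACL check on top of this, using the user/group IDs the client sends.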
  6. Hmm. I kind of assumed those were set, since you were running a VM/container host under a hypervisor. But yeah, that's the last thing most people will think of.
  7. Can you also show the outputs of docker ps -a and docker exec [container] ip route? The errors seem to be related to something else. Also, your unRAID server is in the same group of addresses you told Docker to use. This is not a problem yet, but it could become one when something decides to use the same address by chance (unless all your Docker containers will have static IPs).
  8. I have this: and these user scripts, installed via the User Scripts plugin:

     # slocateInitialize @ Array startup
     #!/bin/bash
     cat << EOF > /etc/updatedb.conf
     # /etc/updatedb.conf: slocate configuration file
     PRUNEFS="devpts NFS nfs afs proc smbfs autofs iso9660 udf tmpfs cifs"
     PRUNEPATHS="/dev /proc /sys /tmp /usr/tmp /var/tmp /afs /net /media /var/lib/docker /mnt/user0 /mnt/user /boot"
     export PRUNEFS
     export PRUNEPATHS
     EOF
     [ -f /mnt/user/appdata/slocate/slocate.db ] && cp -p /mnt/user/appdata/slocate/slocate.db /var/lib/slocate/slocate.db

     # slocateBackup @ Custom "0 6 * * *"
     #!/bin/bash
     cp -vp /var/lib/slocate/slocate.db /mnt/user/appdata/slocate/slocate.db

     What these extra scripts do is tell slocate where to search (and where not to search) after the plugin is installed, and back up the search database daily at 6am local time, about an hour after the default slocate cron job runs at 4am (AFAIK that's when it runs on my system).
  9. Actually, unRAID zeroes a new disk so that it doesn't have to rebuild parity. The mathematics of computing parity is as simple as: count the 1 bits at a given address across all data drives, and if the count is odd, write a 1 to the parity drive. So if you are adding a drive full of zeroes, parity is still valid; but if you are adding random data, parity needs to be recomputed, which means waking up all the drives and reading through all of them. I suppose they could have just read the data on the new drive and updated parity as if the new drive's data had just been written, but I'm sure zeroing a new drive was the simpler decision and step to take.
  10. I wonder whether that complex-password issue is a UD bug/limitation.
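     To illustrate the point above with a sketch (the byte values are made up; real parity is computed at every offset of every drive), odd-bit-count parity is just the XOR of the data bytes, and XOR-ing in a zeroed drive leaves it unchanged:

     ```shell
     #!/bin/bash
     # Hypothetical example: parity byte is the XOR of the data bytes
     # at the same offset on each data drive.
     d1=0xA5; d2=0x3C; d3=0x0F                   # bytes on 3 data drives
     parity=$(( d1 ^ d2 ^ d3 ))                  # odd 1-bit count -> bit is 1
     printf 'parity: 0x%02X\n' "$parity"         # prints: parity: 0x96

     # A freshly zeroed new drive contributes 0x00, so parity stays valid:
     printf 'with zeroed drive: 0x%02X\n' $(( parity ^ 0x00 ))   # still 0x96
     ```

     Adding a drive with random data instead would change the XOR at (almost) every offset, which is why parity must be rebuilt in that case.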
  11. Did you try mounting the Synology shares using the IP of the Synology rather than the name? ie // I ask because unless you have an internal DNS server resolving the name NAS to the IP, SMB mounting by name gets a little weird at times. Windows does the resolution with some helpful side channels, but it will drive unsuspecting users up the wall if you are not aware of it.
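     As a sketch (the IP, share name, and mount point are placeholders; assumes cifs-utils on a Linux client), mounting by IP looks like:

     ```
     mount -t cifs //192.168.1.50/share /mnt/nas -o username=myuser
     ```

     Using the raw IP takes name resolution out of the picture entirely, which is the quickest way to tell a DNS/NetBIOS problem apart from an SMB one.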
  12. Definitely not. unRAID requires direct (block-level) access to the drives. NAS drives (shares, actually) are file-level access devices, which is exactly the service unRAID provides from its own drives.
  13. High chance it will work (hardware RAID controller with simple JBOD). I suggest the parity and cache drives go inside the main case if you can, as the shared USB 3 bus might not be able to handle all the drives at the same time, severely degrading performance.
  14. Qualified yes. It really depends on what the JBOD is using to present multiple disks, and over what interface. SAS connectors (8088, 8644, etc.) can only present 4 drives per cable (unless expanders are in between). SATA connectors can only do 1 per cable, unless there's a Port Multiplier in use and your SATA controller supports Port Multipliers. USB can present only one device unless there are hubs, which share the bandwidth. Thunderbolt? Not really sure here. Others: like I said, it really depends on the protocol...
  15. USB 3.0 has some possible bandwidth issues once you use more than a few drives (most motherboards have only one controller, and that limits the overall bandwidth). Not sure about Thunderbolt; I haven't got one, and haven't seen any user with one and a drive or so connected. It should work... if the Thunderbolt card is supported by Linux (and LT decides to add the drivers).
Copyright © 2005-2017 Lime Technology, Inc. unRAID® is a registered trademark of Lime Technology, Inc.