Tuftuf

Everything posted by Tuftuf

  1. I have an older Unraid version (6.9.1), but it's been stable for the purpose it serves. Installing any game server docker fails at updating SteamCMD. I've used these containers before and it has always been straightforward; I had this docker running on another server I shut down, so I expected to start it up easily on this machine. The focus was on installing ich777/steamcmd:valheim, but I tried at least 5 others. The console log only shows:

     ---Ensuring UID: 99 matches user---
     ---Ensuring GID: 100 matches user---
     ---Setting umask to 000---
     ---Checking for optional scripts---
     ---No optional script found, continuing---
     ---Taking ownership of data...---
     ---Starting...---
     SteamCMD not found!
     steamcmd.sh
     linux32/steamcmd
     linux32/steamerrorreporter
     linux32/libstdc++.so.6
     linux32/crashhandler.so
     ---Update SteamCMD---

     If I use the console on the Valheim docker and run steamcmd manually, I get the following error:

     root@b6c7065c4195:/serverdata/steamcmd# ./steamcmd.sh
     Redirecting stderr to '/root/Steam/logs/stderr.txt'
     threadtools.cpp (3409) : Assertion Failed: Failed to create thread (error 0x1)

     If I install steamcmd/steamcmd from the command line using the CS:GO example, only changing the local paths, this error keeps popping up in the log:

     src/clientdll/cminterface.cpp (2861) : Assertion Failed: m_VecNetAdrNetFilterCMs.Count() > 0

     I also tried cm2network/steamcmd, which seemed to install and download all the initial Steam files. Am I missing something really simple here?
  2. I've gone to the effort of building an 11th-gen NUC system to replace my main Unraid server, partly due to needing to troubleshoot this bug! The 11th gen was a bit of a failure, as driver support for 11th-gen CPU Quick Sync seems shocking right now. So I found a cheap used 10400T mini PC! Everything is running from there as of last night. Now it's time to unplug some hard drives and start testing this bug again! Good (well, bad, but good) to see others are still seeing the issue.
  3. @DZMM I moved over from plexguide to your script over a year ago. Using the old version of the script without cache settings works as expected. If I use the new version with the cache defined, I get an extra folder created within my mount point with the same name as my mount point. Am I missing something, or is the configuration below valid? The paths have all changed as I moved it to a new system. I'm not certain whether I want the cache setting or not, but I dislike the new script not working correctly for me; I've read before that the cache was not being maintained within the rclone code. I've also always been mounting mine as gdrive & tdrive. Looking at it again recently, I see I don't ever use the gdrive sections and they don't seem to be required.

     0.96.4

     # REQUIRED SETTINGS
     RcloneRemoteName="tcrypt" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
     LocalFilesShare="/mnt/storage/firefly/localfs" # location of the local files you want to upload without trailing slash to rclone e.g. /mnt/user/local
     RcloneMountShare="/mnt/storage/firefly/rclonefs" # where your rclone remote will be located without trailing slash e.g. /mnt/user/mount_rclone
     MergerfsMountShare="/mnt/storage/cloudfs" # location without trailing slash e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
     DockerStart="" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART in docker settings page
     MountFolders=\{"movies,tv"\} # comma separated list of folders to create within the mount

     0.96.9.2

     # REQUIRED SETTINGS
     RcloneRemoteName="tcrypt" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
     RcloneMountShare="/mnt/storage/firefly/rclonefs" # where your rclone remote will be located without trailing slash e.g. /mnt/user/mount_rclone
     RcloneMountDirCacheTime="720h" # rclone dir cache time
     LocalFilesShare="/mnt/storage/firefly/localfs" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
     RcloneCacheShare="/mnt/storage/firefly/rclone_cache" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
     RcloneCacheMaxSize="250G" # Maximum size of rclone cache
     RcloneCacheMaxAge="336h" # Maximum age of cache files
     MergerfsMountShare="/mnt/storage/cloudfs" # location without trailing slash e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
     DockerStart="" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
     MountFolders=\{"movies,tv"\} # comma separated list of folders to create within the mount

     I have gdrive & gcrypt; I carried the config over but recently noticed I don't use them or even mount them. OK to remove? Do you use gdrive or just team drives (now shared drives)? I'm missing scope = drive, but it's the default option (just checked).

     [gdrive]
     client_id = clientid@google
     client_secret = AAAAAAAAAAAAAAAAA
     type = drive
     token = {"access_token":""}

     [gcrypt]
     type = crypt
     remote = gdrive:/encrypt
     filename_encryption = standard
     directory_name_encryption = true
     password = PASS1
     password2 = PASS2

     [tdrive]
     client_id = clientid@google
     client_secret = AAAAAAAAAAAAAAAAAAAA
     type = drive
     token = {""}
     team_drive = AAAAAAAAAAAAAAAAAAA

     [tcrypt]
     type = crypt
     remote = tdrive:/encrypt
     filename_encryption = standard
     directory_name_encryption = true
     password = PASS3
     password2 = PASS4
  4. I'm setting up another system and changing how my paths are arranged. The main question here is: are people using the cache setting? I'm reading on other forums and elsewhere that the cache setting shouldn't be needed, and hasn't been for a long time, since ranged GETs were added. Do I need this cache mount? Can I just remove the 3 lines defining it? /mnt/storage is an SSD cache pool. EDIT - I have changed /mnt/remotes/rclonefs to be on the SSD. I was going to place the rclone mount in /mnt/remotes, as I expected it to be a read-only, remotely mounted filesystem.

     RcloneMountShare="/mnt/storage/firefly/rclonefs" # where your rclone remote will be located without trailing slash e.g. /mnt/user/mount_rclone
     RcloneMountDirCacheTime="720h" # rclone dir cache time
     LocalFilesShare="/mnt/storage/firefly/localfs" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
     RcloneCacheShare="/mnt/storage/firefly/rclone_cache" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
     RcloneCacheMaxSize="250G" # Maximum size of rclone cache
     RcloneCacheMaxAge="336h" # Maximum age of cache files
     MergerfsMountShare="/mnt/storage/cloudfs" # location without trailing slash e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
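     For my own reference, here's a minimal manual mount sketch of how I understand those three lines map onto the usual rclone VFS cache flags - this is an assumption on my part about what the script passes through, not its exact command, and the remote/paths are just the ones from my config above:

     # With the VFS cache effectively off, reads are streamed with ranged requests
     # and the cache share/size/age settings are not used:
     rclone mount tcrypt: /mnt/storage/firefly/rclonefs \
       --dir-cache-time 720h \
       --vfs-cache-mode off

     # With a full VFS cache, the three cache settings come into play:
     rclone mount tcrypt: /mnt/storage/firefly/rclonefs \
       --dir-cache-time 720h \
       --vfs-cache-mode full \
       --cache-dir /mnt/storage/firefly/rclone_cache \
       --vfs-cache-max-size 250G \
       --vfs-cache-max-age 336h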
  5. Can you expand on what you mean by sharing the same interface and virtio-net? I'm trying to understand whether this is or isn't being viewed as a bug. I try to keep inter-VLAN routing to a minimum, as Unraid is in my shed and the router is in the house. What I'm doing seems a very simple configuration, and these errors started happening when upgrading to 6.9.x. The solution shouldn't be to not run a VM or Docker instance on br0. I'm running 1 VM on my system and it's in a separate VLAN (br0.50). *EDIT* Wow, I did not expect to see posts going back to 2018 talking about this and solutions. I know that saying I've never had an issue doesn't mean I didn't miss one, but prior to 6.9 I had a stable system for months. How has this only just started affecting me? I've always had core internal services on br0 and external stuff on other VLANs.
  6. Thanks for confirming you are able to run it on another VLAN. It doesn't fit with how I have things configured, but I can look at moving these to another VLAN.
  7. I believe I've been seeing the same issue since I upgraded to 6.9.0; I have not tried 6.9.1 yet. Following a thread in a Facebook group, someone else has seen this issue on both 6.9.0 and 6.9.1. After being pointed to macvlan issues, he found the errors in his syslog, similar to those attached here. He believes it's related to anything that is assigned a static IP on br0.
  8. Thanks. I have no idea at this point, other than it's stopping at a vfio stage on a GPU that is used for passthrough, but that seems completely unrelated at this point and the VM is not set to auto-start. I recently added all the drives from my main server into this one, so I'm trying to get everything working right before I leave it alone again.
  9. I'm trying to find out why this system will not shut down correctly. Any suggestions on where to look for what is keeping the mount active? I need to use fusermount -uz /mnt/user for the stop command to complete. EDIT - Added the Open Files plugin. Also, it stops here at login but otherwise works OK. firefly-diagnostics-20200618-1826.zip
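     In case it helps anyone else, these are the standard commands I'd use to see what is holding the mount open before the array stops (nothing Unraid-specific, just fuser/lsof):

     fuser -mv /mnt/user   # list processes with files open on the mounted filesystem
     lsof /mnt/user        # show which files on the mount are open, and by which process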
  10. The system is working, but the boot-up stops here. Just checking that this is not normal? If I go the GUI route, the desktop does appear. Thanks,
  11. The issue is that I can no longer boot from USB; I think if I take the NVMe out, everything will be back to normal. I installed Windows as a VM with the NVMe passed through, and this was working great until I rebooted the machine. It rebooted into Windows; following that, I rebooted it myself, selected the USB SanDisk, and Unraid started to boot. I wish I'd taken a screenshot, but it locked up, possibly showing vfio errors. After rebooting again I can't boot from the USB stick and can't see the boot manager anymore. I created a new USB stick and can't boot from that either. On the flip side, Windows runs really fast, and it's the first time I've really run it on bare metal with anything installed. I've ordered an LSI card and plan to move the disks into my other server, and then I'll see what actually happened here.
  12. I use VLANs at home, and this caused all the traffic to leave via the management address even after binding the rclone upload script to an interface. The fix was to add a second routing table and routes for the IP I assigned to it. The subnet is 192.168.100.0/24, the gateway is 192.168.100.1, and the IP assigned to the rclone upload is 192.168.100.90.

     echo "1 rt2" >> /etc/iproute2/rt_tables
     ip route add 192.168.100.0/24 dev br0.100 src 192.168.100.90 table rt2
     ip route add default via 192.168.100.1 dev br0.100 table rt2
     ip rule add from 192.168.100.90/32 table rt2
     ip rule add to 192.168.100.90/32 table rt2
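     To check it took effect, the usual iproute2 commands work (the last line assumes 192.168.100.90 is configured locally on br0.100, which it needs to be for the bind to work):

     ip rule show                                  # should list the from/to 192.168.100.90 rules
     ip route show table rt2                       # should show the subnet and default routes above
     ip route get 8.8.8.8 from 192.168.100.90      # should resolve via 192.168.100.1 on br0.100

     Note these are runtime changes and don't survive a reboot, so they would need to be re-applied at boot.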
  13. My array was not stopping and I blamed this when I couldn't quite work out where the fusermount command was; I'll have to see if there is something else causing it not to stop, as it looks to be unrelated. I don't plan on stopping it just yet - it's serving its purpose. The main focus is getting things ready to back it all up.
  14. @watchmeexplode5 It's good to see someone else state that we didn't need all the extra mount points. I also used plexguide for a while; Plex left my Unraid system for about a year. I'm not having much luck with the unmount script on array stop, having to manually run fusermount -uz each time. I've let people start using Plex again, so I don't plan to stop it again just yet.
  15. @DZMM Great info, thanks - it'll help when I finish this, as I need to move some disks around.
  16. It's time to think. I previously moved my whole Plex and related setup to a hosted dedicated server (1Gb/1Gb), as with gdrive my upload on a 400/35 connection is not good enough to keep up. Cost-wise it would now work out around the same for me to upgrade to a business connection, which gives me options of 400/200 or 750/375. I recently built a 2-in-1 gaming PC on a 7700, and since I've had an Intel CPU begging me to use Quick Sync, I've been looking at options to bring my Plex setup back home. Right now it only has 1 SSD and 1 NVMe, but that will change soon. Did you place your pool as LocalFilesShare="/mnt/disks/NVMEpool" and the array as LocalFilesShare2="/mnt/user/local"? I'm checking whether it's just a case of placing the shares in the order you want them to be used, or whether there was more to it. A sketch of how I read the ordering is below.
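     As a plain mergerfs illustration of how I read it (not the script's exact command, and the branch/mount paths are just my own from earlier posts): with a first-found create policy, new files land on the first branch listed, so the pool would be the write target and the array the fallback.

     mergerfs -o rw,allow_other,category.create=ff \
       /mnt/disks/NVMEpool:/mnt/user/local:/mnt/storage/firefly/rclonefs \
       /mnt/storage/cloudfs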
  17. @DZMM Do you mount the mergerfs within /mnt/user? I had read some recommendations to place it in /mnt/disks and then use the RW,Slave option for dockers, but I'm not certain whether that was old information or not. Previously I used service accounts, but I've not set that up here. Is the 750GB limit an upload limit, or does it include streaming (or is that just the API limit)? I don't expect to be uploading more than 750GB per day. I have some concerns that my array may not keep up with downloads, extraction, etc. I thought about putting the 'local' mount point on an NVMe/SSD. Have you or anyone else done such configurations? I have almost everything working (*Plex is misbehaving). I added the mergerfs mount to /user within docker, and Plex scanned all the movies and TV overnight. However, I now can't access the Plex UI locally; accessing movies and files is fine, and accessing from Plex.tv is fine. Accessing the Plex UI directly gives connection closed or timed out. @neow I only just started using this whole process on Unraid; I started on the original scripts and then moved onto the new ones. I only had to change the settings near the top of the new scripts to match my requirements, and used the same paths and names within the different scripts. Then it worked. I believe it's referring to the name of your share within rclone.conf.
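     For anyone wondering, as far as I understand the RW,Slave option boils down to slave bind propagation on the container path mapping. A rough docker-run equivalent would be something like this (the container name, image and paths are just placeholders for illustration):

     docker run -d --name plex \
       -v /mnt/disks/cloudfs:/media:rw,slave \
       plexinc/pms-docker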
  18. Thank you, that's a really good start.

     root@Firefly:~# rclone lsd tcrypt:
     -1 2019-04-09 20:41:53        -1 movies

     I was copying the encrypted part from it, but it has many service accounts defined as well, so starting fresh seemed good - finally trying to understand this. Yes, I'm using Unraid and the rclone plugin. I guess I can look into the next bit now.
  19. Thanks, I've seen that now, but I'm still getting stuck at almost the first step. I tried a working client ID/key to test and also created a new one. I completed the remote auth and provided the response, and selected the correct team drive once it was listed. But verifying the mount fails:

     root@Firefly:~# rclone lsd tdrive
     2020/03/06 22:14:06 ERROR : : error listing: directory not found
     2020/03/06 22:14:06 Failed to lsd with 2 errors: last error was: directory not found
  20. I'm already using rclone encrypted with a tdrive on another OS/app (plexguide), but I'm just not quite following how to mount my library on Unraid. Watching the video, there are some differences - it could just be that my head is hurting. What goes as the root folder? I can make the service keys, and I have another system whose rclone.conf I can look at that mounts this tdrive. I just need to get the final pieces together to get it mounted on Unraid.

     root@Firefly:~# rclone config
     No remotes found - make a new one
     n) New remote
     s) Set configuration password
     q) Quit config
     n/s/q> n
     name> 13
     Type of storage to configure.
     Enter a string value. Press Enter for the default ("").
     Choose a number from below, or type in your own value
     Storage> 13
     client_id> 1.....................................apps.googleusercontent.com
     Google Application Client Secret
     Setting your own is recommended.
     Enter a string value. Press Enter for the default ("").
     client_secret> .......................
     Scope that rclone should use when requesting access from drive.
     Enter a string value. Press Enter for the default ("").
     Choose a number from below, or type in your own value
      1 / Full access all files, excluding Application Data Folder.
        \ "drive"
     scope> 1
     ID of the root folder
     Enter a string value. Press Enter for the default ("").
     root_folder_id>

     Some progress. Current remotes:

     Name                 Type
     ====                 ====
     tcrypt               crypt
     tdrive               drive
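     For reference, the kind of rclone.conf this should end up with looks roughly like the following - IDs, token and passwords are placeholders, matching the tdrive/tcrypt entries shown in item 3 above. With a team drive selected, team_drive is filled in and root_folder_id can normally be left empty:

     [tdrive]
     type = drive
     client_id = <client id>.apps.googleusercontent.com
     client_secret = <client secret>
     scope = drive
     token = {"access_token":"..."}
     team_drive = <team drive id>

     [tcrypt]
     type = crypt
     remote = tdrive:/encrypt
     filename_encryption = standard
     directory_name_encryption = true
     password = <obscured password>
     password2 = <obscured salt>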
  21. I've only just started following this for running it on Unraid, and have mainly seen everything about getting the gdrive side of things set up. Can someone point me to something to read on setting up tdrives on Unraid? I have a media collection already in a tdrive that I'm looking to mount here instead. Edit - reading the 5/2/20 UPDATE now!
  22. After spending a year or so enjoying passthrough VMs on Ryzen, I had a good idea of what I could get out of it. Both users access all the games via Steam clients or Steam Link type devices. I'm only looking for around 60fps and really not looking at 4K gaming or anything. I ended up building a system with the following:

     X-Case XK445S 4U
     7700K
     32GB RAM
     1070 Ti
     1080 Ti
     Array disk - 128GB SSD (this is just a temp solution; I'll put some disks in it at some point)
     Unassigned disk - Sabrent 1TB Rocket NVMe PCIe M.2 2280

     I know it's not advised to share cores between VMs, however:

     VM1: 1070 Ti, cores 2+6 & 3
     VM2: 1080 Ti, cores 1+5 & 7

     Currently getting 60 FPS in Quake Champions, Overwatch, Minecraft & C&C RA 3. The GPUs will be CPU-limited, but they do work. Same results on both machines; I need more testing to see if there are any real slowdowns.
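     For anyone copying the pinning, a rough virsh equivalent of the VM1 layout above (the domain name is just an example; on Unraid the same thing is normally set through the VM template's CPU pinning):

     virsh vcpupin VM1 0 2   # guest vCPU 0 -> host thread 2
     virsh vcpupin VM1 1 6   # guest vCPU 1 -> host thread 6 (HT sibling of 2)
     virsh vcpupin VM1 2 3   # guest vCPU 2 -> host thread 3 (its sibling, 7, goes to VM2, so that core is shared)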
  23. I think I've been told this before. I should have checked the system I put the RAM into - it shows the same thing. Thanks. My bad.
  24. I removed 32GB of RAM from the system (2x16GB), and now the dashboard shows "Maximum size 64 GiB". The RAM was 64GB, and is now 32GB.
  25. I went with a cheaper option due to needing a spare computer at a later date to run CCTV somewhere; for the moment I'll try running two VMs. This lets me sit and plan a new build properly. I managed to get a 7700K, RAM, mobo and cooler for a reasonable price. I already had a spare board and intend to take 32GB out of my Ryzen system. It'll be a 7700K with either an Asus Prime-A 270 or an Asus Strix 270F - I will have both boards. I'll get to test how well the VMs run; I'm hoping to get away with giving each VM 1 real core and splitting a 3rd between them, which may well fail since he has 6 vcores on the Ryzen system. One good thing about this is freeing up my Ryzen system for other tasks.