Herdo

Members

  • Posts: 101
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed

Herdo's Achievements

  • Apprentice (3/14)
  • Reputation: 0
  • Community Answers: 1

  1. As I said in the title, I tried to replace my cache drive, as my current one has been showing some SMART errors. I followed the instructions, which say a "btrfs device replace will begin", but that never happened. I was left with a blank, unmounted drive that needed to be formatted. I formatted it, but it still didn't copy over the old cache drive's data. Now I'm stuck, because trying to remount my old cache drive tells me it will overwrite all data on the disk when I try to start the array. What do I do now?

     EDIT: Never mind about not being able to remount my original cache disk. I realized what I did wrong and was able to remount the old disk. I'm still not sure how to proceed with replacing the drive, though, as the instructions given in the FAQ don't seem to work.

     EDIT 2: Never mind again. I saw that in 6.9 this feature doesn't work automatically, so I followed the instructions to do it through the command line (see the sketch after this list) and it worked perfectly!
  2. Yes. I'm just saying, limit the scope of the exposed ports. If I understood your post correctly, you essentially opened every port on your router from 1 - 65535. Instead, designate one port: src port 34854 - 34854 and dst port 34854 - 34854, as an example. Then do the same in Deluge: change it from "use random port" to 34854, matching what you set on your router (a quick way to verify this is sketched after this list). Again, 34854 is just a random port I'm using as an example; you can set it to whatever you want.
  3. Exposing a Docker container to the internet isn't any less safe than simply exposing Deluge to the internet through an open port. That being said, no, that's not correct. You do not want to open every port to the internet. In Deluge select a port (or a small range of ports if you prefer, maybe 5 - 10 of them) and open only those. Then ensure nothing else will use those ports. What you've essentially done is told your router to accept any traffic from anywhere and forward it to your unRAID box. This is very bad, and you want to fix it immediately.

     EDIT: Also, in case you weren't aware, ports 0 - 1023 are what are known as "well-known ports", and those should be avoided. I'd just pick something in the tens of thousands.
  4. I just had to make a change to crontab because an old script was interfering with some recent changes I had made. Previously "crontab -l" displayed this:

     # If you don't want the output of a cron job mailed to you, you have to direct
     # any output to /dev/null. We'll do this here since these jobs should run
     # properly on a newly installed system. If a script fails, run-parts will
     # mail a notice to root.
     #
     # Run the hourly, daily, weekly, and monthly cron jobs.
     # Jobs that need different timing may be entered into the crontab as before,
     # but most really don't need greater granularity than this. If the exact
     # times of the hourly, daily, weekly, and monthly cron jobs do not suit your
     # needs, feel free to adjust them.
     #
     # Run hourly cron jobs at 47 minutes after the hour:
     47 * * * * /usr/bin/run-parts /etc/cron.hourly 1> /dev/null
     #
     # Run daily cron jobs at 4:40 every day:
     40 4 * * * /usr/bin/run-parts /etc/cron.daily 1> /dev/null
     #
     # Run weekly cron jobs at 4:30 on the first day of the week:
     30 4 * * 0 /usr/bin/run-parts /etc/cron.weekly 1> /dev/null
     #
     # Run monthly cron jobs at 4:20 on the first day of the month:
     20 4 1 * * /usr/bin/run-parts /etc/cron.monthly 1> /dev/null

     I found the old script located under /etc/cron.d/root, so I used the "replace crontab from file" function with "crontab root". This allowed me to use "crontab -e" to remove the old script and save. However, now when I use "crontab -l", it only displays the "root" file's crontab. It looks like this:

     # Generated docker monitoring schedule:
     10 */6 * * * /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/dockerupdate.php check &> /dev/null
     10 03 * * * /boot/config/plugins/cronjobs/medialist.sh >/dev/null 2>&1
     # Generated system monitoring schedule:
     */1 * * * * /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
     # Generated mover schedule:
     30 0 * * * /usr/local/sbin/mover &> /dev/null
     # Generated parity check schedule:
     0 3 1 * * /usr/local/sbin/mdcmd check &> /dev/null || :
     # Generated plugins version check schedule:
     10 */6 * * * /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugincheck &> /dev/null
     # Generated speedtest schedule:
     0 0 * * * /usr/sbin/speedtest-xml &> /dev/null
     # Generated array status check schedule:
     20 0 * * 1 /usr/local/emhttp/plugins/dynamix/scripts/statuscheck &> /dev/null
     # Generated unRAID OS update check schedule:
     11 0 * * * /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/unraidcheck &> /dev/null
     # Generated cron settings for plugin autoupdates
     0 0 * * * /usr/local/emhttp/plugins/ca.update.applications/scripts/updateApplications.php >/dev/null 2>&1

     I guess I just want to make sure this is OK and that it isn't going to mess anything up. Obviously the "root" file's crontab was working even though it wasn't loaded, so I'm guessing the hourly/daily/weekly/monthly scripts will still work, but I don't know. Am I correct in assuming crontab is just used to manage and display cron jobs, and that they will work regardless of which crontab file is loaded? (The sketch after this list shows a quick way to inspect both places.)

     EDIT: "crontab -d" and then a reboot reverted the crontab to the default settings.
  5. I just bought an E3-1275 v6 for my Supermicro X11SSM-F, upgrading from a G4400. That said, I'm planning on selling it and moving to a Ryzen 9 3900X or 3950X, depending on how much I want to spend when the 3950X launches. I've got two VMs running currently. Both are running Ubuntu Server 18.04; one with WireGuard/Deluge and the other with a highly customized Feed the Beast Minecraft server. Both are pinned to 1 core and 1 thread (the same pair), as I read I should be keeping cores and their threads together. Is this true (see the sibling-pair check sketched after this list)? I've also noticed in the CPU pinning settings that I can assign the same core/thread pair to two different VMs. Is this a good or bad idea? The reason I ask is that WireGuard and Deluge can really hammer those 2 CPUs when they are actively downloading, but that only happens maybe once a day or every other day, for 15 - 20 minutes. I think both the Minecraft server and the WireGuard/Deluge server would greatly benefit from having access to 4 CPUs (2 cores and 2 threads). Like I said, for 95% of the day it would mostly be the Minecraft server using the CPUs, so I don't think they'll be fighting over resources too much. Thanks in advance.
  6. Thank you. It's about to finish with the post-read, but I think I'll just do one at a time. I'm not in any huge rush or anything. Thanks again for the help!
  7. I just got myself 2 more 4TB Ultrastar drives and they are currently preclearing. Once that's done, what's the best way to go about adding them? I'm adding a second parity drive and another (5th) data drive. Should I add them both to the array at the same time, or one at a time? If one at a time, which order makes the most sense? I'm trying to avoid doing 2 parity rebuilds if possible, but I'm not sure whether that's an option. I know adding the second parity drive is going to need a parity rebuild, but I believe adding another data drive will as well. Thanks in advance!
  8. The Intel 4xxx series is no joke when it comes to single-core performance. I still have an i7-4790K that I refuse to upgrade, because for my gaming machine it's hard to beat. This is really going to come down to your use case. The two I'd be deciding between are the 4770K and the Threadripper. Generally, if all you're doing is running some Dockers and transcoding through Plex, I'd say go for the 4770K, although it sounds like you're using this for more than just a media server. I'm kind of in the same boat. I literally just bought (like two weeks ago) a new Xeon E3-1275 v6 and I think I'm going to sell it and upgrade to a Threadripper 2950X. Previously I had a G4400 and it worked wonderfully for Sonarr/Radarr/Plex/Syncthing/etc., but I've started to virtualize some things and I'm already wanting more than 4 cores. Like you, I've got a Minecraft server running on one VM as well as a VPN and Deluge running on a second VM, and I'm realizing the need for something beefier. That being said, if you aren't running any of this in a VM, the 4770K is probably plenty.
  9. I know there are plenty of guides on doing this, but I'm just wondering whether simply specifying a tag at VM creation, and then mounting that tag inside the VM, is the proper way to do it. The reason I ask is that, generally, you never want one disk mounted by two separate systems, correct? Doesn't that just guarantee file system corruption? Maybe I'm not fully understanding the process here, but after reading several guides I'm a bit worried about following this advice blindly. I'm trying to mount all of my shares, so is the best way to do this to specify each one separately, e.g. /mnt/user/Movies with tag "Movies"? Or can I just do /mnt/user/ with tag "shares"? (A sketch of the guest-side mount is after this list.)
  10. I have the official Plex docker container installed and I'm using the Live TV and DVR functionality. It's working great, but I'd like to be able to use the post-processing script functionality to encode the over-the-air recordings into something smaller and more compatible with my devices (H.264, MKV). I need to link a script that will run ffmpeg or HandBrakeCLI. I can install an ffmpeg docker container, but I'm not sure how to communicate between the two. My thinking is that I put the script somewhere accessible by both docker containers (somewhere like /boot/config/plugins/scripts) and then mount that directory in both the Plex Media Server docker and the ffmpeg docker as something like /scripts/. From there, in Plex Media Server, I would call the script with /scripts/myscript.sh, and then in the script itself I would use some sort of docker command to call ffmpeg within the other container? For instance: docker run dockerhubRepo/ffmpeg -i localfile.mp4 out.webm Am I on the right path here, or am I way off? My first thought was to just install ffmpeg onto the PMS docker container, but my understanding of docker containers is that when updated, they are completely wiped and reinstalled, which is why all the configs are saved in /appdata/, since that isn't touched when the image gets nuked. Obviously the script would be more complicated than that, but you get the idea. Any help would be appreciated.

      EDIT: OK, I figured it out. I've been testing it and I am getting an error 127 (key expired) on my Plex server. In testing I've learned that I can't pass spaces through to the docker container, which is a problem because of Plex's naming convention. There is no way for me not to have spaces or uppercase letters in my folder structure... I guess I'm back to square one here. I really wish I could just install ffmpeg directly onto the unRAID server.

      FINAL EDIT: I solved this by creating my own docker container. It's just the official plexinc/pms-docker image with ffmpeg installed as well (a sketch of the kind of wrapper script it can run is after this list). It's an automated build linked directly to the plexinc/pms-docker image, so you won't be reliant on me to update the container; a rebuild is triggered automatically whenever the official Plex docker is updated. https://hub.docker.com/r/herdo/plexffmpeg/
  11. Squid, thank you so much for the reply and I'm sorry it took so long to reply back. After looking at my shares settings I can see what you're talking about. I'm not sure how I didn't realize this, as I intentionally set it up myself, haha. Thanks again!
  12. Can someone explain to me what exactly this is, and why one of my users has "3" under the "write" column?
  13. Hey thanks for the reply. That makes perfect sense. I've been testing it periodically and it's still all good and my parents can stream without any buffering.
  14. OK, I've solved it. Disabling the "Static" IP setting in unRAID under "Network Settings", and then assigning the static IP from my router instead, seems to have completely solved the packet loss issue. I just did a test of 500 50-byte packets and had 0 lost packets (the exact ping invocation is sketched after this list). The speeds I'm getting to the unRAID server are still lower than expected, anywhere from 1/2 to 1/3 of what I get on my desktop, but at least the Plex stream should work now. The latency is also quite a bit higher. It's also possible the CLI version of speedtest tests differently. I'm not sure why this solved the issue; I just saw it mentioned in a thread where someone was having a similar problem. I'm guessing my router's DHCP server was getting confused? Thanks again for the help, bonienl.

      EDIT: Never mind about the latency and speeds being worse. I must have reset the server settings, and I wasn't comparing the same speedtest server between my desktop and server tests. They test about equal now.
  15. I had another NIC on my motherboard so I just tested that. I thought I fixed it because it seemed to work fine at first, but after a longer ping test I can see I am definitely still dropping packets. 24% packet loss as of the latest test. Is it possible the CPU can't keep up? I don't know how much the CPU would affect something like this, but it's definitely the weak link in my server. It's a Pentium G4400.
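
A minimal sketch of the command-line replacement mentioned in post 1, assuming the failing cache device is /dev/sdX1, the new device is /dev/sdY1, and the pool is mounted at /mnt/cache (all hypothetical names; the exact procedure in the Unraid FAQ may differ, so check your own devices first):

    # show the devices currently in the cache pool
    btrfs filesystem show /mnt/cache
    # replace the old device with the new one in place; -r prefers reading
    # from other pool members rather than the failing disk when possible
    btrfs replace start -r /dev/sdX1 /dev/sdY1 /mnt/cache
    # watch progress until it reports finished
    btrfs replace status /mnt/cache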
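
To sanity-check the port-forwarding advice in posts 2 and 3, something like the following can confirm that only the designated port is reachable; 34854 is just the example port from those posts, and your.wan.example is a placeholder for your public address:

    # from a machine outside your LAN: should only succeed on the forwarded port
    nc -zv your.wan.example 34854
    # on the unRAID box: confirm nothing else is already listening on that port
    ss -tulnp | grep 34854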
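
For the crontab question in post 4, here is a quick way to compare the two sources of scheduled jobs. Whether the /etc/cron.d fragments are read directly or get merged into root's crontab depends on the cron implementation in use, so treat this as a way to inspect both rather than a statement of how Unraid wires them together:

    # the per-user crontab that "crontab -e" edits and "crontab -l" prints
    crontab -l
    # drop-in schedule fragments installed by plugins and scripts
    ls -l /etc/cron.d/
    cat /etc/cron.d/root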
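
On the pinning question in post 5: hyper-threaded siblings share a physical core, which is why the usual advice is to pin a VM to core/thread pairs. A quick way to see which logical CPUs are siblings on the host (the output depends on your CPU):

    # the CORE column groups hyper-threaded siblings together
    lscpu --extended
    # or per logical CPU: "2,6" means CPUs 2 and 6 share one physical core
    cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list

Assigning the same pinned pair to two VMs is allowed; they simply time-slice that core, which is usually fine if both are rarely busy at the same time.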
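
For post 9, a hedged sketch of what mounting a share tag typically looks like inside a Linux guest, assuming the VM exposes the share over 9p/virtio and the tag is named "shares" (match whatever tag you set at VM creation; /mnt/shares is a hypothetical mount point):

    # one-off mount inside the guest (requires the guest kernel's 9p/virtio modules)
    mkdir -p /mnt/shares
    mount -t 9p -o trans=virtio,version=9p2000.L shares /mnt/shares
    # or persistently, with a line like this in /etc/fstab:
    # shares  /mnt/shares  9p  trans=virtio,version=9p2000.L,_netdev  0  0

For what it's worth, a 9p tag isn't the same as two systems mounting one block device: the host keeps ownership of the filesystem and the guest only talks to it through the 9p protocol, so the dual-mount corruption worry in the post shouldn't apply.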
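
For post 10, a minimal sketch of a DVR post-processing wrapper under the final approach (ffmpeg baked into the same container as Plex). The paths and codec choices are illustrative, not taken from the actual image; quoting "$1" is what keeps the spaces in Plex's file names from breaking the call:

    #!/bin/bash
    # Plex passes the full path of the finished recording as the first argument
    in="$1"
    out="${in%.*}.mkv"
    # transcode the recording to H.264 in an MKV container, copying the audio,
    # and remove the original only if the encode succeeded
    ffmpeg -i "$in" -c:v libx264 -preset veryfast -crf 21 -c:a copy "$out" \
        && rm "$in"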
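
The packet-loss test described in posts 14 and 15 can be reproduced with plain ping; the address below is a placeholder for the server's IP, and -s sets the payload size, so "50-byte packets" here means a 50-byte payload:

    # 500 pings with a 50-byte payload; the summary line reports % packet loss
    ping -c 500 -s 50 192.168.1.100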