darrenyorston

Members
  • Posts: 321
  • Gender: Undisclosed


darrenyorston's Achievements

Contributor (5/14) · Reputation: 3

  1. Hello. Yes, I can reinstall the container; it is a brand new install, though. Interestingly, there remains a gluetun folder within appdata, but it only contains a servers.json file. Are there any other unRAID logs which would record the container's deletion? According to the system log, GluetunVPN was updated at 02:01:22 on Jan 16 and restarted at 02:01:44, and Community Applications ran its auto-update at 02:01:45. Could the Community Applications update have removed the container?
  2. Hello. I have been using GluetunVPN successfully for a while now. However, this morning when I went to use a browser which was using GluetunVPN I received an error. Upon looking at my containers, it appears the container has been removed from my system, though I did not do it. Looking at the unRAID logs I can see the following:

     Jan 16 01:00:01 Tower Plugin Auto Update: Checking for available plugin updates
     Jan 16 01:00:06 Tower Plugin Auto Update: unassigned.devices.plg version 2022.01.15 does not meet age requirements to update
     Jan 16 01:00:06 Tower Plugin Auto Update: Checking for language updates
     Jan 16 01:00:06 Tower Plugin Auto Update: Community Applications Plugin Auto Update finished
     Jan 16 02:00:01 Tower Docker Auto Update: Community Applications Docker Autoupdate running
     Jan 16 02:00:01 Tower Docker Auto Update: Checking for available updates
     Jan 16 02:01:19 Tower Docker Auto Update: Stopping Thunderbird
     Jan 16 02:01:21 Tower kernel: br-92a4b7e2cd0e: port 15(vetha2f1c80) entered disabled state
     Jan 16 02:01:21 Tower kernel: veth48e0c6f: renamed from eth0
     Jan 16 02:01:21 Tower kernel: br-92a4b7e2cd0e: port 15(vetha2f1c80) entered disabled state
     Jan 16 02:01:21 Tower kernel: device vetha2f1c80 left promiscuous mode
     Jan 16 02:01:21 Tower kernel: br-92a4b7e2cd0e: port 15(vetha2f1c80) entered disabled state
     Jan 16 02:01:21 Tower Docker Auto Update: Stopping GluetunVPN
     Jan 16 02:01:22 Tower kernel: docker0: port 1(veth33f7aaf) entered disabled state
     Jan 16 02:01:22 Tower kernel: vethc33b19e: renamed from eth0
     Jan 16 02:01:22 Tower kernel: docker0: port 1(veth33f7aaf) entered disabled state
     Jan 16 02:01:22 Tower kernel: device veth33f7aaf left promiscuous mode
     Jan 16 02:01:22 Tower kernel: docker0: port 1(veth33f7aaf) entered disabled state
     Jan 16 02:01:22 Tower Docker Auto Update: Stopping redis
     Jan 16 02:01:22 Tower kernel: br-92a4b7e2cd0e: port 9(veth5b87852) entered disabled state
     Jan 16 02:01:22 Tower kernel: veth6afa664: renamed from eth0
     Jan 16 02:01:22 Tower kernel: br-92a4b7e2cd0e: port 9(veth5b87852) entered disabled state
     Jan 16 02:01:22 Tower kernel: device veth5b87852 left promiscuous mode
     Jan 16 02:01:22 Tower kernel: br-92a4b7e2cd0e: port 9(veth5b87852) entered disabled state
     Jan 16 02:01:22 Tower Docker Auto Update: Installing Updates for Thunderbird GluetunVPN redis
     Jan 16 02:01:44 Tower Docker Auto Update: Restarting Thunderbird
     Jan 16 02:01:44 Tower kernel: br-92a4b7e2cd0e: port 9(vethf1f6747) entered blocking state
     Jan 16 02:01:44 Tower kernel: br-92a4b7e2cd0e: port 9(vethf1f6747) entered disabled state
     Jan 16 02:01:44 Tower kernel: device vethf1f6747 entered promiscuous mode
     Jan 16 02:01:44 Tower kernel: eth0: renamed from veth15ea6ed
     Jan 16 02:01:44 Tower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethf1f6747: link becomes ready
     Jan 16 02:01:44 Tower kernel: br-92a4b7e2cd0e: port 9(vethf1f6747) entered blocking state
     Jan 16 02:01:44 Tower kernel: br-92a4b7e2cd0e: port 9(vethf1f6747) entered forwarding state
     Jan 16 02:01:44 Tower Docker Auto Update: Restarting GluetunVPN
     Jan 16 02:01:44 Tower Docker Auto Update: Restarting redis

     There is no mention of GluetunVPN being removed; however, it is no longer showing as installed. How could this happen? I am a little concerned that modifications to my server could happen this way, without at least being prompted.
  3. I ended up solving the issue of drives not showing up. I edited the scrutiny.yaml file in appdata. There is a field called "disks:"; I added all my drives there and they showed up straight away in the UI. I don't know why, but editing the drives in the docker template makes no difference. The UI always shows what's in the .yaml file. (A hedged sketch of this edit appears after this list.)
  4. Hello all. Following up on a previous post regarding Scrutiny: how do you get it to show all the drives in the array? Mine only shows three, though one popped up after a week or so of running. My config has /dev/sda, /dev/sdb, /dev/nvme1n1, and /dev/nvme2n1 in the template, but when I start the UI only /dev/sdb, /dev/sdh, and /dev/sdi are displayed. Is there another config somewhere I have to change to have it show all 11 drives?
  5. Still trying to work out my ongoing issues with VMs on unRAID. At the moment I have been able to get my VMs to stop freezing. I have been disabling/enabling the VM manager and rebooting. It has taken a few restarts, but at the moment the VMs seem not to freeze. Though I am now having an issue where the mouse does not function correctly when passed through. Everything works fine when a VM boots into the live CD, but once I restart the VM I find that my mouse activates on the opposite screen (I have two) to where the mouse pointer is. I can open the menu on my left screen by clicking in the position where the mouse would be on the other screen. Anyone else experienced this and found a solution? I have tried a variety of Linux VMs and it's always the same.
  6. Anyone able to explain how to get the container to see all the drives on my system? At the moment it will only show the two in the /dev/sda and /dev/sdb fields, and it will not show my NVMe drives either.
  7. I had checked the cabling for each drive. They didn't appear to be out of place; they didn't move when I pressed them. And the system had been restarted, as I have been trying to resolve an issue with VMs randomly freezing.
  8. One of the spinning disks in my array has been reporting errors. Recently it went offline, so I replaced the disk. After the new disk was rebuilt, I ran preclear on the failed disk. Unassigned Devices is reporting that the preclear finished successfully. I can't see anything notable in the log file, though I do not really know what would be considered a problem. I have attached the log. As a result, I am wondering about the reliability of disk error reporting. Is the disk good or bad? What should I have confidence in: the array reporting problems and taking the disk offline, or the preclear check?

     Oct 31 00:50:30 preclear_disk_WD-WCC4N1AKP9J6_127711: Post-Read: dd if=/dev/sdc of=/tmp/.preclear/sdc/fifo count=2096640 skip=512 iflag=nocache,count_bytes,skip_bytes
     Oct 31 00:50:31 preclear_disk_WD-WCC4N1AKP9J6_127711: Post-Read: verifying the rest of the disk.
     Oct 31 00:50:31 preclear_disk_WD-WCC4N1AKP9J6_127711: Post-Read: cmp /tmp/.preclear/sdc/fifo /dev/zero
     Oct 31 00:50:31 preclear_disk_WD-WCC4N1AKP9J6_127711: Post-Read: dd if=/dev/sdc of=/tmp/.preclear/sdc/fifo bs=2097152 skip=2097152 count=3000590884864 iflag=nocache,count_bytes,skip_bytes
     Oct 31 01:23:48 preclear_disk_WD-WCC4N1AKP9J6_127711: Post-Read: progress - 10% verified @ 148 MB/s
     Oct 31 01:58:04 preclear_disk_WD-WCC4N1AKP9J6_127711: Post-Read: progress - 20% verified @ 141 MB/s
     Oct 31 02:33:47 preclear_disk_WD-WCC4N1AKP9J6_127711: Post-Read: progress - 30% verified @ 135 MB/s
     Oct 31 03:11:30 preclear_disk_WD-WCC4N1AKP9J6_127711: Post-Read: progress - 40% verified @ 129 MB/s
     Oct 31 03:51:17 preclear_disk_WD-WCC4N1AKP9J6_127711: Post-Read: progress - 50% verified @ 120 MB/s
     Oct 31 04:33:51 preclear_disk_WD-WCC4N1AKP9J6_127711: Post-Read: progress - 60% verified @ 114 MB/s
     Oct 31 05:19:12 preclear_disk_WD-WCC4N1AKP9J6_127711: Post-Read: progress - 70% verified @ 106 MB/s
     Oct 31 06:08:59 preclear_disk_WD-WCC4N1AKP9J6_127711: Post-Read: progress - 80% verified @ 94 MB/s
     Oct 31 07:04:27 preclear_disk_WD-WCC4N1AKP9J6_127711: Post-Read: progress - 90% verified @ 84 MB/s
     Oct 31 08:08:01 preclear_disk_WD-WCC4N1AKP9J6_127711: Post-Read: dd - read 3000592982016 of 3000592982016 (0).
     Oct 31 08:08:01 preclear_disk_WD-WCC4N1AKP9J6_127711: Post-Read: elapsed time - 7:17:28
     Oct 31 08:08:01 preclear_disk_WD-WCC4N1AKP9J6_127711: Post-Read: dd exit code - 0
     Oct 31 08:08:02 preclear_disk_WD-WCC4N1AKP9J6_127711: Post-Read: post-read verification completed!
     Oct 31 08:08:06 preclear_disk_WD-WCC4N1AKP9J6_127711: S.M.A.R.T.: Cycle 1
     Oct 31 08:08:06 preclear_disk_WD-WCC4N1AKP9J6_127711: S.M.A.R.T.: ATTRIBUTE INITIAL NOW STATUS
     Oct 31 08:08:06 preclear_disk_WD-WCC4N1AKP9J6_127711: S.M.A.R.T.: Reallocated_Sector_Ct 0 0 -
     Oct 31 08:08:06 preclear_disk_WD-WCC4N1AKP9J6_127711: S.M.A.R.T.: Power_On_Hours 47106 47128 Up 22
     Oct 31 08:08:06 preclear_disk_WD-WCC4N1AKP9J6_127711: S.M.A.R.T.: Temperature_Celsius 34 31 Down 3
     Oct 31 08:08:06 preclear_disk_WD-WCC4N1AKP9J6_127711: S.M.A.R.T.: Reallocated_Event_Count 0 0 -
     Oct 31 08:08:06 preclear_disk_WD-WCC4N1AKP9J6_127711: S.M.A.R.T.: Current_Pending_Sector 0 0 -
     Oct 31 08:08:06 preclear_disk_WD-WCC4N1AKP9J6_127711: S.M.A.R.T.: Offline_Uncorrectable 0 0 -
     Oct 31 08:08:06 preclear_disk_WD-WCC4N1AKP9J6_127711: S.M.A.R.T.: UDMA_CRC_Error_Count 0 0 -
     Oct 31 08:08:06 preclear_disk_WD-WCC4N1AKP9J6_127711: S.M.A.R.T.: SMART overall-health self-assessment test result: PASSED
     Oct 31 08:08:06 preclear_disk_WD-WCC4N1AKP9J6_127711: Cycle: elapsed time: 21:33:48
     Oct 31 08:08:06 preclear_disk_WD-WCC4N1AKP9J6_127711: Preclear: total elapsed time: 21:33:53
  9. The Stream Deck? No, never tried it. I also moved away from using ControlR. I found it easier just to use the tablet browser.
  10. How do I get the container to show all my drives? I followed the instructions to add '/dev:/dev' to the '/dev/sda' field; however, it reports a server error when I try to start the container. Adding `--cap-add=SYS_ADMIN` for the NVMe drive also results in a server error. (A sketch of the equivalent device mapping appears after this list.)
  11. It's obviously a straightforward process to do periodic backups from unRAID to a Synology NAS. Keeping two Synology devices in sync over the internet is built into the Synology NAS, so that isn't a problem. I was hoping someone might have containerised Synology's Cloud Station app, which would allow unRAID to sync files directly to a remote site.
  12. I had looked at a few of those. I was looking to avoid using cloud services for obvious reasons, which necessitates a local solution: either an off-the-shelf (OTS) NAS, a home-brew NAS, or a PC. I was against a PC, particularly for the remote site, as I want something which would be unobtrusive and require no interaction from people at the host location. Hence why I was thinking of an OTS NAS like the Synology. I know I could have two Synology NAS units, one remote and a second in the same physical location as my main server, but as I already have a local backup this would be a duplication. I was wondering whether there was a docker container which allowed for the mirroring function used by Synology; essentially, I could connect a remote NAS to the docker container for the data sync.
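
Regarding item 3: below is a minimal sketch of the "disks:" edit described there. It assumes the appdata scrutiny.yaml accepts a flat list of device paths under a top-level "disks:" key, as the post describes; the exact schema varies between Scrutiny images and templates, so treat this as illustrative rather than the definitive format. The device names are the ones listed in item 4.

```yaml
# Hedged sketch of the appdata scrutiny.yaml edit from item 3.
# Assumes a flat "disks:" list of device paths, as the post describes;
# the exact key names depend on the Scrutiny image/template in use.
disks:
  - /dev/sda
  - /dev/sdb
  - /dev/nvme1n1
  - /dev/nvme2n1
```

After editing the file, restarting the container should make the UI pick up the listed drives, since (per item 3) the UI reflects whatever is in the .yaml file rather than the docker template.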
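Regarding item 10: a docker-compose sketch of the device mappings and capabilities being attempted there. The image tag, volume paths, and port are illustrative assumptions, not taken from the original unRAID template; the capability flags follow the Scrutiny project's documented requirements (SYS_RAWIO for SMART access, SYS_ADMIN additionally for NVMe drives).

```yaml
# Hedged docker-compose equivalent of the settings item 10 is trying to
# apply via the unRAID template. Image tag and device paths are
# assumptions for illustration only.
services:
  scrutiny:
    image: ghcr.io/analogj/scrutiny:master-omnibus
    container_name: scrutiny
    cap_add:
      - SYS_RAWIO   # lets smartctl query SATA drives
      - SYS_ADMIN   # additionally required for NVMe drives
    devices:
      - /dev/sda:/dev/sda
      - /dev/sdb:/dev/sdb
      - /dev/nvme1n1:/dev/nvme1n1
      - /dev/nvme2n1:/dev/nvme2n1
    volumes:
      - ./config:/opt/scrutiny/config
      - /run/udev:/run/udev:ro
    ports:
      - "8080:8080"
```

The '/dev:/dev' mapping mentioned in item 10 passes every host device into the container at once; listing specific devices, as sketched above, is a narrower alternative that may sidestep template validation errors, though the server error reported there would need the container log to diagnose properly.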