Leaderboard

Popular Content

Showing content with the highest reputation on 09/22/17 in all areas

  1. All, should you wish to recreate your Docker virtual disk image from scratch while retaining your application data (so most apps won't need reconfiguration), the process is simple. A rough command-line equivalent is sketched after this post.
     Step 1: Delete your previous image file. Log in to the unRAID webGUI (http://tower or http://tower.local from a Mac by default) and navigate to the Docker tab. Stop the Docker service if it isn't stopped already. Tick the Delete Image File checkbox next to the Docker image, then click Delete (this may take some time depending on the size of your image). After the file has been deleted, simply re-enable the Docker service and the image will be recreated in the same storage location with the same name.
     Step 2: Re-download your applications. With the Docker service restarted, click Add Container. From the Template drop-down, select one of your previously downloaded applications from the top of the list under "User defined templates". If none of your volume mappings or port mappings have changed, you can click Create immediately to start the download. Repeat this process for each application you wish to re-download, and toggle Autostart for each one after it downloads (only if desired).
     Step 3: There is no step 3... Seriously... What, you expected more steps? Nope! You're done!
    1 point
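     The webGUI route above is the supported one; purely for reference, a minimal command-line sketch of the same steps might look like this, assuming the stock image location (check Settings > Docker if your path differs):
       /etc/rc.d/rc.docker stop                    # stop the Docker service first
       rm /mnt/user/system/docker/docker.img       # delete the old image file (path is an assumption)
       /etc/rc.d/rc.docker start                   # restart Docker; a fresh image is created automatically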
  2. 1 point
  3. Nothing anymore. When I originally created it, hexparrot hadn't yet set up an automatic build for his Docker image, and his image was months behind git, so I wanted my own to ensure it was always up to date. But a few months ago he finally connected his GitHub and Docker Hub accounts, so it's all automated now.
    1 point
  4. You don't need to remove it, just don't assign it after the new config.
    1 point
  5. Not possible; unRAID can't mount NTFS disks. I have a feeling you're leaving something out of what exactly you did.
    1 point
  6. First off, a big shout-out to the entire unRAID community. It's funny how much more stuff beyond my media library was stored away; being able to recover was a life saver. I sincerely appreciate all those who took the time to give me some actionable suggestions and thoughtful approaches. One thing I did when loading the drives was to put an Avery sticker with a "row column" indicator on each (i.e. A1 was in the uppermost left slot, B1 was the uppermost right slot). I knew A1 was parity and had high (but not absolute) confidence A2 was disk1, A3 was disk2, etc. I used the plugin to document the disk layout, but the copy I saved on my desktop machine was also destroyed by the flood. Using the instructions provided I was able to get the "swamp" drives to reconstruct the dead drive. While it's fresh in my mind, here are a few random thoughts that may be useful to others pondering disaster recovery.
     1) Make a backup of your machine. Parity helps with fault tolerance, but it's not the same as a full backup. I used my old unRAID server to back up my production server. I also had a mutual protection agreement with a buddy to swap hard drives with selected shares backed up. Where I went wrong was in mounting my old unRAID server in an unused closet: while it was higher than my production server, it wasn't high enough to escape destruction. I still think mutual protection is the right way to go, just choose a partner that is not in the same flood plain.
     2) Label the disks as they go in. Be deliberate about how you assign them so you can document the disk assignment on new drives.
     3) Email yourself a copy of the output from the drive layout plugin. It will lower the stress level.
     4) When you do your monthly parity check, email a copy of the backed-up USB drive to yourself.
     5) If you have a friend in the medical equipment field, the CFC bath is a great way to go for cleanup. Full disclosure: I'm not sure this is a "true" CFC bath like was used with circuit boards back in the day (the EPA outlawed those), but I think it is a close approximation that does wonders for electronics.
     6) The rice-baggie approach to drying out may not be optimal, but rice is ubiquitous and will buy you time while you deal with the 1001 simultaneous crises that happen in a disaster.
     7) In Texas there are DryBox kiosks that provide self-serve drying for electronics. While they are targeted at phones, a hard drive will fit. I wish I had tried the DryBox on the failing drive at the very beginning. It helped get the drive working well enough to get many files off, but I imagine I contributed to its demise by trying to spin it up directly out of the rice bag.
     8) If you can plan your disasters and select for flood, then helium drives would be a good preventative measure :-). However, with my luck the next disaster will be an earthquake.
     It's been said before but bears repeating: don't panic. Take a deep breath and reach out to the fantastic people on this forum. Take your time and be deliberate.
    1 point
  7. Bad news - changing the advanced settings makes little/no difference. Backups still "stop" randomly. I've opened another ticket with CloudBerry and they quickly responded: "Hello Bertrandr, Thank you for reporting. That is a known issue. We are working on the fix and will let you know once it's ready. We apologize for inconvenience caused." I'm still on the trial version (which I like), but until this issue is fixed I'm not buying... BR
    1 point
  8. I am writing torrents/downloads to a directory on my cache pool, which is an SSD, so I don't think I should have any problems with that. I'm only running one VM and the rtorrentvpn docker plus its downloads on the SSD at this point, so I would hope it isn't overworked already either. I suppose I could try moving the downloads to a separate SSD mounted with Unassigned Devices, but I'm pretty much out of space in my server, so that would be complicated to physically arrange. It actually got worse yesterday and I think it brought networking on my entire unRAID server to its knees, and I had to reboot the whole server. I posted in this thread about it: Since rebooting the entire server it has been running OK, though I still get the timeout errors frequently in ruTorrent. Keeping an eye on it for now. As binhex said, I think this is a prime use case for Unassigned Devices: put in a dedicated non-array drive and mount it with UD so that you aren't constantly reading from/writing to the entire array and causing drives to spin 24/7 (unless you are OK with that, in which case just put it on the array somewhere).
    1 point
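     A minimal sketch of that Unassigned Devices approach, assuming the UD mount shows up at /mnt/disks/torrents and the container expects its downloads under /data (both paths are illustrative; in the webGUI this is just an extra volume mapping on the container, using the RW/Slave access mode usually recommended for UD mounts):
       -v /mnt/disks/torrents:/data:rw,slave    # host path (UD mount) : container download path, with slave propagation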
  9. Update CA and then reboot. From the release notes for CA: "Fixed: If multiple browser tabs opened to apps tab, detect if app database is out of sync between windows and update tab accordingly." If that still doesn't get you anywhere, post in the CA thread. EDIT: Further catching up on the unread posts led me to another thread of yours, which is probably what caused this:
    1 point
  10. Sep 17 11:35:58 MBFS01 kernel: BTRFS error (device loop0): bdev /dev/loop0 errs: wr 12, rd 0, flush 0, corrupt 0, gen 0
      Sep 17 11:35:58 MBFS01 shfs/user: err: shfs_write: write: (28) No space left on device
      Sep 17 11:35:58 MBFS01 kernel: loop: Write error at byte offset 11915517952, length 4096.
      Sep 17 11:35:58 MBFS01 kernel: blk_update_request: I/O error, dev loop0, sector 23272448
      Sep 17 11:35:58 MBFS01 kernel: BTRFS error (device loop0): bdev /dev/loop0 errs: wr 13, rd 0, flush 0, corrupt 0, gen 0
      Sep 17 11:35:59 MBFS01 kernel: BTRFS: error (device loop0) in btrfs_commit_transaction:2227: errno=-5 IO failure (Error while writing out transaction)
      Sep 17 11:35:59 MBFS01 kernel: BTRFS info (device loop0): forced readonly
      Sep 17 11:35:59 MBFS01 kernel: BTRFS warning (device loop0): Skipping commit of aborted transaction.
      Sep 17 11:35:59 MBFS01 kernel: ------------[ cut here ]------------
      Sep 17 11:35:59 MBFS01 kernel: WARNING: CPU: 3 PID: 10719 at fs/btrfs/transaction.c:1850 cleanup_transaction+0x8c/0x238
      Sep 17 11:35:59 MBFS01 kernel: BTRFS: Transaction aborted (error -5)
      Sep 17 11:35:59 MBFS01 kernel: Modules linked in: xt_CHECKSUM iptable_mangle ipt_REJECT nf_reject_ipv4 ebtable_filter ebtables vhost_net vhost macvtap macvlan tun xt_nat veth ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_nat_ipv4 iptable_filter ip_tables nf_nat md_mod igb ptp pps_core fbcon ast bitblit fbcon_rotate fbcon_ccw fbcon_ud fbcon_cw softcursor ttm font drm_kms_helper cfbfillrect x86_pkg_temp_thermal cfbimgblt coretemp cfbcopyarea kvm_intel drm kvm agpgart mpt3sas syscopyarea sysfillrect sysimgblt i2c_i801 fb_sys_fops ahci i2c_algo_bit i2c_smbus fb raid_class libahci i2c_core fbdev scsi_transport_sas ipmi_si video backlight [last unloaded: pps_core]
      Sep 17 11:35:59 MBFS01 kernel: CPU: 3 PID: 10719 Comm: btrfs-transacti Not tainted 4.9.30-unRAID #1
      Sep 17 11:35:59 MBFS01 kernel: Hardware name: Supermicro X10SL7-F/X10SL7-F, BIOS 3.0 04/24/2015
      Sep 17 11:35:59 MBFS01 kernel: ffffc9001239bcd0 ffffffff813a4a1b ffffc9001239bd20 ffffffff8196b262
      Sep 17 11:35:59 MBFS01 kernel: ffffc9001239bd10 ffffffff8104d0d9 0000073a1239bd88 ffff8806a8c8cc08
      Sep 17 11:35:59 MBFS01 kernel: ffff8807f8227800 ffff8807cca37f00 00000000fffffffb 0000000000000000
      Sep 17 11:35:59 MBFS01 kernel: Call Trace:
      Sep 17 11:35:59 MBFS01 kernel: [<ffffffff813a4a1b>] dump_stack+0x61/0x7e
      Sep 17 11:35:59 MBFS01 kernel: [<ffffffff8104d0d9>] __warn+0xb8/0xd3
      Sep 17 11:35:59 MBFS01 kernel: [<ffffffff8104d13a>] warn_slowpath_fmt+0x46/0x4e
      Sep 17 11:35:59 MBFS01 kernel: [<ffffffff812eeff1>] cleanup_transaction+0x8c/0x238
      Sep 17 11:35:59 MBFS01 kernel: [<ffffffff8107c0fd>] ? wake_up_bit+0x25/0x25
      Sep 17 11:35:59 MBFS01 kernel: [<ffffffff812f0d59>] btrfs_commit_transaction.part.11+0x912/0x927
      Sep 17 11:35:59 MBFS01 kernel: [<ffffffff812f0db4>] btrfs_commit_transaction+0x46/0x4d
      Sep 17 11:35:59 MBFS01 kernel: [<ffffffff812ebe74>] transaction_kthread+0xf0/0x19c
      Sep 17 11:35:59 MBFS01 kernel: [<ffffffff812ebd84>] ? btrfs_cleanup_transaction+0x479/0x479
      Sep 17 11:35:59 MBFS01 kernel: [<ffffffff81063939>] kthread+0xdb/0xe3
      Sep 17 11:35:59 MBFS01 kernel: [<ffffffff8106385e>] ? kthread_park+0x52/0x52
      Sep 17 11:35:59 MBFS01 kernel: [<ffffffff8167f785>] ret_from_fork+0x25/0x30
      Sep 17 11:35:59 MBFS01 kernel: ---[ end trace 09fe540a639a0b58 ]---
    1 point
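     The key line is the shfs_write "No space left on device" error against loop0, i.e. the loop-mounted docker.img (or the btrfs filesystem inside it) filled up. A quick way to confirm from the console, assuming the image is loop-mounted at /var/lib/docker as on a stock unRAID 6 setup:
       df -h /var/lib/docker          # shows how full the loop-mounted docker.img is
       btrfs fi df /var/lib/docker    # data vs. metadata usage inside the image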
  11. Your docker image is corrupt, delete and re-create. https://forums.lime-technology.com/topic/36647-official-guide-restoring-your-docker-applications-in-a-new-image-file/
    1 point
  12. The SATA splitters you bought should fit on the power side of the existing SAS connectors since they're backward compatible with SATA.
    1 point
  13. It was updated 5 hrs ago: https://hub.docker.com/r/emby/embyserver/tags/ At the same time the update was published on github: https://github.com/MediaBrowser/Emby/releases I'm not sure how much faster you want it.
    1 point
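     If you'd rather check this from the console than the website, the public Docker Hub API can be queried directly; a rough sketch (page_size here just limits the output):
       curl -s "https://hub.docker.com/v2/repositories/emby/embyserver/tags/?page_size=3"
       # returns JSON including a last_updated timestamp for each tag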
  14. I'd do things one step at a time. First, stop mover. Then, for the shares where there's data on the cache drive that you want moved to the array set cache to "Yes". For everything else set it to No. Then run mover and make sure that your cache drive is cleaned off. Then set Movies, etc to "No" and set the shares that you want on the cache drive to "Prefer" and run mover again - it should move things off the array back to the cache drive for Docker, etc. Don't make any changes to share settings while Mover is running - let it run to completion.
    1 point
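     A couple of console commands that help verify each stage, as a rough sketch (mover can be invoked manually on unRAID, and the df check shows whether the cache pool actually emptied):
       df -h /mnt/cache    # check cache usage before and after each mover run
       mover               # kick off mover manually instead of waiting for the schedule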
  15. From an SSH session (or the console), type diagnostics and that will write a diagnostics zip file to the logs folder on your flash drive. Upload that file with your next post.
    1 point
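     In other words, roughly this (the exact file name will include your server name and a timestamp):
       diagnostics
       # writes e.g. /boot/logs/tower-diagnostics-20170922-1030.zip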
  16. @Taddeusz This option is active (sorry, it's a German screenshot) and it was possible for me to connect to the machine, but as of today something might have changed (a Windows update?). I can also connect to the virtual machine (Windows 10) from my laptop via the Remmina RDP client, but via the Guacamole docker image it is not possible. On the other hand, connecting to a Windows 7 virtual machine works. EDIT: OK, I found the solution. The problem is that I currently get a certificate error at login. When I activate the option to ignore the certificate, the connection works. Thanks for your help.
    1 point
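     For anyone defining Guacamole connections in user-mapping.xml instead of the web UI, the same fix is the ignore-cert RDP parameter; a minimal sketch (connection name, hostname and security mode are placeholders):
       <connection name="Win10-VM">
         <protocol>rdp</protocol>
         <param name="hostname">192.168.1.50</param>
         <param name="port">3389</param>
         <param name="security">nla</param>
         <param name="ignore-cert">true</param>  <!-- skip the RDP certificate check that was failing -->
       </connection>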