Leaderboard

Popular Content

Showing content with the highest reputation on 09/26/17 in all areas

  1. Support for MKVToolNix docker container Application Name: MKVToolNix Application Site: https://mkvtoolnix.download/ Docker Hub: https://hub.docker.com/r/jlesage/mkvtoolnix/ Github: https://github.com/jlesage/docker-mkvtoolnix Unlike many other containers, this one is based on Alpine Linux, meaning its size is very small (at least 50% smaller). It also has a very nice, mobile-friendly web UI to access the application's graphical interface, ships the latest version of MKVToolNix, and is actively supported! Make sure to look at the complete documentation, available on GitHub! Post any questions or issues relating to this docker in this thread.
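For reference, a minimal run-command sketch: the 5800 web UI port and the /config and /storage volume mappings follow the usual conventions of jlesage images, but confirm them against the GitHub documentation linked above, and treat the host paths as placeholders for your own:

   docker run -d --name=mkvtoolnix \
       -p 5800:5800 \
       -v /mnt/user/appdata/mkvtoolnix:/config:rw \
       -v /mnt/user/Media:/storage:rw \
       jlesage/mkvtoolnix

The web UI should then be reachable at http://<host>:5800.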
    1 point
  2. Huh? What are you looking to achieve? I ask so I can get on the same page, as I'm not sure I understand the question. Use the user.scripts plugin.
    1 point
  3. No need to bribe anyone, it is already included in 6.4.
    1 point
  4. Look at the Unraid DVB plugin to get the drivers installed and to see how to pass a tuner through to Plex.
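For context, once the DVB plugin has loaded the drivers, the tuner typically appears under /dev/dvb on the host, and handing it to the Plex container is just a device mapping. A hedged sketch (the device path assumes your tuner shows up under /dev/dvb; on Unraid this usually goes in the Plex container's Extra Parameters):

   --device=/dev/dvb

which makes the adapter visible inside the container for Plex's live TV/DVR setup.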
    1 point
  5. Hi there, I hadn't switched screens when the problem started happening. But I have tried different screens with different ports (VGA, DVI and HDMI). Nothing works. I think your problem is maybe related, but not exactly the same. Let me know if you have any luck.
    1 point
  6. Preclear is really only required to test drives now. unRAID will clear drives without taking the array offline. So if you trust the drives, or have used another utility to test them (like one from the manufacturer), you can just add them to the array and let unRAID clear them for use. I use preclear to test all of my drives, even those not going in the array. I use it because it runs right on unRAID.
    1 point
  7. 1. I haven't done "New Config" often enough (thankfully!) to know the answer, but I figured I'd put something here for completeness.
2. unRAID will create directory structures on the fly on the cache drive - if you were to go look at it right now with Krusader, assuming no data has been written to the server, you would see your cache is empty. When you write to a cache-enabled share, unRAID will write to the cache first, creating directory structures as necessary, then transfer from there to the array share(s) in the wee hours as applicable. This is most helpful later in your server's life, when there are multiple drives that will all be spun up and read to calculate parity. On your initial data build, and without parity, it doesn't buy you much.
3. There's no need to set up the cache drive for the initial bulk copy of data to the server. unRAID will start writing directly to the user share when the cache drive fills up, but you know you'll be massively overflowing your cache, so just leave it out of the equation at the start.
4. I'm not a networking guru, but I believe you would get your maximum throughput if both machines were attached to the same switch. However, you can do some testing with your current physical setup to see if your speed is acceptable (see the sketch after this list). You don't have to run pre-clear - unRAID will automatically clear every new disk when you add it to the array. (The pre-clear writes the appropriate signatures to the disk so the OS knows the disk has been cleared and doesn't do it again.) However, many people (myself included) like to run a pre-clear pass or two or three to help weed out infant mortality in a new drive before entrusting data to it. There's some discussion as to how many passes are enough - I think the general consensus is that one is probably sufficient, two might give you an extra bit of warm fuzzy at the expense of some extra wear and tear on the drive, and three is plenty for all but the most paranoid.
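On the throughput question in point 4: a quick way to measure the raw network speed between the two machines, independent of disk speed, is iperf3 (not mentioned above - just a common tool for this, assuming it's installed on both ends):

   iperf3 -s              # on the unRAID server: listen for test connections
   iperf3 -c <server-ip>  # on the other machine: run the test and report bandwidth

If that reports near-gigabit speeds, the current physical setup is fine for the bulk copy.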
    1 point
  8. I guess I'll have to do it myself. If anyone is interested, do tell me so I get the impetus to do it sooner.
    1 point
  9. Just wanted to chime in and say I am having the exact same issues as samtrois with MusicBrainz.
    1 point
  10. Language support added. You can update your container image!
    1 point
  11. 1 point
  12. How do I create a vdisk snapshot on a btrfs device? There are two methods of creating an instant copy/snapshot with btrfs: if it's a single file like a vdisk, use the first one; if you want to snapshot an entire folder, use the second. E.g., I have all my VMs in the same folder/subvolume, so with a single snapshot they are all backed up instantly.

Method 1 - Creating a reflink copy: A simple way of making an instant copy of a vdisk is creating a reflink copy, which is essentially a file-level snapshot:

cp --reflink /path/vdisk1.img /path/backup1.img

Requirements: both the source and destination file must be on the same btrfs volume (it can be a single device or a pool), and copy-on-write must be on (enabled by default).

Method 2 - Snapshot: Btrfs can only snapshot a subvolume, so the first thing we need to do is create one. The example below uses the cache device; it can also be done on an unassigned device by adjusting the paths.

btrfs subvolume create /mnt/cache/VMs

You can use any name you want for the subvolume; I use one for all my VMs. The subvolume will look like a normal folder, so if it's a new VM, create a new folder inside the subvolume for the vdisk (e.g., /mnt/cache/VMs/Win10/vdisk1.img); you can also move an existing vdisk there and edit the VM template. Now you can create a snapshot at any time (this works even with the VM running, but that's probably not recommended since the backup will be in a crash-consistent state):

btrfs subvolume snapshot /mnt/cache/VMs /mnt/cache/VMs_backup

If at any time you want to go back to an earlier snapshot, stop the VM, then either move the snapshot into place or edit the VM and change the vdisk location to the snapshot. To replace the vdisk with a snapshot:

mv /mnt/cache/VMs_backup/Win10/vdisk1.img /mnt/cache/VMs/Win10/vdisk1.img

or edit the VM and change the vdisk path to confirm it's the one you want before moving it, e.g., change from /mnt/cache/VMs/Win10/vdisk1.img to /mnt/cache/VMs_backup/Win10/vdisk1.img. Boot the VM, confirm this is the snapshot you want, shut down and move it to the original location using the mv command above.

Using btrfs send/receive to make incremental backups: Snapshots are very nice, but they are not really a backup: if there's a problem with the device (or even serious filesystem corruption) you'll lose your VMs and snapshots. Using btrfs send/receive you can make very fast copies (except for the initial one) of your snapshots to another btrfs device (it can be an array device or an unassigned device). Send/receive only works with read-only snapshots, so they need to be created with the -r option, e.g.:

btrfs subvolume snapshot -r /mnt/cache/VMs /mnt/cache/VMs_backup

Run sync to ensure the snapshot has been written to disk:

sync

Now you can use send/receive to make a backup of the initial snapshot:

btrfs send /source/path | btrfs receive /destination/path

e.g.:

btrfs send /mnt/cache/VMs_backup | btrfs receive /mnt/disk1

There's no need to create the destination subvolume; it will be created automatically.
Now for the incremental backups. Say it's been some time since the initial snapshot, so you take a new one:

btrfs subvolume snapshot -r /mnt/cache/VMs /mnt/cache/VMs_backup_01-Jan-2017

Run sync to ensure the snapshot has been written to disk:

sync

Now we'll use both the initial and current snapshots so btrfs sends only the new data to the destination:

btrfs send -p /mnt/cache/VMs_backup /mnt/cache/VMs_backup_01-Jan-2017 | btrfs receive /mnt/disk1

A VMs_backup_01-Jan-2017 subvolume will be created on the destination, but much faster than the initial copy; e.g., my incremental backup of 8 VMs took less than a minute.

A few extra observations:

To list all subvolumes/snapshots use:

btrfs subvolume list /mnt/cache

You can also delete older, unneeded subvolumes/snapshots, e.g.:

btrfs subvolume delete /mnt/cache/VMs_backup

To change a snapshot from read-only to read/write use btrfs property, e.g.:

btrfs property set /mnt/cache/VMs_backup_01012017 ro false

A snapshot only uses the space required for the changes made to the subvolume, so you can have a 30GB vdisk and 10 snapshots using only a few MB of space if your vdisk is mostly static.
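To tie the incremental steps above together, here's a minimal script sketch - the paths and the parent snapshot name are the same assumptions used in the examples above, so adjust them to your layout:

   #!/bin/bash
   # Take a dated read-only snapshot and send only the changes since the
   # initial snapshot (which must already exist on the destination).
   SRC=/mnt/cache/VMs
   PARENT=/mnt/cache/VMs_backup            # initial read-only snapshot, already sent once
   DEST=/mnt/disk1
   SNAP=${SRC}_backup_$(date +%d-%b-%Y)    # e.g. /mnt/cache/VMs_backup_01-Jan-2017

   btrfs subvolume snapshot -r "$SRC" "$SNAP"                # new read-only snapshot
   sync                                                      # make sure it's on disk
   btrfs send -p "$PARENT" "$SNAP" | btrfs receive "$DEST"   # send only the changes

Each run creates a new dated snapshot locally and an identical one on the destination, using only the space and time needed for the changed blocks.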
    1 point