Iormangund

Members
  • Posts

    34
  • Joined

  • Last visited

Recent Profile Visitors

1262 profile views

Iormangund's Achievements

Noob

Noob (1/14)

3

Reputation

  1. Thanks, good to know encrypted is untested. Will tread carefully. It's a btrfs raid 6 array of 4TB disks I use as a scratch disk and Steam/gaming library (awesome load times); nothing on it that isn't easily replaced, so I don't waste space backing it up. If it was anything important I sure as hell wouldn't use btrfs raid 5/6 😆 It was more of a hypothetical really, nothing on there I need to be immutable. Thanks for the good idea about setting the immutable flag on external backups, must remember to do that next time I do a cold storage backup.
  2. Ah ok, guess I will have to wait to use it properly. I'm in the process of encrypting a 24x8TB disk array that is almost full, so everything is being scattered all over the place by unBalance as I empty one disk at a time. It's going to need some reorganising when that's all done, and then I can safely set my files immutable. I have an Unassigned Devices btrfs array mounted at /mnt/disks/btrfs_share; can the script be used on a share outside the array (or modified to do so), or would I just be better off learning chattr and doing it manually?
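     Edit: in case it helps anyone, the manual chattr route looks simple enough. Rough sketch only, untested here, and the file/directory names are just examples from my setup:
       chattr +i /mnt/disks/btrfs_share/important.file     # set the immutable flag on a single file
       chattr -R +i /mnt/disks/btrfs_share/archive/        # or recursively on a whole directory
       lsattr /mnt/disks/btrfs_share/important.file        # verify - an 'i' in the flags means it's set
       chattr -R -i /mnt/disks/btrfs_share/archive/        # clear it again when you need to write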
  3. Nice work on the script, great way of protecting files. I was wondering: if you set a share, not a disk, immutable using this script, how does that affect the file at the disk level? For instance, if I were to use unBalance to scatter/gather files across disks. I'm not exactly clear on how Unraid maps disks to shares. Hardlinks? Would the 'real' file on disk be immutable, or just the one visible through the share? (As a side note, I got pretty lucky on timing, as I only realised today I had nothing set up for ransomware protection on my server, cheers!)
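     Edit: partially answering my own question - as far as I understand it, /mnt/user/<share> is a FUSE merge of the /mnt/disk*/<share> directories rather than hardlinks, so the flag should live on the underlying file either way. Easy enough to check (hypothetical paths, pick a file whose disk location you know):
       lsattr /mnt/user/myshare/somefile.dat     # the file as seen through the user share
       lsattr /mnt/disk3/myshare/somefile.dat    # the same file on the disk it actually lives on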
  4. I have some unassigned drives set up in a btrfs pool mounted as a share. It had been working perfectly until I applied the recent updates, at which point the drives will no longer auto mount or manually mount through the UI. This is the log error I get when attempting to mount with the plugin:
       Server kernel: BTRFS error (device sdj1): open_ctree failed
       Server unassigned.devices: Error: shell_exec(/sbin/mount -t btrfs -o auto,async,noatime,nodiratime '/dev/sdj1' '/mnt/disks/btrfspool' 2>&1) took longer than 10s!
       Server unassigned.devices: Mount of '/dev/sdj1' failed. Error message: command timed out
       Server unassigned.devices: Partition 'DISK' could not be mounted...
     Mounting works as normal when done through the terminal using:
       mkdir /mnt/disks/btrfspool
       /sbin/mount -t btrfs -o auto,async,noatime,nodiratime '/dev/sdj1' '/mnt/disks/btrfspool'
     I assume this is due to the changes made around update "2019.11.29a" where a timeout was added? Is it possible to change the timeout, or to check for btrfs pools and extend the timeout so auto mount works again? Is there a fix I can apply manually to get it working the same way as before until an update comes out?
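     Edit: in case anyone else hits this, a possible stopgap is mounting the pool yourself at array start (e.g. via the User Scripts plugin) until the plugin timeout is sorted. Rough sketch using the device and mount point from my setup above:
       #!/bin/bash
       # manually mount the UD btrfs pool that the plugin times out on
       mkdir -p /mnt/disks/btrfspool
       /sbin/mount -t btrfs -o auto,async,noatime,nodiratime /dev/sdj1 /mnt/disks/btrfspool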
  5. Ok, here goes. I may miss something as I did it all a while ago and it's pretty late here, but this is what I did.
     First off I installed Ubuntu LTS from the minimal install ISO to a standard Ubuntu VM in Unraid, and didn't install any extra packages during the install process. You could get away with only a 5GB image for the install, but I would recommend closer to 10GB if you have the space, just to be on the safe side; at the moment my install only uses about 3GB (4.6GB including swap/EFI etc.). Set the configuration you want; I set it to use my 10GbE bridged NIC with 256MB RAM and only one CPU core/thread.
     Edit the VM XML and copy the disk name/identification from Unassigned Devices into the XML as a new disk (just put 'ata-' in front of the name). You will want to make sure the disk type is block, the device is disk and the driver is raw. I also found the best performance and stability using no cache and native IO. Here is an example from my config XML:
       <disk type='block' device='disk'>
         <driver name='qemu' type='raw' cache='none' io='native'/>
         <source dev='/dev/disk/by-id/ata-Hitachi_HDS722020ALA330_JK2938562935963'/>
         <target dev='hdd' bus='virtio'/>
       </disk>
     Repeat for each disk, making sure to change target dev to hde, hdf, hdg etc. Be aware that if you change any setting through the UI (i.e. not directly in the XML) after this, it will reset some things like io. So if you want to set up bridges etc., do that before adding the disks to the XML. If you have a RAID/SAS/HBA card you can just use passthrough and skip adding each disk, but I haven't tried PCIe passthrough in Unraid yet, so you will need to ask someone else for accurate instructions (though it looks much easier that way).
     Run the VM and install iscsitarget:
       sudo apt-get install iscsitarget
     Also install mdadm if you are going to run software raid. Check iscsitarget is enabled with 'sudo nano /etc/default/iscsitarget' and make sure enable is set to true.
     Now for the most important bit, setting up the iSCSI target config:
       sudo nano /etc/iet/ietd.conf
     At the bottom of that file add in your device details. I forget if there is already an example there; if so, just edit it to your liking. Mine looks like this:
       Target iqn.2017-01.local.iSCSI:iscsi.sys0
         Lun 0 Path=/dev/md0,Type=fileio,ScsiId=lun0,ScsiSN=lun0
     Obviously the most important bit is to change '/dev/md0' to the disk you want to use (probably /dev/vdb if you aren't doing raid). Now just run:
       sudo service iscsitarget restart
     Go to your Windows machine and add the target device as you would any other iSCSI target. I think that's pretty much it. I know it's a pretty poor tutorial, but I need to sleep and I think I covered most of it. I left out the raid stuff, but happy to cover that too if you need.
     In terms of resources, it practically uses nothing: on average the RAM usage is about 60-65MB, which doesn't really change much when idle vs full r/w access. You could probably set the VM RAM to only 128MB; I went for 256MB for a bit of headroom just in case, and I have plenty of RAM anyway. CPU usage in the VM (running on only 1 thread of an 8-thread 2GHz CPU) sometimes spikes to 40% when I do things like disk benchmarks; most of the time with normal use it sits around 0-5%. On Unraid itself the CPU usage barely hits 2% for the iSCSI VM. Overall, I think the only real resource use will be from mdadm, not the iSCSI target itself, as that seems to use barely anything. Either way, it's super light, doesn't impact the system and works great.
     I run my entire Steam/Origin/Uplay library from it and haven't had a problem with any gaming.
     Edit: Btw, just a reminder, I did not use the Ubuntu Server install; I used the minimal ISO that's like a 70MB download or something.
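     Edit 2: the raid bit I glossed over, roughly. This is from memory and untested as written; device names, raid level and disk count are just an example, adjust for your own passed-through disks:
       sudo apt-get install mdadm
       sudo mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/vdb /dev/vdc /dev/vdd
       sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf    # so the array assembles on boot
       sudo update-initramfs -u
     Then point the Lun Path in ietd.conf at /dev/md0 as in the example above.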
  6. Sure thing, on my way out so will have to do it later today. I'll also provide some info on the system resources used.
  7. How did you manage to get the release notes, btw? I am still waiting for them to update the BIOS for the X11SSH-CTF, and it's getting a bit ridiculous now considering every other X11SSH variant already has the update. I've never gotten a reply whenever I've contacted them to ask about it.
  8. Hello, I am trying to set Docker (and all Docker containers) to use a specific gateway on my network (Unraid uses the DHCP gateway, 192.168.1.1; I want containers to use a different, non-DHCP gateway on the same subnet). From the information I have found (https://docs.docker.com/v1.7/articles/networking/) it seems the way to do this is to start Docker with the command line option "--default-gateway=IP_ADDRESS". I edited /boot/config/docker.cfg and added:
       DOCKER_OPTS="--default-gateway=192.168.1.254"
     However, Docker will not start when that is added; when I remove it, Docker starts as normal. Does anyone know what I am doing wrong, or if there is another way to achieve this that survives reboots and can be easily reverted if needed? It would be great to be able to set the gateway only on specific containers, but afaik there is no way to do that without multiple Docker bridges, and that seems a much more complicated solution, if it's even viable.
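     Edit: for the "only on specific containers" part, it looks like a user-defined macvlan network with its own gateway might do it in plain Docker, though I haven't tested how well Unraid's Docker setup copes with it. The subnet, gateway, parent interface and names below are just examples from my network:
       docker network create -d macvlan \
         --subnet=192.168.1.0/24 --gateway=192.168.1.254 \
         -o parent=br0 altgw
       docker run -d --network=altgw some/container    # only containers on this network use the alternate gateway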
  9. Wow, I think that's the first time I have actually seen anyone get release notes from Supermicro, good job! Unfortunately it seems they are still taking longer with the X11SSH-CTF; maybe it's some sort of issue with the 10Gbit NIC holding up the update.
  10. Ty, unfortunately no 2.0b yet for my board, the X11SSH-CTF. The X11SSH-LN4F appears to be the only one of the X11SSH range with 2.0b so far; guessing it can't be long now till it's out for mine. It's notoriously difficult getting release notes for Supermicro boards, no clue why. There should be some info in the BIOS update download, but it's pretty limited. Good luck getting any info from them; I've emailed them before with BIOS/mobo questions and never got a reply to any of them. It may only be for a few boards, but I found it under the support/BIOS updates page for the specific board, at the very bottom, listed as beta. (H270 mobo)
  11. Asus seems to have released new BIOSes for most of their mobos, as has ASRock, though at the moment the ASRock one is a beta. Unfortunately, so far there has been nothing from Supermicro, and I get no replies when I send them support emails asking for details on a BIOS update (specifically in my case X11 series mobos; the current downloadable BIOS, 2.0a, is many months old). Supermicro seem to be pretty iffy when it comes to BIOS updates; I wouldn't be surprised if it doesn't happen till next year. Although the fix 'should' be applied to Unraid as well as come via an updated BIOS, I suspect it's more likely Unraid will just rely on the BIOS update and not bother actually including the software fix. I hope I'm wrong though, I really could do with enabling hyper-threading again...
  12. Ok, doing an unBalance op at the moment; when that's done in a day or so I'll do some testing. Btw, regarding my previous comment: I had spun up all drives, disabled the spin-down delay (was 30 mins before), and manually enabled turbo write before enabling the plugin, so even if the GUI was reporting wrongly, the drives 'should' have been spun up.
  13. Nice plugin, however it always gets the number of spun-down disks wrong, even with polling under 10 seconds. I.e. with all disks spun up, an invoke setting of 2 and a poll of 5 seconds, it reported 6 disks spun down and disabled turbo, then 2 disks the next poll, then 8, then 1 and enabled turbo, then disabled it and reported 4, etc., all while every disk was spun up and active. (15 disk array btw.) I wonder if it's something to do with the plugin not polling the SAS HBA properly? Anyway, I look forward to when it's integrated into Unraid or fixed. Keep up the good work!
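     Edit: if it helps with debugging, the actual spin state can be double-checked from the console with hdparm while the plugin is polling. Device names are just examples, and I'm not sure how reliably it reports for drives behind a SAS HBA:
       hdparm -C /dev/sdb    # prints "drive state is: active/idle" or "standby"
       for d in /dev/sd[b-p]; do echo -n "$d: "; hdparm -C $d | grep state; done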
  14. If anyone cares or finds this useful, I ended up going with a minimal Ubuntu server VM running iscsitarget on top of an mdadm array with native virtio passthrough. It works great: I've got a nice 400MB/s network drive that I can install and run stuff from without any issues, and it's completely separate from my Unraid data array.