alphazo

Everything posted by alphazo

  1. @jonathanm I tried to run Memtest with a brand new power supply and no disks spinning, and it fails after 5 minutes during test #7 [Block Move]. I also tried swapping the memory modules around, but it doesn't help.
  2. Tested several combinations, but it always fails when mixing the 1+3 and 2+4 banks, so I ended up just keeping 4GB of memory (1+3)... A little bit weird, but I'm not in the mood for swapping the motherboard and CPU.
  3. Hello, my unRAID installation has been stable for many years, running on a C2SEA motherboard with the latest 6.2.4 release. During the past three weeks I have had two kernel panics related to USB. The first one occurred when I came back from vacation and powered up the server: it booted correctly and the kernel panic occurred about 10 minutes after the reboot. The last one occurred this morning at 7:10am without anything special triggering it. Looking at the error message, it seems to be related to USB. The only USB peripherals actually connected are: 1) the USB flash drive running unRAID 2) the UPS using the NUT extension. Has someone seen this kind of kernel panic? Thanks, alphazo

Well, the kernel panic kicked in again, and this time it was not related to USB. It then became hard to even boot the server, or it booted with some weird video artifacts. I went through a memtest campaign and found mixed results:
All 4 memory modules installed: memtest freezes after 5 minutes
Memory modules 1+3 installed: memtest runs fine
Memory modules 2+4 installed: memtest runs fine
Memory modules 1+2+4 installed: memtest freezes after 5 minutes
Memory modules 1+3+4 installed: memtest freezes after 5 minutes
The memory modules are 99U5471-002.A01LF (KVR1333D3N9K2/4G). I'm going to run longer memtest passes and also swap memory modules around, but at this point the motherboard seems to be damaged. What's your take?
  4. I wrote about encFS many years ago on this forum and wanted to provide an update. A number of new alternatives have emerged and the unRAID architecture has changed quite a bit. Block device encryption remains the fastest way to protect a hard drive, especially if the CPU provides AES-NI instructions. Will unRAID ever be able to mount external dm-crypt encrypted USB drives used for backups? Now, for NAS/unRAID storage and also cloud storage, per-file encryption is preferred, and encFS has been around for quite some time but has never been perfect security- and performance-wise. I recently came across a couple of new projects that aim to be an alternative to encFS, such as:
gocryptfs https://github.com/rfjakob/gocryptfs
securefs https://github.com/netheril96/securefs
I ran a simple benchmark on a desktop with an SSD and wanted to share it with the unRAID community: https://gist.github.com/alphazo/09a2e523e22e7aa00d491ab67678dd80
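For anyone who wants to play with gocryptfs, a minimal session looks roughly like this (the paths are just examples; gocryptfs has to be installed and FUSE available):

gocryptfs -init /mnt/disk1/vault        # create the encrypted directory and set a password
mkdir -p /mnt/plain
gocryptfs /mnt/disk1/vault /mnt/plain   # mount the decrypted view
# work on files under /mnt/plain, the encrypted versions land in /mnt/disk1/vault
fusermount -u /mnt/plain                # unmount when done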
  5. My bad. It was an ad-blocker issue in my case as well (uBlock Origin). Since I recently went through a full re-install of my laptop, I guess I forgot to copy over the ad-blocker whitelist.
  6. I have the exact same issue since I updated to 6.1.7. While the user shares appear in the CLI, they no longer show up in the "User Share" GUI. Like in the original post, I do have an extra drive assigned as cache that I don't use for cache purposes. All my drives are XFS formatted, including the cache drive.
  7. That was an easy one. I might have interrupted one of the updates (docker operations can be frustrating sometimes since the GUI becomes very unresponsive). I have been able to remove all the orphans. Thanks, Alphazo... Just posted my 100th post.
  8. I recently noticed that my docker container list now shows a number of applications I have not installed. Where do they come from and how can I get rid of them? They seem to be linked to my borgbackup application, but I can't figure out how. Thanks, Alphazo
  9. Updated the Dockerfile so it now uses /sourcedir instead of /B. BTW, borgbackup v0.25 has been released and brings the fast lz4 compression algorithm.
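As a rough example of how lz4 gets selected at backup time (the repository path and archive name are just placeholders, and the exact --compression syntax may vary slightly between borg releases):

borg create --compression lz4 /mnt/backups/repo::documents-2015-10-11 /mnt/user/documents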
  10. Have you been able to fix that issue? I'm also using a custom VPN provider and get the exact same result, even with the right .ovpn files and all the parameters set. Thanks

Quoting the original post: I am also trying to get this to work with SlickVPN (without any luck). I set the provider to "custom", and put my .ovpn file in place. However, when the container starts, I see this error occurring over and over:
2015-06-27 23:49:42,866 DEBG 'start' stdout output:
Sat Jun 27 23:49:42 2015 UDPv4 link local: [undef]
Sat Jun 27 23:49:42 2015 UDPv4 link remote: [AF_INET]208.92.235.23:8080
Sat Jun 27 23:49:42 2015 write UDPv4: Operation not permitted (code=1)
I can't even connect to Deluge! However, if I disable OpenVPN and Privoxy, and then run the container, I can at least use Deluge. Can anyone suggest anything to try?
  11. Fully agree on this one. I don't know what the original author's motivation was.
  12. Just to post another success story on moving from ReiserFS to XFS. Well, it took some time (14 disks) and I only used rsync, hashdeep (found in the md5deep package) and vim (not on unRAID). Basically I did the following for each disk, in parallel using multiple screen sessions. diskX is the source (ReiserFS), diskY is the destination (XFS) and diskZ is a scratch area to save the checksums.

mkdir /mnt/diskY/diskX
rsync -av --stats --progress /mnt/diskX/ /mnt/diskY/diskX
cd /mnt
hashdeep -r -l -e diskX > /mnt/diskZ/hash-diskX-source
cd /mnt/diskY
hashdeep -r -l -e diskX > /mnt/diskZ/hash-diskX-copy

Then it was time to compare the hashes before wiping the source disk. Since the hashes are not written in the same order, both files need their headers stripped and their contents sorted first. There might be a more elegant way to do that (a scripted sketch follows this post), but I just used vim (on my PC, not on unRAID):

vim /mnt/diskZ/hash-diskX-source

Remove the 4-5 header lines at the top:

5dd

Then sort the hash list and save:

:sort
:wq

The same vim operations need to be done on /mnt/diskZ/hash-diskX-copy. After that, diff can be used to verify that the copy is perfect:

diff /mnt/diskZ/hash-diskX-copy /mnt/diskZ/hash-diskX-source

If the diff doesn't return anything, you are good to go:
1) Format diskX
2) Move everything from /mnt/diskY/diskX to the root of /mnt/diskY (this can be done with mv and doesn't take any time)
3) Move to the next drive

I recommend keeping a spreadsheet handy so you can track the actions to run, especially if you run concurrent rsync and hashdeep operations. I do recognize that going through the hashing is a bit extreme and paranoid, but heck, it doesn't hurt either.
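As promised above, a sketch of the scripted alternative using grep and sort instead of vim (this assumes the hashdeep header lines start with '%' or '#', which is what the versions I've used produce):

grep -v -e '^%' -e '^#' /mnt/diskZ/hash-diskX-source | sort > /mnt/diskZ/hash-diskX-source.sorted
grep -v -e '^%' -e '^#' /mnt/diskZ/hash-diskX-copy | sort > /mnt/diskZ/hash-diskX-copy.sorted
diff /mnt/diskZ/hash-diskX-source.sorted /mnt/diskZ/hash-diskX-copy.sorted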
  13. Hello, borgbackup (https://borgbackup.github.io/borgbackup/) is a fork of the excellent Attic (https://attic-backup.org/) that provides deduplicated and optionally encrypted backups. Pretty similar to bup, which I have been using extensively. borg brings many exciting new features over Attic, including configurable chunk sizes to accommodate lower RAM (important with very large backups) and a different password-based encryption scheme. The latest git version also brings the lz4 compression scheme (in addition to zlib). One of the reasons for moving away from bup is the impossibility to prune older backups. I'm planning to use it on unRAID both internally, to do periodic backups/snapshots of important data (they don't take additional space if nothing has changed), and also remotely from clients. Based upon the work done by Silvio Fricke I published two projects on Docker Hub:
- Latest git version: https://hub.docker.com/r/alphazo/borgbackup-git/
- Latest released version: https://hub.docker.com/r/alphazo/borgbackup/
You can find and install them using the new extended search feature found in the Community Applications plugin. I quickly tested them and was able to perform backups. I haven't gone through the generation of the unRAID template yet. Hope this will be useful to the unRAID community.
PS: this could also be used in a distributed, encrypted, incremental and deduplicated backup scheme where you store some of your content on another (untrusted) remote unRAID machine.
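For anyone who wants to give it a spin, a typical borg workflow looks roughly like this (the repository path and archive name are just examples, and option names may differ slightly between borg releases):

borg init --encryption=repokey /mnt/disks/backup/borg-repo          # create an encrypted repository
borg create --stats /mnt/disks/backup/borg-repo::docs-2015-09-01 /mnt/user/documents   # take a deduplicated snapshot
borg list /mnt/disks/backup/borg-repo                               # list existing archives
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /mnt/disks/backup/borg-repo # thin out old archives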
  14. Thank you very much. I guess the thread can be closed then.
  15. Hello, I completely missed all the progress that has been made on this topic. I'm on 6.0.1, so it looks like I have xfsprogs 3.2.2. ;D My last question before I do the big jump: are the additional flags (-m crc=1,finobt=1) used by default when I click the Format button for a new drive using the XFS filesystem? Thanks again for the effort. Alphazo
  16. Does someone know if the Slackware team has any plan to migrate xfsprogs to 3.2.x (I tried to get in touch with them but without much luck)? Would it be possible to update it on unRAID without waiting for the Slackware team? unRAID v6 uses a pretty recent kernel that has provision for the latest XFS improvements, such as the v5 format (metadata checksums) and the free inode btree, all considered to be stable. The only missing pieces are xfsprogs and making the new settings the default when formatting. These are some really nice safety features that should be worth considering for a multi-disk array. Thanks for your understanding.
  17. One more piece of information from the XFS mailing list, dated April 2014, discussing the stable v5 format for the Linux 3.15 kernel release (and xfsprogs 3.2.0): http://oss.sgi.com/archives/xfs/2014-04/msg00721.html
  18. The XFS website doesn't advertise much about development progress, but as you can see below, the new v5 format with metadata checksums and the free inode btree has been stable since Linux 3.15.

Kernel 3.10
http://kernelnewbies.org/Linux_3.10#head-a067455cdad0bf4e5285255ecfec5a538d930eb8
In this release, XFS has an experimental implementation of metadata CRC32c checksums. These metadata checksums are part of a bigger project that aims to implement what the XFS developers have called "self-describing metadata". This project aims to solve the problem of verification scalability (fsck will need too much time to verify petabyte-scale filesystems with billions of inodes). It requires a filesystem format change that adds to every XFS metadata object some information that allows to quickly determine if the metadata is intact: metadata type, filesystem identifier and block placement, metadata owner, log sequence identifier and, of course, the CRC checksum. This feature is experimental and requires using experimental xfsprogs. For more information, you can read the self-describing metadata documentation.

Kernel 3.15
http://kernelnewbies.org/Linux_3.15
The new v5 format is now considered stable. See commit: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=c99d609a16506602a7398eea7d12b13513f3d889

Kernel 3.16
http://kernelnewbies.org/Linux_3.16#head-42e03cfe84cece01b59e07958221dc509de9715a
XFS free inode btree, for faster inode allocation. In this release, XFS has added a btree that tracks free inodes. It is equivalent to the existing inode allocation btree, with the exception that the free inode btree tracks inode chunks with at least one free inode. The purpose is to improve lookups for free inode clusters during inode allocation. This feature does not change existing on-disk structures, but adds a new one that must remain consistent with the inode allocation btree; for this reason older kernels will only be able to mount filesystems with the free inode btree feature read-only.

The free inode btree can be enabled with the finobt=1 switch when formatting an XFS partition, but it requires metadata checksums to be enabled as well via the crc=1 switch. Therefore the following options are required to take advantage of both the free inode btree and metadata checksums:

mkfs.xfs -m crc=1,finobt=1 /dev/target_partition

According to the developers, the above -m crc=1,finobt=1 option is not only safe to use now but will be the default mkfs option with the upcoming xfsprogs 3.3 release.
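Once a drive has been formatted, the flags can be verified on the mounted filesystem; a quick sanity check would be something like this (the mount point is just an example, and it assumes an xfsprogs recent enough to report these fields):

xfs_info /mnt/disk1 | grep -E 'crc=|finobt='
# expect crc=1 and finobt=1 in the meta-data section when both features are enabled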
  19. Well, it looks like the current beta12 only ships with xfsprogs 3.1.1, so no metadata checksum support. I just posted a feature request to address that: http://lime-technology.com/forum/index.php?topic=36619.0
  20. I was about to migrate my entire array to XFS, but the current xfsprogs version in beta12 is only 3.1.11, which doesn't have the new XFS on-disk format (v5) introduced in version 3.2, which includes a metadata checksum scheme called self-describing metadata. Based upon CRC32, it provides additional protection against metadata corruption, during unexpected power losses for example. It is also supposed to speed up filesystem checks. Checksumming is not enabled by default when using the mkfs.xfs tool to format a drive; it can easily be enabled with the -m crc=1 switch when calling mkfs.xfs. Lastly, the XFS v5 on-disk format has been considered stable for production workloads starting with Linux kernel 3.15. Is there any plan to upgrade xfsprogs to version 3.2 in upcoming beta releases? I truly believe that this extra level of protection would be very beneficial for unRAID builds. I'm probably going to wait before migrating over to XFS.

[Update] xfsprogs 3.2.2 has been added to the unRAID 6.0 release, with the additional flags (-m crc=1,finobt=1) as the default when formatting a drive.
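For anyone wanting to check which xfsprogs version their build ships with, the quickest way I know of is to ask the tools themselves from the console:

mkfs.xfs -V
xfs_repair -V
# each prints its version, e.g. "mkfs.xfs version 3.2.2"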
  21. Thanks for the tiny script. I also added a line to the script to copy my custom sshd_config to /etc/ssh/, since I usually disable any kind of password access over SSH.
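For reference, the extra line is essentially just a copy from the flash drive followed by an sshd restart; a sketch (the /boot/custom path is simply where I keep my files, adjust as needed):

cp /boot/custom/ssh/sshd_config /etc/ssh/sshd_config
/etc/rc.d/rc.sshd restart
# my custom sshd_config contains, among other things: PasswordAuthentication no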
  22. I must admit I had no idea whether XFS supports TRIM. According to http://xfs.org/index.php/FITRIM/discard I believe XFS does support TRIM.
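As a quick illustration (the mount point is just an example, and the drive and controller must actually pass discards through), a batch TRIM on a mounted XFS filesystem would be:

fstrim -v /mnt/cache
# reports how many bytes were trimmed, or errors out if discard isn't supported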
  23. Title says it all. I'm considering moving to 6.0 so I can use my preferred backup system, bup, in a more efficient way; today I cannot run it with a large data set on a 32-bit system. The new filesystem options got my interest too. I'm wondering if formatting a disk under unRAID with XFS selected uses the new self-describing metadata option (aka CRC32)? It should be the "-m crc=1" option for the mkfs.xfs command (by default it is 0). Thanks, Alphazo
  24. Following my issue when expanding my array that in fact had a faulty drive (http://lime-technology.com/forum/index.php?topic=36444.0), I swapped my super.dat back to the original state and put back the original (working) drive I initially wanted to expand. I then replaced the faulty drive with a precleared drive. Data rebuilding has just finished and it looks like my files are on the new disk, however the reported free space is strange:

# df -h | grep md12
/dev/md12    3.7T    -923G    4.6T    -    /mnt/disk12

The web GUI reports 4.99TB available on a 4TB drive that should be almost full! Any idea what could be causing such a message? A few more data points:

fdisk -lu /dev/sdh
WARNING: GPT (GUID Partition Table) detected on '/dev/sdh'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sdh: 4000.8 GB, 4000787030016 bytes
256 heads, 63 sectors/track, 484501 cylinders, total 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
   Device Boot      Start         End      Blocks   Id  System
/dev/sdh1               1  4294967295  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.

root@babylon:~# gdisk /dev/sdh
GPT fdisk (gdisk) version 0.8.4
Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present
Found valid GPT with protective MBR; using GPT.
Command (? for help): p
Disk /dev/sdh: 7814037168 sectors, 3.6 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 40428B67-8DC3-437D-80B2-2E56F66C2778
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 7814037134
Partitions will be aligned on 64-sector boundaries
Total free space is 30 sectors (15.0 KiB)
Number  Start (sector)    End (sector)  Size       Code  Name
   1              64      7814037134   3.6 TiB    8300

mdcmd status | grep -i size
diskSize.0=3907018532 rdevSize.0=3907018532
diskSize.1=1465138552 rdevSize.1=1465138552
diskSize.2=1465138552 rdevSize.2=1465138552
diskSize.3=1465138552 rdevSize.3=1465138552
diskSize.4=1465138552 rdevSize.4=1465138552
diskSize.5=1953514552 rdevSize.5=1953514552
diskSize.6=1953514552 rdevSize.6=1953514552
diskSize.7=1953514552 rdevSize.7=1953514552
diskSize.8=1465138552 rdevSize.8=1465138552
diskSize.9=1953514552 rdevSize.9=1953514552
diskSize.10=1465138552 rdevSize.10=1465138552
diskSize.11=1953514552 rdevSize.11=1953514552
diskSize.12=3907018532 rdevSize.12=3907018532
diskSize.13=0 rdevSize.13=0
diskSize.14=0 rdevSize.14=0
diskSize.15=0 rdevSize.15=0
diskSize.16=0 rdevSize.16=0
diskSize.17=0 rdevSize.17=0
diskSize.18=0 rdevSize.18=0
diskSize.19=0 rdevSize.19=0
diskSize.20=0 rdevSize.20=0
diskSize.21=0 rdevSize.21=0
diskSize.22=0 rdevSize.22=0
diskSize.23=0 rdevSize.23=0

I found a similar post here: http://lime-technology.com/forum/index.php?topic=30923.0

I got two minor errors when running reiserfsck in maintenance mode:

root@babylon:~# reiserfsck --check /dev/md12
reiserfsck 3.6.24
Will read-only check consistency of the filesystem on /dev/md12
Will put log info to 'stdout'
Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
########### reiserfsck --check started at Wed Nov 26 18:08:19 2014 ###########
Replaying journal: Done.
Reiserfs journal '/dev/md12' in blocks [18..8211]: 0 transactions replayed
Checking internal tree.. finished
Comparing bitmaps..vpf-10640: The on-disk and the correct bitmaps differs.
Checking Semantic tree: finished
2 found corruptions can be fixed when running with --fix-fixable
########### reiserfsck finished at Wed Nov 26 22:14:37 2014 ###########

I then ran reiserfsck again with the fix option:

root@babylon:~# reiserfsck --fix-fixable /dev/md12
reiserfsck 3.6.24
Will check consistency of the filesystem on /dev/md12 and will fix what can be fixed without --rebuild-tree
Will put log info to 'stdout'
Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
########### reiserfsck --fix-fixable started at Wed Nov 26 22:40:10 2014 ###########
Replaying journal: Done.
Reiserfs journal '/dev/md12' in blocks [18..8211]: 0 transactions replayed
Checking internal tree.. finished
Comparing bitmaps..vpf-10630: The on-disk and the correct bitmaps differs. Will be fixed later.
Checking Semantic tree: finished
No corruptions found
There are on the filesystem:
    Leaves 1042052
    Internal nodes 6534
    Directories 79986
    Other files 564399
    Data block pointers 946146929 (216 of them are zero)
    Safe links 0
########### reiserfsck finished at Thu Nov 27 02:26:18 2014 ###########

Restarted the array in normal mode and the correct size was reported:

# df -h | grep md12
/dev/md12    3.7T    3.6T    113G    97%    /mnt/disk12
  25. Thanks for the advice. That particular drive is not going back into the array. Next time I expand the array I will 1) back up super.dat just to make sure, and 2) go through the SMART reports of all drives to check for possible issues. What's strange about this one is that the parity check ran fine a few weeks ago. Funny that an almost new drive from the WD Red series (dedicated to NAS) is dying so fast.