WonderfulSlipperyThing

  1. Hi, I've had my unRAID server set up with HTTPS for a while now via the settings under Settings > Identification > Management Access. It works well for the most part; however, if the internet connection goes down, the web UI is inaccessible. I don't think this is a bug - obviously things like Let's Encrypt depend on an internet connection - but I was wondering if there was a way around this for when the connection goes down. I've tried connecting using http://x.x.x.x:80, but that just redirects me to the HTTPS version. Right now the only workaround I can find is to SSH into the system, change USE_SSL from auto to no, and restart the server (roughly the steps sketched below), but that's fairly cumbersome. Does anyone have any nifty solutions to this issue, or am I stuck being dependent on an internet connection for smooth operation of the server's web UI? Thanks!
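     For reference, what I'm doing now over SSH is roughly this - the config file path and the nginx init script are assumptions based on my box and may differ between unRAID versions:

         # Turn SSL off so the UI answers on plain HTTP
         # (assumed location of the USE_SSL setting)
         sed -i 's/USE_SSL="auto"/USE_SSL="no"/' /boot/config/ident.cfg
         # Restart just the web server instead of rebooting the whole box
         /etc/rc.d/rc.nginx restart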
  2. I'm getting the same thing - hit the button and nothing happens. I just emailed the dev about it, then realised I could have just checked the forums. Out of interest, what device are you using? I'm using a Galaxy S9+.
  3. Thanks for the write-up. Presumably auto will, in the future, use turbo write if all the drives happen to be spinning already. All of my drives are currently spinning but auto is still using read/modify/write, which is why I came to this post - I didn't realise that part wasn't actually implemented yet. It's a great feature and works very well, but I'm really looking forward to getting finer control over it. If all the drives are already spun up, it would be great if it just used reconstruct write. Also, I think many people don't really care that much about write speeds unless they're actively doing something themselves, so having it enabled just for SMB writes (or similar) would also be really useful. Either way, thanks for the explanation - it makes a lot of sense, and turbo write makes a huge speed difference! (The manual toggle, as I understand it, is sketched below.)
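     For anyone who wants to force it in the meantime, my understanding pieced together from the forums - so treat the tunable values and the mdcmd path as assumptions - is:

         # Force reconstruct ("turbo") write; values are my assumption:
         # 0 = read/modify/write (what "auto" currently does), 1 = reconstruct write
         /usr/local/sbin/mdcmd set md_write_method 1
         # Switch back afterwards
         /usr/local/sbin/mdcmd set md_write_method 0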
  4. That's a good idea. I looked around and found the command (b2sum); you can specify the hash algorithm (running b2sum with no args even gives you the algorithm names to copy-paste into a shell script for loop). So I gave it a go with an 11GB file, and these are the results I got: blake2b (1 core): 93 MB/s; blake2s (1 core): 167 MB/s; blake2bp (4 cores): 315 MB/s; blake2sp (8 cores): 620 MB/s. Judging by those stats, the plugin is using standard blake2b. All of them maxed out whichever cores they were using (except blake2sp, which seemed to use around 85% of each). It would be great to have the option to use a different blake2 variant, as it clearly makes quite a large difference, at least on my system (which I believe is a fairly popular one). Of course, I'd rather not make writing files to the array quite that intensive, so I'd probably use blake2bp just so there's a little CPU wiggle-room left for other tasks - but it's always nice to have options! (The loop I used is below.)
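     This is roughly the loop I used - note it assumes the reference BLAKE2 b2sum, which takes -a to pick the algorithm (the GNU coreutils b2sum doesn't have that flag and only does blake2b), and the file path is just an example:

         FILE=/mnt/user/scratch/test11g.bin   # example 11GB test file
         for algo in blake2b blake2s blake2bp blake2sp; do
             echo "== $algo =="
             time b2sum -a "$algo" "$FILE"
         done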
  5. Interesting - probably just a quirk of my processor then. I did a quick Google search for "C2750" and "BLAKE2" and came up with this: https://github.com/minio/blake2b-simd/issues/11 Not sure if that's the blake2 implementation this plugin is using (or if the problems posted there are related to my slow speeds), but I would guess it's something to do with the comment that "there's some kind of performance penalty on Atom when executing SSE with 64-bit operands". Thanks for the benchmarks - always useful to have. So I guess for most people BLAKE2 is probably the best option, but for us Atom users it's probably best to stick with MD5.
  6. To anyone in the future interested in this: I did some very basic testing by creating a user share with a few files in it and excluding every other share, then doing a build to find out which algorithm was fastest on my processor. Interestingly, none of them seem to be multi-core optimised (and I guess the build only does one file at a time, at least when they're all on one disk) - I got 100% load on one core of my processor whichever algorithm I used. At the end, the build gives you an average speed. I ran all the tests a couple of times with different files and this is what I got: SHA1: around the 90 MB/s mark (unfortunately I can't remember the exact results); BLAKE2: 93 MB/s; MD5: 323 MB/s. So if anyone wants to install this plugin and their primary concern is speed, at least on the 8-core Atom C2750, MD5 is by far the fastest to use. I find it crazy that BLAKE2, which is supposed to be the fastest, is less than a third of the speed of MD5, but that may well just be a quirk of the C2750. (A quick way to sanity-check this outside the plugin is below.)
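     If you want a rough single-core comparison without running a full build, something like this gives a quick sanity check (the file path is just an example):

         FILE=/mnt/disk1/test/sample.bin   # example file, ideally a few GB
         for tool in md5sum sha1sum b2sum; do
             echo "== $tool =="
             # after the first pass the file may be cached in RAM,
             # which actually helps isolate CPU hashing speed from disk speed
             time "$tool" "$FILE" > /dev/null
         done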
  7. Thanks for all the input, guys. Just a quick update: I have now moved over to unRAID and I'm loving it - it's such a user-friendly OS and really is perfect as a home NAS solution. The thing I love about it is that it can be as simple or as complicated as you want it to be, and the learning curve isn't particularly steep. When I used it before I always thought it was pretty good, but it has improved so much in the past couple of years. Then yesterday I saw that 6.2 final is out and it looks fantastic (haven't updated yet, though). I love that there's now an officially supported turbo write option, as for many people the ONLY disadvantage to unRAID is the (relatively) slow write speeds; now that it can saturate gigabit, write speed isn't going to be much of an issue for most home users. Now I'm just debating whether to move some disks over to my Marvell controller so that I can spread the bandwidth a bit more equally. Currently the only disk on the Marvell controller is my SSD cache disk (which I'm not actually using as a cache, just for docker etc.). I've looked around and I see there's a thread about Marvell issues... the reports seem somewhat conflicting, as some people say a firmware update doesn't fix the issue, but maybe that's just for other boards. dmacias said that all that's needed is to update the controller and mobo firmware to the latest, so I guess I'll give that a go. I haven't spotted any issues on the SSD, but it isn't hit as hard as the array disks. Either way, I'm incredibly happy with it all, and all of my dockers run really well. Community Applications makes life so much easier, and I like that everything actually updates properly (an issue I had on FreeNAS was that many of the plugins, of which there weren't very many, had various problems with things like updates).
  8. I have a Basic license, so 6 devices. I understand why it happened; it just seemed a bit silly to me that something like an external USB drive is limited by your license. I didn't realise that USB disks were allowed to be part of the array, so that makes a little more sense, but the fact is that while the device was present it wasn't set to be part of the array or a cache drive - it was just connected to my system (and preclearing, but not doing anything array-ey). It was a bit alarming to stop the array only to find I couldn't restart it! In the end I just stopped the preclear, disconnected the USB disk, and started the array again. I imagine this is a bigger problem for people who have a 6-drive array (or a 5-drive array plus a cache disk) and then get a failing disk, as they can't preclear a replacement at all if they want their array started.
  9. This is awesome! What a great list of features to suddenly (from my perspective) arrive so shortly after I've started using unRAID again. I was hoping some kind of VLAN functionality would be added, and as I've only recently set up unRAID I hadn't really looked into the 6.2 RCs, so all of these features are a nice surprise to me. Quick question: the post says "If you are using plugins, please see this thread regarding plugin support." I see no mention of any breaking changes between 6.1 and 6.2, so I assume I'm OK to just go ahead and update straight from 6.1 (6.1.9) to 6.2 without affecting my existing setup?
  10. This looks like it'll be a great plugin and I've installed it, but I haven't set it up yet (my server is currently doing some fairly intensive stuff and I don't want to complicate matters). As this plugin has been out a little while now, I was wondering if anyone has experience of which hashing algorithm has the least performance impact on an Atom processor? I have an 8-core Atom C2750 (on a C2750D4I board) - it's not the most powerful processor ever, but it's perfect for my usage scenario. The last thing I want is for something to start writing to the array while I'm watching or transcoding something in Plex and have it interfere with that, so it's pretty important that I use the hashing algorithm with the least performance impact. I see that BLAKE2 is supposed to be the fastest of the bunch, but I also saw somewhere that you have to make sure your processor is compatible (plus sometimes things are fast on some processors but not on others). If anyone has any input on this, especially from using it on an Atom C2750, I'd really appreciate it - obviously hashing all of my storage three times to find the fastest one would take a long time when someone's probably already got the information available!
  11. Ah thanks, that's basically the answer I was looking for. So technically it's not locked to the original repository it was installed from, and the developer has the option to allow the update to use the non-beta repo. Good to know!
  12. Hi, I have an array set up (without a parity disk currently - that's fine, I'm aware of the risks etc.; it's just that the parity disk is large and takes a while to preclear, plus copying data to the array is way quicker pre-parity). It had all been up and running just fine for a while; however, yesterday I decided to attach a faulty disk via a USB dock to preclear it and force some sector reallocations before it's RMA'd. Earlier, I stopped the array, only to find that I now can't start it again because I've gone over the limit of "attached devices". I understand that the license is about how many devices are attached before the array starts, but surely it's a little over the top to prevent additional devices that aren't part of the array from being attached, especially a USB disk that I'm preclearing? Effectively this means: if you're at the device limit and you get a failing drive you want to replace, and you decide to attach a USB disk to preclear, then whatever you do DON'T STOP THE ARRAY while it's preclearing, as the disk will count as an attached device and prevent you from starting the array again until it's precleared. FYI, this is the only bad thing I've actually had to say about unRAID since installing it - I used unRAID before, until a couple of years ago, and I'd forgotten how much I loved it; very glad I came back. There have been a lot of noticeable improvements since I last used it.
  13. Thanks, I didn't realise that existed - I'll check it out. Although, looking on unRAID, the Nextcloud docker there is also in beta, so my question still applies: will it automatically update to the non-beta version when that's available, or will I have to set everything up again? If I do have to set it up again, is it as simple as deleting the docker, getting the new one, and pointing it at the same appdata config directory (roughly as sketched below)?
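     For reference, my mental model of "reuse the appdata" from the command line is below - the container/image names and volume paths are assumptions, and on unRAID you'd normally do the equivalent through the Docker tab:

         # Remove the old (beta) container; the config in appdata is untouched
         docker stop nextcloud && docker rm nextcloud
         # Recreate from the new image, pointing at the same appdata directory
         docker run -d --name nextcloud \
           -p 443:443 \
           -v /mnt/user/appdata/nextcloud:/config \
           -v /mnt/user/nextcloud:/data \
           linuxserver/nextcloud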
  14. I've just set this up because without it, Headphones is painfully slow... anyway, it completed its stuff (from what I can tell) within a couple of hours and I can access the server's web interface just fine. But Headphones is still taking absolutely ages to actually get anything out of it. Just a few questions:
      - When the web interface is available and the databases are searchable etc., is it done? Or is it still doing stuff in the background?
      - Do I have to manually enter any commands to get the database to optimise or to build a search index?
      - Is there any way to easily test API calls into it? (The sort of thing I have in mind is below.)
      - ...or is it just that Headphones is insanely slow and there's nothing anyone can do about it?
      These are the last few lines I see in the log, and there hasn't been anything since:
          Mon Sep 12 13:53:41 2016 : Creating search indexes ... (CreateSearchIndexes.sql)
          Mon Sep 12 14:09:33 2016 : Setting up replication ... (ReplicationSetup.sql)
          Mon Sep 12 14:09:33 2016 : Optimizing database ...
          VACUUM
          Mon Sep 12 14:11:09 2016 : Initialized and imported data into the database.
          Mon Sep 12 14:11:09 2016 : InitDb.pl succeeded
          INITIAL IMPORT IS COMPLETE, MOVING TO NEXT PHASE
          LOG: received fast shutdown request
          waiting for server to shut down...LOG: aborting any active transactions
          .LOG: shutting down
          .........LOG: database system is shut down
          done
          server stopped
          [cont-init.d] 30-initialise-database: exited 0.
          [cont-init.d] 40-config-redis: executing...
          [cont-init.d] 40-config-redis: exited 0.
          [cont-init.d] done.
          [services.d] starting services
          [services.d] done.
          [614] 12 Sep 14:11:20.992 # Server started, Redis version 2.8.4
          [614] 12 Sep 14:11:20.993 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
      Thanks in advance. All of your plugins for unRAID are awesome, by the way!
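     On testing API calls, the sort of thing I have in mind is hitting the web service by hand with curl - the host and port here are assumptions (whatever your container maps; I believe the web service defaults to 5000), and the query is just an example:

         # Query the local mirror's MusicBrainz web service directly (host/port assumed)
         curl "http://x.x.x.x:5000/ws/2/artist/?query=artist:radiohead&fmt=json"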