About extrobe

  1. No, I do have problems - that's my point. A0 + A1 are not being recognised, regardless of what config I use. Following the recommended layout for slots does not help. (If I only had 1 DIMM, then sure, I'd be fine - but I have 6, and want to have 8.) The 1 DIMM of ECC I've bought is merely for troubleshooting. Whilst there is indeed a recommended slot, it should work in any slot. Of course, ideally I'd buy 6/8 ECC sticks, but I have to draw the line somewhere.
  2. Thanks Kevin. D1 has been OK, but the single DIMM I've bought is really just a test. If it works in all slots, then I'll replace all the non-ECC with ECC. If I still have issues, I'll have to reach out to Asus, as I'm not sure what else I can try.
  3. Could someone perhaps help me understand this bit too? (Top bullet point.) Is it suggesting I'm better off using the same size DIMMs within the same channel? I have 4x4GB + 2x8GB. Should I aim to ensure I have matching sizes in, say, B0+B1, etc.?
  4. Just in case anyone else is following/interested... Despite having a new board, different RAM, and removing all additional hardware, I was still having issues. I've now even installed a different CPU - a Xeon E5-2658 v3 (ES) - but this is even worse, with more slots unrecognised, though it does vary depending on what slot configuration I use. (E.g., if I put a stick in A0/A1, then not only would A0/A1 not register, but that seemed to disable other slots too.) This got me thinking: assuming it's not another faulty board, everything has been swapped with another part, and the issue has remained. The only constant through all this is that I've been using non-ECC RAM with the Xeon processor. My understanding is that this should not be an issue, but the mobo manual does state to use non-ECC for i7 & ECC for E5. So I've ordered a single 4GB stick of ECC RAM to see if it shows up in the slot. I'll report back once I've tried it, but is it plausible that the mobo won't play nicely with non-ECC RAM paired with an E5 processor?
  5. There's no existing docker in the CA store for R / R Studio, but there are generic Dockers on Docker.com. I went through the process of getting this working with unRaid and wanted to share how I achieved it. **This is a work in progress!** I'll confess to not really knowing what I'm doing, so some bits may not be quite how they should/could be, but this should get you to a working R Studio environment. There are a few bits I either think I need or know I want to change - I'll outline these as well.

     Prerequisites:
     - Already have the Community Applications plugin installed

     Outstanding:
     - Create a pre-defined XML file (including web GUI link & icon) - web GUI link now added; icon URL now added
     - HTTPS proxy via letsencrypt/nginx - *turns out HTTPS is an R Studio Pro feature only
     - Understanding whether the package installation directory should be / needs to be a user-defined directory - some progress made on this

     Outline of the process:
     1. Enable Community Applications to search Docker.com
     2. Create the share/directory
     3. Find the right R Studio Docker
     4. Configure the docker template
     5. Access the web UI

     Detailed process

     1. Enable Community Applications to search Docker.com
        - From unRaid, select the Settings tab
        - Under the Community Applications banner, select General Settings
        - Near the bottom, find the entry 'Enable additional search results from dockerHub?'
        - Change to 'Yes', then Apply. Done.

     2. Create the share/directory
        Create a share or directory dedicated to your R files/projects. If you have an SSD cache drive, you may want to utilise it, so I would suggest a dedicated share set to use the cache drive only. I used a dedicated share, named 'r'.

     3. Find the right R Studio Docker
        There are a variety of R Studio Dockers available. The best ones appear to link back to the Rocker Project. Within the Rocker Project there are various dockers, but I'm using 'tidyverse', which includes the base R code, R Studio, and a good selection of the most popular R libraries already added to the docker.
        - From the Apps tab, search 'tidyverse' (you should get no results back)
        - Select 'Get more results from DockerHub'
        - Locate the docker 'tidyverse' from the author 'rocker'
        - Select 'add'

     4. Configure the container
        You should now be at the container configuration screen. We need to manually map the ports & paths to the container.
        - Give your docker a name (if you wish)
        - Select 'Add another Path, Port or Variable':
          Config Type: Port | Name: HTTP Port | Container Port: 8787 | Host Port: 8787 | Connection Type: TCP. Add.
        - Repeat the above step to add another port:
          Config Type: Port | Name: Shiny Port | Container Port: 3838 | Host Port: 3838 | Connection Type: TCP. Add.
        - Add another item, but this time a path - this will hold your user content and workspace session files:
          Config Type: Path | Name: Workspace | Container Path: /home/rstudio | Host Path: /mnt/user/r/ (or whatever share/directory you want to use). Add.
        - Add another path item - we will use this space as an install directory for extra libraries you add:
          Config Type: Path | Name: Custom Library Install Path | Container Path: /usr/local/lib/R/custom-library | Host Path: /mnt/user/appdata/rstudio
        - At the top-right of the docker add screen there's a toggle that says 'Basic View'. Click this to go to Advanced View.
        - Where it says 'WebUI', enter http://[IP]:[PORT:8787]/
        - Where it says 'Icon URL', enter https://cdn.rawgit.com/extrobe/un-r/c7b98d12499aef04180a6bd4f18c77dd2c1155bb/rstudio-icon-s.png
        - Apply. This should now pull down and install the docker.

     5. Access the web UI
        - Browse to http://<yourip>:8787
        - Log in using rstudio/rstudio
        - Run the R command: .libPaths( c( "/usr/local/lib/R/custom-library", .libPaths()) )
          This adds your custom folder to the list of library directories and sets it as the default. You may want/need to keep this as part of your regular R code to ensure the additional libraries are always available.

        There you have it. Your workspace files should all save into your /mnt/user/r share.

     I assume the directory where the package files get saved/installed should be a mapped directory as well. I tried setting this by mapping /usr/local/lib/R to the appdata folder, but this broke it. I'll have another look at it at a later date! That said, I've installed a couple of extra packages, restarted the server, and so far they've been persistent.

     The underlying Dockerfile does not appear to provide support for installing the config/library files to a custom location. Therefore, any packages we add and any settings we change will not be persistent (they won't always get overwritten, but it can/does happen, e.g. with a new docker image being released). We get around the library/package install location by adding a custom library directory.
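For anyone testing outside the unRaid template, the port and path mappings from step 4 can be sketched as a plain `docker run` command. This is only a sketch of what the template generates: the container name `rstudio` is my own arbitrary choice, and the host paths are the ones used in the guide - substitute your own shares.

```shell
# Build the `docker run` equivalent of the template in step 4:
# two ports (8787 = R Studio, 3838 = Shiny) and the two path mappings.
# 'rstudio' as a container name is an assumption, not part of the template.
cmd="docker run -d --name rstudio \
  -p 8787:8787 -p 3838:3838 \
  -v /mnt/user/r/:/home/rstudio \
  -v /mnt/user/appdata/rstudio:/usr/local/lib/R/custom-library \
  rocker/tidyverse"

# Print the command for inspection rather than running it blind.
echo "$cmd"
```

Once you're happy with the printed command, execute it with `eval "$cmd"` (or just paste it into a terminal on the host).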
  6. Installing non-CA docker images (R/R-Studio)

    Well, it's taken a bit of trial & error, but I appear to have it working! There are a few bits I'd like to fix/change, but I'm 95% there, I reckon. Would there be merit in me creating a new thread as a bit of a tutorial for other users? It's still work in progress to an extent, but I'm sure I won't be the last one to want to do this. Edit: Created a guide on how I got it working for anyone else interested https://forums.lime-technology.com/topic/57600-guide-installing-r-r-studio-as-a-docker/
  7. Installing non-CA docker images (R/R-Studio)

    Just the ticket, thanks! Now to get it working!
  8. Hi, sorry for what could well be a really dumb question... I was looking to see if there was a ready-made docker for R / R Studio, to save me messing around getting a full VM going. There was nothing in CA, but I came across the Rocker Project on docker.com / github. Are these compatible with unRaid - and if so, what's the process to install them? (I did try adding the github repository to the docker template repositories list, but this didn't seem to do anything.) Is this possible, or am I chasing a dead end? Thanks!
  9. [Support] Linuxserver.io - Nextcloud

    Wasn't particularly suggesting it as a workaround - just trying to offer some perspective that whatever caused the issue is reversible, as it sorted itself out for me without having to roll back to an earlier version. (PS: I'd have also updated the config.php file to add the new IP address to the trusted domains. Those two tweaks are the only things I changed in either docker.)
  10. [Support] Linuxserver.io - Nextcloud

    For what it's worth, I was having the same issue. Coincidentally, I was going through the process of changing the IP address (and Subnet) for the server, and updating the dockers etc accordingly. After I updated the NGINX conf file for nextcloud and restarted the letsencrypt docker, everything started working again - and this has continued to be the case since updating both the LE & NC dockers as well. Edit: I also updated the config.php file in nextcloud to add the new IP to the trusted domains list
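    For anyone hunting for the exact setting: the trusted-domains change mentioned above lives in an array in Nextcloud's config.php. A sketch only - the IP 192.168.1.50 and hostname oc.mydomain.com are placeholders, and the file path is the one the linuxserver.io docker uses; substitute your own values.

    ```php
    // /config/www/nextcloud/config/config.php (path as used by the linuxserver.io docker)
    // 192.168.1.50 and oc.mydomain.com are placeholders - use your own values.
    'trusted_domains' =>
      array (
        0 => '192.168.1.50',    // new internal IP after the subnet change
        1 => 'oc.mydomain.com', // external hostname proxied via letsencrypt
      ),
    ```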
  11. [Support] binhex - SABnzbd

    Funnily enough, I've just upgraded too, and I'm also having issues getting it working again. Slightly different errors from yours, mind you - mine appears to start fine, but when I log in I get:

    [24/May/2017:22:32:04] ENGINE Error in HTTPServer.tick
    Traceback (most recent call last):
      File "/opt/sabnzbd/cherrypy/wsgiserver/__init__.py", line 2024, in start
        self.tick()
      File "/opt/sabnzbd/cherrypy/wsgiserver/__init__.py", line 2091, in tick
        s, ssl_env = self.ssl_adapter.wrap(s)
      File "/opt/sabnzbd/cherrypy/wsgiserver/ssl_builtin.py", line 67, in wrap
        server_side=True)
      File "/usr/lib/python2.7/ssl.py", line 363, in wrap_socket
        _context=self)
      File "/usr/lib/python2.7/ssl.py", line 611, in __init__
        self.do_handshake()
      File "/usr/lib/python2.7/ssl.py", line 840, in do_handshake
        self._sslobj.do_handshake()
    error: [Errno 0] Error

    If I go into config, it's set to listen on port 8080 - even though it's actually on port 8085 - and changing this back to 8085 doesn't fix it either. I also tried unticking HTTPS (as I don't use it), but this (a) didn't change anything, and (b) didn't seem to save the setting anyway. If I look at the logs, it's the same errors as above - no other errors, but various references to starting up the server on port 8090. On my config page (within SABnzbd), it states the parameters are: /opt/sabnzbd/SABnzbd.py --config-file /config --server --https 8090
  12. [Support] Linuxserver.io - Nextcloud

    Worked a charm, thanks (once I'd unpicked all the changes I'd made from the previous attempts!)
  13. [Support] Linuxserver.io - Nextcloud

    Just going through that now - it was my original preference to use dedicatedsubdomain.mydomain.co.uk - only all the guides I found seemed to use this /nextcloud method. Is there any reason I should go back and redo the original mariadb & NC setup? It works fine in itself - I think it was your tutorial on linuxserver.io I followed in the first place (I just didn't go as far as getting apache working at the time).
  14. [Support] Linuxserver.io - Nextcloud

    Hi, after a little bit of help on the config side - but happy to go to a more specific NC group if I'm better off asking there...

    I've been trying to get SSL / HTTPS access working, so I can access via oc.mydomain.com/nextcloud. I'm using the letsencrypt docker, and have been broadly following this walkthrough. As it stands, I've got HTTPS working, and I can browse to https://oc.mydomain.com/nextcloud and it connects fine. However, the desktop client now can't connect - on either the original internal address or the new HTTPS address. The only address I can get to work in the sync client is

    The error suggests it's looking for the folder/file /owncloud/status.php. What is a little odd is that, whilst I understand that NC is a fork of OC, I can't see that owncloud folder myself anywhere.

    My nginx config (from the letsencrypt docker) is:

    location /nextcloud {
        include /config/nginx/proxy.conf;
        proxy_pass;
    }

    and in nextcloud\nginx\site-confs\default, I changed the install location (as per the walkthrough):

    # Path to the root of your installation
    #root /config/www/nextcloud/;
    root /config/www;

    I also updated the config.php file to add the below, but it didn't seem to impact anything before or after I changed it (I'd already added oc.mydomain.com as a trusted domain):

    'trusted_proxies' => [''],
    'overwritewebroot' => '/nextcloud',
    'overwrite.cli.url' => '/nextcloud',
    #'overwrite.cli.url' => '',

    Have I missed a step out somewhere? I also tried this - which didn't make any difference either way (I could still access via web browser fine):

    location /nextcloud {
        include /config/nginx/proxy.conf;
        proxy_pass;
    }

    My hunch is that I need to change the value of 'overwritewebroot', but I've tried a few options here to no positive effect.
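    Since the proxy_pass targets didn't survive the copy above, here's a sketch of what a working subpath location block for this setup typically looks like. The upstream address 192.168.1.50:444 is a placeholder for wherever your nextcloud container actually listens - it is not from the original post.

    ```nginx
    # letsencrypt docker: /config/nginx/site-confs/default
    # 192.168.1.50:444 is a placeholder upstream - use your NC container's address/port.
    location /nextcloud {
        include /config/nginx/proxy.conf;
        # No trailing slash on proxy_pass: the /nextcloud prefix is passed
        # through to the backend unchanged, matching 'overwritewebroot'.
        proxy_pass https://192.168.1.50:444;
    }
    ```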
  15. I finally got around to giving this a go. Pulled everything from the system except the RAM. Still can't utilise the 2 slots. Given the board is a new replacement board, and the RAM itself I know to work fine (and I have also tried known-good RAM from another system), the only common part is now the CPU. Either that, or I've just been very unlucky and received two duff motherboards. Edit: The only other point of info I can think of to mention... The existence of the slots themselves is recognised. If I select 'Info' --> 'More' from unRaid, it lists all 8 slots (with A0 & A1 marked as DIMM_A1 & DIMM_A2), but nothing is seen as plugged into them.
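For anyone else chasing unrecognised slots: the same slot/size listing unRaid shows under Info --> More can be pulled on any Linux box with dmidecode. A small sketch that parses a canned sample so it runs anywhere; on real hardware you would pipe `sudo dmidecode -t memory` into the awk filter instead. The slot names and sizes in the sample are hypothetical.

```shell
# Sample `dmidecode -t memory` output (hypothetical slot names/sizes);
# on a live system, replace the sample with:  sudo dmidecode -t memory
sample='Memory Device
    Locator: DIMM_A1
    Size: No Module Installed
Memory Device
    Locator: DIMM_B1
    Size: 4096 MB'

# List each slot and whether it is populated.
printf '%s\n' "$sample" | awk -F': ' '
  /Locator:/ { slot = $2 }
  /Size:/    { print slot, ($2 == "No Module Installed" ? "empty" : $2) }'
# prints:
#   DIMM_A1 empty
#   DIMM_B1 4096 MB
```

A slot that the BIOS reports as present but empty shows up as "No Module Installed" even with a stick seated, which matches what unRaid's Info page is showing here.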

Copyright © 2005-2017 Lime Technology, Inc. unRAID® is a registered trademark of Lime Technology, Inc.