kreene1987

Members
  • Posts: 36

Everything posted by kreene1987

  1. I am having this issue on 6.10 stable on my R510. Edit: fixed an extra space in my first try and now I am up and running... phew!
  2. I watched a few SpaceInvaderOne videos last night but didn't get any closer, unfortunately. Back to the original setup per post 1. Any help or guidance is appreciated!
  3. I should add that every internal IP associated with this setup is static. Nothing associated with this issue is moving around as far as IP assignments go.
  4. Thanks in advance for the assistance. I am completely confused now about my setup. I am attempting to use SWAG to reverse proxy anything I want, and I had it set up great, but with my latest setup I've put myself into the pit of misery.

     Here is a general layout of my network for the associated devices: Google Fiber ISP (external IP) --> Google Fiber Router/DHCP (192.168.86.0/24) --> 16-port unmanaged switch --> Unraid at 192.168.86.6.

     Unraid has its DNS set to PiHole, which is set up in docker on the br0 connection with the assigned IP 192.168.86.3. That works great for traffic in/out of the house, as the Google Fiber router is set to that DNS. Also on Unraid is docker with all of my apps (Sonarr/Radarr/Jackett, etc.) on the Host network (.86.6:####), each of which is set to proxy THROUGH the delugeVPN port (delugeVPN also has the proxynet network set up, but I just use the proxy port and it works great). All apps are connected through the VPN outbound as far as I can tell. Everything works as intended/great, and each app can see the others (Sonarr to deluge and back to Sonarr, to Plex, etc.).

     Now comes the problem: I used to have a working SWAG setup via duckdns, and for the LIFE of me I can't get it working again to access all of my apps from outside of my network. I have tried multiple different networks (br0, Host, proxynet), all of which either fail or give me an error that ports 80 and 443 can't be mapped because they are already in use (not home, otherwise I would post the exact phrasing).

     So how does SWAG play into this equation? I am port forwarding external ports 80 and 443 to 180 and 1443 on .86.6 (the Unraid host) and tried Host, but I think SWAG is still trying to grab 80 and 443 despite me setting http: as 180 and https: as 1443 in the docker app settings. Perhaps there is an advanced setting I am missing? It seems like the Unraid UI and SWAG are both trying to grab .86.6:80 and 443.

     EDIT: Adding that the duckdns docker app is working flawlessly to update my public IP, and SWAG was properly set up to use those credentials, so I see no issues/log reports related to the duckdns aspect of SWAG.

     Please help with a few things:
     1) What is SWAG supposed to be set as? Should I be using a dedicated IP (br0) for it and mapping external 80 and 443 to that, or is my current setup with the Host network and mapping 180 and 1443 better?
     2) Why would SWAG not be able to grab 80 and 443? Is the Unraid UI already using those (I assume yes, just want to confirm)?
     3) Does the proxynet that delugeVPN sets up come into play here? Am I actually exposed when I don't want to be?

     Also, what the heck is Bridge mode and when would I use it? I'm trying to understand, and have read the documentation, but am not sure when I should use that network type. Thanks, Kevin
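     The "already in use" failure described in post 4 is just the operating system refusing a second bind on a port another process (here, the Unraid web UI) already holds. A minimal Python sketch of that behavior, using an unprivileged stand-in port since binding 80/443 would need root:

```python
import errno
import socket

# One socket (standing in for the Unraid web UI) grabs a port; a second
# bind on the same port (standing in for SWAG) then fails with
# EADDRINUSE, which is the "already in use" error described above.
# An unprivileged port stands in for 80/443 so this runs without root.
PORT = 18080

holder = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
holder.bind(("127.0.0.1", PORT))
holder.listen(1)

second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", PORT))
    result = "bound"
except OSError as e:
    result = "EADDRINUSE" if e.errno == errno.EADDRINUSE else f"errno {e.errno}"
finally:
    second.close()

holder.close()
print(result)  # EADDRINUSE
```

     This is why mapping SWAG's container ports 80/443 to different host ports (180/1443 in the post) avoids the conflict: the container still listens on 80/443 internally, but the host-side bind lands on ports nothing else holds.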
  5. I've thrown a recommendation in the upgrade notes that people change their ports away from 443 and 80 due to the conflict with the new Unraid SSL cert system. Perhaps that will alleviate a lot of the common issues seen above.
  6. I'd recommend adding something about LetsEncrypt setups using ports 80 and 443 now conflicting with the new Unraid SSL cert system. Most people are posting over in the nextcloud and LE docker threads, so perhaps they would see it here first and prep accordingly. Note that port 444 (or 445?) is already reserved for something else, so a different port (I used 442) must be used.
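     A quick way to pick a non-conflicting alternative port like the 442 mentioned above is simply to try binding candidates and keep the first that succeeds. A minimal sketch (the candidate list is illustrative; ports below 1024, including 442 itself, need root to bind):

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing on this host currently holds the port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

# Probe some illustrative alternatives to 443 (unprivileged stand-ins,
# since binding a port below 1024 such as 442 requires root).
for candidate in (8443, 9443, 10443):
    if port_is_free(candidate):
        print(f"port {candidate} looks free")
        break
```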
  7. Glad to see this implemented! I think it will work great.
  8. I can access my stuff through LetsEncrypt reverse proxy. Are you using that?
  9. Yes thank you so much binhex. I'll let you guys know if any further mail shows up, but for now everything seems to be working as expected. Lesson learned: USE the privoxy for your meta gathering apps!
  10. I've now incorporated the socks5 with auth proxy for all 4 levels.
  11. See responses above. Thanks for responding so quickly. I suspect the socks might be the issue?
  12. Just upgraded to 6.4.0 after 108 days of uptime. So glad I gave this a second chance! Everything is working spectacularly (minus a nastygram from ISP the other day which I've posted about in delugeVPN)!
  13. So I just got a nastygram from my ISP, and I'm wondering how in the heck it happened. I shut it down for now but plan to bring this back online. I've confirmed my IP and DNS are not leaking via ipleak, I am getting a good VPN connection per the logs, and I've confirmed via magnet link from ipleak that my external IP matches the logs. Is there anything else that I can do to test for leaks? I'm not entirely sure how it happened. Is there any way to add a killswitch or is this already a part of the build? Wanting to be as safe as possible considering the circumstances. TIA.
  14. Me neither. I will state that I am using www.domain.com/nextcloud settings, so I'm unsure if this is because I went with a slightly different approach?
  15. Phew, this one is throwing me for a loop. I port forwarded port 80 --> 81 on my IP, and now I can VPN in and get to all of my internal links and everything is working great, but the Unraid GUI connection is refused. Any reason the two would be related?
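     Router port forwarding like the 80 --> 81 mapping above is conceptually just a TCP relay: accept on one port, connect to the target port, and copy bytes in both directions. A toy sketch (ports 9090/9091 are arbitrary stand-ins for 81/80; a real router does this in its NAT layer, not in userspace):

```python
import socket
import threading
import time

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Relay bytes one way until the source side closes."""
    try:
        while True:
            chunk = src.recv(4096)
            if not chunk:
                break
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def forward(listen_port: int, target_port: int, host: str = "127.0.0.1") -> None:
    """Accept on listen_port and relay each connection to target_port,
    the way a router forwards external port 80 to internal port 81."""
    ls = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    ls.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    ls.bind((host, listen_port))
    ls.listen(5)
    while True:
        client, _ = ls.accept()
        upstream = socket.create_connection((host, target_port))
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

def echo_once(port: int) -> None:
    """A one-shot echo service standing in for the web UI on port 81."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.sendall(conn.recv(4096))
    conn.close()
    srv.close()

# Demo: "internal" service on 9090, "external" forwarded port 9091.
threading.Thread(target=echo_once, args=(9090,), daemon=True).start()
threading.Thread(target=forward, args=(9091, 9090), daemon=True).start()
time.sleep(0.3)  # let both listeners come up

with socket.create_connection(("127.0.0.1", 9091)) as c:
    c.sendall(b"hello")
    print(c.recv(4096).decode())  # hello (relayed 9091 -> 9090)
```

     Note the relay only touches the port it listens on; moving the external mapping from 80 to 81 should not by itself affect a service bound elsewhere, which is why the GUI refusal above is surprising.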
  16. I hope someone sees this soon, as I basically live out of nextcloud (calendar, documents, etc.) and this has brought things to a screeching halt without VPN'ing into my network! Is there a way to roll back to a previous revision?
  17. Mine is dead this morning as well. Same error log as above with different webserver address (clearly).
  18. He doesn't have the USB, though, with the original drive mappings using the identifier. Unfortunately, with single parity you can only be missing ONE of the following: 1) the USB drive for Unraid, 2) the parity disk, 3) an array disk. I believe you are going to have to try to rebuild the lost disk using the "swamp" drives and "swamp" parity; then that disk could be directly implemented in your new installation. Better than starting from scratch at least!
  19. Dolphin and/or a Windows 10 VM with root share access has been my best document management solution. Keeping the files all within the box (not to outside comp and back) is key. I peg my WD external at 110-125 MB/s in large transfers with either solution. Thanks for the review!
  20. I am getting a notification that 12.0.2 is ready for installation. Is this something where I should wait for the docker to be updated, or is it OK to update within the software? Usually I like to hold for a docker update.
  21. My i5-3570K onboard graphics isn't showing up as a Graphics Card option in 6.3.5. Do I need to poke around in the BIOS, or reboot? 6.3.X should have this feature, correct? This is a big deal for me as I don't have VT-d to allow graphics card passthrough. Or maybe that is the issue: do I need VT-d in order to use this feature? Confirmed 00:02.0 Xeon E3-1200 2/3rd gen core processor graphics controller. Otherwise everything is working great with remote desktop, etc. Also, the wiki needs to be updated to reflect the ability to assign onboard graphics with 6.3 (at least in theory?): https://wiki.lime-technology.com/UnRAID_Manual_6#Using_Virtual_Machines
  22. As someone recently and now fully committed to Unraid, but still generally aware of what other options exist (most of the time), I was reading into WSS in WS12, looked at some of the new features, and wondered if they had ever been considered for the Unraid roadmap:

     • Storage tiers (new): Automatically moves frequently accessed data to faster (solid-state drive) storage and infrequently accessed data to slower (hard disk) storage.
     • Write-back cache (new): Buffers small random writes to solid-state drives, reducing the latency of writes.
     • Parity space support for failover clusters (new): Enables you to create parity spaces on failover clusters.
     • Dual parity (new): Stores two copies of the parity information on a parity space, which helps protect you from two simultaneous physical disk failures and optimizes storage efficiency.
     • Automatically rebuild storage spaces from storage pool free space (new): Decreases how long it takes to rebuild a storage space after a physical disk failure by using spare capacity in the pool instead of a single hot spare.

     Just wondering your thoughts. Personally, the last item (pool space rebuild) and storage tiers based on data usage sound extremely enticing. I already have an (albeit manual) cache-only setup for my faster needs, and dual parity is already done (and is amazing). Just thinking about automating some of the data manipulation for speed and safety got me thinking... Thoughts? Original webpage: https://technet.microsoft.com/en-us/library/dn387076(v=ws.11).aspx
  23. Also, if an admin wants to change the title to "BACK after giving up, couldn't be happier", that would be great.