Forusim

Members
  • Posts: 73
  • Joined
  • Last visited

Converted
  • Gender: Undisclosed



Reputation: 4

  1. I thought it to be a more general issue. However, after disabling Docker and the VMs, I was able to set a static IP, and that seems to have solved the issue.
  2. Hello, I used the WireGuard plugin for VPN-tunneled access to a commercial provider over my onboard 1 Gbit NIC without issues. The 1 Gbit onboard NIC is connected to my router and through it to the rest of my home network. The other day I installed an additional 5 Gbit PCIe card to connect directly to my main computer, which also has a 5 Gbit PCIe card. This should speed up the big (100 GB) file transfers I periodically need to do. Since not all systems need 5 Gbit, I do not want to buy a 5 Gbit switch just for this single use case. However, as soon as I plug the ethernet cable into the 5 Gbit PCIe card, the active tunnel stops handshaking and loses the connection. When I remove the cable, the handshake resumes and the VPN tunnel reconnects. I have not found any setting in the Unraid GUI for choosing which interface is used. How can I configure WireGuard and the other services to use the onboard NIC? (One possible routing fix is sketched after this list.)
  3. @primeval_god Thanks for the clarification. Non-intuitive is an understatement, to say the least. Maybe there is a good reason why it works the way it does, but I find my use case reasonable. I have now solved it in entrypoint.sh, as you suggested (see the volume-seeding sketch after this list).
  4. From the linked reference I understood that you can "put" files into the volume at build time via the Dockerfile. Somehow this only works at runtime, but then I do not see the point of the volume declaration in the Dockerfile, since I can map it directly on docker run. However, I want to have all required files routed to one location and to map only this one /config location in the Unraid GUI.
  5. Hello, I apologize if my question is too basic, but I was not able to figure out what I am doing wrong. I am running Unraid 6.9.2 and have mostly been a Docker user, not a developer. I want to create a custom Docker image which pulls some Python apps from git (not my repo) and runs them there. This Python app has some hardcoded config/log paths, which I would like to route via a volume to a persistent store on /mnt/cache/appdata/my-docker. I tried it via a Dockerfile, like it is described here, but the created files are not initially in the volume when I start the Docker container. I made a sample project to show my issue.
     Dockerfile:
       FROM alpine
       RUN apk add --no-cache bash
       RUN mkdir /config
       RUN echo "test" > /config/config.yaml
       VOLUME /config
       COPY entrypoint.sh /entrypoint.sh
       ENTRYPOINT ["bash", "/entrypoint.sh"]
     entrypoint.sh:
       #!/bin/bash
       echo "Run in loop"
       while true; do sleep 30; done;
     [screenshot: Docker config in Unraid]
     From the above I would expect the "config.yaml" file to be inside /mnt/cache/appdata/test on the first start of the container. However, the file is not created in the container's /config and therefore is not inside /mnt/cache/appdata/test. When I execute the command 'echo "test" > /config/config.yaml' in the running container, the file shows up in /mnt/cache/appdata/test. Do I misunderstand how the volume is supposed to work?
  6. +1 for writing Chia plots to a second array without parity. Initially I had the idea to make a "no cache" share on multiple disks with "most free" allocation. But writing the 101 GB plot to the parity-protected array is slowed down by 50-70%, which takes too much time in parallel plotting. So the only reasonable way of plotting is against unassigned devices and rotating the destination disks in the plotting script (a rotation sketch follows this list).
  7. Are there any benefits to migrating from a btrfs docker.img to a directory? Will it use less space, because a fixed 20 GB allocation is no longer required? Is it possible to transfer all the data in the existing docker.img to the docker directory?
  8. Somehow I messed up the device names in UD with my formatting attempts. The disk which was initially Dev 1 - ST8000AS0002-1NA17Z_Z840E2KD (sdh) is now only sdj. This would not bother me, but now I cannot spin down the disk anymore (the green dot is not clickable for this disk). And when I click on the sdj link for attributes and SMART info, it only displays "Can not read attributes" sections. How can I restore the initial behaviour?
  9. As of Unraid 6.7 or 6.8, XFS is formatted with reflink=1, which uses a lot of space (e.g. 69 GB on a 10 TB disk). I found this feature is not worth the space and keep formatting my new disks with a custom approach, as described in the linked thread. Since I have run out of SATA ports, I would like to add additional disks via USB & UD. I was hoping to format them with the same command as I used for the array:
       mkfs.xfs -m crc=1,finobt=1,reflink=0 -l su=4096 -s size=4096 -f /dev/mdX
     However, UD refuses to mount such a disk and only offers to format it. Is there a way to trick UD into accepting my disk (with reflink=0), like it does work in Unraid 6.8 and newer? Edit: Solved it myself. I formatted the disk with XFS in UD and then ran the command on the first partition:
       mkfs.xfs -m crc=1,finobt=1,reflink=0 -l su=4096 -s size=4096 -f /dev/sdX1
     It took the dashboard some mounts and unmounts to recognise the reduced used space. (A quick way to verify the reflink flag is sketched after this list.)
  10. You may try the netdata docker. Which docker are you using for Chia? Does it forward the GUI to an outside port, or are you operating via console only?
  11. I would like to thank everybody who helped me in this thread. Cooler Master accepted my RMA and sent me a new PSU, unfortunately a lower-tier MWE 550 Gold V2. From my research this one has quite bad reviews, so I decided not to risk my other components. I purchased a new PSU - a Seasonic Focus PX 550W, which is now quietly powering my NAS.
  12. My PSU has a single 12V rail and is semi-modular with 3x3 SATA connectors. I have used 2x3 for my 6 disks for about a year now without issues. I also tested with 3x2 (3 cables), but the results are the same: if I read-test with dd on more than 4 disks, the system crashes and reboots. It seems that the PSU dies exactly before its 5-year warranty ends (purchased 02/10/2016). I hope that Cooler Master will accept the RMA.
  13. I did some stress tests on my system: Mprime ran in hard test mode for half an hour - no issues. Diskspeed.sh ran without issues for all disks - each disk sequentially. Then I tried to simulate parity-check load with parallel calls of dd:
        (dd if=/dev/sdd of=/dev/null bs=1G count=1 iflag=nocache) &
        (dd if=/dev/sde of=/dev/null bs=1G count=1 iflag=nocache) &
        (dd if=/dev/sdf of=/dev/null bs=1G count=1 iflag=nocache) &
        (dd if=/dev/sdg of=/dev/null bs=1G count=1 iflag=nocache)
      4 out of 6 disks can run in parallel; when I add more -> crash + reboot. I will check all the cabling of my disks now. Any recommendations on how to test whether a PSU is failing? (A parameterized version of this test is sketched after this list.)
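
Sketch for item 2 - pinning the WireGuard endpoint to the onboard NIC. This is a minimal, untested sketch: the gateway address, interface name and endpoint IP below are placeholders, and on Unraid the onboard interface is typically br0 when bridging is enabled. A host route for the provider's endpoint forces the tunnel's outer packets through the onboard NIC even while the 5 Gbit link is up:

  # Placeholder values - adjust to your network.
  GATEWAY=192.168.1.1       # router reachable via the onboard NIC
  ENDPOINT=203.0.113.10     # WireGuard endpoint of the VPN provider
  # Route only the tunnel's outer traffic via the onboard interface.
  ip route add ${ENDPOINT}/32 via ${GATEWAY} dev br0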
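Sketch for items 3-5 - seeding a bind-mounted /config at first start. Files written to /config at build time live only in the image layer; Docker copies them into a freshly created *named* volume, but a host-path bind mount (which is what Unraid's /config mapping creates) simply hides them. One common workaround, assuming a hypothetical /defaults staging directory baked into the image, is to copy the defaults over in the entrypoint:

  Dockerfile change - bake the defaults outside the volume path:
    RUN mkdir /defaults && echo "test" > /defaults/config.yaml

  entrypoint.sh:
    #!/bin/bash
    # Copy pristine defaults into /config only when the host
    # directory is still empty (first container start).
    if [ ! -f /config/config.yaml ]; then
        cp -a /defaults/. /config/
    fi
    echo "Run in loop"
    while true; do sleep 30; done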
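Sketch for item 6 - rotating plot destinations across unassigned devices. The mount points and source path below are hypothetical; the idea is just to cycle finished plots round-robin over the UD disks:

  #!/bin/bash
  # Distribute finished Chia plots round-robin over UD disks.
  DESTS=(/mnt/disks/plot1 /mnt/disks/plot2 /mnt/disks/plot3)
  i=0
  for plot in /mnt/cache/plots/*.plot; do
      dest=${DESTS[$((i % ${#DESTS[@]}))]}
      mv "$plot" "$dest/"
      i=$((i + 1))
  done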
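Sketch for item 9 - verifying the reflink flag after formatting. xfs_info prints the filesystem geometry; a recent xfsprogs also accepts an unmounted device, otherwise pass the mount point instead:

  # Show only the reflink setting of the partition.
  xfs_info /dev/sdX1 | grep -o 'reflink=[01]'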
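Sketch for item 13 - the same parallel dd load, parameterized so the number of simultaneously read disks can be stepped up until the crash point is found. The device list is taken from the post plus two hypothetical entries; adjust it to your system:

  #!/bin/bash
  # Read 1 GiB from the first N disks in parallel to approximate
  # parity-check load. Usage: ./ddtest.sh 5
  DISKS=(/dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi)
  N=${1:-4}
  for dev in "${DISKS[@]:0:$N}"; do
      dd if="$dev" of=/dev/null bs=1G count=1 iflag=nocache &
  done
  wait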