bmfrosty

Members
  • Posts

    174
  • Joined

  • Last visited

Converted

  • Gender
    Undisclosed

bmfrosty's Achievements

Apprentice (3/14)

1 Reputation

  1. I recently replaced most of my drives and am in the process of migrating data around so I can reformat them. I've noticed that rebuild speeds were slow, and copying data with unbalance currently runs at about 73MB/s. I'm using on-board SATA ports, and I was wondering if that may be why I'm getting such bad speeds. What are the usual culprits? (A quick per-drive speed check is sketched after this list.)
  2. Just hit this last night and it kept me up late thinking about it. 5 new drives for a ZFS array. I'm going to have to rethink my plan. I had no expectation that this would be a restriction. I guess every share that's not getting incoming writes goes on the ZFS pool for now.
  3. Did a preclear on the new drives for burn-in over the past 6 days. Added a new pool with 5 devices set to raidz2, compression off, autotrim on, enable user share assignment yes, and started my array. I then went to the Main tab, Array section, checked the format checkbox with the 5 drives showing, acknowledged the warning, and hit Format. It starts formatting, then goes back to Started. The pool shows a bad device. Am I doing something wrong here?

     edit: ooh

     `Nov 14 20:54:57 maxi root: wipefs: /dev/sdb: failed to erase dos magic string at offset 0x000001fe: Operation not permitted`

     ```
     root@maxi:/dev# for i in sdb sdc sdd sde sdf ; do dd if=/dev/zero of=/dev/${i}1 bs=4096 count=4 ; done
     dd: error writing '/dev/sdb1': Operation not permitted
     1+0 records in
     0+0 records out
     0 bytes copied, 9.1293e-05 s, 0.0 kB/s
     dd: error writing '/dev/sdc1': Operation not permitted
     1+0 records in
     0+0 records out
     0 bytes copied, 3.7661e-05 s, 0.0 kB/s
     dd: error writing '/dev/sdd1': Operation not permitted
     1+0 records in
     0+0 records out
     0 bytes copied, 3.8583e-05 s, 0.0 kB/s
     dd: error writing '/dev/sde1': Operation not permitted
     1+0 records in
     0+0 records out
     0 bytes copied, 9.7745e-05 s, 0.0 kB/s
     dd: error writing '/dev/sdf1': Operation not permitted
     1+0 records in
     0+0 records out
     0 bytes copied, 3.5366e-05 s, 0.0 kB/s
     root@maxi:/dev#
     ```

     Same for the raw devices. I need to come back to this in the morning.

     Solved. Had to clear it with the preclear script.
  4. I've been (fairly passively) using unraid for about a decade, and my oldest drives (3 of the 5 in my array) have hit the 5 year mark, so I'm thinking it's time to replace them. In the process, I'm considering moving to a ZFS pool. My current setup is a 2-drive SSD cache using BTRFS and a 5-drive XFS array with 1 parity drive. My goals are to increase storage, improve failure tolerance, and reduce how often I have to use unbalance to shift files around to make space. My current thought is to build a 5-drive RAIDZ2 ZFS pool to take the place of my array (mostly), which should bring usable space in the pool to ~56TB versus the ~44TB usable in my array (rough capacity math is sketched after this list). I'll keep the two newest disks as a smaller array and keep frequently accessed (usually newer) files there; the ZFS pool will hold lighter-touch files. I guess my questions are: Does what I'm thinking about here make sense? When not actively accessed for a time, will the drives in the ZFS pool spin down? What might I not have thought about yet that I may need to think about?
  5. Just posting because I had been having an issue where privoxy was very slow. It didn't matter much, since I only used it to log into a site or two, and qbittorrent worked well. That changed about 3 or 4 days ago, and I couldn't get the RSS feed entries working on the one site I log into with qbittorrent. Turns out my list of DNS resolvers had too many bad entries. I switched to the Cloudflare resolvers and suddenly everything is fast and RSS works. Go figure. (A sketch of the resolver change is after this list.)
  6. I switched from a Vega 56 to a 2060 today and I appear to be having better luck. I didn't think that I'd have that on a linux system with nvidia. I need some DX10/DX11 games to test that are known to work. Anyone want to throw out a couple of suggestions?
  7. A bunch of things are now working. The container definitely works, I can VNC in, and I can play DX9 games. I was able to launch Saints Row IV and Portal 2. What's not working is anything past DX9. I'm running a Vega 56, which does have DX12 support, so I'm expecting it to be a driver problem. I've tried GTAV and Darksiders (Warmastered Edition); GTAV complains that my hardware doesn't support DX10/11, and Darksiders complains that the hardware doesn't support DX10. I'm not sure how to troubleshoot this. Older games *tend* to work as long as I'm running them through Proton and not relying on the Linux version. Can anyone point me in the right direction? (A quick Vulkan sanity check is sketched after this list.)
  8. Just got it working on my current motherboard (from 2014!). Turns out my motherboard doesn't support PCIe bifurcation, so I had to use two cards in two different PCIe slots. I like the idea that btrfs will let me freely switch between jbod and raid1 as long as I have free space. I'm doing that to download a couple of very large things, and then I plan to turn it back to raid1, assuming I can manage what I think it does (a rough sketch of the conversion commands is after this list). I'm being very careful about taking backups before I do operations like that, though.

     I think the bridge is going to allow me to do some fancy things. 2x1TB is great for now, but I'd like to have raid1 and 2TB of usable space some time in the future, with the option to double it using raid5 while still keeping the protection. For right now, this is a big improvement over the 600GB SSD that I've had in there since about when I built the thing in 2013 originally. The 2014 motherboard didn't get installed until 2020, IIRC.

     Basically I need to understand which lanes are in use and which ones aren't. I think the 5900X has 24 lanes, 4 of which feed the bridge that in turn provides 16 more lanes (all of which have to funnel through those 4, but that's alright for this application, I think), but I need to look at some docs and figure out how many m.2 drives I can effectively have while still getting enough SATA ports on the board. Fun, huh? A quick read shows that I can get one m.2 M-key slot on the motherboard directly off the CPU and one off the X570. There's also an x16 (really x4) PCIe slot off the X570 that I can get a third m.2 slot out of, and an x1 slot that I can get a couple of SATA ports off of as well, I think. If I'm keeping my GPU (I am, for steam-headless), then that's about as far as I can go on this motherboard and processor, which is just fine by me.
  9. I'm heading very much in this direction. I'm just questioning whether I can get two cache m.2 drives in. Still doing the research, but thinking that I may buy the 5900X while it's still on sale even if I end up with a different motherboard.
  10. Got everything working yesterday. I had to change a kernel parameter and resolve a conflict between this container and a VM (I didn't really fix it; I reinstalled the chart, which made the problem go away). I updated this AM, but now in noVNC I get a popup that I didn't have yesterday: "noVNC encountered an error: The play() request was interrupted because the media was removed from the document. https://goo.gl/LdLk22" I saw this occasionally before when I was testing VNC access to my container, so I don't know if it's a regression here or elsewhere.
  11. No luck on my side, but I'm on 6.9.2. I'll upgrade to 6.10 tomorrow. No luck there either, but I found a problem of some sort in dmesg (a quick way to identify the devices involved is sketched after this list):

      [26167.409463] pcieport 0000:00:03.0: AER: Uncorrected (Non-Fatal) error received: 0000:02:00.0
      [26167.409473] pcieport 0000:02:00.0: PCIe Bus Error: severity=Uncorrected (Non-Fatal), type=Transaction Layer, (Requester ID)
      [26167.409476] pcieport 0000:02:00.0: device [1022:1471] error status/mask=00100000/00000000
      [26167.409478] pcieport 0000:02:00.0: [20] UnsupReq (First)
      [26167.409480] pcieport 0000:02:00.0: AER: TLP Header: 34000000 03000010 00000000 84288428
      [26167.409489] [drm] PCI error: detected callback, state(1)!!
      [26167.409503] pci 0000:03:00.1: AER: can't recover (no error_detected callback)
      [26167.409511] pcieport 0000:02:00.0: AER: device recovery failed

      Repeats ~1000 times per second.
  12. I found that the 4.3.x versions were memory hogs in docker. The move to 4.4.x reduced memory usage significantly.
  13. On the RSS feed issue: it's from a recent change to QT6. Info on it: https://github.com/qbittorrent/qBittorrent/issues/16879 EDIT: The easy fix is to change your repo to binhex/arch-qbittorrentvpn:4.4.2-1-01. Assuming it's fixed in a few weeks, you can change it back to binhex/arch-qbittorrentvpn.
  14. Great. Makes things easier for me. Very good to know. I should be done during the weekend then.
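
For the slow-copy question in post 1, here's a minimal sketch of a per-drive sequential read check, useful for spotting whether one disk or SATA link is dragging the whole operation down. The device names are placeholders; substitute the actual array members. hdparm's timing test is read-only, but it's still best run while the drives are otherwise idle.

```
# Rough per-drive read-speed check (device names are examples only).
# A modern HDD usually reads 150-250 MB/s on its outer tracks; a drive that
# comes back far slower than its peers points at that disk or its SATA link.
for d in sdb sdc sdd sde sdf ; do
  echo "=== /dev/$d ==="
  hdparm -t /dev/$d      # buffered sequential read timing
done
```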
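
For the capacity planning in post 4, a back-of-the-envelope sketch of where the ~56TB vs ~44TB figures come from: RAIDZ2 keeps two disks' worth of parity, while an unRAID array with single parity keeps one. The per-drive sizes in the comments are inferred from those figures, not confirmed.

```
# Rough usable-capacity math (drive sizes assumed, ZFS overhead ignored).
drives=5
raidz2_data=$(( drives - 2 ))    # two disks' worth of parity in raidz2
array_data=$(( drives - 1 ))     # one parity disk in the unRAID array
echo "raidz2 data disks: $raidz2_data"   # 3 x ~18-19TB new drives -> ~56TB usable
echo "array data disks:  $array_data"    # the existing mixed array -> ~44TB usable
```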
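
For the resolver change in post 5, a rough sketch of pointing the binhex qbittorrentvpn container at Cloudflare's resolvers (1.1.1.1 / 1.0.0.1). On Unraid this is normally just the name servers field in the Docker template; the NAME_SERVERS variable name, ports, and paths below are assumptions based on the container's usual conventions, not taken from the post.

```
# Sketch only: recreate the container with Cloudflare DNS instead of a long
# mixed resolver list. Variable names, ports, and paths are assumed.
docker run -d --name binhex-qbittorrentvpn \
  --cap-add=NET_ADMIN \
  -p 8080:8080 -p 8118:8118 \
  -e VPN_ENABLED=yes \
  -e NAME_SERVERS="1.1.1.1,1.0.0.1" \
  -v /mnt/user/appdata/binhex-qbittorrentvpn:/config \
  -v /mnt/user/downloads:/data \
  binhex/arch-qbittorrentvpn
```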
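
For the DX10/11 failures in post 7: under Proton, DX10/11 titles go through DXVK, which needs a working Vulkan driver for the passed-through GPU, so a quick sanity check is whether Vulkan can actually see the Vega 56 from inside the container. This assumes vulkan-tools and mesa-utils are available in the container image.

```
# If the Vega 56 doesn't appear here, it's a driver/passthrough problem
# rather than anything game-specific.
vulkaninfo | grep -i "deviceName"   # GPUs the Vulkan loader can see
glxinfo -B                          # which GPU/driver handles OpenGL rendering
```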
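
For the jbod/raid1 switching mentioned in post 8, a minimal sketch of the btrfs profile conversion. The mount point /mnt/cache is an assumption (the usual Unraid cache pool path); conversion runs as an online balance, but taking a backup first is still a good idea, as noted in the post.

```
# Check which data/metadata profiles are currently in use:
btrfs filesystem df /mnt/cache

# Convert data to single ("jbod") to get the full 2TB temporarily,
# keeping metadata mirrored:
btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache

# Convert back to raid1 once there's enough free space again:
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache
```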
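
For the repeating AER errors in post 11, a quick sketch of identifying which hardware the addresses in the log belong to: 0000:02:00.0 is the port reporting the bus error and 0000:03:00.1 is the device that can't recover. The addresses are copied from the log in that post.

```
# Show what sits at the PCI addresses named in the AER messages:
lspci -s 02:00.0 -vv | head -n 20
lspci -s 03:00.1 -vv | head -n 20

# Dump the whole tree to see how they hang off the root port 00:03.0:
lspci -tvnn
```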