ro345

Members

  • Posts: 7
  • Joined
  • Last visited
  • Converted
  • Gender: Undisclosed

  1. I'm looking to build a simple NFS and Samba server for my lab at work. I work for a large tech company and have inherited an old but still useful 3Par T400 SAN with about 90TB of storage. The SAN speaks only Fibre Channel and iSCSI, so I need to front-end it with a server that can serve NFS and CIFS from the SAN. I have access to HP DL and BL servers, and would prefer to use a BL server since I already have an FC switch in my C3000 chassis; the BL servers also already have 10Gig NICs and 8Gb FC HBAs in them.

     I have used Unraid for a long time, and I think it would be great for this. I obviously wouldn't use a parity disk; I would just mount one large (roughly 10TB) disk on Unraid and serve from that. Whenever someone posts about FC or iSCSI, everyone starts yelling that they don't understand what Unraid is for. I do understand what Unraid is for. To be clear, I do not need the operating system to provide disk fault tolerance at all; that is done in the SAN. I need the OS to provide ease of use and to bridge file-sharing protocols with SAN protocols. I'm using Unraid for its nice UI and its statistics-gathering and alarm functions.

     I don't think Unraid supports FC HBAs or iSCSI initiators, but if anyone has any ideas I would definitely like to hear them (a rough sketch of the setup I have in mind follows the post list below). I see that some Unraid release candidates list QLogic adapters, so I'm not sure what that means. Thanks in advance.
  2. I think you are on to something. Even though I have nearly 1TB left overall, the filesystems are very full:

        root@Tower:/var/log# df -h
        Filesystem      Size  Used Avail Use% Mounted on
        tmpfs           128M  1.9M  127M   2% /var/log
        /dev/sda1       7.5G   62M  7.5G   1% /boot
        /dev/md1        3.7T  3.6T  115G  97% /mnt/disk1
        /dev/md2        3.7T  3.5T  147G  97% /mnt/disk2
        /dev/md3        3.7T  3.6T  112G  97% /mnt/disk3
        /dev/md4        2.8T  2.6T  149G  95% /mnt/disk4
        /dev/md5        3.7T  3.5T  230G  94% /mnt/disk5
        /dev/md6        3.7T  3.5T  230G  94% /mnt/disk6
        shfs             21T   20T  981G  96% /mnt/user
        /dev/loop0      1.8M   80K  1.6M   5% /etc/libvirt

     Does that kind of utilization cause these issues in ReiserFS? Is there anything I can do about it short of adding more disk (a disk-to-disk rebalancing sketch follows the post list below)? Would converting to XFS help (I'd rather not do that)? Can ReiserFS be optimized? I can't post the diagnostic.zip due to my company's security rules (serial numbers, IP addresses, etc.). Thanks!
  3. I posted this a few months ago and got no response; I've since upgraded to Unraid 6 and I'm still seeing it. Load on the server spikes for 15 or 30 minutes at a time and the server basically becomes unresponsive: shares disconnect, I cannot copy files to the server, and running "hdparm -tT" against any drive just hangs the session (the CLI itself stays responsive and I can still run top). There is nothing in the logs at all to indicate a problem.

     The top output below was captured while the server was doing a parity check; it had been running for about 6 hours, with 6 more to go. Notice the very high load.

        top - 10:22:37 up 7:13, 3 users, load average: 14.47, 9.39, 5.03
        Tasks: 302 total, 2 running, 300 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.4%us, 0.6%sy, 0.0%ni, 90.6%id, 8.3%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 66037640k total, 55752540k used, 10285100k free, 612692k buffers
        Swap: 0k total, 0k used, 0k free, 54185592k cached

          PID USER PR  NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
         4603 root 20   0     0    0    0 S    9  0.0 138:39.53 unraidd
         4550 root 20   0     0    0    0 D    2  0.0  34:48.37 mdrecoveryd
         4530 root 20   0 89120 3452 2960 S    1  0.0   1:00.03 emhttp
            8 root 20   0     0    0    0 S    0  0.0   0:48.83 rcu_preempt
         1085 root 20   0     0    0    0 S    0  0.0   0:11.12 kworker/8:1
         1166 root  0 -20     0    0    0 S    0  0.0   0:51.37 kworker/10:1H
         4545 root  0 -20     0    0    0 S    0  0.0   0:50.38 kworker/11:1H
         4693 root 20   0  619m  25m  552 S    0  0.0  80:30.11 shfs
         7722 root 20   0     0    0    0 S    0  0.0   0:10.40 kworker/11:2
        17944 root 20   0 26624 4772 4316 S    0  0.0   0:00.02 sshd
            1 root 20   0  4368 1400 1300 S    0  0.0   0:14.69 init
            2 root 20   0     0    0    0 S    0  0.0   0:00.00 kthreadd
            3 root 20   0     0    0    0 S    0  0.0   0:18.13 ksoftirqd/0
            5 root  0 -20     0    0    0 S    0  0.0   0:00.00 kworker/0:0H
            6 root 20   0     0    0    0 S    0  0.0   0:00.01 kworker/u48:0
            9 root 20   0     0    0    0 S    0  0.0   0:00.00 rcu_sched
           10 root 20   0     0    0    0 S    0  0.0   0:00.00 rcu_bh

     It's a fairly powerful box: dual hex-core Xeons, 64GB of RAM, and an M1015 controller in IT mode. Even under high load I would expect Samba to respond within a few seconds, not several minutes. Right now I can't even run "hdparm -tT /dev/xxx"; it just sits and waits. When not in the degraded condition, I can copy files out of the server at wire speed, normally around 110 megabytes per second. The degraded condition comes and goes; it happens most when running preclear or forced parity checks, but also at other times randomly (see the load-spike logging sketch after the post list). There are no issues in the SMART reports. Thanks for any help.
  4. Hi, I'm seeing very high IO wait times on my Unraid server. It seems to happen randomly, and I can't track it down to a single cause. The Samba process on Unraid becomes very unresponsive: you cannot open a folder from a Windows client, it just sits there for 30 seconds to a minute and a half. It does seem to happen more often when the server has more files being copied in or out, and it always happens when running preclear. Below is a snapshot from top while preclear was running, but this happens even when I'm not running preclear.

        top - 10:51:14 up 1 day, 19:12, 5 users, load average: 6.94, 6.27, 6.01
        Tasks: 161 total, 1 running, 160 sleeping, 0 stopped, 0 zombie
        Cpu(s): 7.4%us, 6.9%sy, 0.0%ni, 47.2%id, 37.9%wa, 0.0%hi, 0.6%si, 0.0%st
        Mem: 6226524k total, 6102460k used, 124064k free, 460520k buffers
        Swap: 0k total, 0k used, 0k free, 5349732k cached

          PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
        21132 root     20   0  2152  584  436 S   34  0.0   0:01.08 sum
        21131 root     20   0 10508 8876  620 D   19  0.1   0:00.59 dd
         3313 root     20   0 57900 9876  592 S    5  0.2 179:30.76 shfs
        14332 xbox     20   0 20036 6968 5884 S    2  0.1   1:32.47 smbd
         3238 root     20   0     0    0    0 S    1  0.0  49:45.87 unraidd
        20735 root     20   0  3116 1284  744 S    1  0.0   0:00.07 rsync
        20737 root     20   0  2972  616  228 S    1  0.0   0:00.06 rsync
        21105 odenbach 20   0 20044 4632 3624 S    1  0.1   0:00.03 smbd
        21125 xbox     20   0 20044 3760 2960 S    1  0.1   0:00.03 smbd
          479 root     20   0     0    0    0 S    0  0.0  64:30.91 kswapd0
        21124 root     20   0  2472 1020  756 R    0  0.0   0:00.03 top
            1 root     20   0   828  284  240 S    0  0.0   0:07.00 init
            2 root     20   0     0    0    0 S    0  0.0   0:00.06 kthreadd
            3 root     20   0     0    0    0 S    0  0.0   1:24.87 ksoftirqd/0
            5 root      0 -20     0    0    0 S    0  0.0   0:00.00 kworker/0:0H
            7 root      0 -20     0    0    0 S    0  0.0   0:00.00 kworker/u:0H
            8 root     RT   0     0    0    0 S    0  0.0   0:01.48 migration/0

     It's a fairly powerful box. I'm running Unraid 5.0.3 under ESXi with dual hex-core Xeons and 64GB of RAM in the host; the Unraid VM has 4 cores, 6GB of RAM, and an M1015 controller passed through in IT mode. The high load is caused by IO wait, but I can't pin it on anything specific. Even under high IO wait I would expect Samba to respond within a few seconds, not several minutes. Right now, while preclear is running, my Samba shares are completely unusable; I can't access them from a Windows client. When preclear is not running, it still happens randomly. I probably have 70-100 open files on the server at any given time. Right now I can't even run "hdparm -tT /dev/xxx"; it just sits and waits. When not in the degraded condition, I can copy files out of the server at wire speed, normally around 110 megabytes per second. The degraded condition comes and goes, but it always happens when running preclear. I would expect Samba to slow down during a preclear, but I don't think it should go completely unresponsive for minutes at a time (see the preclear priority sketch after the post list). Any thoughts on this? Thanks.
  5. Unfortunately, this setting in mythtv (at least my version) only appears to go up to 200GB, and my cache disk is 300GB. I think what I will do is replace my 300GB cache disk with a 144GB 10k SAS cache disk. And that should solve my problem, but gives me a smaller cache disk.
  6. Thanks, I may have to do that, but I actually want to use the cache disk. Is there any way to get Unraid to not include the cache disk in the reported size of a share?
  7. Hi everyone, I'm seeing an issue where the reported size of a share includes the size of the cache drive. I am recording programs with MythTV to a share that I created just for MythTV; the share includes only a single 4TB drive, and my cache drive is 300GB. Here is what it looks like on the Unraid server:

        df -h
        /dev/md2                 3.7T  3.4T  274G  93% /mnt/disk2

     However, when the share is mounted on my MythTV backend, it shows a size that includes the cache drive:

        //192.168.0.11/myth_rec  4.0T  3.4T  530G  87% /var/lib/mythtv/recordings

     Notice that the size reported to the MythTV backend is 4.0T, i.e. the 3.7T data drive plus the 300GB cache drive. This causes problems with MythTV, because it does not delete files from the share when it should. The share can be completely full, using the entire capacity of the single 4TB drive, yet Unraid still reports to MythTV that 300GB is free. That means MythTV does not delete old recordings, and there is no room on the share to move the cached recordings from the cache disk onto the array (one possible workaround is sketched after the post list below). Any ideas would be greatly appreciated! Thanks!
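A rough sketch of the front-end idea in post 1, assuming the kernel driver for the 8Gb FC HBA loads and presents the 3Par LUN as an ordinary block device, and that NFS and Samba userland tools are available on the server (on stock Unraid that may need plugins). The device name, mount point, export subnet, and share name below are placeholders:

    # Check whether the FC HBA ports are visible to the kernel
    ls /sys/class/fc_host/                  # one hostN entry per HBA port if the driver loaded
    cat /sys/class/fc_host/host*/port_state

    # The SAN LUN should then show up as a normal SCSI disk, e.g. /dev/sdX
    lsblk

    # Format and mount the LUN (device and mount point are examples)
    mkfs.xfs /dev/sdX
    mkdir -p /mnt/san
    mount /dev/sdX /mnt/san

    # Export it over NFS and reload the exports table
    echo '/mnt/san 192.168.0.0/24(rw,sync,no_subtree_check)' >> /etc/exports
    exportfs -ra

    # Share the same path over SMB by appending a stanza to the Samba config
    printf '[san]\n   path = /mnt/san\n   read only = no\n' >> /etc/samba/smb.conf
    smbcontrol smbd reload-config

Whether Unraid's web UI and statistics would treat a Fibre Channel LUN like a locally attached disk is exactly the open question in the post; the commands above only show the generic Linux plumbing underneath.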
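For the nearly full ReiserFS disks in post 2, one option short of adding capacity is to rebalance data between array disks so that no single filesystem sits above roughly 90%; ReiserFS is often reported to slow down badly as it gets very full. A minimal sketch, using a made-up folder name; copies must go disk-to-disk via the /mnt/diskN paths (the usual Unraid advice is never to mix /mnt/user and /mnt/diskN in the same command):

    # See which array disks are the fullest
    df -h /mnt/disk*

    # Copy a large folder from the fullest disk to one with more headroom
    # ("Movies/SomeFolder" is only a placeholder)
    rsync -avPX /mnt/disk1/Movies/SomeFolder/ /mnt/disk5/Movies/SomeFolder/

    # Verify the copy, then delete the original to free space on disk1
    diff -r /mnt/disk1/Movies/SomeFolder /mnt/disk5/Movies/SomeFolder \
        && rm -rf /mnt/disk1/Movies/SomeFolder

Because both paths sit inside the same top-level folder, the files keep the same /mnt/user/... name after the move.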
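For the load spikes in post 3, it can help to capture what the disks are doing at the moment the server goes unresponsive rather than afterwards. A small watcher script along these lines is one way to do that; the threshold and log path are arbitrary, and iostat comes from the sysstat package, which may need to be installed:

    #!/bin/bash
    # Append disk and process snapshots to a log whenever the 1-minute
    # load average crosses a threshold, to catch what is busy during stalls.
    THRESHOLD=10                 # arbitrary; pick something above normal load
    LOG=/boot/load_spikes.log

    while sleep 30; do
        load=$(cut -d ' ' -f1 /proc/loadavg)
        if awk -v l="$load" -v t="$THRESHOLD" 'BEGIN { exit !(l > t) }'; then
            {
                date
                cat /proc/loadavg
                iostat -x 1 3            # per-device utilisation and await times
                top -b -n 1 | head -25   # one batch-mode snapshot of the busiest tasks
            } >> "$LOG"
        fi
    done

Processes stuck in uninterruptible sleep (state D, like mdrecoveryd in the top output above) raise the load average even at low CPU use, so the interesting columns in the iostat output are the per-disk await and %util figures: one drive that is much slower than its peers is a common culprit.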
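If the trigger in post 4 really is preclear, one thing to try is lowering the priority of its worker processes (the dd and sum processes visible at the top of that listing) so that smbd gets serviced first. A sketch, with the caveat that the idle I/O class is only honoured by the CFQ scheduler, so it is worth checking which scheduler the disk is using (sdX is a placeholder):

    # Deprioritise the preclear workers (matches any dd/sum processes)
    for pid in $(pgrep -x dd; pgrep -x sum); do
        renice +19 -p "$pid"          # lowest CPU priority
        ionice -c3 -p "$pid"          # idle I/O class (CFQ only)
    done

    # Confirm which I/O scheduler the disk being precleared uses
    cat /sys/block/sdX/queue/scheduler

This only softens the contention; it does not explain why a single sequential reader starves Samba for minutes, which may still point at a controller or disk-level bottleneck.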
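For the MythTV free-space problem in post 7, one workaround (if disabling the cache for that share is off the table, per post 6) is to point the MythTV backend at the individual disk rather than the user share, so that the free space it sees reflects only the single 4TB drive. This is only a sketch: it assumes Unraid is configured to export disk shares, that the recordings live entirely on disk2, and the credentials file, uid, and gid are placeholders:

    # On the MythTV backend: mount the disk share instead of the user share
    sudo mount -t cifs //192.168.0.11/disk2/myth_rec /var/lib/mythtv/recordings \
         -o credentials=/etc/samba/unraid.cred,uid=mythtv,gid=mythtv

    # Free space should now track the 3.7T filesystem, not 3.7T plus cache
    df -h /var/lib/mythtv/recordings

The trade-off is that recordings written through this mount bypass the cache drive, which may or may not be acceptable given the preference stated in post 6.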