rumblefish1

Members
  • Posts

    14
  • Joined

  • Last visited

Converted

  • Gender
    Undisclosed


rumblefish1's Achievements

Noob (1/14)

0

Reputation

  1. Hello all, I am setting up my 6th Unraid server, and for the first time I am running into a problem seeing the server on the network. All of my previous servers have been built on Supermicro hardware; this time I decided to go with an Asus Maximus IX Formula board. I connected a monitor to the server to see what was going on, and the last message I can see is: Device "eth0" does not exist. I am assuming this is referring to the NIC on the board. I popped in an SSD drive and installed Windows 10 to see if I could access the internet through that OS, just to confirm the NIC was working; everything works perfectly there. When I take the SSD drive out and boot from the USB, I keep getting the same message. I have attached a couple of screenshots; I hope one of the more experienced people on this forum might be able to shed some light on what is happening. Thanks in advance.
  2. Sorry about that; I'm not familiar with what I should be sending. This is what comes out on the screen after I run xfs_repair with nothing in the options box, along with the system log.
  3. Ran xfs_repair, stopped maintenance mode, then started the array again. The drive still appears unmountable; I have attached a screenshot and the disk log.
  4. Hello everyone, I just had a brownout while setting up a new server. I have been preclearing 23 drives for the last week but had not gotten as far as setting up a parity drive yet, and I have been doing some data transfers to the drives for testing purposes. After the brownout, one of the data drives now appears as unmountable. Being new at this, the only steps I took were running xfs_repair with -n, and then xfs_repair without it. I started the array again and the drive still remains unmountable. I have attached the messages that appeared on the screen after running xfs_repair, as well as the error I found in the system log. Any help would be appreciated; unfortunately I have some data on that drive that is important for me to recover. Thanks.
  5. If I'm reading correctly, this applies if you already had the parity drive set up and running. I never got that far; I have been testing an array with no parity. So I am looking to change out 2 smaller drives before I actually go through the process of setting up the parity drive for the first time.
  6. I have been doing some testing with 5.0 beta 6 for the last 2 months, following all of the messages going back and forth, before migrating my normal box running 4.7 over. The test box, initially slated to run (19) 3TB data drives and (1) 3TB parity drive, was scaled back to (17) 2TB data drives, (2) 750GB data drives, and (1) 2TB parity drive. Since this was all new hardware, different from what I had been running 4.7 on, I was initially concerned with testing for compatibility issues on the hardware side before even looking at the OS side. I therefore set up a simple array with the (17) 2TB data drives and the (2) 750GB data drives, set up shares, dumped a bunch of data on the drives, and put them through their paces; all is working fine. I now want to change out the (2) 750GB drives for (2) 2TB data drives, and then proceed to bring the parity drive online. Can someone please give me some insight into the correct way to replace these drives and then set up the parity drive?
  7. You're right, it does not show as 18 2TB drives; I have a number of spare drives which I have been popping in and out since this problem started showing up, trying to see if it was a matter of the drives not being identified properly. The post after yours solved the problem, thanks for the help.
  8. This is the info I pulled to help try to identify what is happening with my 0-byte share; thanks for any help in advance. Shares.pdf
  9. Wrong word used; RAID is not set up. The array was set up without a hitch, and then a share across all the drives. I wanted to do testing on the array before I set up unraid. So the problem is actually cropping up on the share of the array, where it shows 0 bytes available when it actually has 24TB of space available.
  10. Here is one I have not seen before. I have 18 2TB drives installed, no parity, no cache. RAID was set up with no issues, the drives cleared properly, and a share was set up as Movies, which I have been testing r/w to extensively over the course of the last 2 weeks with no issues. Today, all of a sudden, I received an error while trying to copy a 1GB file over to the server: "not enough space available". When I go into \\Tower and review the Unraid web GUI, it shows 24TB of disk space available, as it should. When I check in Windows Explorer, it shows "0 bytes" of free space. I restarted the server and restarted my Windows 7 machine, yet the error still persists. I am running beta 6a, by the way. Any ideas?
  11. I installed 5.0 beta 6a on a new server, precleared 9 drives, and made a 9-drive array, with no parity drive at this point. Everything was fine up to here: I was able to see the drives from my PC and r/w with no issues. I set up a share called Movies: high water, split level 1, included disks (blank), excluded disks (blank). The share shows up correctly on each drive and I am able to access it as Movies with no issue. I installed 3 more drives yesterday, precleared them with no issue, stopped the array, and assigned the 3 drives to disk10, disk11, and disk12. I started the array again and everything shows perfect, MBR all OK on all 12 drives. I go into User Shares, click on Movies, go to included disks and type disk1,disk2,disk3...disk11,disk12, click Apply, then Done. My expectation is that the Movies share will now also be on the 3 new disks (10, 11, 12), but it does not appear on those drives. I can access the 3 drives through their individual network shares with no issues, but the expected Movies folder is not on those drives, and when I write to the Movies share it only writes to the first 9 drives. I tried going back into shares and setting the included disks to (blank), but the problem persists. Am I doing something wrong, or is this a bug in 6a?
  12. 3TB support? Any idea when? I just purchased 16 Hitachi drives and am itching to try them out with unraid.
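
The "eth0 does not exist" symptom in the first post usually means no driver claimed the onboard NIC. A minimal console triage sketch, assuming a Linux shell on the Unraid box; the e1000e driver name is an assumption here (the board's onboard NIC is an Intel part, but the exact module was never confirmed in the thread):

```shell
# List every interface the kernel knows about; if only 'lo' shows up,
# no driver has bound to the NIC hardware.
ip link show

# Confirm the NIC is at least visible on the PCI bus (needs pciutils).
lspci -nn | grep -i ethernet

# Scan boot messages for NIC or firmware errors.
dmesg | grep -iE 'eth|e1000|firmware'

# If the device is present but unclaimed, try loading the Intel gigabit
# driver by hand (module name is an assumption for this board).
modprobe e1000e && ip link show
```

If `lspci` shows the device but `ip link` never lists it, the installed kernel simply lacks a driver for that chip, which matches a new board working under Windows but not under an older Unraid release.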
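
Posts 2 through 4 revolve around the xfs_repair check-then-repair flow. A self-contained way to rehearse that flow against a scratch image file, so no real disk is touched (assumes xfsprogs is installed; on the actual server the target would be the array's md device, e.g. /dev/md1 for disk1, with the array started in maintenance mode):

```shell
# Build a throwaway XFS filesystem in a sparse file; current xfsprogs
# enforces a minimum filesystem size of roughly 300 MB.
truncate -s 512M /tmp/xfs-demo.img
mkfs.xfs -q /tmp/xfs-demo.img

# Dry run first: -n reports problems without modifying anything,
# -f tells xfs_repair the target is a regular file, not a device.
xfs_repair -nf /tmp/xfs-demo.img

# Actual repair pass (a no-op on a healthy filesystem).
xfs_repair -f /tmp/xfs-demo.img

rm /tmp/xfs-demo.img
```

On the real array the same two-step order applies, and running against /dev/mdX rather than the raw /dev/sdX partition is what keeps parity consistent once a parity drive exists.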