SK

Members
  • Posts: 35
  • Gender: Undisclosed
  • Location: Midwest


  1. FYI - I created a topic asking to improve stock unRAID ESXi support in future versions, in the Roadmap part of the forum. Please add your voice there to highlight the need and improve the visibility of those issues. http://lime-technology.com/forum/index.php?topic=10669.0
  2. Well, it doesn't take long before the first hacking attempt on any device connected to the internet. A few NAT-forwarded ports pinned in my home firewall regularly get connection/hacking attempts, the FTP server is constantly bombarded with login tries, and even the standard UDP ports for my VoIP device were being regularly probed from Chinese provider IPs last time I checked. Also, if a new IP gets assigned by the provider, who knows how it was used before..
     # md5sum unraid_4.6-vm_1.0.6.iso
     82962143aac52908d34052fd74d73cf1 unraid_4.6-vm_1.0.6.iso
     Btw, for those interested - my unRAID VM reports 111,341 KB/sec during a parity check with 3 x 1TB Seagate 7200.12 drives.
  3. For drive spindown to work it must be supported by the controller (such as an LSI) in the first place. For instance, the LSI1068-based cards I dealt with had no correct spindown support. To clarify a few things:
     1) The patched version (the unRAID-VM distro) is intended only for unRAID running in a VMware ESXi VM; the list of deviations from the original is in the README file. The virtualization layer between the hardware and the unRAID distro brings challenges that are partially addressed by the patch to the open-sourced unRAID driver. Other challenges are in the management interface and need to be addressed by Lime, given the closed source of the unRAID management code.
     2) There is no additional code to support controllers that the standard unRAID version doesn't support, other than additional handling of drive discovery. The patched version is also based on a newer Linux kernel.
     3) Running the unRAID-VM distro on a physical server should be OK, but if the controller doesn't handle spindown/spinup correctly then spindown needs to be disabled.
  4. NFS client reboots are fine as long as the client remounts when it comes back online; an NFS server reboot will likely require a remount on the client side(s) to avoid stale mount points. In my single-server config I keep logs and backups on an NFS unRAID datastore; the backup script is cron-scheduled on ESXi, which runs off compact flash.
     Well, I changed from the persistent connection to letting ghettoVCB mount and unmount the NFS connection. I saw errors in the system log about the onboard Realtek NIC, so I decided to disable it in the BIOS. After some rebooting and fiddling all network connections were lost, and based on help from the ESXi forum I ended up resetting the configuration, setting up the networking from scratch and adding the VMs back to inventory. Grumble. EDIT: 1/2/2011 I've changed to doing non-persistent NFS connections with ghettoVCB and that's working perfectly. Also, in case you haven't tried the Veeam Monitor application, I'd encourage you to try it out. It's amazing.
     Thanks - will give the free version of Veeam Monitor a try; the presentation on their site looks promising.
  5. NFS client reboots are fine as long as the client remounts when it comes back online; an NFS server reboot will likely require a remount on the client side(s) to avoid stale mount points. In my single-server config I keep logs and backups on an NFS unRAID datastore; the backup script is cron-scheduled on ESXi, which runs off compact flash (see the cron sketch after this list).
  6. Just the ghettoVCB settings related to a non-persistent NFS connection; when enabled (ENABLE_NON_PERSISTENT_NFS, etc.) it sets up the NFS datastore on the fly just for the duration of the backup (see the example settings after this list).
  7. I'm getting set up with this script now also. My ESXi server and unRAID server are separate, however. Currently I'm having ESXi create an NFS datastore that points to one of my unRAID shares, and that is working. But I'm not sure how to get the email part working. The ghettoVCB script doesn't take parameters for SMTP authentication, and it uses nc internally; I don't know if that can handle auth either. Any suggestions or help would sure be appreciated.
     I have this working on a single physical server: during the backup window the ESXi host mounts the NFS backup share from the unRAID VM as a datastore, performs a hot backup (by taking a VM snapshot and then cloning), and then unmounts it. There is an issue with enabling the experimental gzip-based compression - it did not work for me on large VMs, but everything works fine without compression. I have not looked into email notification, though I don't think netcat can handle SMTP authentication. Perhaps using a local SMTP proxy on another VM/host without authentication can help with this, something like sSMTP - http://wiki.debian.org/sSMTP (see the sketch after this list).
  8. FYI - under ESXi, for VM backups I have started using the script from http://communities.vmware.com/docs/DOC-8760, which is pretty smart and skips physical RDMs (in case someone wants to back up the unRAID VM). It even supports gzip compression for small VMs.
  9. A fresh version of the unRAID-VM ISO, based on unRAID 4.6 and the stable 2.6.35.9 kernel, is available at http://www.mediafire.com/?2710vppr8ne43 - check the README for details.
  10. The error means that the LSI card doesn't support ATA spindown. If sg_start (from sg3_utils) works and really puts those drives into standby mode, then my mod may help (see the test commands after this list).
  11. ...the AOC-SASLP-MV8 is useless for an ESXi environment (no native support in ESXi and VMDirectPath not working). Since that image is tuned for ESXi, it's only natural that these kinds of features/modules are being removed. ..my 2 cents. I believe the 2nd version was intended to be run outside ESXi, since it looks to have been built specifically to address problems with running the older supplied version directly on the hardware.
      Either outside of ESXi, or under ESXi with VMDirectPath I/O for an LSI1068E controller.
  12. I downloaded the second version as I have an LSI1068E-based controller, but this one removes support for "mvsas" (the AOC-SASLP-MV8 cards), so no go for me at this point. I have 6 HDs on the motherboard, 3 HDs on the AOC-SASLP-MV8 card, one Dell-branded LSI1068E card with a single HD not in the array for testing, and an additional RAIDCore BC4852 PCI/PCI-X controller just for testing.
      Both versions are the same except for the unRAID driver; the second ISO disables the extended spindown code for the BR10i LSI1068E, which does not conform to the T10 standard. As far as mvsas goes, the only relevant change was the upgrade of the Fusion MPT driver from 3.04.15 (Linux kernel stock) to the latest 4.24.00.00 (from the LSI site), which compiles and works under my ESXi configuration without issues (when using a virtualized LSI SAS controller with SATA physical RDM disks). Having a look at the logs would help; if this is an issue with the updated Fusion driver we can certainly go back to the previous one. Given that the 4.6 stable release is out, I need to update to it anyway..
  13. An update for ESXi users (who use physical RDM disks), with VMware Tools installed (which doubles the distro size) and some minor improvements, is available at http://www.mediafire.com/?2710vppr8ne43. Plus, for owners of cards such as the IBM BR10i based on the LSI1068E chip, a version with extended spindown disabled (to avoid "device not ready" errors) is available at http://www.mediafire.com/?zeajy4mmk8j868k
  14. The "device not ready" errors are the same issue jamerson9 experienced and are caused by the BR10i (based on the LSI1068E chip) not correctly supporting the part of the T10 SAT-2 standard that deals with drive spin down. As you noticed, only the devices managed by the BR10i (sdh to sdo) have those errors; the ones on internal SATA do not. I wonder if the LSI1068E chip supports spin down correctly on any card at all - so far I have not seen such evidence.
  15. I think the issue with that is the drives aren't exposed to unRAID inside the VM.
      In ESXi, drives can be exposed to a VM either via physical RDM or via controller pass-through (which requires supporting hardware with certain CPU features, etc.); a sketch of the physical RDM approach follows this list. In the first case not all features may be available, such as temperature readings and spin down. As for running the unRAID VM on a full Slackware distro - my preference is to keep the unRAID VM as small as possible, have it just do the storage piece, and leave other functionality to other VMs running on better-suited OS distros such as Ubuntu, CentOS, Windows and so on.
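
Re post 5: a rough sketch of how the backup can be cron-scheduled on the ESXi host. The schedule, script path and log path are made-up placeholders, and the crontab location (and how to make it survive a reboot) varies between ESXi versions, so treat this as an illustration only.

   # On ESXi 4.x the root crontab is typically /var/spool/cron/crontabs/root (assumption - check your version)
   # Run ghettoVCB nightly at 01:30 against a hypothetical VM list file
   30 1 * * * /vmfs/volumes/datastore1/ghettoVCB/ghettoVCB.sh -f /vmfs/volumes/datastore1/ghettoVCB/vms_to_backup > /tmp/ghettoVCB.log 2>&1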
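
Re post 6: roughly the kind of ghettoVCB settings involved. ENABLE_NON_PERSISTENT_NFS is the variable mentioned above; the other names and values are recalled from memory and may differ in your copy of the script, so verify them against ghettoVCB.sh itself.

   ENABLE_NON_PERSISTENT_NFS=1    # mount the NFS datastore only for the backup run
   UNMOUNT_NFS=1                  # unmount it again once the backup finishes
   NFS_SERVER=192.168.1.10        # hypothetical unRAID VM IP
   NFS_MOUNT=/mnt/user/backup     # hypothetical unRAID NFS export
   NFS_LOCAL_NAME=backup          # temporary datastore name on the ESXi host
   NFS_VM_BACKUP_DIR=vm_backups   # backup directory on that datastore
   ENABLE_COMPRESSION=0           # gzip compression did not work for me on large VMs (see post 7)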
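
Re post 7: if you try the sSMTP route, the helper VM only needs a short /etc/ssmtp/ssmtp.conf to send through an authenticated upstream server. The hosts and credentials below are placeholders; the directive names are standard sSMTP ones, but check the sSMTP documentation for your version.

   root=admin@example.com              # where mail addressed to root ends up
   mailhub=smtp.example.com:587        # the real, authenticated SMTP server
   AuthUser=backup-notify@example.com
   AuthPass=changeme
   UseSTARTTLS=YES
   FromLineOverride=YES

Note that sSMTP only sends mail (it replaces sendmail), so one way to use it is to generate the notification email from that helper VM rather than from the ESXi host directly.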
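
Re post 10: a quick way to check whether the controller really honours standby requests before trying the patched driver. sg_start is the sg3_utils tool named above; /dev/sdh is a placeholder device.

   # Ask the drive to spin down via a SCSI START STOP UNIT command
   sg_start --stop /dev/sdh
   # ...and spin it back up
   sg_start --start /dev/sdh

If the drive actually goes to standby and comes back without "device not ready" errors in the syslog, the spindown mod may help; if the controller ignores or mishandles the command, it won't.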
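
Re posts 13 and 15: for the physical RDM route, each disk gets a mapping file created on the ESXi console with vmkfstools, which is then attached to the unRAID VM as an existing disk. The device identifier and datastore path below are placeholders; -z creates a physical (pass-through) RDM, as opposed to -r for a virtual one.

   # Create a physical RDM mapping file for one SATA disk (placeholder identifiers)
   vmkfstools -z /vmfs/devices/disks/t10.ATA_____ST31000528AS_____SERIAL /vmfs/volumes/datastore1/unRAID/disk1-rdm.vmdk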