jetkins

Members
Posts: 83

  1. Well, it appears this may not be an issue with your image - I'm now having the same problem with another pyTivo installation as well. I'll update here if/when I find a solution.
  2. Hey there. First of all, thanks for putting the pyTivo image together. I have it installed on my Synology NAS and it's working great... with a couple of exceptions: I can pull from the NAS to the TiVo (a Bolt+, if that makes any difference), and the files transfer fine, but when I try to push from pyTivo to the TiVo, I get an error:

     ERROR:pyTivo.video.video:<error><code>internalError</code><debug>java.lang.InternalError: Deprecated Operation: bodyOfferModify
         at com.tivo.trio.mind.cds.BodyOfferModify.doItWithInitializedDb(BodyOfferModify.java:41)
         at com.tivo.trio.mind.core.ReadWriteOperation$1.call(ReadWriteOperation.java:65)
         at com.tivo.trio.mind.core.Job$Action.internalRun(Job.java:314)
         at com.tivo.trio.mind.core.Job.retry(Job.java:94)
         at com.tivo.trio.mind.core.ReadWriteOperation.doIt(ReadWriteOperation.java:74)
         at com.tivo.trio.mind.core.MindSession.internalDoIt(MindSession.java:540)
         at com.tivo.trio.mind.core.MindSession.doItWithMindRequest(MindSession.java:426)
         at com.tivo.trio.mind.toplevel.Mind.doItWithMindRequest(Mind.java:253)
         at com.tivo.trio.tomcatmind.TomcatMind.callMind(TomcatMind.java:1210)
         at com.tivo.trio.tomcatmind.TomcatMind.checkAndDoRequest(TomcatMind.java:1180)
         at com.tivo.trio.tomcatmind.TomcatMind.doIt2(TomcatMind.java:894)
         at com.tivo.trio.tomcatmind.TomcatMind.doIt(TomcatMind.java:740)
         at com.tivo.trio.tomcatmind.TomcatMind.doPost(TomcatMind.java:711)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:647)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:728)
         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305)
         at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
         at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
         at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
         at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472)
         at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
         at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
         at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
         at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
         at org.apache.coyote.ajp.AjpProcessor.process(AjpProcessor.java:200)
         at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
         at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:310)
         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
         at java.lang.Thread.run(Thread.java:662)
     </debug><text>Deprecated Operation: bodyOfferModify</text></error>

     This error appears to be the result of specifying incorrect TiVo account and password details in the General Settings, but I have checked and double-checked that the information is correct. Perhaps this is a bug that has been addressed in a later version of pyTivo, and your image just needs a refresh? Also, the pyTivo process runs under UID 99 / GID 100. On a Synology, this means I have to grant the Users group (basically everybody) read/write access to my Media share in order for pyTivo to work. Would it be possible (pleeease?) to externalize the UID/GID as environment variables, so that the user could override them if necessary to better suit the hosting environment?
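The UID/GID override being requested above is usually implemented in the image's entrypoint. A minimal sketch of the idea, assuming hypothetical PUID/PGID variable names (they are not features of the real image) and keeping the current 99:100 defaults when nothing is set:

```shell
#!/bin/sh
# Hypothetical entrypoint sketch: read the runtime UID/GID from the
# environment, falling back to the image's current defaults of 99:100.
PUID="${PUID:-99}"
PGID="${PGID:-100}"
echo "pyTivo will run as ${PUID}:${PGID}"
# A real entrypoint would then fix ownership and drop privileges before
# launching pyTivo, along these lines (left commented out here):
#   chown -R "${PUID}:${PGID}" /config
#   exec su-exec "${PUID}:${PGID}" python /pyTivo/pyTivo.py
```

With something like this in place, a Synology user could pass their own UID/GID at container creation instead of opening the Media share to the whole Users group.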
  3. Bad news, I'm afraid. The EX4350 detects the drive and exports it as JBOD, but unRAID doesn't detect the controller or the drive(s) attached to it. I guess it needs a driver, and while Promise makes various flavors of driver - including open source - available on their web site, I don't have the time or expertise to futz with porting it to unRAID. I'll hang on to the card in case someone else with more time, skill, and enthusiasm takes up the challenge, but for now it's a non-starter. [EDIT] I found this thread, which talks about the same issue and comes to the same conclusion.
  4. Drive temp 25°C?!? Are you in Alaska, or do you have so many fans that hearing protection is required? 25°C is the ambient temperature in my "server room."
  5. No, but I just picked up an EX4350 (4-port) which I'll be trialling once my 3TB pre-clear completes overnight. I'll let y'all know how it goes.
  6. Ah, that could be it. According to the VMware KB: Well, that's a PITA. Fortunately the ESXi host is the only thing using NFS on my LAN, so it's not really that big of an exposure in my situation, but I can see how it would render unRAID unusable in other environments with stricter requirements.
  7. Further info: I checked the system log to compare what unRAID is doing differently between public and private configurations: With sharing set to Private: With sharing set to Public: Now unless my eyes (and diff) are lying to me, the only difference is in the substitution of the desired IP address for the * wildcard. This should work, so I believe this should be categorized as a bug.
  8. I just migrated from 4.7 to 5.0-rc16c, and I'm having trouble getting my NFS exports to work the way they used to. In particular, I have one share that is used as a VMware datastore, which I want restricted to one IP address (my ESXi server) with read/write access. Under 4.7, I used "192.168.0.3(rw)", but when I set security to Private and use that same rule under 5.0, ESXi can't access the export. I found this thread, which suggests a more convoluted rule, but even that doesn't work for me. The only configurations I have been able to get working are Secure, which allows read-only access to everyone, or Public, which allows everyone read/write. Neither is really an acceptable solution. Has anyone got IP-restricted NFS sharing working under 5.0? This seems like such a fundamental limitation that I'm surprised there hasn't been more discussion of it; perhaps there are only a handful of us who actually use NFS?
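For reference, the rule above corresponds to an /etc/exports entry along these lines (a sketch only - the share path is assumed for illustration). One detail worth checking: ESXi mounts NFS datastores as root, so the exportfs default of root_squash can silently break access even when the IP rule matches; no_root_squash is commonly needed for VMware datastores.

```
"/mnt/user/datastore" 192.168.0.3(rw,sync,no_root_squash)
```

If the unRAID GUI only substitutes the IP for the wildcard without adding no_root_squash, that alone could explain Private failing where Public "works".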
  9. That sounds like it exactly. I'll run a couple of regular parity checks - one to fix and one to verify - before moving on. Thanks, Gary.
  10. I recently had a drive die, replaced it, and rebuilt the array. Now, not 24 hours later, I'm running a non-correcting parity check as a sanity/safety check before migrating to v5, but I'm getting four parity errors. These occur very early in the scan - literally within seconds of starting it - which makes me suspect the affected areas may be in the system data, not my user data. I'm also puzzled as to where they came from, as the array was reconstructed from parity only yesterday. Is there a how-to that can help me identify the affected files given the addresses reported in the log? And if I determine that the affected file(s) are corrupt, how does one force a rebuild of the data from the parity, instead of the default vice versa?

      Aug 26 13:02:59 Tower kernel: md: parity incorrect: 18144
      Aug 26 13:02:59 Tower kernel: md: parity incorrect: 18152
      Aug 26 13:02:59 Tower kernel: md: parity incorrect: 18168
      Aug 26 13:02:59 Tower kernel: md: parity incorrect: 18176
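As a starting point for locating the affected areas, the arithmetic is simple - assuming (an assumption worth verifying against your unRAID version's md driver) that the logged numbers are 512-byte sector offsets from the start of the array devices:

```shell
# Convert the logged parity addresses to byte offsets, assuming they are
# 512-byte sectors (verify this against your unRAID version's md driver).
for sector in 18144 18152 18168 18176; do
  echo "sector ${sector} -> byte offset $((sector * 512))"
done
# All four addresses land within the first ~9 MB of the device, which is
# consistent with the errors appearing within seconds of starting a check.
```

From a byte offset you would then subtract the data partition's start offset and divide by the filesystem block size to get a block number to feed to filesystem tools.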
  11. Well, I discovered I had a spare identical 1.5TB Seagate drive, so I swapped it in for the dead drive, but unRAID refused to use it because it was reporting a smaller capacity than the rest of them. A little digging turned up the forum article about HPA, and by using the referenced hdparm -N method I was able to reset it to full capacity. The array is now rebuilding, and I'll look at migrating to unRAID v5 another day. Thanks again for your help.
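For anyone who hits the same HPA problem, the hdparm -N sequence looks roughly like this. The device name and sector count below are examples; always read the native value off your own drive first:

```shell
# Show current vs. native max sectors; an HPA is present when they differ.
# (The hdparm commands are left as comments so this sketch is safe to run.)
#   hdparm -N /dev/sdX
#
# Restore the full native capacity permanently with the "p" prefix. A
# 1.5 TB ST31500341AS is commonly 2930277168 sectors, but use whatever
# native value hdparm reports for your drive:
#   hdparm -N p2930277168 /dev/sdX
#
# Sanity check: 2930277168 sectors x 512 bytes/sector = the advertised 1.5 TB.
echo $((2930277168 * 512))
```

After resetting the max sector count, a power cycle is typically needed before the drive reports its full size again.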
  12. Well, I did as instructed, but after powering back up, I see:

      Disk status
              Model / Serial No.       Temperature  Size           Free  Reads  Writes  Errors
      parity  Missing                  -            -              -     -      -       -
              ST31500341AS_9VS32171                 1,465,138,552
      disk1   ST31500341AS_9VS32171    38°C         1,465,138,552  -     -      -       -
      disk2   ST31500541AS_6XW01X0V    33°C         1,465,138,552  -     -      -       -
      disk3   ST31500541AS_9XW0DVCD    35°C         1,465,138,552  -     -      -       -
      cache   ST3160827AS_4LJ1JF63     34°C         156,290,872    -     -      -       -

      Command area
      Stopped. Invalid configuration. Too many wrong and/or missing disks!

      The Parity drive has a red dot, Disk 1 has a blue dot, and the Start button is grayed out. If I go to the Devices screen, I see it has detected the new 3TB WD drive and has it selected as the parity drive, so it seems that it kinda knows what I want to do, but hasn't automatically implemented the reconstruction process described in the manual. I'm open to suggestions on where to go from here. I guess I need to somehow commit the new configuration?
  13. OK, never mind, I read the link you cited. Thanks, I'll give it a try tomorrow. Fingers crossed.
  14. With one drive failed, the parity drive is the only thing providing the data that used to be on that drive. How can I replace the parity drive without losing data?
  15. I have four 1.5TB drives in my array, and one has just thrown a bunch of I/O errors and gone DSBL. I'm going to shut it down and check the physical connections, etc, but assuming it's dead, can I replace it with a 2TB drive, i.e. larger than the current parity drive? I want to restore redundancy ASAP, and I can't find any 1.5's available locally.