How the heck do I delete cache disk contents



I'm so pissed off with this. My cache disk mover has been running for hours and still says there is 114GB used: it hasn't moved at all in hours. I've rebooted, I've run reiserfsck with every option I can find, and yet every time I run rm -r ... I get this damn message

 

root@NAS:/mnt/cache/Movies# rm -r -f *

rm: cannot remove `A Good Day to Die Hard': Read-only file system

rm: cannot remove `Bourne/Bourne Legacy/movie.xml': Read-only file system

rm: cannot remove `Bourne/Bourne Legacy/BOURNE_LEGACY.iso': Read-only file system

rm: cannot remove `Bourne/backdrop.jpg': Read-only file system

rm: cannot remove `Bourne/backdrop1.jpg': Read-only file system

rm: cannot remove `Bourne/backdrop2.jpg': Read-only file system

rm: cannot remove `Bourne/backdrop3.jpg': Read-only file system

rm: cannot remove `Bourne/backdrop4.jpg': Read-only file system

rm: cannot remove `Bourne/backdrop5.jpg': Read-only file system

rm: cannot remove `Bourne/backdrop6.jpg': Read-only file system

rm: cannot remove `Bourne/backdrop7.jpg': Read-only file system

rm: cannot remove `Bourne/backdrop8.jpg': Read-only file system

rm: cannot remove `Bourne/backdrop9.jpg': Read-only file system

rm: cannot remove `Bourne/folder.png': Read-only file system

rm: cannot remove `Bourne/backdrop10.jpg': Read-only file system

rm: cannot remove `Bourne/backdrop11.jpg': Read-only file system

rm: cannot remove `Bourne/backdrop12.jpg': Read-only file system

rm: cannot remove `Bourne/backdrop13.jpg': Read-only file system

rm: cannot remove `Skyfall/Skyfall.iso': Read-only file system

rm: cannot remove `The Jackal/movie.xml': Read-only file system

rm: cannot remove `The Jackal/disc.png': Read-only file system

rm: cannot remove `The Jackal/JACKAL.iso': Read-only file system

rm: cannot remove `The Jackal/logo.png': Read-only file system

 

I'm getting to the end of my tether with unRAID

 

How the heck do I clear the damn cache disk?

 

TIA

 

Mark


root@NAS:/mnt/cache# ls -l

total 0

drwxrwxrwx 6 nobody users 168 2013-11-03 13:38 Movies/

 

root@NAS:/mnt/cache/Movies# ls -l

total 1

drwxrwxrwx 2 nobody users  48 2013-11-03 14:02 A\ Good\ Day\ to\ Die\ Hard/

drwxrwxrwx 3 nobody users 560 2013-11-03 07:51 Bourne/

drwxrwxrwx 2 nobody users  80 2013-11-03 00:44 Skyfall/

drwxrwxrwx 2 nobody users 160 2013-11-03 10:54 The\ Jackal/

 

Now I'm at 21 hours to finish parity check


Sounds to me like your cache disk essentially had a write failure and unRAID made it read only.  You could restore it to read/write access by removing the cache drive in the GUI and then starting the array once without it.  Then adding it back.  I don't believe unRAID will reformat a cache drive if it is already formatted with reiserfs so you would then need to go to the now editable cache drive and delete the contents.

 

Note this is a guess as I've never had a problem with a cache drive that wasn't self inflicted myself.  (I started a drive reconstruction on a full cache drive and stopped said reconstruction when I realized my mistake.  I then had to use the reiserfs tools to reconstruct most of the data on the cache drive again.  I managed to recover most of it since I stopped the reconstruction almost immediately)


Thanks. Will wait for the array rebuild to finish and see where I am. I changed the min free on my cache drive yesterday and all my share settings disappeared, so I'm very nervous about this now :-(


OK, rebuild finished.

Rebooted.

Array started.

Still can't delete the cache drive contents; same read-only message.

Ran lsof | grep mnt: nothing!

 

This is driving me mad

Did you try stopping the array, setting the cache drive to "no device", starting the array, stopping it again, reattaching your drive, and starting the array once more?  Basically the same principle as making a red-balled drive rebuild onto itself.  If that doesn't work, maybe try removing the drive, formatting it with NTFS, and adding it back so that unRAID will then have to reformat it.  Or use mkreiserfs on it if you don't want to take it out.  After these I'm afraid I'm out of suggestions.
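For anyone who wants to try the mkreiserfs route from the console, a rough sketch of the sequence follows. The device name /dev/sdX1 is a placeholder (substitute your actual cache partition), and the script defaults to a dry run that only prints the commands, since this destroys everything on the disk:

```shell
# Sketch of reformatting the cache disk in place (DESTROYS its contents).
# DEV is a placeholder; leave DRY_RUN=1 to just preview the commands.
DEV=${DEV:-/dev/sdX1}
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"   # preview mode: print instead of executing
    else
        "$@"
    fi
}

run umount /mnt/cache          # filesystem must be unmounted first
run mkreiserfs -q "$DEV"       # lay down a fresh reiserfs filesystem
run mount "$DEV" /mnt/cache    # remount; should come back empty and read-write
```

Set DRY_RUN=0 only once you are certain DEV points at the right disk.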

I left the mover running for about 20 hours and not one byte has moved.

 

I stopped the array; still read-only. It was marked as read-only in /proc/mounts and I couldn't remount it read-write. I unmounted it, ran reiserfsck, fixed the errors, and tried to remount, but it said there was no entry in /proc/mounts, fstab, etc. for that device/mount point: where the heck did those go?
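For reference, the remount-then-repair sequence described above looks roughly like this from the console. The device name is a placeholder, and the sketch defaults to a dry run that only echoes the commands:

```shell
# Rough repair sequence for a forced-read-only reiserfs cache disk.
# DEV is a placeholder; leave DRY_RUN=1 to only print the commands.
DEV=${DEV:-/dev/sdX1}
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

run mount -o remount,rw /mnt/cache   # try flipping it back in place first
# If the remount is refused, unmount and repair, then remount:
run umount /mnt/cache
run reiserfsck --fix-fixable "$DEV"  # fix what can be repaired safely
run mount "$DEV" /mnt/cache
```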

 

Got so fed up with it that I disabled the cache drive entirely and added it as a regular drive, so I can reorganize my data prior to the server move. This one incident has cost me 48 hours. I'm building a new server and think I'm going to move to FlexRAID. I've had too many issues like this and don't like trusting my data to my dated/sketchy Unix knowledge, especially in times of crisis. I'll use the unRAID server (HP MicroServer) and a few cheap drives for backup in the detached garage.

 

Regards

 

mark

 


Sorry you couldn't get it to work.  Remember, any help I give is not official.  To get official help you should contact Tom by email.  He quite often doesn't respond on the forums for long periods of time, and it has been posted elsewhere that an email gets a quicker response.  FlexRAID is certainly an option if unRAID isn't working for you.

Hi Bob

 

Appreciate the help you gave me. My big problem is that I hardly ever touch my server; it's only when there are issues or I need to change the hardware. On a positive note, those occasions are few and far between (and usually at weekends, when it's hard to get support). But this is the crux of my problem: I used to be really good at UNIX. (When I worked in R&D, I wrote a pretty complex stress-testing framework for our QA department that could load-test an unlimited number of workstations, entirely in shell scripts.) Even if I re-learned it, unless I use it on a regular basis I'll be back in the same situation in 9-12 months, when something goes wrong or I need to change the config and I've forgotten most of it. All it takes is one stupid error and my data is gone. I'm not concerned about my critical stuff; that's in the cloud and can be recovered in a day or so (or a few minutes if I just need one or two docs), but pulling tens of TBs of data from CrashPlan would be a very long process.

 

Thanks again

 

Regards

 

Mark



I get that, but there's no explanation of where the entries in fstab, etc. went. Those files were never touched.

 

As I said, I was down for 48 hours trying to get all this fixed, and that's unacceptable for me (in addition to the hours I wasted): it could have been a lot worse. For those who are more familiar with, and regular users of, Unix, Linux, etc., it's less of an issue, but the last thing I want to be doing is guessing or experimenting with so much data at stake.

  • 2 weeks later...

If the drive keeps going read-only it's because of corruption.

 

A reiserfsck should probably have fixed it.

 

I would suggest grabbing a smartctl report.

 

I would also suggest posting the syslog to see why it's read only.

 

How much data is still left on the cache drive?

 

If you can copy it via rsync, I suppose worst case is, you could reformat it from scratch to lay down a new filesystem. 

 

However, I would still be concerned about the drive's health. In that case I would run a smart long test first.
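The smartctl commands for that would be roughly as follows (assuming the cache disk shows up as /dev/sdb; adjust for your system):

```shell
smartctl -H /dev/sdb           # quick overall health verdict
smartctl -a /dev/sdb           # full attribute and error-log report
smartctl -t long /dev/sdb      # start an extended (long) self-test
smartctl -l selftest /dev/sdb  # view the results once the test completes
```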


I'm so pissed off with this. My cache disk mover has been running for hours and still says there is 114GB used: it hasn't moved at all in hours. I've rebooted, I've run reiserfsck with every option I can find, and yet every time I run rm -r ... I get this damn message

A disk can be marked read-only if a) its file system is found to be corrupted at mount time, or b) it gets write errors trying to update file system metadata.  Drives for which this happens in the array get disabled, but since this is the cache drive, it gets marked read-only in order to minimize the corruption.  You ran a 'reiserfsck' and it didn't report any errors?  That is very strange; please post a system log.
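A quick way to confirm the drive really is in that state is to check the options field in /proc/mounts: a kernel-forced read-only mount shows "ro" as the first option. A small sketch (the mount point is the cache path from this thread; the helper name is mine):

```shell
# Report whether a /proc/mounts entry is mounted read-only.
# Each line has the form: device mountpoint fstype options dump pass
is_readonly() {
    opts=$(printf '%s\n' "$1" | awk '{print $4}')
    case "$opts" in
        ro|ro,*) echo readonly ;;   # first mount option is "ro"
        *)       echo readwrite ;;
    esac
}

line=$(grep ' /mnt/cache ' /proc/mounts)   # empty if not mounted
if [ -n "$line" ]; then
    is_readonly "$line"
fi
```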


Sounds to me like your cache disk essentially had a write failure and unRAID made it read only.

It's the 'reiserfs' file system code doing that to minimize corruption.  This almost always implies a bad drive or other bad h/w.

 

You could restore it to read/write access by removing the cache drive in the GUI and then starting the array once without it.  Then adding it back.

That won't do anything to solve the corruption on the disk.

 

I don't believe unRAID will reformat a cache drive if it is already formatted with reiserfs so you would then need to go to the now editable cache drive and delete the contents.

 

Note this is a guess as I've never had a problem with a cache drive that wasn't self inflicted myself.  (I started a drive reconstruction on a full cache drive and stopped said reconstruction when I realized my mistake.  I then had to use the reiserfs tools to reconstruct most of the data on the cache drive again.  I managed to recover most of it since I stopped the reconstruction almost immediately)

This is one reason why I like reiserfs.  You can clobber quite a bit of it and still recover a lot of data.

  • 3 years later...
1 hour ago, huntjules said:

@mark_anderson_us have you considered allocating the old cache drive to an array disk, so unRAID reformats it, then unassigning that drive from the array drives, stopping the array, and rebooting? Yes, this will involve a parity check, but it gets the data off the cache

What this would actually involve if you attempted this approach would be:

 

Assign old cache to array and let unRAID clear it and format it.

Remove old cache from array, set a New Config, and rebuild parity.

 

Putting drives in and out of the array is not quick and will involve running without protection for part of the long process, so obviously not the best way to solve the problem of reformatting cache.

 

If you only have a single cache disk, you can reformat it just like you would an array drive.

 

Stop array.

Click on cache drive to get to its page.

Change filesystem.

Start array to format.

 

If you want to change it back to the original filesystem just repeat.

