I/O device error when trying to remove a drive from the pool

edited November 2012 in DrivePool
Perhaps someone out there has seen this issue before. I've done some digging through the forums, Google and the DrivePool manual but haven't found anything related. I have a WHS 2011 server with 4x Seagate 1.5TB drives holding all of the pooled and duplicated data. In addition, 2x 750GB drives hold the WHS 2011 operating system itself in a RAID 1 array; no pooled data is kept on that array, only the operating system.

One of the four Seagate 1.5TB drives is reporting bad sectors in StableBit Scanner. All of my shared folders are duplicated across all four drives, so I went through the process of having DrivePool safely remove the affected drive from the pool. It went through the calculating and migrating steps, but eventually stopped and displayed an error.

I've attached a screen capture of the error which states "Drive not removed from pool" & "The request could not be performed because of an I/O device error". Is there anything in particular that might cause this error when trying to remove the drive? Is it because the drive is failing worse than originally thought so it can no longer move files around on it? I haven't tried removing a different drive yet to see if that also produces the same error but I'll try that next.

[Attached: screenshot of the "Drive not removed from pool" error]

Comments

  • edited November 2012 Resident Guru
    Yes, the normal removal process for a drive in the pool attempts to move the files off it to the remaining drives; if the drive is failing in such a way as to prevent this, you'll get that error (or similar).

    In such a case, once you've made sure you've got at least one copy of the files elsewhere (e.g. via duplication), you can try the "quick removal" option; if that also doesn't work, shut down the server, pull the drive manually, then power the server back up and tell DrivePool to drop the missing drive without migrating its files.
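
    If you want to double-check the "at least one copy elsewhere" part before dropping the drive, here's a minimal Python sketch (not part of DrivePool itself) that walks the failing drive's hidden PoolPart folder and reports any files that don't also exist on one of the other pool drives. The drive letters and PoolPart folder names below are hypothetical examples; substitute your own.

        import os

        # Hypothetical example paths -- use your own drive letters and the
        # actual hidden PoolPart.* folder names (they contain a GUID).
        FAILING_POOLPART = r"E:\PoolPart.xxxx"      # drive you want to remove
        OTHER_POOLPARTS = [r"F:\PoolPart.yyyy",     # the remaining pool drives
                           r"G:\PoolPart.zzzz",
                           r"H:\PoolPart.wwww"]

        missing = []
        for root, _dirs, files in os.walk(FAILING_POOLPART):
            rel = os.path.relpath(root, FAILING_POOLPART)
            for name in files:
                rel_path = os.path.join(rel, name)
                # A file is safe to lose only if a copy exists on another drive.
                if not any(os.path.exists(os.path.join(p, rel_path))
                           for p in OTHER_POOLPARTS):
                    missing.append(rel_path)

        print(f"{len(missing)} file(s) have no copy on the other pool drives")
        for path in missing:
            print("  " + path)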
  • Yeah I'm starting to believe the drive has failed to the point of not being recoverable. I was able to successfully remove one of the working drives without issue. However, trying to remove this particular one by any method results in the I/O error.

    The thing is, when I removed the I/O error-prone drive the first time, DrivePool stated that I had lost files, but I have duplication turned on for all shared folders. How could I lose files by removing one drive if everything is supposed to be duplicated to begin with?
  • edited November 2012 Resident Guru
    Hmm. I'm not sure how, but here's a way to check for yourself:

    1. Physically pull the drive, add it to a different machine, and rename its poolpart.string folder to poolpart.BAD.string or similar (or you can stop both DrivePool services, rename it, then start them again).

    2. On the server, download TreeComp from http://lploeger.home.xs4all.nl/TreeComp3.htm

    3. Put the drive back into the server and run TreeComp; do a time+size comparison between your pool drive and the poolpart.BAD.string folder on the bad drive. Don't do a byte-by-byte content comparison: it would take a long time, and since the drive has bad sectors it may hang partway through. (A rough script version of this comparison is sketched at the end of this comment.)

    4. The results should allow you to easily find any files on the bad drive that aren't in the pool (if the bad drive's pathname is coloured blue, then files that are missing from the pool will have solid blue circles next to them).

    EDIT 2012-11-29: in case someone reads this and is scratching their head about stopping "both" drivepool services, at some point while I wasn't looking it seems DrivePool was condensed into one service rather than two.
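
    If you'd rather script the time+size check than install TreeComp, here's a rough Python sketch of the same idea: it walks the renamed PoolPart.BAD folder and flags anything that's missing from the pool drive, differs in size or timestamp, or can't be read at all. The drive letters and folder names are hypothetical examples, and it only stats files, so the bad drive is barely touched.

        import os

        # Hypothetical example paths -- substitute your own.
        POOL_DRIVE = "P:\\"                        # the DrivePool virtual drive
        BAD_POOLPART = r"E:\PoolPart.BAD.xxxx"     # renamed folder on the bad drive

        suspect = []
        for root, _dirs, files in os.walk(BAD_POOLPART):
            rel = os.path.relpath(root, BAD_POOLPART)
            for name in files:
                rel_path = os.path.join(rel, name)
                bad_file = os.path.join(BAD_POOLPART, rel_path)
                pool_file = os.path.join(POOL_DRIVE, rel_path)
                try:
                    bad_stat = os.stat(bad_file)
                except OSError:
                    suspect.append(rel_path + "  (unreadable on bad drive)")
                    continue
                if not os.path.exists(pool_file):
                    suspect.append(rel_path + "  (missing from pool)")
                    continue
                pool_stat = os.stat(pool_file)
                # Time + size only, like TreeComp's default comparison --
                # no byte-by-byte read. Allow 2s slack for NTFS/FAT timestamps.
                if (bad_stat.st_size != pool_stat.st_size or
                        abs(bad_stat.st_mtime - pool_stat.st_mtime) > 2):
                    suspect.append(rel_path + "  (differs)")

        print(f"{len(suspect)} file(s) to review:")
        for entry in suspect:
            print("  " + entry)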

  • I appreciate the helpful feedback! However, I didn't end up trying it. I cleaned out old files from the pool, and since I was still getting the I/O errors, I shut the server down, pulled the drive, and had DrivePool remove the now-missing drive from the pool rather than using the safe removal process. Duplication is back up to 100%, though there's very little space left on the server even after clearing out the old files. I'm okay with that.
  • It would be best if the remove process could skip files that cause I/O errors and continue on. A failing drive may have some areas that throw errors and others that are fine, so it would be extremely helpful to have DrivePool migrate whatever is still reachable rather than having to hack around it manually.
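
    In the meantime, something along those lines can be done by hand against the drive's hidden PoolPart folder. Here's a minimal Python sketch (not a DrivePool feature, and the paths are hypothetical examples) that copies whatever is still readable to a healthy pool drive and logs the files that throw I/O errors instead of aborting.

        import os
        import shutil

        # Hypothetical example paths -- substitute your own PoolPart folders.
        SOURCE_POOLPART = r"E:\PoolPart.xxxx"   # failing drive's hidden pool folder
        DEST_POOLPART = r"F:\PoolPart.yyyy"     # healthy pool drive with free space

        skipped = []
        for root, _dirs, files in os.walk(SOURCE_POOLPART):
            rel = os.path.relpath(root, SOURCE_POOLPART)
            dest_dir = os.path.join(DEST_POOLPART, rel)
            os.makedirs(dest_dir, exist_ok=True)
            for name in files:
                src = os.path.join(root, name)
                dst = os.path.join(dest_dir, name)
                try:
                    shutil.copy2(src, dst)          # copies data and timestamps
                except OSError as err:
                    # Bad sectors usually surface here as I/O errors;
                    # record the file and keep going.
                    skipped.append((src, err))

        print(f"Done; {len(skipped)} file(s) could not be read:")
        for src, err in skipped:
            print(f"  {src}: {err}")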