
Transfer Speed Drops

edited October 2011 in DrivePool

Is anyone else noticing, when copying files to the server, that the speed continually drops down to nothing and then picks back up to full whack?

For example, copying an 8 GB MKV: the transfer starts at gigabit speeds for 10 seconds (sometimes more), then the speed drops to 0 and pauses for about 5 seconds, then heads back up to gigabit speeds for up to 30 seconds.

It does this about 5 or 6 times during an 8 GB transfer, and I notice that RAM usage goes past 70% (4 GB installed, i3-2100 CPU).

I can't remember seeing this in the early M2 builds, but I noticed it in the later builds and put it down to optimization still being needed.

Using build 3956 at the moment, with Fast I/O and Direct I/O ticked.

These folders are not duplicated.


Comments

  • Yes, I've noticed this; it's like clockwork.  I've only been using M3 for a few weeks, so I don't know whether there was a time it didn't happen.

    It seems like a buffer fills and needs to be emptied before more can be transferred across the network.  Hopefully it's something Alex will be addressing further with M4.

    Jeff
  • edited October 2011 Member
    Alex is already aware of this issue; I reported it a while back and he said it's down for further investigation once the bug fixes are out of the way. It also affects transfers to non-pooled drives.

    It's good to see it was nothing odd with my setup, as you are reporting the same problem.

    Steve.
  • Hmm, did some further testing.  hopester is correct: it happens with the D: drive as well as the pool.  I've also tested the same copy to my Windows Server 2008 R2 server, and it doesn't happen there.

    All my Intel NICs are configured the same on my W7, R2, and WHS machines, so I'm not sure what on WHS is making this happen.  Anyone else have any ideas?
  • Is it easy to prove that it is caused by DrivePool on WHS rather than something in the base WHS?
  • I now think it's something in WHS.  Uninstalling DrivePool would prove it one way or the other.
  • Covecube
    I still need to look into this in more detail but I've noticed that copying a file to a WHS (non-pooled location) will fill the memory buffer and offer faster speeds than the drive is capable of. At some point, Windows runs out of buffer space in memory and it decides to freeze the incoming write requests until the buffer is emptied out.

    So to the end user it looks like the copy runs really fast, then freezes for 30 seconds or so, then continues, giving the impression that something got stuck. But when monitoring the actual disk I/O, it never stops; I've actually witnessed an increase in disk throughput during that pause, probably because Windows doesn't have to deal with incoming write requests and can just focus on flushing the memory buffer.

    I still have to do more testing to see what is causing this behavior. But my initial tests show that this is not causing slower sustained throughput.
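The burst-then-stall behavior Alex describes above (writes landing in the OS cache almost instantly until it fills, then a write call stalling while the cache drains to disk) can be observed with a rough timing sketch. This is only an illustration, not anything from DrivePool, and the chunk size and count are arbitrary:

```python
import os
import tempfile
import time

def timed_writes(path, chunk=1024 * 1024, count=64):
    """Write `count` chunks of `chunk` bytes and time each write() call.

    With a large OS write cache, early calls return almost instantly
    (they only touch RAM); a stall shows up as a single call taking far
    longer while the cache is flushed out to disk.
    """
    buf = os.urandom(chunk)
    times = []
    with open(path, "wb") as f:
        for _ in range(count):
            start = time.perf_counter()
            f.write(buf)
            times.append(time.perf_counter() - start)
    return times

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        path = tmp.name
    try:
        times = timed_writes(path)
        print(f"fastest write: {min(times):.6f}s  slowest: {max(times):.6f}s")
    finally:
        os.remove(path)
```

On a machine showing the problem, the slowest call would be orders of magnitude above the fastest, matching the multi-second pauses reported in this thread.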
  • But if you stop the StableBit services during a copy, this behavior stops and it just copies like it should, without the pauses. Or so it seems in Resource Monitor.

    Bjørn
  • I don't know; I've seen this sort of behavior on my WHS 2011 box without DrivePool ever having been installed, so it may be something that DrivePool's presence exacerbates.  I'm not entirely convinced it's strictly a DrivePool *caused* problem.
  • Just tried a few 8 GB file transfers to a non-pooled SSD with DrivePool enabled and then removed (the pool is on other drives). With it enabled, all the transfers were slower by around 20%. It's the file caching that's causing the problems: as the cache is emptied the disk I/O does slightly increase, but as soon as the cache starts to fill up again it slows down, and the overall speed is lost.
  • Has anyone tried this on SBS Essentials?
  • Has anyone noticed this still happens in M4?

  • Covecube
    I've observed this periodic flushing of the cache when testing M4.

    I then put in explicit code to flush the cache for every file, and that really killed the performance, so it's not that DrivePool is flushing the cache in that way.

    Also, I've noticed that this behavior can be avoided by using something like TeraCopy, which disables the Windows cache and uses its own.

    I plan to investigate this further before release.
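Alex mentions that TeraCopy sidesteps the problem by disabling the Windows cache and managing its own buffer. A rough, cross-platform approximation of that idea is to force each chunk out to disk before reading the next one, so writes never pile up in the OS cache; `os.fsync` here stands in for Windows unbuffered I/O flags, and this is a sketch of the technique, not TeraCopy's actual implementation:

```python
import os

def copy_write_through(src, dst, chunk=4 * 1024 * 1024):
    """Copy src to dst, flushing every chunk through the OS cache before
    reading the next one, so the cache never accumulates a multi-gigabyte
    backlog that has to drain in one long stall."""
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            buf = fin.read(chunk)
            if not buf:
                break
            fout.write(buf)
            fout.flush()
            os.fsync(fout.fileno())  # push this chunk all the way to disk
```

This trades peak burst speed for a steady rate, which matches what the thread observes with TeraCopy-style copying.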
  • Installed the latest build 5752 and the speed was constant, no drops like before.

    Will do some more testing, but it's looking good.

    Excellent work.

  • Installed the latest build 5805 and sadly the speed drops have returned.
  • Downgraded to the last build and got a speed drop.

    Will do some more testing.

  • Member
    For me this is completely solved now with release 5824. Thanks again Alex, getting better all the time and looking forward to the future with this.
  • Member
    I'm seeing this in 5824 - for the first time, it has never happened before.  Write times go to zero, and after about 20 seconds kick back in.
  • edited March 2012 Member
    I'm seeing this in 5824 - for the first time, it has never happened before.  Write times go to zero, and after about 20 seconds kick back in.
    I am having horrible speed drops as well.  I have never had a slowdown before.  Now I can't even copy a folder of files.  If I move files from a drive on the server to the pool, everything goes quickly.  But every time I try to copy files across the network into the DrivePool folder, things freeze up horribly, as if it is trying to cache something.  The freeze time is about the same for me too: every file that gets copied freezes for 20 seconds, then copies.

    I can also 100% confirm the issue is with 5824.  I removed all traces of DrivePool from my computer and uninstalled it.  I have been keeping a folder of every version, so I reinstalled build 5805 from my backups, and now everything is back to full speed, with copy speeds back up around 80 MB/s.  On 5824 my speeds could not get over 27 KB/s.  I think I am just gonna stay on 5805 until this problem gets fixed.

    Also just to clarify, the slowdown ONLY happens when copying over the network.  If I copy between the drivepool and drives ON the server itself it is very fast.  When copying from one PC to the shared server folders is where the problem occurs.

    EDIT: I saw this in the changelog and this appears to be exactly what happens every time I try to copy files to the pool on 5824, so I think this issue may have been introduced and what EricC and I are seeing: * Fixed a longstanding issue that was causing periodic (30 sec.) disk cache flushing on all the disks, pooled and non-pooled. This was reported back when BETA M3 first went public.
  • Not stalling for me performance has been wonderful.
  • edited March 2012 Member
    Yep, I updated last night as well and just tested a transfer. Same thing for me as mjnitx02.
    ^ reverting back

    edit: discovered I can't revert back because I have a newer version installed.

    I don't want to uninstall and lose the settings/setup I have now.
  • Resident Guru
    It should be possible to revert back by uninstalling the newer version yourself before reinstalling the older version?
  • I rolled back by: 1) uninstalling the current version, 2) deleting the DrivePool folder in ProgramData, 3) rebooting the server, and 4) reinstalling the older version.

    I didn't lose any settings when I did it.  It took my system about 25 minutes to remeasure all of the drives (I have 10 in the pool) and then everything reappeared exactly as it was before and all of my shares returned to normal.  Only difference is I am able to transfer data at normal speeds using the older version.
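The settings-cleanup step of the rollback above could be scripted roughly as follows. The ProgramData folder name is an assumption taken from the post (verify it on your own server before deleting anything), and the uninstall/reinstall steps themselves still have to be done through the installers:

```python
import shutil
from pathlib import Path

# Assumed location of DrivePool's settings folder, per the post above --
# confirm this path on your own WHS 2011 box before running anything.
SETTINGS_DIR = Path(r"C:\ProgramData\DrivePool")

def remove_settings(settings_dir=SETTINGS_DIR):
    """Step 2 of the rollback: delete the settings folder so the older
    build starts from a clean remeasure. The pooled data itself is not
    touched. Returns True if the folder existed and was removed."""
    settings_dir = Path(settings_dir)
    if settings_dir.exists():
        shutil.rmtree(settings_dir)
        return True
    return False
```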
  • edited March 2012 Member

    Isn't it strange that for some of you 5824 causes slowdowns, while for others who had stalling prior to this build it cures them?

    Before 5824, all the way back to M3, my transfers would stall after about 30 seconds or so; speed would be around 110 MB/s until then, then stall for around 20 seconds, resume for about 10 seconds, and so on, repeating until the transfer was done.

    Now that I'm using 5824, speed is around 110 MB/s for the full duration; it does vary slightly, down to around 100 MB/s, but the stalling has absolutely stopped.

    Oh and of course I'm referring to transfers over the network to and from pooled duplicated folders. 

  • @btb66:
    That makes me wonder if this is hardware related.  Maybe there should be a manual option to disable/enable caching, or it should be set up per server board type.  I am not sure of the specifics, as I have never programmed NTFS I/O software.  Since we are all running WHS 2011, it would almost have to be related to the hardware.

    Maybe if people start posting up their server hardware/controllers we could find some relation between the slowdowns and the hardware.
    I have an HP ProLiant N40L with a Rosewill 2-port eSATA card in the PCI-E slot.  I have 2 HDDs hooked up to the internal SATA in the server, and two 4-bay eSATA enclosures hooked to the PCI-E eSATA card.  There are 9 HDDs in DrivePool and 1 other drive for the OS.  All of the external drives are WD Green drives: 6 are the 2TB models, 2 are 1TB, and the 2 internal drives are 1TB Seagate Barracudas.
  • edited March 2012 Member
    I had to roll back to 5805.  Using 5824, my write speeds to a duplicated share hovered around 40-50 KB/s.  Read speeds were fine, as were writes to a non-duplicated share.  With 5805 I'm over 50 MB/s on writes, regardless of duplication.
  • @btb66:
    That makes me wonder if this is hardware related. [...]
    Well, it's certainly a software problem. It may be correlated with the hardware controller being used, but everyone is noticing the problem (or the benefit) after a software change. I am using an Intel 5/3400 series controller with Samsung F4 2TB drives for storage, a 250GB Seagate 7200.10 for the OS, and a 2TB WD Green for backup. I will just wait and update once he has the bug worked out.
  • edited March 2012 Member
    Well, it's certainly a software problem. 
    agree (strongly).
  • Tried out the new 5831, and the 30-second lockup problem on every file transfer still exists.  It's back to 5805 again, as these new updates are completely unusable.
  • Care to be a guinea pig for 5843?  Looks like Alex may have addressed the problem.
  • The problem is definitely fixed for me with 5843! Thank you for the timely fix Alex.