[NBLUG/talk] Samba and large files.

Scrappy Laptop scrappylaptop at yahoo.com
Fri Mar 20 11:24:13 PDT 2009


Funny this came up; I've been working a similar problem this week at work.  I'm trying to copy database files of more than a terabyte, sometimes from one server to another, sometimes to another volume on the same server, and the copy always bombs out.  After a fair amount of research I found the following, which may be pertinent to what you are seeing; if not, it is something to keep in mind if you copy to or from Windows servers on a regular basis.

Windows, from at least 2000 on, has used an I/O cache to copy files.  The cache segments are tracked using 2k or 4k per MB of the file being copied; when the pool of tracking entries runs out, the copy fails.  All native Windows file copy methods use the I/O caching (*shares*, xcopy, robocopy, Explorer), with the possible exception of eseutil, an Exchange utility that defragments Exchange by doing a file copy.  The worst part is that the caching is done on both the sending and receiving ends of a copy.  There are also indications that Windows may at times try to cache the entire file before completing the copy, or may fail to release cache segments or tracking memory.

The solution?  Use FTP or, in my case, DeltaCopy, a Windows rsync utility.  Neither uses the Windows copy buffering.  And apparently DeltaCopy servers will accept files from Linux rsync clients!
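
If anyone wants to try the rsync route from the Linux side, here is a
minimal sketch.  The host, module name, and file name are made-up
placeholders; the module name has to match whatever directory you
registered in the DeltaCopy server on the Windows box:

    # Push a large file to the rsync daemon that DeltaCopy runs on Windows.
    # --partial keeps partly-transferred data if the copy is interrupted.
    rsync --archive --partial --progress bigdb.mdf rsync://winserver/Backup/

    # The double-colon daemon syntax works too:
    # rsync -aP bigdb.mdf winserver::Backup/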

links and Google terms (whole lotta not-Linux stuff):
"File copy fails with the error Insufficient System Resources"
KB259837
FILE_FLAG_WRITE_THROUGH
FILE_FLAG_NO_BUFFERING
CreateFile() flag FILE_FLAG_SEQUENTIAL_SCAN

site:blogs.technet.com "Slow Large File Copy Issues"

(similar issue, same cause, same workarounds)

site:blogs.technet.com "File copy fails with the error Insufficient System Resources"

(the blogs.technet.com "search"...doesn't.  Thanks, MS!)

Also:
Mark Russinovich's blog, on a slightly different topic but with a full explanation of the file copy I/O cache, in eye-bleeding, mind-numbing detail:
http://blogs.technet.com/markrussinovich/archive/2008/02/04/2826167.aspx

Sorry for the long post on what is essentially a non-Linux problem, but it does affect us when we transfer *to* MS servers.

Frank




--- On Thu, 3/19/09, Steve Johnson <fratm at adnd.com> wrote:

> From: Steve Johnson <fratm at adnd.com>
> Subject: Re: [NBLUG/talk] Samba and large files.
> To: "General NBLUG chatter about anything Linux, answers to questions, etc." <talk at nblug.org>
> Date: Thursday, March 19, 2009, 1:57 PM
> That helps, thanks Chris,
> 
> I'm in a situation where changing the kernel is not an option, so I have
> come up with another solution: using tar and piping it into split to
> create smaller files.  This solves my problem for now.
> 
> If you are curious how I did it, you can read my blog entry about it:
> http://blog.fratm.com/node/11
> 
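
A rough sketch of the tar-and-split approach, assuming GNU tar and split
are available; the chunk size, names, and destination path below are just
examples (the size suffix spelling, 2000m vs. 2000M, varies by coreutils
version):

    # Stream a tar archive into ~2 GB pieces on the Samba share,
    # staying under the 2.1 GB per-file limit.
    tar -cf - bigdir | split -b 2000m - /mnt/share/bigdir.tar.

    # To reassemble and extract later:
    cat /mnt/share/bigdir.tar.* | tar -xf -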
> Thanks for the replies..
> 
> -Steve
> 
> (Thanks to Scott Doty for confirming the header issue I was having in
> this solution..)
> 
> 
> 2009/3/19 Christopher Wagner <waggie at waggie.net>
> 
> > I remember struggling with this particular limitation, now that you
> > mention it, but don't recall how I solved it.  It was on a Red Hat 7.2
> > box, and I don't remember having to compile anything, but that was an
> > awful long time ago.  I did end up mounting with CIFS but don't
> > remember precisely how.  Some Googling suggests that a RHEL3 kernel
> > upgrade to support CIFS is in order.
> >
> > Sorry I'm not more help.
> >
> > - Chris
> >
> > Steve Johnson wrote:
> >
> > Steve, Chris
> >
> > NTFS, from what I have been told by the other sysadmin.  This is on a
> > Win2k3 Server box.
> >
> > I've been digging around Google for this, and it looks like I may need
> > to mount the file system as CIFS, so I am going to see what is involved
> > in getting that file system support in my RHEL3 system.  It does not
> > look like Red Hat has support for this, so I may need to compile the
> > modules myself.
> >
> > -Steve
> >
> >
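
In case it saves someone a search, the CIFS mount itself is roughly the
following once the cifs kernel module and the mount.cifs helper are in
place (server, share, user, and mount point are placeholders):

    # Mount a Windows share over CIFS; mount.cifs prompts for the password.
    mkdir -p /mnt/bigshare
    mount -t cifs //winserver/bigshare /mnt/bigshare -o username=steve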
> > 2009/3/19 Christopher Wagner <waggie at waggie.net>
> >
> >> This is kind of a stupid question, but what's the destination
> >> filesystem?  I've encountered a similar problem in the distant past,
> >> and I believe it ended up being the filesystem that was causing the
> >> problem, not Samba.  Otherwise, I've never had issues transferring
> >> files larger than 2 GB with Samba.
> >>
> >> - Chris
> >>
> >> Steve Johnson wrote:
> >>
> >> I'm trying to move a 6 GB file over to a Samba share from my Linux
> >> box, and the system keeps bombing out at 2.1 GB.  This appears to be
> >> a limit with Samba, but I read that if you add -o lfs to the command
> >> line when mounting the Samba share, it will allow files larger than
> >> 2.1 GB.  But I am using automount, and it seems to ignore that flag.
> >>
> >> Does anyone know a way to make this work?
> >>
> >> Thanks.
> >>
> >> -Steve
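
For the archives, one way the lfs flag can be handed to automount is to
put it in the autofs map options, where it gets passed straight through to
smbmount.  Untested here, and the paths, share, and credentials below are
invented; whether the flag is honored depends on the autofs and smbfs
versions in play:

    # /etc/auto.master (placeholder mount point and map file)
    /mnt/smb   /etc/auto.smb   --timeout=60

    # /etc/auto.smb -- note the lfs option for large file support
    bigshare   -fstype=smbfs,lfs,username=steve,password=secret   ://winserver/bigshare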