[nflug] backups on an nfs server

eric eric at bootz.us
Fri Aug 4 08:00:52 EDT 2006


Funny, I remember reading that many, many moons ago.

Mark Musone wrote:

>FYI,
>
>Don't get me wrong, I'm a dump/restore lover myself, and until last week I swore up and down that it's still the best thing since sliced bread.
>
>HOWEVER, the following has gotten me seriously thinking otherwise (you can Google more info about it too, some of it more recent):
>
>
>
>http://lwn.net/2001/0503/a/lt-dump.php3
>
>From:	 Linus Torvalds <torvalds at transmeta.com>
>To:	 Neil Conway <nconway.list at ukaea.org.uk>
>Subject: Re: [PATCH] SMP race in ext2 - metadata corruption.
>Date:	 Fri, 27 Apr 2001 09:59:46 -0700 (PDT)
>Cc:	 Kernel Mailing List <linux-kernel at vger.kernel.org>
>
>
>[ linux-kernel added back as a cc ]
>
>On Fri, 27 Apr 2001, Neil Conway wrote:
>>I'm surprised that dump is deprecated (by you at least ;-)).  What to
>>use instead for backups on machines that can't umount disks regularly? 
>
>Note that dump simply won't work reliably at all even in 2.4.x: the buffer
>cache and the page cache (where all the actual data is) are not
>coherent. This is only going to get even worse in 2.5.x, when the
>directories are moved into the page cache as well.
>
>So anybody who depends on "dump" getting backups right is already playing
>Russian roulette with their backups.  It's not at all guaranteed to get the
>right results - you may end up having stale data in the buffer cache that
>ends up being "backed up".
>
>Dump was a stupid program in the first place. Leave it behind.
>
>>I've always thought "tar" was a bit undesirable (updates atimes or
>>ctimes for example).
>
>Right now, the cpio/tar/xxx solutions are definitely the best ones, and
>will work on multiple filesystems (another limitation of "dump"). Whatever
>problems they have, they are still better than the _guaranteed_(*)  data
>corruptions of "dump".
>
>However, it may be that in the long run it would be advantageous to have a
>"filesystem maintenance interface" for doing things like backups and
>defragmentation..
>
>		Linus
>
>(*) Dump may work fine for you a thousand times. But it _will_ fail under
>the right circumstances. And there is nothing you can do about it.
>
>
>
>
>On Thu, Aug 03, 2006 at 06:47:19PM -0400, Cyber Source wrote:
>>eric wrote:
>>>Which would be better with an NFS share mounted from an NFS server:
>>>
>>>executing a tar and gzip command to package a home dir on the server's
>>>share from the client, or
>>>executing a tar and gzip command to package a home dir on the client and
>>>then moving the package to the server's share?
>>>
>>>
>>>I will want to do incremental backups once a full backup is created, so
>>>I'm not sure which would be better in the long run.
>>>What seems quicker is to run the increment on the share across the
>>>network instead of always moving/copying a large package from client
>>>to server.
>>>
>>>Thank you for your input,
>>>Eric
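For the incremental question above, one possible sketch (not from the thread: it assumes GNU tar's --listed-incremental mode, and uses temporary directories to stand in for the home dir and the mounted NFS share):

```shell
#!/bin/sh
# Sketch: full + incremental backups with GNU tar's --listed-incremental.
# All paths here are illustrative; in practice $DEST would be the NFS
# share mounted from the backup server.
set -e

SRC=$(mktemp -d)               # stand-in for the home dir being backed up
DEST=$(mktemp -d)              # stand-in for the mounted NFS share
SNAP="$DEST/home.snar"         # snapshot file tar uses to track changes
echo "one" > "$SRC/file1"

# Level 0 (full) backup, written straight to the share:
tar --listed-incremental="$SNAP" -czf "$DEST/home-full.tar.gz" -C "$SRC" .

# Later runs with the same snapshot file archive only what changed:
echo "two" > "$SRC/file2"
tar --listed-incremental="$SNAP" -czf "$DEST/home-incr.tar.gz" -C "$SRC" .

# The incremental archive stores file2 but does not re-archive file1:
tar -tzf "$DEST/home-incr.tar.gz"
```

Writing directly to the share, as sketched here, avoids staging a large archive on the client and then copying it, which matches Eric's observation about what seems quicker.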
>>>
>>>_______________________________________________
>>>nflug mailing list
>>>nflug at nflug.org
>>>http://www.nflug.org/mailman/listinfo/nflug
>>>
>>Gotta LOVE NFS!
>>Here is what we do, and it works very well for us. We have a backup 
>>server that has some drives LVM'd together, so we have huge storage 
>>capability. Then we have our boxes in the shop mount the respective 
>>shares exported from the backup server, and inside those respective 
>>shares are daily folders, so each box in the shop mounts its respective 
>>share from the backup server and they all then have, say, /backup/Monday, 
>>/backup/Tuesday, etc.
>>I then run cron jobs on the respective boxes to back up what is 
>>important on each box nightly to that share, into the daily folder for 
>>the respective day. As the days repeat, the folders get overwritten, 
>>which gives me seven days of redundancy. I like that a lot better than 
>>worrying about incremental backups, etc., and hard drive space is cheap 
>>today. So on my servers, say, I back up /etc, /home and /var, and have 
>>full dumps elsewhere in case of emergency for the rest.
>>I use dump to back up to these shares. I LOVE dump and its counterpart 
>>restore. Restore has an interactive flag (-i) that lets you cruise 
>>through the dumps and find the data you might need, in the same 
>>directory tree that it backed up. I also limit my dumps to 1 GB each; 
>>they then roll over into dump001, dump002, and so on, and the restore 
>>command can parse them all with one command.
>>The only thing that might stop someone from doing it this way is if 
>>they actually use ACLs, as dump does not handle ACLs. Other than that, 
>>it works beautifully.
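The weekly-rotation scheme above can be sketched as a crontab fragment (a config sketch, not from the thread; the paths, time, and filesystem are hypothetical, and it assumes the Linux dump package, whose -M flag writes numbered volumes when -B caps the volume size):

```shell
# /etc/crontab fragment (illustrative): nightly level-0 dump of /home into
# the day-of-week folder on the mounted NFS share.  -B 1048576 caps each
# volume at 1 GB (units are 1 kB blocks), -M makes dump treat the -f name
# as a prefix and split the output into home001, home002, ..., and -u
# records the dump in /etc/dumpdates.  Note % must be escaped in crontab
# command fields.
30 2 * * * root dump -0u -M -B 1048576 -f /backup/$(date +\%A)/home /home
```

To browse such a set with restore's interactive mode, the matching invocation would be along the lines of `restore -i -M -f /backup/Monday/home`, which reads the numbered volumes back as one dump.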
>>
