[nflug] backups on an nfs server

Cyber Source peter at thecybersource.com
Thu Aug 3 18:47:19 EDT 2006


eric wrote:
> Which would be better with a mounted NFS share from an NFS server:
>
> executing a tar and gzip command to package a home dir on the server's
> share from the client, or
> executing a tar and gzip command to package a home dir on the client and
> then moving the package to the server's share?
>
> I will want to do incremental backups once a full backup is created, so
> I'm not sure which would be better in the long run. What seems quicker
> is to run the increments on the share across the network instead of
> always moving/copying a large package from client to server.
>
> Thank you for your input,
> Eric
>
Gotta LOVE NFS!
Here is what we do, and it works very well for us. We have a backup 
server with several drives LVM'd together, so we have huge storage 
capacity. Each box in the shop mounts its respective share exported 
from the backup server, and inside each of those shares are daily 
folders, so every box ends up with /backup/Monday, /backup/Tuesday, 
and so on.
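
If anyone wants to copy the layout, the export and mount boil down to 
something like the following sketch (the hostnames and paths here are 
made up, so adjust them to your own setup):

    # /etc/exports on the backup server ("backuphost") -- one share per box
    /exports/webbox   webbox(rw,sync,no_subtree_check)

    # /etc/fstab on the client ("webbox") -- mount its share at /backup
    backuphost:/exports/webbox  /backup  nfs  rw,hard,intr  0 0

    # create the daily folders once on the client
    mkdir -p /backup/{Monday,Tuesday,Wednesday,Thursday,Friday,Saturday,Sunday}
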
I then run cron jobs on each box to back up whatever is important on 
that box to its share nightly, into the folder for that day of the 
week. As the days repeat, the old backups get overwritten, which gives 
me seven days of redundancy. I like that a lot better than worrying 
about incremental backups, and hard drive space is cheap today. On my 
servers, say, I back up /etc, /home and /var, and keep full dumps 
elsewhere in case of emergency for the rest.
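
To give a rough idea of the schedule, the cron side is just one line 
per box; the script name below is hypothetical, ours simply wraps the 
dump commands shown further down:

    # root's crontab on each box -- kick off the backup at 2am nightly
    0 2 * * * /usr/local/sbin/nightly-backup.sh

    # inside that script, the day-of-week folder is picked with:
    DEST=/backup/$(date +%A)    # e.g. /backup/Monday
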
I use dump to back up to these shares. I LOVE dump and its counterpart 
restore. Restore has an interactive flag (-i) that lets you cruise 
through the dumps and find the data you need, in the same directory 
tree that was backed up. I also limit my dumps to 1 GB each; dump then 
writes additional volumes (dump001, dump002, and so on), and the 
restore command can parse them all with one command.
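
For concreteness, the flags involved look roughly like this (a level-0 
dump is shown; -u records it in /etc/dumpdates, -M turns on multi-volume 
mode, and -B caps each volume, here at 1048576 one-kilobyte records, 
i.e. about 1 GB; the paths are just examples):

    # dump /home into today's folder, split into ~1 GB volumes
    # (dump writes home.dump001, home.dump002, ... as each one fills)
    dump -0u -B 1048576 -M -f /backup/$(date +%A)/home.dump /home

    # browse the backup and pull files out interactively, across all volumes
    restore -i -M -f /backup/Monday/home.dump
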
The only thing that might stop someone from doing it this way is if 
they actually use ACLs, as dump does not preserve ACLs. Other than 
that, it works beautifully.

_______________________________________________
nflug mailing list
nflug at nflug.org
http://www.nflug.org/mailman/listinfo/nflug


