[nflug] backups on an nfs server

Cyber Source peter at thecybersource.com
Fri Aug 4 11:33:33 EDT 2006


Cyber Source wrote:
> Mark Musone wrote:
>> FYI,
>>
>> Don't get me wrong, I'm a dump/restore lover myself, and until last
>> week I swore up and down that it's still the best thing since sliced
>> bread.
>>
>> HOWEVER, the following has gotten me seriously thinking otherwise
>> (you can google for more info about it too, some of it more recent):
>>
>>
>>
>> http://lwn.net/2001/0503/a/lt-dump.php3
>>
>> From: Linus Torvalds <torvalds at transmeta.com>
>> To: Neil Conway <nconway.list at ukaea.org.uk>
>> Subject: Re: [PATCH] SMP race in ext2 - metadata corruption.
>> Date: Fri, 27 Apr 2001 09:59:46 -0700 (PDT)
>> Cc: Kernel Mailing List <linux-kernel at vger.kernel.org>
>>
>>
>> [ linux-kernel added back as a cc ]
>>
>> On Fri, 27 Apr 2001, Neil Conway wrote:
>>> I'm surprised that dump is deprecated (by you at least ;-)). What to
>>> use instead for backups on machines that can't umount disks regularly? 
>>
>> Note that dump simply won't work reliably at all even in 2.4.x: the
>> buffer cache and the page cache (where all the actual data is) are
>> not coherent. This is only going to get even worse in 2.5.x, when the
>> directories are moved into the page cache as well.
>>
>> So anybody who depends on "dump" getting backups right is already
>> playing Russian roulette with their backups. It's not at all
>> guaranteed to get the right results - you may end up having stale
>> data in the buffer cache that ends up being "backed up".
>>
>> Dump was a stupid program in the first place. Leave it behind.
>>
>>> I've always thought "tar" was a bit undesirable (updates atimes or
>>> ctimes for example).
>>
>> Right now, the cpio/tar/xxx solutions are definitely the best ones,
>> and will work on multiple filesystems (another limitation of "dump").
>> Whatever problems they have, they are still better than the
>> _guaranteed_(*) data corruptions of "dump".
>>
>> However, it may be that in the long run it would be advantageous to
>> have a "filesystem maintenance interface" for doing things like
>> backups and defragmentation..
>>
>> Linus
>>
>> (*) Dump may work fine for you a thousand times. But it _will_ fail
>> under the right circumstances. And there is nothing you can do about
>> it.
>>
>>
>>
>>
>> On Thu, Aug 03, 2006 at 06:47:19PM -0400, Cyber Source wrote:
>>> eric wrote:
>>>> Which would be better with a mounted NFS share from an NFS server:
>>>>
>>>> executing a tar and gzip command to package a home dir on the
>>>> server's share from the client, or
>>>> executing a tar and gzip command to package a home dir on the
>>>> client and then moving the package to the server's share?
>>>>
>>>> I will want to do incremental backups once a full backup is
>>>> created, so I'm not sure which would be better in the long run.
>>>> What seems to be quicker is to run the incremental against the
>>>> share across the network instead of always moving/copying a large
>>>> package from client to server.
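>>>>
>>>> To make the two options concrete, here is roughly what I mean
>>>> (paths are just examples):
>>>>
>>>>     # Option 1: write the archive straight onto the mounted share
>>>>     tar -czf /mnt/backup/home-full.tar.gz /home/eric
>>>>
>>>>     # Option 2: package locally, then move the result to the share
>>>>     tar -czf /tmp/home-full.tar.gz /home/eric
>>>>     mv /tmp/home-full.tar.gz /mnt/backup/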
>>>>
>>>> Thank you for your input,
>>>> Eric
>>>>
>>> Gotta LOVE NFS!
>>> Here is what we do, and it works very well for us. We have a backup
>>> server with some drives LVM'd together, so we have huge storage
>>> capacity. Each box in the shop mounts its respective share exported
>>> from the backup server, and inside those shares are daily folders,
>>> so every box sees /backup/Monday, /backup/Tuesday, etc.
>>> I then run cron jobs on the respective boxes to back up whatever is
>>> important on that box to the daily folder for that day. As the days
>>> repeat, the folders get overwritten, which gives me 7 days of
>>> redundancy. I like that a lot better than worrying about incremental
>>> backups, and hard drive space is cheap today. On my servers, say, I
>>> back up /etc, /home and /var, and keep full dumps elsewhere in case
>>> of emergency for the rest.
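>>>
>>> A minimal sketch of one of those nightly cron jobs (the paths and
>>> the dump level are illustrative, not our exact setup):
>>>
>>>     # /etc/cron.d/nightly-backup (hypothetical)
>>>     # 2:30 AM: full dump of /etc into the folder for today's weekday
>>>     # (note that % must be escaped inside crontabs)
>>>     30 2 * * * root /sbin/dump -0 -f /backup/$(date +\%A)/etc.dump /etc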
>>> I use dump to back up to these shares. I LOVE dump and its
>>> counterpart restore. Restore has an interactive flag (-i) that lets
>>> you cruise through the dumps and find the data you might need, in
>>> the same directory tree that was backed up. I also limit my dumps
>>> to 1 GB each; they then spill over into dump001, dump002, and so
>>> on, and the restore command can parse them all with one command.
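>>>
>>> For example (file names hypothetical), the 1 GB-limited multi-volume
>>> dump and the interactive restore look roughly like this:
>>>
>>>     # -M treats -f as a prefix: home.dump001, home.dump002, ...
>>>     # -B caps each volume at ~1 GB (the unit is 1 kB blocks)
>>>     dump -0 -M -B 1048576 -f /backup/Monday/home.dump /home
>>>
>>>     # -M again lets restore walk all the volumes in one command
>>>     restore -i -M -f /backup/Monday/home.dump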
>>> The only thing that might stop someone from doing it this way is if
>>> they actually use ACLs, as dump does not handle ACLs. Other than
>>> that, it works beautifully.
>>>
> Wow, and that coming from Linus himself. I myself have never seen it
> bork, and I get verbose email confirmations of the data that gets
> dumped nightly (if that really means anything as far as backup
> validity goes). I could just change the commands to tar, but I really
> love restore's interactive mode. I also imagine that Linus is a
> complete stickler for having things perfect, and in the case of
> backups, anyone should be. I worked for years with an NT Veritas tape
> backup solution that continually reported all the backups fine, only
> to find NOTHING actually on the tapes! What a complete abortion that
> Veritas thing was, especially in the manner of interaction between
> the software and the tape drive. Anywho, thanks for the info, I will
> pay closer attention to my backups from now on (as anyone should) ;)
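>
> One way to actually verify a dump rather than trusting the report
> (a sketch; the path is hypothetical): restore has a compare mode
> that reads the dump back and diffs it against the live filesystem.
>
>     # compare last night's dump against what is on disk now
>     restore -C -f /backup/Monday/etc.dump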
>
I did some more digging on dump. For a "stupid" (as was mentioned)
program, dump seems to be kept well up to date. Here is an excerpt from
the man page of the dump that was just installed on an Ubuntu Dapper box
(note the date of January 2, 2006). Also, there is no mention of the
cache problem in the bug list.

BUGS
It might be considered a bug that this version of dump can only handle
ext2/3 filesystems. Specifically, it does not work with FAT filesystems.

Fewer than 32 read errors (change this with -I) on the filesystem are
ignored. If noticing read errors is important, the output from dump can
be parsed to look for lines that contain the text 'read error'.

When a read error occurs, dump prints out the corresponding physical
disk block and sector number and the ext2/3 logical block number. It
doesn't print out the corresponding file name or even the inode number.
The user has to use debugfs(8), commands ncheck and icheck, to translate
the ext2 block number printed out by dump into an inode number, then
into a file name.

Each reel requires a new process, so parent processes for reels already
written just hang around until the entire tape is written.

The estimated number of tapes is not correct if compression is on.

It would be nice if dump knew about the dump sequence, kept track of
the tapes scribbled on, told the operator which tape to mount when, and
provided more assistance for the operator running restore.

Dump cannot do remote backups without being run as root, due to its
security history. Presently, it works if you set it setuid (like it
used to be), but this might constitute a security risk. Note that you
can set RSH to use a remote shell program instead.

AUTHOR
The dump/restore backup suite was ported to Linux's Second Extended
File System by Remy Card <card at Linux.EU.Org>. He maintained the
initial versions of dump (up to and including 0.4b4, released in
January 1997).

Starting with 0.4b5, the new maintainer is Stelian Pop
<stelian at popies.net>.

AVAILABILITY
The dump/restore backup suite is available from
<http://dump.sourceforge.net>.

HISTORY
A dump command appeared in Version 6 AT&T UNIX.



BSD version 0.4b41 of January 2, 2006 DUMP(8)
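
Since the man page suggests parsing dump's output for read errors,
something like this in a nightly script would catch them (the log path,
device, and block/inode numbers below are hypothetical):

    # dump writes its diagnostics to stderr; capture and scan them
    dump -0 -f /backup/Monday/etc.dump /etc 2> /tmp/dump.log
    if grep -q 'read error' /tmp/dump.log; then
        echo "dump reported read errors" | mail -s "backup warning" root
    fi

    # and, per the man page, translating a reported block number into a
    # file name with debugfs: icheck maps block -> inode, then feed the
    # inode it prints into ncheck to get the path
    debugfs -R 'icheck 123456' /dev/sda1
    debugfs -R 'ncheck 789' /dev/sda1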

_______________________________________________
nflug mailing list
nflug at nflug.org
http://www.nflug.org/mailman/listinfo/nflug


