NFS problem

Justin Bennett justin.bennett at dynabrade.com
Thu Oct 2 14:57:04 EDT 2003


The automounter uses NFS; people usually run NFS, NIS, and the automounter 
together. You set up a pseudo mount point like /home. Then you have an 
auto.home map in /etc (as well as an auto.master file) with entries like

user nas:/export/mishome/&

The & just means "fill in the key" (the username here). The entry could 
also be written out explicitly:
user nas:/export/mishome/user

Then, when someone or something tries to access /home/user, the automounter 
mounts nas:/export/mishome/user there. So it's not mounted by default, but 
as soon as something touches that directory it gets mounted. There is also 
a timeout, so the mount is unmounted when it's no longer needed. I set the 
users' home directories in /etc/passwd to /home/user, so whenever mail gets 
delivered to the maildir directory in a user's home, it is mounted off the 
NAS box if it isn't mounted already.
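
For the record, here is a sketch of what the two map files can look like. 
The hostname nas and the paths are just the examples from above, and the 
300-second timeout is illustrative; the wildcard form is an alternative to 
listing each user by name:

/etc/auto.master:

    # mount point    map file          options
    /home            /etc/auto.home    --timeout=300

/etc/auto.home:

    # literal key, as in the example above; & expands to the key ("user")
    user    -rw,hard,intr    nas:/export/mishome/&
    # or one wildcard entry that covers every name looked up under /home
    *       -rw,hard,intr    nas:/export/mishome/&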

So a df -k would look like:

Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/sda1               497829    223904    248223  48% /
none                    644604         0    644604   0% /dev/shm
/dev/sda5               497829      8764    463363   2% /tmp
/dev/sda2              2522076   1125360   1268600  48% /usr
/dev/sda6              4648864   3199848   1212864  73% /var
nas:/export/home/user1
                      45358500  22874616  20179764  54% /home/user1
nas:/export/home/user2
                      45358500  22874616  20179764  54% /home/user2
nas:/export/home/user3
                      45358500  22874616  20179764  54% /home/user3

 It works fine for 200 users like us. I'm not sure how automounting 4000 
home directories would work; it would make running a df -k a real adventure.
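
(One way to tame that, assuming GNU df: limit the listing to NFS 
filesystems with

    df -k -t nfs

which skips the local disks entirely.)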

Some more info:
http://penguin.epfl.ch/athome/
http://www.linux-consulting.com/Amd_AutoFS/autofs-5.html



S. Johnson wrote:

> Hi Justin,
>
> At 11:39 10/02/03 -0400, you wrote:
>
>> I don't do NFS mounts that way; I mount my NFS volumes using the 
>> automounter. I don't have any grief, but I don't have that many 
>> users, only about 200. The automounter will expire mounts and 
>> unmount them when not being used.
>
>
> I am not sure this would be a good option, given that the volume I am 
> sharing has 4500+ users constantly getting and checking mail.  This 
> share would have constant RW I/O.  What does automounter use to 
> connect the systems?
>
>> What OS is on servers 1 and 2? Possibly an NFS version conflict? I used 
>> to have some grief with NFS when mounting shares on a Solaris Intel 
>> box from a Linux box.
>
>
>
> All boxes are Redhat Linux.  Servers 2 and 3 are Redhat 8.0; server 
> 1 is Redhat 7.2 and is getting reloaded with 8.0 today.  They should 
> all support NFS version 3 on the 2.4 kernel.
>
> Thanks,
>
> Sean Johnson
>
>
>
>
>> S. Johnson wrote:
>>
>>> I have 2 client systems that need to access a mail volume via NFS.  
>>> I believe it is an optimization/setup problem, but am unsure of what 
>>> to try to resolve it.  Here's the setup:
>>>
>>> Server 3 - NFS server, Redhat 8.0, exporting /users from a fiber 
>>> channel array it hosts.  Mail is delivered to and picked up from 
>>> users' home directories, so there is a lot of disk access happening 
>>> with reads and writes (4500 users).  /etc/exports looks like this:
>>>
>>> /db     192.168.1.0/255.255.255.0(rw,no_root_squash)
>>> /isp    192.168.1.0/255.255.255.0(rw,no_root_squash)
>>> /users  192.168.1.0/255.255.255.0(rw,no_root_squash)
>>>
>>> For now, the main export I am concerned with is /users; however, all 
>>> these partitions are on the same fiber channel RAID and are still 
>>> accessed by the clients.  Traffic on the other two shares is pretty 
>>> minimal, but may still be a factor in the overall performance of the 
>>> system.
>>>
>>> Servers 1 and 2 are configured to be able to run Postfix or 
>>> courier-imap, and access the /users share from server 3 via NFS.  
>>> Here is the /etc/fstab the clients use:
>>>
>>> server3:/db    /db    nfs bg,nfsvers=3,rsize=8192,wsize=8192 0 0
>>> server3:/isp   /isp   nfs bg,nfsvers=3,rsize=8192,wsize=8192 0 0
>>> server3:/users /users nfs bg,nfsvers=3,rsize=8192,wsize=8192 0 0
>>>
>>> Servers 1 and 2 are able to mount and read the volumes fine when 
>>> there is little or no traffic.  However, when you move either 
>>> Postfix or Courier-imap services over to them, they eventually 
>>> (after several hours) start to have NFS problems.  After a while, 
>>> there will be hundreds of dead processes still hanging around and 
>>> the load average skyrockets (200 or more).  The mounts to /users or 
>>> the other two are not available.  Executing a df or mount command 
>>> hangs your terminal.
>>> Sometimes you can kill off processes and restart NFS services; other 
>>> times it requires a reboot of the client, and that usually means 
>>> powering off the machine, because it hangs on the NFS processes 
>>> and will not shut them down.
>>>
>>> Is there a tried and true way to set up NFS between the server and 
>>> clients that will support high volumes of traffic?  If anyone knows 
>>> of a better way to set things up on the client and/or server side, 
>>> please let me know.
>>>
>>> Thanks,
>>>
>>> Sean Johnson
>>
>>
>> -- 
>> Justin Bennett
>> Network Administrator
>> RHCE (Redhat Certified Linux Engineer)
>> Dynabrade, Inc.
>> 8989 Sheridan Dr.
>> Clarence, NY 14031
>>
>

-- 
Justin Bennett
Network Administrator
RHCE (Redhat Certified Linux Engineer)
Dynabrade, Inc.
8989 Sheridan Dr.
Clarence, NY 14031
 





More information about the nflug mailing list