NFS problem
Justin Bennett
justin.bennett at dynabrade.com
Thu Oct 2 14:59:39 EDT 2003
There may be some grief mounting shares exported from RH 8 on an RH
7.x box. What versions of nfs-utils are you running? Bringing them
all up to the same OS may fix your problem.
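
If you want to compare, the versions are easy to check on each box
(assuming stock Red Hat RPMs):

    rpm -q nfs-utils
    rpcinfo -p localhost | grep nfs

The second command shows which NFS protocol versions and transports
the machine has registered with the portmapper.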
Justin
S. Johnson wrote:
> Hi Justin,
>
> At 11:39 10/02/03 -0400, you wrote:
>
>> I don't do NFS mounts that way; I mount my NFS volumes using the
>> automounter. I don't have any grief, but I don't have that many
>> users, only about 200. The automounter will expire mounts and
>> unmount them when they are not being used.
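>>
>> If you want to try it, a minimal autofs setup looks roughly like
>> this (the mount point, map name, and timeout below are just
>> examples):
>>
>>   /etc/auto.master:
>>     /mnt/auto  /etc/auto.nfs  --timeout=300
>>
>>   /etc/auto.nfs:
>>     users  -rw,hard,intr  server3:/users
>>
>> After a "service autofs restart", the share gets mounted under
>> /mnt/auto/users on first access and expired again after 300
>> seconds of inactivity.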
>
>
> I am not sure this would be a good option, given that the volume I am
> sharing has 4500+ users constantly getting and checking mail. This
> share would have constant RW I/O. What does automounter use to
> connect the systems?
>
>> What OS is on servers 1 and 2? Possible NFS version conflict? I
>> used to have some grief with NFS when mounting shares on a Solaris
>> Intel box from a Linux box.
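>>
>> You can see which versions each side actually offers with
>> something like:
>>
>>   rpcinfo -p server3 | grep nfs
>>
>> which prints the NFS program versions (2 and/or 3) and transports
>> (udp/tcp) the server has registered.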
>
>
>
> All boxes are Red Hat Linux. Servers 2 and 3 are Red Hat 8.0;
> Server 1 is Red Hat 7.2 and is getting reloaded with 8.0 today.
> They should all support NFS version 3 on the 2.4 kernel.
>
> Thanks,
>
> Sean Johnson
>
>
>
>
>> S. Johnson wrote:
>>
>>> I have 2 client systems that need to access a mail volume via NFS.
>>> I believe it is an optimization/setup problem, but am unsure of what
>>> to try to resolve it. Here's the setup:
>>>
>>> Server 3 - NFS server, Red Hat 8.0, exporting /users from a fiber
>>> channel array it hosts. Mail is delivered to and picked up from
>>> users' home directories, so there is a lot of disk access
>>> happening, both reads and writes (4500 users). /etc/exports looks
>>> like this:
>>>
>>> /db 192.168.1.0/255.255.255.0(rw,no_root_squash)
>>> /isp 192.168.1.0/255.255.255.0(rw,no_root_squash)
>>> /users 192.168.1.0/255.255.255.0(rw,no_root_squash)
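>>>
>>> (For reference, exportfs -v on the server shows what is actually
>>> being exported and with which effective options, e.g. whether the
>>> exports ended up sync or async; exportfs -ra reapplies the file
>>> after edits.)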
>>>
>>> For now, the main export I am concerned with is /users; however,
>>> all of these partitions are on the same fiber channel RAID and
>>> are still accessed by the clients. Traffic on the other two
>>> shares is pretty minimal, but may still be a factor in the
>>> overall performance of the system.
>>>
>>> Servers 1 and 2 are configured to be able to run Postfix or
>>> courier-imap, and access the /users share from server 3 via NFS.
>>> Here is the /etc/fstab the clients use:
>>>
>>> server3:/db /db nfs bg,nfsvers=3,rsize=8192,wsize=8192 0 0
>>> server3:/isp /isp nfs bg,nfsvers=3,rsize=8192,wsize=8192 0 0
>>> server3:/users /users nfs bg,nfsvers=3,rsize=8192,wsize=8192 0 0
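>>>
>>> For what it's worth, the options above take the defaults for
>>> everything else, which means hard mounts without intr. Something
>>> along these lines is often suggested for busy mail spools (intr
>>> so processes stuck on a dead mount can at least be killed, tcp if
>>> both ends support it), though whether it helps here is untested:
>>>
>>> server3:/users /users nfs bg,hard,intr,tcp,nfsvers=3,rsize=8192,wsize=8192 0 0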
>>>
>>> Servers 1 and 2 are able to mount and read the volumes fine when
>>> there is little or no traffic. However, when you move either the
>>> Postfix or Courier-IMAP services over to them, they eventually
>>> (after several hours) start to have NFS problems. After a while
>>> there will be hundreds of dead processes still hanging around,
>>> and the load average skyrockets (200 or more). The mounts to
>>> /users and the other two shares are no longer available, and
>>> executing a df or mount command hangs your terminal.
>>> Sometimes you can kill off the processes and restart the NFS
>>> services; other times it requires a reboot of the client, and
>>> that usually means powering off the machine, because the shutdown
>>> hangs on the NFS processes and will not stop them.
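>>>
>>> (One server-side knob I have seen mentioned for loads like this:
>>> the stock Red Hat init script starts only 8 nfsd threads, via
>>> RPCNFSDCOUNT in /etc/rc.d/init.d/nfs. With thousands of users
>>> that pool can be exhausted, which shows up as exactly this sort
>>> of client-side hang. Raising it, e.g.
>>>
>>>   RPCNFSDCOUNT=32
>>>
>>> and restarting nfs is a cheap thing to try, though I have not
>>> confirmed it is the cause here.)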
>>>
>>> Is there a tried-and-true way to set up NFS between the server
>>> and clients that will support high volumes of traffic? If anyone
>>> knows of a better way to set things up on the client and/or
>>> server side, please let me know.
>>>
>>> Thanks,
>>>
>>> Sean Johnson
>>
>
--
Justin Bennett
Network Administrator
RHCE (Redhat Certified Linux Engineer)
Dynabrade, Inc.
8989 Sheridan Dr.
Clarence, NY 14031